--- abstract: 'Using special polynomials related to the Andrews-Gordon identities and the colored Jones polynomial of torus knots, we construct classes of $q$-hypergeometric series lying in the Habiro ring. These give rise to new families of quantum modular forms, and their Fourier coefficients encode distinguished Maass cusp forms. The cuspidality of these Maass waveforms is proven by making use of the Habiro ring representations of the associated quantum modular forms. Thus, we provide an example of how the $q$-hypergeometric structure of the associated series can be used to establish modularity properties which are otherwise non-obvious. We conclude the paper with a number of motivating questions and possible connections with Hecke characters, combinatorics, and still mysterious relations between $q$-hypergeometric series and the passage from positive to negative coefficients of Maass waveforms.' address: - 'Mathematical Institute, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany' - 'CNRS LIAFA Universite Denis Diderot - Paris 7, Case 7014, 75205 Paris Cedex 13, France' - 'Hamilton Mathematics Institute & School of Mathematics, Trinity College, Dublin 2, Ireland' author: - Kathrin Bringmann - Jeremy Lovejoy - Larry Rolen title: 'On some special families of $q$-hypergeometric Maass forms' --- [^1] Introduction and Statement of Results {#Intro} ===================================== We begin by introducing the special polynomials $H_n(k,\ell;b;q)$ which play a key role in our constructions. To do so, we recall the *$q$-rising factorial*, defined by $$(a)_n = (a;q)_n := \prod_{k=0}^{n-1} \big(1-aq^{k}\big),$$ along with the *Gaussian polynomials*, given by $$\begin{bmatrix} n \\ k \end{bmatrix}_q := \begin{cases} \frac{(q)_n}{(q)_{n-k}(q)_k} & \text{if $0 \leq k \leq n$}, \\ 0 & \text{otherwise}. 
\notag \end{cases}$$ Then for $k\in\mathbb{N}$, $1 \leq \ell \leq k$, and $b\in\{0,1\}$, we define the polynomials $H_{n}(k,\ell;b;q)$ by $$\label{Hdef} H_{n}(k,\ell;b;q) := \sum_{n = n_k \geq n_{k-1} \geq \ldots \geq n_1 \geq 0} \prod_{j=1}^{k-1} q^{n_j^2+(1-b)n_j} \begin{bmatrix} n_{j+1}-n_j - bj + \sum_{r=1}^j (2n_r + \chi_{\ell > r}) \\ n_{j+1}-n_j \end{bmatrix}_q.$$ Here we use the usual characteristic function $\chi_{A}$, defined to be $1$ if $A$ is true and $0$ otherwise. These polynomials occurred explicitly (in the case $b=1$) in recent work on torus knots [@Hi-Lo1], and they can also be related to generating functions for the partitions occurring in Gordon’s generalization of the Rogers-Ramanujan identities [@Wa1]. To describe the latter, let $G_{k,i,i',L}(q)$ be the generating function for partitions of the form $$\label{A-Grelation} \sum_{j=1}^{L-1} j f_j,$$ with $f_1 \leq i-1$, $f_{L-1} \leq i'-1$, and $f_j + f_{j+1} \leq k$ for $1 \leq j \leq L-2$. Using the fact that $$\begin{bmatrix} n \\ k \end{bmatrix}_{q^{-1}} = q^{-k(n-k)}\begin{bmatrix} n \\ k \end{bmatrix}_{q},$$ making some judicious changes of variable and comparing with Theorem 5 of [@Wa1], it can be shown that $$\label{A-Grelationbis} H_n\big(k,\ell;b;q^{-1}\big) = q^{(k-1)bn - 2(k-1)\binom{n+1}{2}} G_{k-1,\ell,k,2n-b+1}(q).$$ In the context of torus knots, the $n$-th coefficient in Habiro’s cyclotomic expansion of the colored Jones polynomial of the left-handed torus knot $T(2,2k+1)$ was shown in [@Hi-Lo1] to be $q^{n+1-k}H_{n+1}(k,1;1;q)$, and the general $H_{n}(k,\ell;1;q)$ were used to construct a class of $q$-hypergeometric series with interesting behavior both at roots of unity and inside the unit circle. As we shall see shortly, this is the heart of the quantum modular phenomenon; the reader is also referred to [@Hi-Lo1] for more details. In this paper, we consider classes of $q$-hypergeometric Maass cusp forms constructed from the polynomials $H_{n}(k,\ell;b;q)$. 
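For concreteness, these definitions are easy to experiment with by machine. The following Python sketch (all names and the dictionary representation of polynomials are ours) computes the Gaussian polynomials via the $q$-Pascal recurrence and the polynomials $H_n(k,\ell;b;q)$ directly from the defining sum, and also checks the $q\mapsto q^{-1}$ inversion formula for the Gaussian polynomials recalled above.

```python
from itertools import product

def padd(a, b):
    """Add two Laurent polynomials stored as {exponent: coefficient}."""
    c = dict(a)
    for e, v in b.items():
        c[e] = c.get(e, 0) + v
    return {e: v for e, v in c.items() if v}

def pmul(a, b):
    """Multiply two Laurent polynomials."""
    c = {}
    for e1, v1 in a.items():
        for e2, v2 in b.items():
            c[e1 + e2] = c.get(e1 + e2, 0) + v1 * v2
    return {e: v for e, v in c.items() if v}

def pshift(a, d):
    """Multiply by q^d."""
    return {e + d: v for e, v in a.items()}

def gauss(n, k):
    """Gaussian polynomial [n, k]_q via the q-Pascal recurrence."""
    if k < 0 or n < 0 or k > n:
        return {}                 # the zero polynomial
    if k == 0 or k == n:
        return {0: 1}
    return padd(gauss(n - 1, k - 1), pshift(gauss(n - 1, k), k))

def H(n, k, l, b):
    """H_n(k, l; b; q): sum over chains n = n_k >= ... >= n_1 >= 0."""
    total = {}
    for t in product(range(n + 1), repeat=k - 1):
        t = t + (n,)
        if any(t[i] > t[i + 1] for i in range(k - 1)):
            continue
        term = {0: 1}
        for j in range(1, k):
            nj, nj1 = t[j - 1], t[j]
            top = nj1 - nj - b * j + sum(2 * t[r - 1] + (1 if l > r else 0)
                                         for r in range(1, j + 1))
            term = pmul(term, pshift(gauss(top, nj1 - nj),
                                     nj * nj + (1 - b) * nj))
        total = padd(total, term)
    return total

# example: [4, 2]_q = 1 + q + 2q^2 + q^3 + q^4
assert gauss(4, 2) == {0: 1, 1: 1, 2: 2, 3: 1, 4: 1}
```

For instance, the sketch gives $H_2(2,1;1;q)=q+q^2+q^4$, and it confirms that $H_n(1,1;b;q)\equiv1$, as used below.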
These functions, denoted [by]{} $F_j(k,\ell;q)$ $\big(j\in\{1,2,3,4\}\big)$, are defined as follows:[^2] $$\label{FFnsDefn}\begin{aligned} F_1(k,\ell;q) &:= \sum_{n \geq 0} (q)_{n}(-1)^{n}q^{\binom{n+1}{2}} {H}_{n}(k,\ell;0;q), \\ F_2(k,\ell;q) &:= \sum_{n \geq 0} \big(q^2;q^2\big)_{n}(-1)^{n} {H}_{n}(k,\ell;0;q), \\F_3(k,\ell;q) &:= \sum_{\substack{n \geq 1}} (q)_{n-1}(-1)^{n}q^{\binom{n+1}{2}} H_{n}(k,\ell;1;q), \\ F_4(k,\ell;q) &:= \sum_{\substack{n \geq 1}} (-1)_{n}(q)_{n-1}(-q)^{n} H_{n}(k,\ell;1;q). \end{aligned}$$ Note that when $k=1$ the polynomials in are identically $1$, and so the above contain two celebrated $q$-series of Andrews, Dyson, and Hickerson [@An-Dy-Hi1] as special cases. Namely, we have $$\label{ConnectionOurFamilySigma} 2F_2(1,1;q) = \sigma\big(q^2\big)$$ and $$\label{ConnectionOurFamilySigmaStar} F_4(1,1;q) = -\sigma^*(-q),$$ where $$\begin{aligned} \label{sigmadef} \sigma(q) :=& \sum_{n \geq 0} \frac{q^{\binom{n+1}{2}}}{(-q)_n} \\ =&\ 1 + \sum_{n \geq 0} (-1)^nq^{n+1}(q)_n \label{id1}\\ =&\ 2\sum_{n \geq 0}(-1)^n(q)_n , \label{id2}\\ \sigma^*(q) :=&\ 2\sum_{n \geq 1} \frac{(-1)^nq^{n^2}}{(q;q^2)_n} \label{sigma*def}\\ =&\ -2\sum_{n \geq 0} q^{n+1}\big(q^2;q^2\big)_n. \label{id3}\end{aligned}$$ The definitions in and are the original definitions of Andrews, Dyson, and Hickerson, while the identities and were established by Cohen [@Co1], and follows easily. The function $\sigma$ was first considered in Ramanujan’s “Lost” notebook (see [@AndrewsLostNotebookV]). Andrews, Dyson, and Hickerson showed [@An-Dy-Hi1] that this series satisfies several striking and beautiful properties, and in particular that if $ \sigma(q)=\sum_{n\geq0}S(n)q^n ,$ then $\lim \sup |S(n)|=\infty$ but $S(n)=0$ for infinitely many $n$. 
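These $k=1$ specializations, and the behavior of the coefficients $S(n)$, can be verified to any fixed order by truncated power series arithmetic. The sketch below (truncation order and names are ours) computes $\sigma$ from the representation $1+\sum_{n\geq0}(-1)^nq^{n+1}(q)_n$, computes $\sigma^*$ from its last representation above, builds $F_4(1,1;q)$ directly from its definition (with $H_n\equiv1$), and confirms $F_4(1,1;q)=-\sigma^*(-q)$ coefficientwise.

```python
N = 60  # truncation order: all series live in Z[[q]] / (q^N)

def one():
    s = [0] * N
    s[0] = 1
    return s

def mul(a, b):
    c = [0] * N
    for i, x in enumerate(a):
        if x:
            for j, y in enumerate(b):
                if i + j >= N:
                    break
                c[i + j] += x * y
    return c

def factor(exp, sign):
    # the truncated series 1 + sign * q^exp  (exp >= 1)
    s = one()
    if exp < N:
        s[exp] += sign
    return s

# sigma(q) = 1 + sum_{n>=0} (-1)^n q^{n+1} (q;q)_n
sigma = one()
poch = one()                        # (q;q)_n, starting at n = 0
for n in range(N):
    for i in range(N - n - 1):
        sigma[i + n + 1] += (-1) ** n * poch[i]
    poch = mul(poch, factor(n + 1, -1))

# sigma*(q) = -2 sum_{n>=0} q^{n+1} (q^2;q^2)_n
sigmastar = [0] * N
poch = one()                        # (q^2;q^2)_n
for n in range(N):
    for i in range(N - n - 1):
        sigmastar[i + n + 1] += -2 * poch[i]
    poch = mul(poch, factor(2 * n + 2, -1))

# F_4(1,1;q) = sum_{n>=1} (-1;q)_n (q;q)_{n-1} (-q)^n
f4 = [0] * N
a = [0] * N
a[0] = 2                            # (-1;q)_1 = 2
b = one()                           # (q;q)_0 = 1
for n in range(1, N):
    p = mul(a, b)                   # (-1;q)_n (q;q)_{n-1}
    for i in range(N - n):
        f4[i + n] += (-1) ** n * p[i]
    a = mul(a, factor(n, +1))       # -> (-1;q)_{n+1}
    b = mul(b, factor(n, -1))       # -> (q;q)_n

# F_4(1,1;q) = -sigma*(-q), coefficient by coefficient
assert all(f4[m] == -((-1) ** m) * sigmastar[m] for m in range(N))
```

One also reads off the first coefficients $S(n)$, e.g. $\sigma(q)=1+q-q^2+2q^3-2q^4+q^5+0\cdot q^6+\cdots$, where the vanishing coefficients illustrating the Andrews-Dyson-Hickerson phenomenon already begin to appear.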
Their proof is closely related to indefinite theta series representations of $\sigma$, such as: $$\sigma(q)=\sum_{\substack{n\geq0\\ |\nu|\leq n}}(-1)^{n+\nu}q^{\frac{n(3n+1)}2-\nu^2}\big(1-q^{2n+1}\big) .$$ The coefficients of $\sigma^*(q)$ have the same properties. Subsequently Cohen [@Co1] showed how to nicely package the $q$-series of Andrews, Dyson, and Hickerson within a single modular object. Namely, he proved that if coefficients $\{T(n)\}_{n\in1+24{\mathbb Z}}$ are defined by $$\label{Tofndef} \sigma\big(q^{24}\big)=\sum_{n\geq0}T(n)q^{n-1} , \quad\quad\quad\quad \sigma^*\big(q^{24}\big)=\sum_{n<0}T(n)q^{1-n},$$ then the $T(n)$ are the Fourier coefficients of a Maass waveform. The definitions of Maass waveforms and the details of this construction are reviewed in Section \[MaassFormsSctn\]. In this paper, we show that the functions $F_j(k,\ell;q)$ have a similar connection to Maass waveforms, and by and may thus be considered as a $q$-hypergeometric framework containing the examples of Andrews, Dyson, and Hickerson and Cohen.\ In what follows, we let $f$ be a Maass waveform with eigenvalue $1/4$ (under the hyperbolic Laplacian $\Delta$) on a congruence subgroup of $\SL_2(\mathbb{Z})$ (and with a possible multiplier), which is cuspidal at $i\infty$. If the Fourier expansion of $f$ is given, as in Lemma \[Maass0Fourier\], by $(\tau=u+iv)$ $$f(\tau) = v^{\frac{1}{2}}\sum_{n\neq0}A(n)K_{0}\bigg(\frac{2\pi |n|v}{N}\bigg)e\bigg(\frac{n u}{N}\bigg) ,$$ where $e(w):=e^{2\pi i w}$, then the $q$-series associated to the positive coefficients of $f$ is defined by $$\label{pluspart} f^+(\tau):=\sum_{n>0}A(n)q^{\frac{n}{N}}.$$ We remark in passing that such a map from Maass forms to $q$-series was studied extensively by Lewis and Zagier [@LewisZagier1; @LewisZagier2], and, as we shall see, was used by Zagier [@Za1] to show that such functions are quantum modular forms. Such a construction is also closely related to the study of automorphic distributions in [@MS]. 
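Identities of this indefinite theta type can be confirmed to any fixed order by direct expansion. The following sketch (our notation) checks the representation above against $\sigma(q)=1+\sum_{n\geq0}(-1)^nq^{n+1}(q)_n$ through $q^{79}$; note that for fixed $n$ the minimal exponent appearing is $n(3n+1)/2-n^2=n(n+1)/2$, which bounds the outer sum.

```python
N = 80  # truncation order

# sigma(q) = 1 + sum_{n>=0} (-1)^n q^{n+1} (q;q)_n
sigma = [0] * N
sigma[0] = 1
poch = [0] * N
poch[0] = 1                          # (q;q)_0 = 1
for n in range(N):
    for i in range(N - n - 1):
        sigma[i + n + 1] += (-1) ** n * poch[i]
    new = list(poch)                 # multiply by (1 - q^{n+1})
    for i in range(N - n - 1):
        new[i + n + 1] -= poch[i]
    poch = new

# the indefinite theta representation, truncated at q^N
theta = [0] * N
n = 0
while n * (n + 1) // 2 < N:          # minimal exponent for fixed n is n(n+1)/2
    for nu in range(-n, n + 1):
        sgn = (-1) ** (n + nu)
        e = n * (3 * n + 1) // 2 - nu * nu
        if e < N:
            theta[e] += sgn
        if e + 2 * n + 1 < N:        # the -q^{2n+1} part of (1 - q^{2n+1})
            theta[e + 2 * n + 1] -= sgn
    n += 1

assert theta == sigma
```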
\[mainthm1\] For any $k, \ell\in\mathbb{N}$, with $1 \leq \ell \leq k$, and $j\in\{1,2,3,4\}$, there exists a Maass cusp form $G_{j,k,\ell}$ with eigenvalue $1/4$ for some congruence subgroup of $\operatorname{SL}_2({\mathbb Z})$, such that $$G_{j,k,\ell}^+(\tau) = q^{\alpha} F_j\big(k,\ell;q^d\big)$$ for some $\alpha\in{{\mathbb Q}}$, where $d=1$ if $j\in\{1,3\}$ and $d=2$ if $j\in\{2,4\}$ . The cuspidality of the Maass waveform $G_{j,k,\ell}$ is far from obvious. Indeed, the authors are aware of only two approaches to prove such a result: either to explicitly write down representations for the $G_{j,k,\ell}$ in terms of Hecke characters (which we suspect exist, but which we were unable to identify), or, as we show below, to use the $q$-hypergeometric representations of the $F_j$ directly. This connection, which was hinted at for certain examples in [@RobMaass], utilizes $q$-hypergeometric series to deduce modularity properties in an essential way. The proof of Theorem \[mainthm1\] relies on the Bailey pair machinery and important results of Zwegers [@ZwegersMockMaass] giving modular completions for indefinite theta functions of a general shape. In particular, this family of indefinite theta functions naturally describes the behavior of functions studied by many others in the literature, as described in a recent proof of Krauel, Woodbury, and the second author [@KRW] of unifying conjectures of Li, Ngo, and Rhoades [@RobMaass]. The indefinite theta functions considered here are given for $M\in{\mathbb N}_{\ge2}$ and vectors $a=(a_1,a_2)\in{{\mathbb Q}}^2$ and $b=(b_1,b_2)\in{{\mathbb Q}}^2$ such that $a_1 \pm a_2 \not \in \mathbb{Z}$: $$\label{Sdef} \begin{aligned} & S_{a,b;M}(\tau) := \\ & \Bigg(\sum_{n\pm \nu\geq-\lfloor a_1\pm a_2\rfloor}+\sum_{n\pm\nu<-\lfloor a_1\pm a_2\rfloor}\Bigg)e\big((M+1)b_1n-(M-1)b_2\nu\big)q^{\frac12\big((M+1)(n+a_1)^2-(M-1)(\nu+a_2)^2\big)}. 
\end{aligned}$$ The next theorem states conditions under which $S_{a,b;M}$ is the image of a Maass waveform under the map defined in . The definitions of $\gamma_M$, the equivalence relation $\sim$, and the operation $^*$ are given in Section \[ZwegersWorkSection\]. \[mainthm2\] Suppose that $a,b\in{{\mathbb Q}}^2$ with $a\neq0$, $a_1\pm a_2\not\in{\mathbb Z}$, $M\in{\mathbb N}_{\geq2}$, and $(\gamma_M a,\gamma_M b)\sim(a,b)$ or both $(\gamma_M a,\gamma_M b)\sim(a^*,b^*)$ and $(\gamma_M a^*,\gamma_M b^*)\sim(a,b)$ hold. Then $S_{a,b;M}=F^+$ for a Maass waveform $F$ of eigenvalue $1/4$ on a congruence subgroup of $\operatorname{SL}_2({\mathbb Z})$. The $q$-series $F_j$ in Theorem \[mainthm1\] are all specializations of the series $S_{a,b;M}$. We note that although all of the functions $F_j$ correspond to cusp forms, a general Maass form $F$ as in Theorem \[mainthm2\] need not be cuspidal. For example, the function $W_1$ considered in Theorem 2.1 of [@RobMaass] is shown not to be a cusp form, and using Theorem 4.2 and (4.4) one can easily check that the function $W_1$ fits into the family $S_{a,b;M}$. In addition to the relation of the $q$-series $F_j$ to Maass forms, following Zagier’s work, we find that these functions are instances of so-called quantum modular forms [@Za1]. These new types of modular objects, which are reviewed in Section \[MaassFormsHolomorphization\], are connected to many important combinatorial generating functions and knot and $3$-manifold invariants, and are intimately tied to the volume conjecture for hyperbolic knots. Roughly speaking, a *quantum modular form* is a function which is defined on a subset of $\mathbb Q$ and whose failure to transform modularly is described by a particularly “nice” function. (See Definition \[cocycle\].) 
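A concrete mechanism behind such quantum behavior, exploited repeatedly below, is that series of the shape $\sum_{n\geq0}a_n(q)(q)_n$ terminate at roots of unity: if $q$ is a primitive $m$-th root of unity, then $(q)_n=0$ for all $n\geq m$. A quick numerical illustration with $\sigma$, using the representation $1+\sum_{n\geq0}(-1)^nq^{n+1}(q)_n$ recalled earlier (the evaluation routine is ours):

```python
import cmath

def sigma_at(q, nmax):
    # sigma(q) = 1 + sum_{n >= 0} (-1)^n q^{n+1} (q;q)_n, truncated at n < nmax
    total = 1 + 0j
    poch = 1 + 0j                    # (q;q)_0
    for n in range(nmax):
        total += (-1) ** n * q ** (n + 1) * poch
        poch *= 1 - q ** (n + 1)     # -> (q;q)_{n+1}
    return total

for m in range(1, 9):
    zeta = cmath.exp(2j * cmath.pi / m)      # primitive m-th root of unity
    # (q;q)_n = 0 once n >= m, so the value stabilizes after m terms:
    assert abs(sigma_at(zeta, m) - sigma_at(zeta, 3 * m)) < 1e-9

assert abs(sigma_at(1, 5) - 2) < 1e-12       # sigma(1) = 2
```

In particular the evaluation at each root of unity is a finite sum, so such series converge there trivially; this is the observation used below to deduce cuspidality.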
Viewed from a general modularity framework, the generating function of the set of positive coefficients of a Maass form automatically has quantum modular transformations when considered as (possibly divergent) asymptotic expansions. The situation becomes much nicer when, as happens for $\sigma$ and $\sigma^*$, $q$-hypergeometric representations can be furnished which show convergence at various roots of unity (as the series specialize to finite sums of roots of unity). This quantum modularity result, as well as the relation of the associated quantum modular forms to the cuspidality of the Maass form is described in Theorem \[MaassQMFThm\]. In particular, if such a $q$-series is an element of the Habiro ring, which essentially means that it can be written as $$\sum_{n\geq0}a_n(q)(q)_n$$ for polynomials $a_n(q)\in{\mathbb Z}[q]$, then it is apparent that it converges at all roots of unity $q$, and hence the associated Maass form is cuspidal. This observation, combined with Zagier’s ideas, yields the following corollary (the definitions of quantum modular forms and related terms are given in Section \[MaassFormsHolomorphization\]). \[mainthm3\] For any choice of $j,k,\ell$ as in Theorem \[mainthm1\], the functions $F_{j,k,\ell}$ are quantum modular forms of weight $1$ on a congruence subgroup with quantum set $\mathbb P^1({{\mathbb Q}})$. Moreover, the cocycles $r_{\gamma}$, defined in , are real-analytic on ${\mathbb R}\setminus\{\gamma^{-1}i\infty\}$. The paper is organized as follows. In Section \[PrelimSection\], we recall the basic preliminaries and definitions needed for the proofs and explicit formulations of the main theorems, which are then proven in Section \[ProofsSection\]. As mentioned above, the main tools are the Bailey pair method, work of Zwegers in [@ZwegersMockMaass], and ideas from Zagier’s seminal paper on quantum modular forms [@Za1]. 
We conclude in Section \[QuestionsSection\] with further commentary on related questions and possible future work. Preliminaries {#PrelimSection} ============= Bailey pairs {#BaileyPairsSection} ------------ In this subsection, we briefly recall the Bailey pair machinery, which is a powerful tool for connecting $q$-hypergeometric series with series such as indefinite theta functions. The basic input of this method is a *Bailey pair* relative to $a$, which is a pair of sequences $(\alpha_n,\beta_n)_{n \geq 0}$ satisfying $$\beta_n = \sum_{k=0}^n \frac{\alpha_k}{(q)_{n-k}(aq)_{n+k}}. \notag$$ Bailey’s lemma then provides a framework for proving many $q$-series identities. For our purposes, we need only a limiting form, which says that if $(\alpha_n,\beta_n)$ is a Bailey pair relative to $a$, then, provided both sums converge, we have the identity $$\label{limitBailey} \sum_{n \geq 0} (\rho_1)_n(\rho_2)_n \bigg(\frac{aq}{\rho_1 \rho_2}\bigg)^n \beta_n = \frac{\Big(\frac{aq}{\rho_1}\Big)_{\infty}\Big(\frac{aq}{\rho_2}\Big)_{\infty}}{(aq)_{\infty}\Big(\frac{aq}{\rho_1 \rho_2}\Big)_{\infty}} \sum_{n \geq 0} \frac{(\rho_1)_n(\rho_2)_n\Big(\frac{aq}{\rho_1 \rho_2}\Big)^n }{\Big(\frac{aq}{\rho_1}\Big)_n\Big(\frac{aq}{\rho_2}\Big)_n}\alpha_n.$$ For more on Bailey pairs and Bailey’s lemma, see [@An1; @An2; @war]. We record four special cases of for later use. \[Baileylemmaspecial\] The following identities are true, provided that both sides converge. 
If $(\alpha_n,\beta_n)$ is a Bailey pair relative to $1$, then $$\begin{aligned} \sum_{n \geq 1} (-1)^n(q)_{n-1}q^{\binom{n+1}{2}}\beta_n &= \sum_{n \geq 1} \frac{(-1)^nq^{\binom{n+1}{2}}}{1-q^n}\alpha_n, \label{Baileya=1eq1} \\ \sum_{n \geq 1} \big(q^2;q^2\big)_{n-1} (-q)^n\beta_n &= \sum_{n \geq 1} \frac{(-q)^n}{1-q^{2n}}\alpha_n, \label{Baileya=1eq2}\end{aligned}$$ and if $(\alpha_n,\beta_n)$ is a Bailey pair relative to $q$, then $$\begin{aligned} \sum_{n \geq 0} (-1)^n(q)_{n}q^{\binom{n+1}{2}}\beta_n &= (1-q)\sum_{n \geq 0} (-1)^nq^{\binom{n+1}{2}}\alpha_n, \label{Baileya=qeq1} \\ \sum_{n \geq 0} \big(q^2;q^2\big)_{n} (-1)^n\beta_n &= \frac{1-q}{2}\sum_{n \geq 0} (-1)^n\alpha_n. \label{Baileya=qeq2}\end{aligned}$$ For the first two we set $a=1$ in , take the derivative $\frac{d}{d\rho_1} \big | _{\rho_1=1}$, and let $\rho_2 \to \infty$ or $\rho_2 = -1$. For the second two we set $a=q$, $\rho_1=q$, and let $\rho_2 \to \infty$ and $\rho_2 = -q$, respectively. Maass waveforms and Cohen’s example {#MaassFormsSctn} ----------------------------------- We now recall the basic definitions and facts from the theory of Maass waveforms. The interested reader is also referred to [@Bump; @Iwaniec02] for more details. Maass waveforms, or simply Maass forms, are functions on $\mathbb H$ which transform like modular functions but instead of being meromorphic are eigenfunctions of the hyperbolic Laplacian. 
For $\tau = u+i v \in \mathbb H$ (with $u, v\in {\mathbb R}$), this operator is defined by $$\Delta := -v^2 \bigg(\frac{\partial^2}{\partial u^2} + \frac{\partial^2}{\partial v^2}\bigg).$$ We also require the fact that any translation invariant function $f$, namely a function satisfying $f(\tau+1)=f(\tau)$, has a Fourier expansion at infinity of the form $$\begin{aligned} \label{Fexpgeneral} f(\tau) = \sum_{n\in\mathbb Z} a_f(v;n) e(nu), \end{aligned}$$ where $$a_f(v;n) := \int_{0}^1 f(t+iv)e(-nt)dt.$$ Similarly, such an $f$ has Fourier expansions at any cusp $\mathfrak a$ of a congruence subgroup $\Gamma \subseteq \SL_2(\mathbb{Z})$. We denote these Fourier coefficients by $a_{f,\mathfrak a}(v;n)$. \[MaassForm0Def\] Let $\Gamma \subseteq \textnormal{SL}_2(\mathbb Z)$ be a congruence subgroup. A [*[Maass waveform]{}*]{} $f$ on $\Gamma$ with eigenvalue $\lambda=s(1-s) \in \mathbb C$ is a smooth function $f:\mathbb H \to \mathbb C$ satisfying 1. $f(\gamma \tau) = f(\tau)$ for all $\gamma \in \Gamma;$ 2. $f$ grows at most polynomially at the cusps; 3. $\Delta (f)=\lambda f$. If, moreover, $a_{f,\mathfrak a}(0;0) = 0$ for each cusp $\mathfrak a$ of $\Gamma$, then $f$ is a [*[Maass cusp form]{}*]{}. We also require the general shape of Fourier expansions of such Maass forms. As all of our forms are cusp forms, the following is sufficient for our purposes. The proof may be found in any standard text on Maass forms (such as those listed above), but we note that it follows from the differential equation (iii), growth condition (ii), and the periodicity of $f$. \[Maass0Fourier\] Let [**$f$**]{} be a Maass cusp form with eigenvalue $\lambda=s(1-s)$. 
Then there exist $\kappa_1, \kappa_2, a_f(n) \in \mathbb C$, $n\neq 0$, such that $$f(\tau) = \kappa_1 v^s + \kappa_2 v^{1-s}\delta_s(v) + v^{\frac12} \sum_{n\neq 0} a_f(n) K_{s-\frac12} (2\pi |n| v)e(nu),$$ where $K_\nu$ is the modified Bessel function of the second kind, and $\delta_s(v)$ is equal to $\log(v)$ or $1$, depending on whether $s=1/2$ or $s\neq 1/2$, respectively. Such an expansion also exists at all cusps. Cohen proved (in the notation of ) that the function $$\notag f(\tau) := v^{\frac{1}{2}}\sum_{n\in1+24{\mathbb Z}}T(n)K_0\bigg(\frac{2\pi |n|{v}}{24}\bigg)e\bigg(\frac {nu}{24}\bigg)$$ is a Maass form on the congruence subgroup $\Gamma_0(2)$ with a multiplier. Namely, $f$ satisfies the transformations $$f\bigg(\!-\frac1{2\tau}\bigg)=\overline{f(\tau)}, \qquad f(\tau+1)=e\bigg(\frac1{24}\bigg)f(\tau),$$ and is an eigenfunction of $\Delta$ with eigenvalue $1/4$. Put another way, Cohen showed that $$f^+(\tau)=\sigma(q)$$ (in the notation of ), and that $\sigma^*$ similarly interprets the negative Fourier coefficients of $f$. Cohen’s proof relies on connections between $\sigma,\sigma^*$ and the arithmetic of a quadratic field, which also forms the basis of investigations by many authors of the series discussed by Li, Ngo, and Rhoades in [@RobMaass]. However, as noted above, computing the Hecke characters related to such $q$-series using Cohen’s methods quickly becomes computationally difficult. Instead, we use work of Zwegers which provides a convenient framework for giving examples of Maass forms and allows us to circumvent these problems. Work of Zwegers and related notation {#ZwegersWorkSection} ------------------------------------ In this section we summarize the important recent work of Zwegers [@ZwegersMockMaass], which allows us to study the relation between indefinite theta functions and Maass forms. 
Effectively, Zwegers showed for a large class of indefinite theta functions how to define eigenfunctions of $\Delta$, along with special completion terms which correct their (non)-modularity. What is especially useful in our case is the theory which Zwegers provides for describing when these completion terms vanish. To describe the setup, suppose that $A$ is a symmetric $2\times 2$ matrix with integral coefficients such that the quadratic form $Q$ defined by $Q({r}) := \frac12 {r}^T A {r}$ is indefinite of signature $(1,1)$, where ${r}^T$ denotes the transpose of ${r}$. Let $B({r},\mu)$ be the associated bilinear form given by $$\notag B({r},\mu) := {r}^T A \mu = Q({r}+\mu)-Q(r)-Q(\mu),$$ and take vectors $c_1,c_2\in {\mathbb R}^2$ with $Q(c_j)=-1$ and $B(c_1,c_2)<0$. In other words, we are assuming that $c_1$ and $c_2$ belong to the same one of the two components of the space of vectors $c$ satisfying $Q(c)=-1$. We denote this choice of component by $$C_Q:=\{x \in {\mathbb R}^2\mid Q(x)=-1,B(x,c_1)<0\}.$$ It is easily seen that $Q$ splits over $\mathbb{R}$ as a product of linear factors $Q(r)=Q_0(Pr)$ for some (non-unique) $P\in \GL_2({\mathbb R})$, where $Q_0(r):=r_1r_2$ with $r=(r_1,r_2)$. Note that $P$ satisfies $A=P^T(\begin{smallmatrix} 0&1\\1&0\end{smallmatrix})P$. The choice of $P$ is not unique; however, we fix a $P$ with sign chosen so that $P^{-1}\binom{ \hspace{2mm}1}{-1}\in C_Q$. Then, for each $c\in C_Q$, there is a unique $t \in \mathbb{R}$ such that $$\label{eq:c(t)} c = c(t):= P^{-1}\begin{pmatrix} e^t \\ -e^{-t} \end{pmatrix}.$$ Additionally, for $c\in C_Q$ we let $c^\perp =c^\perp(t):=P^{-1}{\left(\begin{smallmatrix} e^t \\ e^{-t} \end{smallmatrix}\right)}$. Note that $B(c,c^\perp)=0$, and $Q(c^\perp)=1$. It is easily seen that these two conditions determine $c^\perp$ up to sign. 
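As a toy illustration (our choice of form; any integral $A$ of signature $(1,1)$ behaves the same way), take $Q(r)=r_1^2-r_2^2$, so $A=\left(\begin{smallmatrix}2&0\\0&-2\end{smallmatrix}\right)$, for which one may choose $P=\left(\begin{smallmatrix}1&1\\1&-1\end{smallmatrix}\right)$; then $c(t)=(\sinh t,\cosh t)$ and $c^\perp(t)=(\cosh t,\sinh t)$, and the stated properties can be checked numerically:

```python
import math

# toy example: Q(r) = r1^2 - r2^2, i.e. A = [[2, 0], [0, -2]], signature (1, 1)
def Q(r):
    return r[0] ** 2 - r[1] ** 2

def B(r, s):
    # bilinear form B(r, s) = r^T A s
    return 2 * r[0] * s[0] - 2 * r[1] * s[1]

# splitting matrix P with Q(r) = Q0(P r), Q0(r) = r1 r2
P = [[1, 1], [1, -1]]
# check A = P^T [[0,1],[1,0]] P
A = [[P[0][i] * P[1][j] + P[1][i] * P[0][j] for j in range(2)] for i in range(2)]
assert A == [[2, 0], [0, -2]]

def c(t):
    # c(t) = P^{-1} (e^t, -e^{-t});  here P^{-1} = [[1/2, 1/2], [1/2, -1/2]]
    return (math.sinh(t), math.cosh(t))

def cperp(t):
    # c_perp(t) = P^{-1} (e^t, e^{-t})
    return (math.cosh(t), math.sinh(t))

for t in [-1.3, 0.0, 0.7, 2.1]:
    assert abs(Q(c(t)) + 1) < 1e-9           # Q(c) = -1
    assert abs(Q(cperp(t)) - 1) < 1e-9       # Q(c_perp) = +1
    assert abs(B(c(t), cperp(t))) < 1e-9     # B(c, c_perp) = 0
```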
Set $$\begin{aligned} \rho_A(r):=\rho_{A}^{c_1,c_2}(r):=&\frac{1}{2} \Big(1-{\operatorname{sgn}}\big(B(r,c_1)B(r,c_2)\big) \Big) ,\end{aligned}$$ and for convenience, let $\rho_A^{\perp}:=\rho_A^{c_1^{\perp},c_2^{\perp}}$. Then, for $c_j=c(t_j)\in C_Q$, Zwegers defined the function $$\label{Phidef} \begin{aligned} \Phi_{a,b}(\tau)=\Phi_{a,b}^{c_1,c_2}(\tau) :&= {\operatorname{sgn}}(t_2-t_1) v^{\frac{1}{2}} \sum_{r \in a+{\mathbb Z}^2} \rho_A(r) e( Q(r)u+ B(r,b))K_0(2\pi Q(r)v) \\ & \quad + {\operatorname{sgn}}(t_2-t_1) v^{\frac{1}{2}} \sum_{r \in a+{\mathbb Z}^2} \rho_A^\perp (r) e( Q(r)u+B(r,b))K_0(-2\pi Q(r)v) . \end{aligned}$$ Note in particular that $$\label{plus} \Phi_{a, b}^+(\tau)={\operatorname{sgn}}(t_2-t_1) \sum_{r \in a+{\mathbb Z}^2} \rho_A(r) e(B(r,b))q^{Q(r)}.$$ Here we use that, as shown in the proof of convergence in [@ZwegersMockMaass], $Q$ is positive on the support of the first sum in the definition of $\Phi_{a,b}$, whereas $Q$ is negative on the support of the second. Given convergence of this series, it is immediate from the differential equation satisfied by $K_0$ that $\Phi_{a,b}$ is an eigenfunction of the Laplace operator $\Delta$ with eigenvalue $1/4$. Zwegers then found a completion of $\Phi_{a,b}$. Moreover, he gave useful conditions to determine when the extra completion term vanishes. To describe this, we first consider for $c\in C_Q$ the $q$-series $$\varphi_{a,b}^c(\tau) := {v}^{\frac{1}{2}}\sum_{{r}\in a+{\mathbb Z}^2} \alpha_{t} \big({r} {v}^{\frac{1}{2}} \big) q^{Q({r})}e(B({r},b)) ,$$ with $t$ as defined in and $$\alpha_{t}({r}):= \begin{cases} \displaystyle{\int_{t}^\infty} e^{-\pi B({r},c(x))^2}dx & \mbox{ if }B({r},c)B\big({r},c^\perp\big)>0, \\[2ex] -\displaystyle{\int_{-\infty}^{t}} e^{-\pi B({r},c(x))^2}dx & \mbox{ if }B({r},c)B\big({r},c^\perp \big)<0, \\ 0 & \mbox{ otherwise.} \end{cases}$$ These functions satisfy the following transformation properties. 
\[Zlem\] For $c\in C_Q$ and $a,b\in{\mathbb R}^2$, we have that $$\begin{aligned} \varphi_{a+\lambda,b+\mu}^c &= e(B(a,\mu))\varphi_{a,b}^c\quad\mbox{for all $\lambda\in {\mathbb Z}^2$ and }\mu\in A^{-1}{\mathbb Z}^2, \\ \varphi_{-a,-b}^c &= \varphi_{a,b}^c, \\ \varphi_{\gamma a,\gamma b}^{\gamma c} &= \varphi_{a,b}^c \quad \mbox{for all }\gamma\in \mathrm{Aut}^+(Q,{\mathbb Z}),\end{aligned}$$ where $$\mathrm{Aut}^+(Q,{\mathbb Z}):=\big\{\gamma \in \operatorname{GL}_2({\mathbb R})\big\vert \gamma\circ Q=Q,\gamma{\mathbb Z}^2={\mathbb Z}^2, \gamma(C_Q)=C_Q, \det(\gamma)=1\big\}.$$ Zwegers’ main result is as follows, where $$\label{ZwegersPhiHatDefn} \widehat\Phi_{a,b}(\tau)=\widehat\Phi_{a,b}^{c_1,c_2}(\tau):=v^{\frac12}\sum_{r\in a+{\mathbb Z}^2}q^{Q(r)}e(B(r,b))\int_{t_1}^{t_2}e^{-\pi vB(r,c(x))^2}dx .$$ \[Zthm\] The function $\Phi_{a,b}$ converges absolutely for any choice of parameters $a,b$, and $Q$ such that $Q$ is non-zero on $a+{\mathbb Z}^2$. Moreover, the function $\widehat \Phi_{a,b}$ converges absolutely and can be decomposed as $$\notag \widehat{\Phi}^{c_1,c_2}_{a,b} = \Phi^{c_1,c_2}_{a,b}+\varphi_{a,b}^{c_1}-\varphi_{a,b}^{c_2} .$$ Moreover, it satisfies the elliptic transformations $$\begin{aligned} \widehat{\Phi}^{c_1,c_2}_{a+\lambda,b+\mu} &= e(B(a,\mu))\widehat{\Phi}^{c_1,c_2}_{a,b}\quad \mbox{for all $\lambda\in {\mathbb Z}^2$ and }\mu\in A^{-1}{\mathbb Z}^2,\\ \widehat{\Phi}^{c_1,c_2}_{-a,-b} &= \widehat{\Phi}^{c_1,c_2}_{a,b},\end{aligned}$$ and the modular relations $$\begin{aligned} \widehat{\Phi}^{c_1,c_2}_{a,b}(\tau+1) & = e\bigg(-Q(a)-\frac12 B\big(A^{-1}A^*,a\big)\bigg)\widehat{\Phi}^{c_1,c_2}_{a,a+b+\frac12 A^{-1}A^*}(\tau), \\ \widehat{\Phi}^{c_1,c_2}_{a,b}\bigg(-\frac{1}{\tau}\bigg) & = \frac{e(B(a,b))}{\sqrt{-\det{(A)}}} \sum_{p\in A^{-1}{\mathbb Z}^2\pmod{{\mathbb Z}^2}} \widehat{\Phi}^{c_1,c_2}_{-b+p,a}(\tau),\end{aligned}$$ where $A^*:=(A_{11},\ldots,A_{rr})^{\mathrm T}$. 
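As noted above, each building block $v^{1/2}K_0(2\pi|Q(r)|v)e(Q(r)u)$ of $\Phi_{a,b}$ is an eigenfunction of $\Delta$ with eigenvalue $1/4$, a classical consequence of the modified Bessel differential equation. This can be sanity-checked by finite differences, computing $K_0$ from the integral representation $K_0(x)=\int_0^\infty e^{-x\cosh t}\,dt$ (sample point, step sizes, and the loose tolerance below are ours):

```python
import math

def k0(x):
    # modified Bessel function: K_0(x) = int_0^infty exp(-x cosh t) dt,
    # approximated by a composite trapezoid rule (adequate for moderate x)
    h, T = 1e-4, 15.0
    s = 0.5 * math.exp(-x)               # endpoint t = 0, cosh(0) = 1
    n = 1
    while n * h < T:
        s += math.exp(-x * math.cosh(n * h))
        n += 1
    return s * h

def f(u, v):
    # a single building block with Q(r) = 1: sqrt(v) K_0(2 pi v) cos(2 pi u)
    return math.sqrt(v) * k0(2 * math.pi * v) * math.cos(2 * math.pi * u)

u0, v0, h = 0.3, 0.8, 1e-3
f0 = f(u0, v0)
fuu = (f(u0 + h, v0) - 2 * f0 + f(u0 - h, v0)) / h ** 2
fvv = (f(u0, v0 + h) - 2 * f0 + f(u0, v0 - h)) / h ** 2
lap = -v0 ** 2 * (fuu + fvv)
# Delta f = f / 4, up to discretization error
assert abs(lap - 0.25 * f0) < 1e-2 * abs(f0)
```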
These results can be conveniently repackaged in the language of Theorem \[mainthm2\] as follows. We note that the addition of the transformation results involving $(a^*,b^*)$ is based on the discussion of the proof of (14) in [@ZwegersMockMaass]. For future reference, we also define the equivalence relation on ${\mathbb R}^2$ $$(a,b)\sim(\alpha,\beta)$$ if $a\pm \alpha\in{\mathbb Z}^2$ and $b\pm \beta=:\mu\in{\mathbb Z}^2$ with $B(a,\mu)\in{\mathbb Z}$ (note that the two $\pm$ are required to have the same sign). \[MainThm2ZwegersResult\] If $a,b$ are chosen so that $(\gamma a,\gamma b)\sim(a,b)$ for some $\gamma\in \mathrm{Aut}^+(Q,{\mathbb Z})$ with $\gamma c_1=c_2$, then $$\widehat{\Phi}^{c_1,c_2}_{a,b} = \Phi^{c_1,c_2}_{a,b} .$$ In particular, it is a Maass form (with a multiplier) on a congruence subgroup. The connection of the series $S_{a,b;M}$ with Maass forms (once these series are decorated with the proper modified Bessel functions of the second kind) follows from Zwegers’ work, given certain special conditions. To describe these, we use the equivalence relation $\sim$ defined above. We also set $$\label{gm} \gamma_M:=\bigg(\begin{matrix}M&M-1\\ M+1&M\end{matrix}\bigg),$$ which is useful for our purposes as it lies in $\mathrm{Aut}^+(Q,{\mathbb Z})$ and satisfies $\gamma_M c_1=c_2$ for the explicit choices made in the proof of Theorem \[mainthm2\]. Finally, for a generic vector $x=(x_1,x_2)$, we let $$x^*:=(-x_1,x_2).$$ Quantum modular forms and the map $F\mapsto F^+$ {#MaassFormsHolomorphization} ------------------------------------------------ In this section, we review Lewis and Zagier’s construction [@LewisZagier2] of period functions for Maass waveforms, and following Zagier [@Za1] indicate how so-called quantum modular forms may be formed using them. We also use this construction in the proof of Theorem \[mainthm1\], as we shall see that the $q$-hypergeometric forms of the associated quantum modular forms are essential for showing cuspidality of the Maass waveforms. 
We begin by recalling the definition of quantum modular forms (see [@Za1] for a general survey). \[cocycle\] For any subset $X\subseteq\mathbb P^1({{\mathbb Q}})$, a function $f\colon X\rightarrow{\mathbb C}$ is a [*quantum modular form*]{} with [*quantum set $X$*]{} of weight $k\in\frac12{\mathbb Z}$ on a congruence subgroup $\Gamma$ if for all $\gamma\in\Gamma$, the cocycle ($|_k$ the usual slash operator) $$r_{\gamma}(x):=f|_{k}(1-\gamma)(x)$$ extends to an open subset of ${\mathbb R}$ and is real-analytic. Zagier intentionally left his definition of quantum modular forms open, requiring only that $r_\gamma$ be “nice”. In general, one recognizes a quantum modular form by a certain feel, which Zagier brilliantly conveyed in his several motivating examples. The first main example Zagier gave, and the one most relevant for us here, is that of quantum modular forms attached to the positive (and negative) coefficients of Maass forms. Although Zagier only worked out this example explicitly in one case, and the work of Lewis and Zagier only studied Maass cusp forms of level one, for our purposes it is important to consider a more general situation. This is described in the following result, which extends observations of Lewis and Zagier, as well as those of Li, Ngo, and Rhoades for special examples of Maass Eisenstein series in [@RobMaass], and where, for a Maass form $F$ on a congruence subgroup $\Gamma$, we set $$\Gamma_F:=\Gamma\cap\big\{\gamma\in\Gamma : F \text{ is cuspidal at } \gamma^{-1}i\infty\big\} .$$ \[MaassQMFThm\] Let $F$ be a Maass waveform on a congruence subgroup $\Gamma$ with eigenvalue $1/4$ under $\Delta$ which is cuspidal at $i\infty$. Then $F^+$ defines a quantum modular form of weight one on a subset $X\subseteq\mathbb P^1({{\mathbb Q}})$ on $\Gamma_F$. Moreover, $F$ is cuspidal exactly at those cusps which lie in the maximal such set $X$. 1. The quantum modular form defined by $F^+$ may formally be given on all of $\mathbb P^1({{\mathbb Q}})$. 
This is done by considering asymptotic expansions of $F^+$ near the cusps, instead of simply values. This consideration leads to Zagier’s notion of a [*strong quantum modular form*]{}. 2. There is also a quantum modular form associated to the negative coefficients of $F$, which is also a part of the object corresponding to $F$ under the Lewis-Zagier correspondence of [@LewisZagier2]. The key idea, already present in [@LewisZagier2], is to realize $F^+$ as an integral transform of $F$ defined in . To describe this, we require the real-analytic function $R_{\tau}$ given by ($z=x+iy$ with $x,y\in{\mathbb R}$) $$R_{\tau}(z):=\frac{y^{\frac12}}{\sqrt{(x-\tau)^2+y^2}} .$$ This function is an eigenfunction of $\Delta$ with eigenvalue $1/4$. For two real-analytic functions $f,g$ defined on $\mathbb H$, we also consider their Green’s form $$[f,g]:=\frac{\partial f}{\partial z}gdz+\frac{\partial g}{\partial \overline{z}}fd\overline{z}.$$ Then Lewis and Zagier showed (see also Proposition 3.5 of [@RobMaass] for a direct statement and a detailed proof) that $$F^+(\tau)=-\frac 2{\pi}\int_{\tau}^{i\infty}\big[F(z),R_{\tau}(z)\big] .$$ This formula, which may also be thought of as an Abel transform, can also be rephrased as in the proposition of Chapter II, Section 2 of [@LewisZagier2] in the following convenient form: $$\label{FPlusAltInt} F^+(\tau)\ = \mathcal C\int_{\tau}^{i\infty}\Bigg( \frac{\partial F(z)}{\partial z}\frac{y^{\frac12}}{(z-\tau)^{\frac12}(\overline z-\tau)^{\frac12}}dz +\frac i4 F(z)\frac{(z-\tau)^{\frac12}}{y^{\frac12}(\overline z-\tau)^{\frac32}}d\overline{z} \Bigg) ,$$ where $\mathcal C$ is a constant. Now for general functions $f,g$ which are eigenfunctions of $\Delta$ with eigenvalue $1/4$, the quantity $[f,g]$ is actually a closed one-form. 
This fact, combined with the modularity transformations of $F$ and the equivariance property $$R_{\gamma \tau}(\gamma z)=(c\tau+d)R_{\tau}(z)$$ for $\gamma=\big(\begin{smallmatrix} a & b \\ c & d\end{smallmatrix}\big)\in\operatorname{SL}_2({\mathbb R})$ directly shows (as in (14) of [@Za1]) that $$\label{MaassQuantumCocycle} F^+(\tau)-(c\tau+d)^{-1}F^+(\gamma \tau)=-\int_{\gamma^{-1}i\infty}^{i\infty}\big[F(z),R_{\tau}(z)\big]$$ for all $\gamma\in\Gamma_F$. This last integral converges since, by assumption, $F$ is cuspidal at $\gamma^{-1}i\infty$. As the integral on the right hand side of the last formula is real-analytic on ${\mathbb R}\setminus\{\gamma^{-1}i\infty\}$, this establishes the first claim, if we note that the values of the quantum modular form, if they converge, are given as the limits towards rational points from above. That is, the value of the quantum modular form at $\alpha\in{{\mathbb Q}}$ equals $$\label{LimitEquationFPlus} F^+(\alpha):=\lim_{t\rightarrow0^+} F^+(\alpha+it).$$ We next establish the second claim, which states that exists precisely for those $\alpha$ for which $F$ is cuspidal. By the existence of a Fourier expansion at all cusps in Lemma \[Maass0Fourier\], and using the exponential decay of $K_0(x)$ as $x\to\infty$, we find that, for $t>0$, $$\label{FourierExpansionCuspAsympExp} F(\alpha+it)\approx\frac{\kappa_1}{|c|\sqrt{t}}-\frac{\kappa_2}{|c|\sqrt{t}}\log\big(c^2t\big) ,$$ where $\gamma=\big(\begin{smallmatrix}a & b\\ c& d\end{smallmatrix}\big)\in\operatorname{SL}_2({\mathbb Z})$ is chosen such that $\gamma \alpha=i\infty$ and we write $f(t) \approx g(t)$ if $f-g$ decays faster than any polynomial in $t$, as $t\rightarrow0^+$. We have also used the fact that $c\alpha+d=0$ to note that the imaginary part of $\gamma(\alpha+it)$ is $t/|c\alpha+cit+d|^2=1/(c^2t)$. Our goal is to show that converges if and only if $\kappa_1=\kappa_2=0$. For this, we also require an estimate on $\frac{\partial F}{\partial z}(\alpha+it)$. 
To compute this, we note that $$\frac{\partial}{\partial z}[F(z)]_{z=\alpha+it} =\frac{\partial}{\partial z}\bigg[F(\gamma z)\bigg]_{z=\alpha+it}=\frac{1}{j(\gamma, \alpha+it)^2}F'\big(\gamma(\alpha+it)\big),$$ where $j(\gamma,z):=(cz+d)$. Now using Lemma \[Maass0Fourier\], we obtain $$F'(z)\approx\frac{\partial}{\partial z}\Big(\kappa_1y^{\frac{1}{2}}+\kappa_2y^{\frac{1}{2}}\log(y)\Big)=\frac{i}{4}\Big((\kappa_1+2\kappa_2)y^{-\frac{1}{2}}+\kappa_2y^{-\frac{1}{2}}\log(y)\Big).$$ Using that $ j(\gamma, \alpha+it)=cit \text{ and } \mathrm{Im}(\gamma(\alpha+it))=1/(c^2t), $ we obtain that $$\frac{\partial F}{\partial z}(\alpha+it)\approx \frac{-i}{4|c|t^{\frac{3}{2}}}\Big(\kappa_1+2\kappa_2-\kappa_2\log\big(c^2t\big)\Big).$$ To determine when $\lim_{t\to 0^+}F^+(\alpha+it)$ exists, we need the following to converge: $$\int_\alpha^{i\infty}\Bigg(\frac{\partial F(z)}{\partial z}\frac{y^{\frac12}}{(z-\alpha)^{\frac12}(\overline{z}-\alpha)^{\frac12}}dz+\frac{i}{4}F(z)\frac{(z-\alpha)^{\frac12}}{y^{\frac12}(\overline{z}-\alpha)^{\frac32}}d\overline{z}\Bigg).$$ Making the change of variables $z=\alpha+it$ (note that we need to conjugate the second term) gives $$i\int_0^\infty\Bigg( \frac{\partial}{\partial z}\big[F(z)\big]_{z=\alpha+it}(-it)^{-\frac12}+\frac{i}{4} \overline{F(\alpha+it)}(it)^{-\frac12}\Bigg).$$ The part of this integral away from the real line, say from $1$ to $\infty$, is always convergent, since we assumed that $F$ is cuspidal at $i\infty$. Towards $0$, the integrand behaves like $$-\frac{i}{4|c|t^{\frac32}}\Big(\kappa_1+2\kappa_2-\kappa_2\log\big(c^2t\big)\Big)(-it)^{-\frac12}+\frac{i}{4}\Bigg(\frac1{|c|t^{\frac12}}\Big(\overline{\kappa}_1-\overline{\kappa}_2\log\big(c^2 t\big)\Big)(it)^{-\frac12}\Bigg).$$ Comparing like powers then gives that the integral only converges for $\kappa_1=\kappa_2=0$, i.e., if $F$ is cuspidal.
Proofs of the main results {#ProofsSection} ========================== Proof of Theorem \[mainthm2\] ----------------------------- For any $M\in{\mathbb N}_{\geq2}$, consider the quadratic form $Q(x,y):=\frac12\big((M+1)x^2-(M-1)y^2\big)$ associated to the symmetric matrix $ A:=\big(\begin{smallmatrix} M+1 & 0 \\ 0 & 1-M \end{smallmatrix} \big) $ and for $\ell\in\{1,2\}$ the vectors $$\begin{aligned} c_\ell:=\frac{1}{\sqrt{M^2-1}}\big((-1)^\ell(M-1), M+1\big)^T.\end{aligned}$$ It is easily checked that $Q(c_\ell)=-1$ and $B(c_1,c_2)=-2M<0$, so that these two vectors lie in the same component $C_Q$. Choose $a=(a_1,a_2)\in{{\mathbb Q}}^2$ and $b=(b_1,b_2)\in{{\mathbb Q}}^2$. Then, for any vector $r=(n,\nu)^{\mathrm T}$, we find that $$B(r,c_1)B(r,c_2)=(M^2-1)(\nu - n)(\nu + n),$$ and thus $$\rho_A(a+r) = \frac12\Big(1+{\operatorname{sgn}}\big((a_1 - a_2 -\nu + n)(a_1 + a_2 + \nu + n)\big)\Big) .$$ Given these choices, we find that the family of indefinite theta functions $S_{a,b;M}$ may be understood in Zwegers’ notation via the relation $$\Phi_{a,b}^+=\operatorname{sgn}(t_2-t_1) e\big((M+1)a_1b_1-(M-1)a_2b_2\big)S_{a,b;M}.$$ Since $\gamma_M$, defined in , can easily be verified to lie in $\operatorname{Aut}^+(Q,{\mathbb Z})$ and $\gamma_Mc_1=c_2$, the theorem then follows from Proposition \[MainThm2ZwegersResult\] if $(\gamma_M a,\gamma_M b)\sim(a,b)$. We next prove the theorem if $(\gamma_M a,\gamma_M b)\sim(a^*,b^*)$ and $(\gamma_M a^*,\gamma_M b^*)\sim(a,b)$. The key step is to show that the involution $(a,b)\mapsto(a^*,b^*)$ fixes $\widehat \Phi_{a,b}^{c,c'}$. For this, we compute a parameterization $c(t)$ of $C_Q$. We find that a suitable choice for $P$ is given by $ P=\frac1{\sqrt2} \Big( \begin{smallmatrix} \sqrt{M+1} & \sqrt{M-1} \\ \sqrt{M+1} & -\sqrt{M-1} \end{smallmatrix} \Big) . $ Then we obtain that $ c(t) = \bigg(\begin{smallmatrix} \sqrt{\frac{2}{M+1}}\sinh(t) \\ \sqrt{\frac{2}{M-1}}\cosh(t) \end{smallmatrix}\bigg) . 
$ In this parameterization, we have $t_\ell=(-1)^{\ell+1}\operatorname{arcsinh}(-\sqrt{(M-1)/2})$ for $\ell\in\{1,2\}$, and $c(-t)=c^*(t)$. Hence, by sending $x\mapsto-x$ and $r\mapsto r^*$ in , we find that $\widehat \Phi_{a^*,b^*}^{c_1,c_2}=\widehat \Phi_{a,b}^{c_1,c_2}$. We then obtain, using Lemma \[Zlem\], that $$\begin{aligned} 2\widehat \Phi_{a,b}^{c_1,c_2} & = \widehat \Phi_{a,b}^{c_1,c_2}+\widehat \Phi_{a^*,b^*}^{c_1,c_2} \\ & = \Phi_{a,b}^{c_1,c_2}+\Phi_{a^*,b^*}^{c_1,c_2}+\varphi_{a,b}^{c_1}-\varphi_{a,b}^{c_2}+\varphi_{a^*,b^*}^{c_1}-\varphi_{a^*,b^*}^{c_2} \\ & = \Phi_{a,b}^{c_1,c_2}+\Phi_{a^*,b^*}^{c_1,c_2}+\varphi_{a^*,b^*}^{c_2}-\varphi_{a,b}^{c_2}+\varphi_{a,b}^{c_2}-\varphi_{a^*,b^*}^{c_2} \\ & = \Phi_{a,b}^{c_1,c_2}+\Phi_{a^*,b^*}^{c_1,c_2} , \end{aligned}$$ which shows that the completion terms in $\Phi_{a,b}^{c_1,c_2}$ cancel out, as desired. Finally, we note that for $M\in{\mathbb N}_{\ge2}$, $Q(x,y)$ cannot vanish at rational values $x,y$ unless $x=y=0$: a nontrivial rational solution of $(M+1)x^2=(M-1)y^2$ would force $M^2-1$ to be a perfect square, which is impossible since $(M-1)^2<M^2-1<M^2$. As we have supposed that $a\in{{\mathbb Q}}^2\setminus\{0\}$, it automatically follows that the quadratic form above cannot vanish on $a+\mathbb{Z}^2$, and hence our choice satisfies the convergence requirement in Theorem \[Zthm\]. Proof of Theorem \[mainthm1\] ----------------------------- We begin by showing the connection of the relevant $q$-series to indefinite theta functions. To do so, we use the Bailey pairs in the following lemma. These pairs have the rare and important feature that the $\beta_n$ are polynomials. \[twopairslemma\] Let $k,\ell \in \mathbb{N}$ with $1 \leq \ell \leq k$.
We have the Bailey pair relative to $1$, $$\begin{aligned} \alpha_n &= -q^{(k+1)n^2-n}\big(1-q^{2n}\big)\sum_{\nu=-n}^{n-1}(-1)^\nu q^{-\frac{1}{2}(2k+1)\nu^2 - \frac{1}{2}(2k-(2\ell-1))\nu} \label{firstalpha} \\ \intertext{and} \beta_n &= H_n(k,\ell;1;q) \cdot \chi_{n \neq 0} \label{firstbeta},\end{aligned}$$ and the Bailey pair relative to $q$, $$\begin{aligned} \alpha_n &= \frac{1-q^{2n+1}}{1-q}q^{(k+1)n^2+kn}\sum_{\nu=-n}^{n}(-1)^\nu q^{-\frac{1}{2}(2k+1)\nu^2 - \frac{1}{2}(2k - (2\ell-1))\nu} \label{secondalpha}\\ \intertext{and} \beta_n &= H_n(k,\ell;0;q). \label{secondbeta}\end{aligned}$$ The Bailey pair relative to $1$ was established in [@Hi-Lo1 Section 5]. The proof of the Bailey pair relative to $q$ follows by using a similar argument. We begin by replacing $K$ by $k$ and $\ell$ by $k-\ell$ in part (i) of Theorem 1.1 of [@Lo1]. This gives that $(\alpha_n,\beta_n)$ is a Bailey pair relative to $q$, where $\alpha_n$ is given in and $\beta_n$ is the $z=1$ instance of $$\label{zcase} \beta_n(z) = \sum_{n \geq m_{2k-1} \geq \ldots \geq m_1 \geq 0} \frac{q^{\sum_{\nu=1}^{k-1} (m_{k+\nu}^2+m_{k+\nu}) + \binom{m_k+1}{2} - \sum_{\nu=1}^{k-1} m_\nu m_{\nu+1} - \sum_{\nu=1}^{k-\ell} m_\nu}(-z)^{m_k}}{(q)_{m_{2k}-m_{2k-1}}(q)_{m_{2k-1}-m_{2k-2}}\cdot\ldots\cdot (q)_{m_2-m_1}(q)_{m_1}},$$ where $m_{2k} := n$. To transform the above into , we argue as in Sections 3 and 5 of [@Hi-Lo1]. We replace $m_1,\dots,m_{2k-1}$ by the new summation variables $n_1,\dots,n_{k-1}$ and $u_1,\dots,u_k$ as follows: $$\label{replacement} m_{\nu} \mapsto \begin{cases} u_{k-\nu+1} + \cdots + u_k & \text{for $1 \leq \nu \leq k$}, \\ n_{\nu-k} + u_{\nu-k+1} + \cdots + u_k & \text{for $k+1 \leq \nu \leq 2k-1$}. \end{cases}$$ With $m_0 = n_0 = 0$ and $n_{k} = n$, the inequalities $m_{i+1} - m_i \geq 0$ in for $0 \leq i \leq k-1$ give $u_i \geq 0$ and the inequalities $m_{k+i+1} - m_{k+i} \geq 0$ for $0 \leq i \leq k-1$ then give $0 \leq u_i \leq n_{i+1} - n_i$. 
Thus after a calculation to determine the image of the summand of under the transformations in , we find that $$\beta_n(z) = \sum_{n \geq n_{k-1} \geq \cdots \geq n_1 \geq 0} \prod_{\nu = 1}^k \sum_{u_{\nu} = 0}^{n_{\nu} - n_{\nu-1}} \frac{\big(-zq^{\min\{\nu,\ell\} + 2\sum_{\mu = 1}^{\nu-1}n_{\mu}}\big)^{u_{\nu}}q^{\binom{u_{\nu}}{2} + 2\binom{n_{\nu-1} + 1}{2}}}{(q)_{n_{\nu} - n_{\nu-1}}} \begin{bmatrix} n_{\nu} - n_{\nu-1} \\ u_{\nu} \end{bmatrix}.$$ By the $q$-binomial theorem $$\label{qbin} \sum_{u=0}^n (-z)^uq^{\binom{u}{2}}\begin{bmatrix} n \\ u \end{bmatrix} = (z)_n,$$ each of the sums over $u_{\nu}$ may be carried out, giving $$\beta_n(z) = \sum_{n \geq n_{k-1} \geq \cdots \geq n_1 \geq 0} \prod_{\nu = 1}^k q^{2\binom{n_{\nu-1} + 1}{2}} \frac{\big(zq^{\min\{\nu,\ell\} + 2\sum_{\mu = 1}^{\nu-1} n_{\mu}}\big)_{n_{\nu} - n_{\nu-1}}}{(q)_{n_{\nu} - n_{\nu-1}}}.$$ Using the fact that $$\begin{bmatrix} n \\ k \end{bmatrix}_q = \frac{(q^{k+1})_{n-k}}{(q)_{n-k}},$$ we then have $$\begin{aligned} \beta_n(1) &= \sum_{n \geq n_{k-1} \geq \cdots \geq n_1 \geq 0} \prod_{\nu = 1}^k q^{2\binom{n_{\nu-1} + 1}{2}} \begin{bmatrix} \min\{\nu,\ell\} - 1 + n_{\nu} - n_{\nu-1} + 2\sum_{\mu=1}^{\nu-1} n_{\mu} \\ n_{\nu} - n_{\nu-1} \end{bmatrix} \\ &=\sum_{n \geq n_{k-1} \geq \cdots \geq n_1 \geq 0} \prod_{\nu = 1}^{k-1} q^{2\binom{n_{\nu} + 1}{2}} \begin{bmatrix} \min\{\nu,\ell - 1\} + n_{\nu+1} - n_{\nu} + 2\sum_{\mu=1}^{\nu} n_{\mu} \\ n_{\nu+1} - n_{\nu} \end{bmatrix},\end{aligned}$$ in agreement with the $H_n(k,\ell;0;q)$, defined in . The referee has observed that Lemma \[twopairslemma\] could also be proved by using together with ideas from [@Wa2]. With these Bailey pairs we prove the following key proposition. 
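Before turning to that proposition, we note that the two $q$-binomial facts used in the proof above (the $q$-binomial theorem and the quotient evaluation of the Gaussian polynomial) can be machine-checked for small $n$; here is a short SymPy sketch (helper names are ours):

```python
import sympy as sp

q, z = sp.symbols("q z")

def qpoch(a, n):
    """(a; q)_n = (1 - a)(1 - a q) ... (1 - a q^(n-1))."""
    out = sp.Integer(1)
    for j in range(n):
        out *= 1 - a * q**j
    return out

def gauss(n, k):
    """Gaussian polynomial [n choose k]_q (zero unless 0 <= k <= n)."""
    if not 0 <= k <= n:
        return sp.Integer(0)
    return sp.cancel(qpoch(q, n) / (qpoch(q, n - k) * qpoch(q, k)))

for n in range(7):
    # q-binomial theorem: sum_u (-z)^u q^binom(u, 2) [n choose u]_q = (z; q)_n
    lhs = sum((-z) ** u * q ** (u * (u - 1) // 2) * gauss(n, u) for u in range(n + 1))
    assert sp.expand(lhs - qpoch(z, n)) == 0
    for k in range(n + 1):
        # [n choose k]_q = (q^(k+1); q)_(n-k) / (q; q)_(n-k)
        assert sp.cancel(gauss(n, k) - qpoch(q ** (k + 1), n - k) / qpoch(q, n - k)) == 0
```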
\[IndefThetaFj\] We have $$\begin{aligned} F_1(k,\ell;q) & = \sum_{n \geq 0} \sum_{|\nu | \leq n} (-1)^{n+\nu }q^{(k+1)n^2+kn+\binom{n+1}{2} - \frac12\big((2k+1)\nu ^2 + (2k - (2\ell-1))\nu \big)}\big(1-q^{2n+1}\big), \label{F1identity} \\ F_2(k,\ell;q) & = \frac{1}{2}\sum_{n \geq 0} \sum_{|\nu | \leq n} (-1)^{n+\nu }q^{(k+1)n^2+kn - \frac12\big((2k+1)\nu ^2 + (2k - (2\ell-1))\nu \big)}\big(1-q^{2n+1}\big), \notag \\ F_3(k,\ell;q) & = -\sum_{n \geq 1} \sum_{\nu = -n}^{n-1} (-1)^{n+\nu }q^{(k+1)n^2+ \binom{n}{2} - \frac12\big((2k+1)\nu ^2 + (2k - (2\ell-1)) \nu \big)}\big(1+q^n\big), \notag \\ F_4(k,\ell;q) & = -2\sum_{n \geq 1} \sum_{\nu = -n}^{n-1} (-1)^{n+\nu }q^{(k+1)n^2 - \frac12\big((2k+1)\nu ^2 + (2k - (2\ell-1)) \nu \big)}. \notag\end{aligned}$$ The first two identities follow upon using the Bailey pair in and in equations and , while the second two use and in equations and . We are now ready to prove our main result. We first apply Theorem \[mainthm2\] to the indefinite theta function representations of the $F_\nu $, given in Proposition \[IndefThetaFj\]. We begin with $F_1$. Using the term $(1-q^{2n+1})$ to split the right-hand side into two sums and then replacing $n$ by $-n-1$ in the second sum, we obtain $$F_1(k,\ell;q) = \Bigg(\sum_{n\pm \nu \geq 0} + \sum_{n\pm\nu < 0 }\Bigg) (-1)^{n+\nu }q^{(k+1)n^2+kn+\binom{n+1}{2} - \frac12\big((2k+1)\nu ^2 + (2k - (2\ell-1))\nu \big)}.$$ By completing the square, we directly compute that $$q^{\frac{(2k+1)^2}{8(2k+3)} - \frac{(2k-2\ell+1)^2}{8(2k+1)}}F_1(k,\ell;q) = \Bigg(\sum_{n\pm\nu \geq 0 } + \sum_{n\pm\nu < 0 }\Bigg) (-1)^{n+\nu } q^{\frac{1}{2}(2k+3)\big(n+\frac{2k+1}{2(2k+3)}\big)^2 - \frac{1}{2}(2k+1)\big(\nu +\frac{2k-2\ell+1}{2(2k+1)}\big)^2}.$$ We claim that the right-hand side is equal to $S_{a,b;M}$ defined in with $ M=2k + 2, a=(\frac{2k+1}{2(2k+3)},\frac{2k-2\ell+1}{2(2k+1)})^{\mathrm T}$, and $b=(\frac1{2(2k+3)},\frac1{2(2k+1)})^{\mathrm T}.$ The summand is directly seen to match that of . 
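The completing-the-square step for $F_1$ can likewise be verified symbolically. The following sketch checks that the exponents of $q$ on both sides of the last display agree, writing $\binom{n+1}{2}=n(n+1)/2$:

```python
import sympy as sp

n, nu, k, ell = sp.symbols("n nu k ell")

# exponent of q before completing the square
before = (k + 1) * n**2 + k * n + n * (n + 1) / 2 \
    - sp.Rational(1, 2) * ((2 * k + 1) * nu**2 + (2 * k - (2 * ell - 1)) * nu)
# exponent of q contributed by the prefactor on the left-hand side
shift = (2 * k + 1) ** 2 / (8 * (2 * k + 3)) \
    - (2 * k - 2 * ell + 1) ** 2 / (8 * (2 * k + 1))

# exponent of q after completing the square
after = sp.Rational(1, 2) * (2 * k + 3) * (n + (2 * k + 1) / (2 * (2 * k + 3))) ** 2 \
    - sp.Rational(1, 2) * (2 * k + 1) * (nu + (2 * k - 2 * ell + 1) / (2 * (2 * k + 1))) ** 2

assert sp.simplify(shift + before - after) == 0
```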
To show that the summation bounds are correct, we use the restrictions on $k$ and $\ell$ to verify the inequalities $0<a_1\pm a_2<1$. For example, to see this for $a_1-a_2$, we note that $$a_1-a_2=\frac{(2k+3)\ell-2k-1}{(2k+3)(2k+1)}$$ is positive exactly if $\ell>(2k+1)/(2k+3)$. As $0<(2k+1)/(2k+3)<1$ and $\ell\geq1$, this inequality automatically holds. To check the upper bound, note that $a_1-a_2<1$ exactly if $\ell<\frac{2(k+2)(2k+1)}{2k+3}$. This last expression is always bigger than $k$, and $\ell$ is, by assumption, bounded by $k$, so this inequality holds. The inequalities on $a_1+a_2$ may be checked in a similar manner. We then show that $$\gamma_M a+(\ell-2k-1)\begin{pmatrix}1\\ 1\end{pmatrix}=a^*,\quad \gamma_M a^*+\ell\begin{pmatrix}1\\ 1\end{pmatrix}=a, \quad \gamma_M b-\begin{pmatrix}1\\ 1\end{pmatrix}=b^*,\quad\text{and}\quad \gamma_M b^*=b$$ and also that $B(a,(-1,-1)^{\mathrm T})=-\ell\in{\mathbb Z}$. Theorem \[mainthm2\] then yields the first claim in Theorem \[mainthm1\] for $F_1$, namely that it is the generating function for the positive coefficients of a Maass waveform. We return to the question of cuspidality of this Maass form below, after indicating the related calculations which must be performed on the other $F_j$. In the case of $F_2$, we find in the same manner that $F_2(k,\ell;q^2)$ is equal (up to a rational power of $q$) to $\frac12S_{a,b;M}$, where $M=4k+3, a=(\frac{k}{2(k+1)},\frac{2k-2\ell+1}{2(2k+1)})^{\mathrm T},\text{ and }b=(\frac1{8(k+1)},\frac1{4(2k+1)})^{\mathrm T}.$ As above, we check that $$\gamma_M a+(2\ell-4k-1)\begin{pmatrix}1\\ 1\end{pmatrix}=a^*, \quad\gamma_M a^*+(2\ell-1)\begin{pmatrix}1\\ 1\end{pmatrix}=a, \quad\gamma_M b-\begin{pmatrix}1\\ 1\end{pmatrix}=b^*, \quad\text{and}\quad\gamma_M b^*=b.$$ Here, we also have $0<a_1\pm a_2<1$, and we compute $B(a,(-1,-1)^{\mathrm T})=-2\ell+1\in{\mathbb Z}$, which establishes the theorem for $F_2$.
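The simplification of $a_1-a_2$ and the bounds $0<a_1\pm a_2<1$ for the $F_1$ parameters are also easily machine-checked; a sketch using exact rational arithmetic:

```python
import sympy as sp
from fractions import Fraction

k, ell = sp.symbols("k ell")
a1 = (2 * k + 1) / (2 * (2 * k + 3))
a2 = (2 * k - 2 * ell + 1) / (2 * (2 * k + 1))

# the claimed closed form of a_1 - a_2
target = ((2 * k + 3) * ell - 2 * k - 1) / ((2 * k + 3) * (2 * k + 1))
assert sp.simplify(a1 - a2 - target) == 0

# the bounds 0 < a_1 +- a_2 < 1 for 1 <= ell <= k, checked exactly for small k
for kk in range(1, 40):
    for ll in range(1, kk + 1):
        x1 = Fraction(2 * kk + 1, 2 * (2 * kk + 3))
        x2 = Fraction(2 * kk - 2 * ll + 1, 2 * (2 * kk + 1))
        assert 0 < x1 - x2 < 1 and 0 < x1 + x2 < 1
```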
For $F_3$, we use the specializations $M=2k+2, a=(-\frac1{2(2k+3)},\frac{2k-2\ell+1}{2(2k+1)})^\mathrm{T}, b=(\frac1{2(2k+3)}, \frac1{2(2k+1)})^\mathrm{T},$ and find that $0<a_1+a_2<1$ and $-1<a_1-a_2<0$, as well as $$\gamma_M a+(\ell-k)\begin{pmatrix} 1\\ 1\end{pmatrix}=a^*, \quad \gamma_M a^*+(\ell-k-1)\begin{pmatrix} 1\\ 1\end{pmatrix}=a, \quad\gamma_M b-\begin{pmatrix}1\\ 1\end{pmatrix}=b^*, \quad\gamma_M b^*=b,$$ and $B(a,(-1,-1)^{\mathrm T})=k-\ell+1$. Finally, for $F_4$, we have $M=4k+3, a=(0, \frac{2k-2\ell+1}{2(2k+1)})^\mathrm{T}, b=(\frac1{8(k+1)}, \frac1{4(2k+1)})^\mathrm{T},$ and calculate that $0<a_1+a_2<1$ and $-1<a_1-a_2<0$, while $$\gamma_M a+(2\ell-2k-1)\begin{pmatrix} 1\\ 1\end{pmatrix}=a, \quad\gamma_M b^*=b, \quad\gamma_M b-\begin{pmatrix}1\\ 1\end{pmatrix}=b^* ,$$ and $B(a,(-1,-1)^{\mathrm T})=2k-2\ell+1$. Thus, we have shown that the $q$-series in Theorem \[mainthm1\] are indeed the positive parts of Maass forms. By the construction of Zwegers’ Maass forms via the Fourier expansions in , we see that the Maass forms here are all cuspidal at $i\infty$ whenever they converge. Theorems \[MaassQMFThm\] and \[mainthm3\] then imply that in fact each of the Maass forms in Theorem \[mainthm1\] is indeed a cusp form. Proof of Theorem \[mainthm3\] ------------------------------ Theorem \[mainthm3\] follows directly from Theorem \[mainthm1\] and Theorem \[MaassQMFThm\], together with the observation that the $q$-series in converge (as they are finite sums) at all roots of unity, which implies that their radial limits exist and equal these values by Abel’s theorem. Although Theorem \[MaassQMFThm\] is only stated for Maass forms with trivial multiplier for simplicity, a review of the proof shows that the method applies equally well to our Maass waveforms with multipliers. Further questions and outlook {#QuestionsSection} ============================= There are several outstanding questions which naturally arise from the main results considered here.
In what follows, we outline five interesting directions for future investigation. [**1).**]{} As the example of $\sigma,\sigma^*$ indicates, it is worthwhile to look at the negative coefficients of the related Maass form. Thus, it is natural to ask: are there nice hypergeometric representations for the $q$-series formed by the negative coefficients of the Maass forms $G_{j ,k,\ell}$ in Theorem \[mainthm2\]? For example, one such series has the shape $$\sum_{n, \nu \in {\mathbb Z}\atop |(M+1)n+M-1|<2|(M-1)\nu+M-1-2 \ell|}(-1)^{n+\nu}q^{-\frac{1}{8(M+1)(M-1)}\big((M-1)(2(M+1)n+M-1)^2-(M+1)(2(M-1)\nu+M-1-2\ell)^2\big)}.$$ If so, do they have relations to the $q$-hypergeometric series defining the $F_j$-functions, as $\sigma$ and $\sigma^*$ satisfy? Such a connection could help explain relationships between passing from positive to negative coefficients of Maass waveforms and letting $q\mapsto q^{-1}$ in $q$-hypergeometric series. Examples of such relationships were observed by Li, Ngo, and Rhoades [@RobMaass], and further commented on in [@KRW]. However, the authors were unable to identify suitable Bailey pairs to make this idea work in our case. [**2).**]{} As in Theorem \[Zthm\], we may also think of the Maass forms corresponding to the $F_j$-functions as components of vector-valued Maass waveforms (as discussed in detail for the $\sigma,\sigma^*$ case in [@ZwegersMockMaass]). Is it possible to find nice $q$-hypergeometric interpretations for the corresponding positive (or negative) coefficients of the other components of such vectors as well? That is, are the $q$-series associated to the expansions of the Maass waveforms at other cusps than $i\infty$ interesting from a $q$-series or combinatorial point of view? 
[**3).**]{} Define the $q$-series $$\notag \mathcal{U}_k^{(\ell)}(x;q) := \sum_{n \geq 0}q^{n} (-x)_{n}\bigg(\frac{-q}{x}\bigg)_{n}H_{n}(k,\ell;0;q).$$ These are analogous to the series $U_k^{(\ell)}(x;q)$, defined by Hikami and the second author [@Hi-Lo1] by $$\notag U_k^{(\ell)}(x;q) := q^{-k}\sum_{n \geq 1} q^{n}(-xq)_{n-1}\bigg(\frac{-q}{x}\bigg)_{n-1} H_{n}(k,\ell;1;q).$$ At roots of unity the functions $U_k^{(\ell)}(-1;q)$ are (vector-valued) quantum modular forms which are “dual" to the generalized Kontsevich-Zagier functions $$\notag F_k^{(\ell)}(q) := q^k \sum_{n_1, \dots, n_k\geq 0} (q)_{n_k} \, q^{n_1^{2} + \cdots + n_{k-1}^{2} + n_{\ell} + \cdots + n_{k-1}} \, \prod_{j=1}^{k-1} \begin{bmatrix} n_{j+1} + \delta_{j,\ell-1} \\ n_j \end{bmatrix},$$ in the sense that $$\notag F_k^{(\ell)}(\zeta_N) = U_k^{(\ell)}\big(-1;\zeta_N^{-1}\big),$$ where $\zeta_N:=e^{2\pi i /N}$. Are the $\mathcal{U}_k^{(\ell)}(-1;q)$ also quantum modular forms like the $U_k^{(\ell)}(-1;q)$? Are they related at roots of unity to some sort of Kontsevich-Zagier type series? [**4).**]{} Using Bailey pair methods one can show that $$\begin{aligned} \mathcal{U}_k^{(\ell)}(-x;q) &= \frac{(x)_{\infty} \big(\frac{q}{x}\big)_\infty}{ (q)_\infty ^2} \notag \\ &\times {\bBigg@{4}}( \sum_{\substack{r,s,t \geq 0 \\ r \equiv s \pmod{2}}} + \sum_{\substack{r,s,t < 0 \\ r \equiv s \pmod{2}}} {\bBigg@{4}}) (-1)^{\frac{r-s}{2}}x^t q^{\frac{r^2}{8}+ \frac{4k+3}{4} r s + \frac{s^2}{8}+\frac{4k+3-2\ell}{4} r + \frac{1+2\ell}{4} s + t\frac{r+s}{2}} . 
\nonumber \end{aligned}$$ This is analogous to the following identity from [@Hi-Lo1]: $$\begin{aligned} U_k^{(\ell)}(-x;q) &= -q^{-\frac{k}{2}-\frac{\ell}{2}+\frac{3}{8}} \frac{(x q)_{\infty} \big(\frac{q}{x}\big)_\infty}{ (q)_\infty ^2} \notag \\ &\times {\bBigg@{4}}( \sum_{\substack{r,s,t \geq 0 \\ r \not \equiv s \pmod{2}}} + \sum_{\substack{r,s,t < 0 \\ r \not \equiv s \pmod{2}}} {\bBigg@{4}}) (-1)^{\frac{r-s-1}{2}}x^t q^{\frac{r^2}{8}+ \frac{4k+3}{4} r s + \frac{s^2}{8}+\frac{1+\ell+k}{2} r + \frac{1-\ell+k}{2} s + t\frac{r+s+1}{2}} . \nonumber \end{aligned}$$ What sort of modular behavior is implied by these expansions? [**5).**]{} As per the discussion in [@RobMaass], there is hope that the Maass forms in Theorem \[mainthm1\] are related to Hecke characters or multiplicative $q$-series. In fact, such connections have been found for all related $q$-hypergeometric examples in the literature, although finding a general formulation seems intractable at the moment since, as the discriminants of the quadratic fields grow, explicitly identifying such characters becomes computationally difficult. Acknowledgements {#acknowledgements .unnumbered} ================ We are grateful to the referee for many helpful comments, especially the observation that the polynomials $H_n(k,\ell;b;q)$ can be related to the Andrews-Gordon identities and for simplifying the proof of Lemma \[twopairslemma\]. [99]{} G. Andrews, *Multiple series Rogers-Ramanujan identities*, Pacific J. Math. [**114**]{} (1984), 267–283. G. Andrews, [*Ramanujan’s “Lost” Notebook V: Euler’s Partition Identity*]{}, Adv. Math. [**61**]{} (1986), 156–164. G. Andrews, *$q$-Series: Their Development and Application in Analysis, Number Theory, Combinatorics, Physics, and Computer Algebra*, volume 66 of Regional Conference Series in Mathematics. American Mathematical Society, Providence, RI, 1986. G. Andrews, F. Dyson, and D. Hickerson, *Partitions and indefinite quadratic forms*, Invent. Math. [**91**]{} (1988), no. 3, 391–407. D.
Bump, *Automorphic forms and representations*, Cambridge Studies in Advanced Mathematics, [**55**]{}. Cambridge University Press, Cambridge, 1997. H. Cohen, *$q$-identities for Maass waveforms*, Invent. Math. [**91**]{} (1988), no. 3, 409–422. K. Hikami and J. Lovejoy, *Torus knots and quantum modular forms*, Res. Math. Sci. [**2**]{}:2, (2015). H. Iwaniec, *Spectral methods of automorphic forms*, second edition, Graduate Studies in Mathematics, 53. American Mathematical Society, Providence, RI, 2002. M. Krauel, L. Rolen, and M. Woodbury, [*On a relation between certain $q$-hypergeometric series and Maass waveforms*]{}, submitted. J. Lewis and D. Zagier, [*Period functions and the Selberg zeta function for the modular group*]{}, “The Mathematical Beauty of Physics, A Memorial Volume for Claude Itzykson” (J.M. Drouffe and J.B. Zuber, eds.), Adv. Series in Mathematical Physics [**24**]{}, World Scientific, Singapore, 83–97 (1997). J. Lewis and D. Zagier, [*Period functions for Maass wave forms. I*]{}, Ann. Math. [**153**]{}, 191–258 (2001). Y. Li, H. Ngo, and R. Rhoades, [*Renormalization and quantum modular forms, part I*]{}, submitted. J. Lovejoy, *Bailey pairs and indefinite quadratic forms*, J. Math. Anal. Appl. [**410**]{} (2014), 1002–1013. S. D. Miller and W. Schmid, *Automorphic distributions, $L$-functions, and Voronoi summation for $\operatorname{GL}(3)$*, Ann. Math. [**164**]{} (2006), 423–488. S.O. Warnaar, *The Andrews-Gordon identities and $q$-multinomial coefficients*, Comm. Math. Phys. [**184**]{} (1997), 203–232. S.O. Warnaar, *50 years of Bailey’s lemma*, Algebraic combinatorics and applications (G[ö]{}[ß]{}weinstein, 1999), 333–347, Springer, Berlin, 2001. S.O. Warnaar, *Partial-sum analogues of the Rogers-Ramanujan identities*, J. Combin. Theory Ser. A [**99**]{} (2002), 143–161. D. Zagier, *Quantum modular forms*, in: Quanta of maths, 659–675, Clay Math. Proc. [**11**]{}, Amer. Math. Soc., Providence, RI, 2010. S.
Zwegers, Mock Maass theta functions, Q. J. Math. **63** (2012), 753–770. [^1]: The research of the first author is supported by the Alfried Krupp Prize for Young University Teachers of the Krupp foundation and the research leading to these results receives funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant agreement n. 335220 - AQSER. The third author thanks the University of Cologne and the DFG for their generous support via the University of Cologne postdoc grant DFG Grant D-72133-G-403-151001011, funded under the Institutional Strategy of the University of Cologne within the German Excellence Initiative. [^2]: Note that $F_2(k,\ell;q)$ has a convergence issue, which we overcome by averaging over the even and odd partial sums with respect to $n$.
--- abstract: | We consider a procedure to reduce simply generated trees by iteratively removing all leaves. In the context of this reduction, we study the number of vertices that are deleted after applying this procedure a fixed number of times by using an additive tree parameter model combined with a recursive characterization. Our results include asymptotic formulas for mean and variance of this quantity as well as a central limit theorem. address: - 'Institut für Mathematik, Alpen-Adria-Universität Klagenfurt, Universitätsstraße 65–67, 9020 Klagenfurt, Austria' - 'Department of Mathematical Sciences, Stellenbosch University, 7602 Stellenbosch, South Africa' author: - Benjamin Hackl - Clemens Heuberger - Stephan Wagner bibliography: - 'bib/cheub.bib' title: Reducing Simply Generated Trees by Iterative Leaf Cutting --- Introduction {#sec:introduction} ============ Trees are one of the most fundamental combinatorial structures with a plethora of applications not only in mathematics, but also in, e.g., computer science or biology. A matter of recent interest in the study of trees is the question of how a given tree family behaves when a fixed number of iterations of a given deterministic reduction procedure is applied to it. See [@Hackl-Heuberger-Kropf-Prodinger:ta:treereductions; @Hackl-Prodinger:ta:catalan-stanley] for the study of different reduction procedures on (classes of) plane trees, and [@Hackl-Heuberger-Prodinger:2018:register-reduction] for a reduction procedure acting on binary trees related to the register function. In the scope of this extended abstract we focus on the, in a sense, most natural reduction procedure: we reduce a given rooted tree by cutting off all leaves so that only internal nodes remain. This process is illustrated in Figure \[fig:leaf-reduction\]. While in this extended abstract we are mainly interested in the family of simply generated trees, further families of rooted trees will be investigated in the full version.
[Figure \[fig:leaf-reduction\]: a rooted tree together with the trees obtained from it by three successive applications of the reduction; in each step, all leaves (drawn dashed) are removed, until only the root remains.]

It is easy to see that the number of steps it takes to reduce the tree so that only the root remains is precisely the height of the tree, i.e., the greatest distance from the root to a leaf. A more delicate question—the one in the center of this article—is to ask for a precise analysis of the number of vertices deleted when applying the “cutting leaves” reduction a fixed number of times. The key concepts behind our analysis are a recursive characterization and bivariate generating functions. Details on our model are given in Section \[sec:tree-parameter\]. The asymptotic analysis is then carried out in Section \[sec:simply-generated\], with our main result given in Theorem \[thm:simply-generated\]. It includes precise asymptotic formulas for the mean and variance of the number of removed vertices when applying the reduction a fixed number of times.
Furthermore, we also prove a central limit theorem. Finally, in Section \[sec:outlook\] we give an outlook on the analysis of the “cutting leaves” reduction in the context of other classes of rooted trees. Qualitative results for these classes are given in Theorem \[thm:outlook:qualitative\]. The corresponding details will be published in the full version of this extended abstract. The computational aspects in this extended abstract were carried out using the module for manipulating asymptotic expansions [@Hackl-Heuberger-Krenn:2016:asy-sagemath] in the free open-source mathematics software system SageMath [@SageMath:2018:8.2]. A notebook containing our calculations can be found at <https://benjamin-hackl.at/publications/iterative-leaf-cutting/>. Preliminaries {#sec:tree-parameter} ============= So-called *additive tree parameters* play an integral part in our analysis of the number of removed nodes. A *fringe subtree* of a rooted tree is a subtree that consists of a vertex and all its descendants. An *additive tree parameter* is a functional $F$ satisfying a recursion of the form $$\label{eq:additive-tree-parameter} F({T}) = \sum_{j=1}^{k} F({T}^{j}) + f({T}),$$ where ${T}$ is some rooted tree, ${T}^{1}$, ${T}^{2}$, …, ${T}^{k}$ are the branches of the root of ${T}$, i.e., the fringe subtrees rooted at the children of the root of ${T}$, and $f$ is a so-called *toll function*. There are several recent articles on properties of additive tree parameters, see for example [@Wagner:2015:centr-limit], [@Janson:2016:normality-add-func], and [@Ralaivaosaona-Wagner:2018:d-ary-increasing]. It is easy to see that such an additive tree parameter can be computed by summing the toll function over all fringe subtrees, i.e., if ${T}^{(v)}$ denotes the fringe subtree rooted at the vertex $v$ of ${T}$, then we have $$F({T}) = \sum_{v\in {T}} f({T}^{(v)}).$$ In particular, the parameter is fully determined by specifying the toll function $f$. 
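The two descriptions of an additive tree parameter (the root recursion and the sum of the toll function over all fringe subtrees) are easy to cross-check by brute force. In the sketch below (plain Python; the nested-tuple tree encoding and the sample toll function, which simply counts leaves, are our own choices), both computations are compared on all plane trees with at most $6$ vertices:

```python
def plane_trees(n):
    """Yield all plane trees with n vertices; a tree is the tuple of its root's branches."""
    if n == 1:
        yield ()
        return
    for m in range(1, n):
        for first in plane_trees(m):         # first branch of the root
            for rest in plane_trees(n - m):  # remaining branches, read off a smaller tree
                yield (first,) + rest

def toll(T):
    """Sample toll function: 1 on a single vertex, 0 otherwise (so F counts the leaves)."""
    return 1 if T == () else 0

def F_recursive(T):
    """F(T) = sum of F over the branches of the root, plus f(T)."""
    return sum(F_recursive(b) for b in T) + toll(T)

def fringe_subtrees(T):
    """One fringe subtree for each vertex of T."""
    yield T
    for b in T:
        yield from fringe_subtrees(b)

def F_via_fringe(T):
    """F(T) as the toll function summed over all fringe subtrees."""
    return sum(toll(S) for S in fringe_subtrees(T))

trees = [T for n in range(1, 7) for T in plane_trees(n)]
assert len(trees) == 1 + 1 + 2 + 5 + 14 + 42  # Catalan numbers
assert all(F_recursive(T) == F_via_fringe(T) for T in trees)
```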
Tree parameters play an important role in our analysis because our quantity of interest—the number of removed vertices when applying the “cutting leaves” reduction $r$ times—can be seen as such a parameter. Let $a_r({T})$ denote this parameter for a given rooted tree ${T}$. \[prop:toll-function\] The toll function belonging to $a_r({T})$ is given by $$\label{eq:toll-function} f_r({T}) = \begin{cases} 1 & \text{ if the height of } {T}\text{ is less than } r,\\ 0 & \text { else.} \end{cases}$$ In other words, if $\mathcal{T}_r$ denotes the family of rooted trees of height less than $r$, the toll function can be written in Iverson notation[^1] as $f_r({T}) = \iverson{{T}\in \mathcal{T}_r}$. It is easy to see that the number of removed vertices satisfies this additive property—the number of deleted nodes in some tree ${T}$ is precisely the sum of all deleted nodes in the branches of ${T}$ in case the root is not deleted. Otherwise, the sum has to be increased by one to account for the root node. Thus, the toll function determines whether or not the root node of ${T}$ is deleted. The fact that the root node is deleted if and only if the number of reductions $r$ is greater than the height of the tree is already illustrated in Figure \[fig:leaf-reduction\]. Basically, our strategy to analyze the quantity $a_r({T})$ for simply generated families of trees uses the recursive structure of  together with the structure of the family itself to derive a functional equation for a suitable bivariate generating function $A_r(x,u)$. In this context, the trees ${T}$ in the family $\mathcal{T}$ are enumerated with respect to their size (corresponding to the variable $x$) and the value of the parameter $a_r({T})$ (corresponding to the variable $u$). Throughout the remainder of this extended abstract, $\mathcal{T}$ denotes the family of trees under investigation, and for all $r\in{\mathbb{Z}}_{\geq 1}$, $\mathcal{T}_r\subset\mathcal{T}$ denotes the class of trees of height less than $r$. 
The corresponding generating functions are denoted by $F(x)$ and $F_r(x)$. Furthermore, from now on, ${T}_n$ denotes a random[^2] tree of size $n$ (i.e., a tree that consists of $n$ vertices) from $\mathcal{T}$. This means that formally, the quantity we are interested in analyzing is the random variable $a_r({T}_n)$ for large $n$. Reducing Simply Generated Trees {#sec:simply-generated} =============================== Recursive Characterization {#sec:simply-generated:characterization} -------------------------- Let us begin by recalling the definition of simply generated trees. A simply generated family of trees $\mathcal{T}$ can be defined by imposing a weight function on plane trees. For a sequence of nonnegative weights $(w_k)_{k\geq 0}$ (we will make the customary assumption that $w_0 = 1$ without loss of generality; cf. [@Janson:2012:simply-generated-survey Section 4]), one defines the weight of a rooted ordered tree ${T}$ as the product $$w({T}) \coloneqq \prod_{j \geq 0} w_j^{N_j({T})},$$ where $N_j({T})$ is the number of vertices in ${T}$ with precisely $j$ children. The weight generating function $$\label{eq:f_def} F(x) = \sum_{{T}\in\mathcal{T}} w({T}) x^{|{T}|},$$ where $\abs{{T}}$ denotes the size of ${T}$ and where the sum is over all plane trees, is easily seen to satisfy a functional equation. By setting $\Phi(t) = \sum_{j \geq 0} w_j t^j$ and applying the symbolic method (see [@Flajolet-Sedgewick:ta:analy Chapter I]) to decompose a simply generated tree as the root node with some simply generated trees attached, we have $$\label{eq:sg_trees} F(x) = x \Phi(F(x)).$$ We define a probability measure on the set of all rooted ordered trees with $n$ vertices by assigning a probability proportional to $w({T})$ to every tree ${T}$. 
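For a concrete instance, take plane trees, where all weights are $w_j=1$ and hence $\Phi(t)=1/(1-t)$. Iterating $F\mapsto x\Phi(F)$ on truncated power series, starting from $0$, converges coefficientwise to $F(x)$, whose coefficients are then the Catalan numbers. A sketch (the truncation order and the list-based series encoding are our choices):

```python
N = 10  # work with power series modulo x^(N+1)

def mul(f, g):
    """Truncated product of coefficient lists."""
    h = [0] * (N + 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j <= N:
                h[i + j] += fi * gj
    return h

def Phi(f):
    """Phi(f) = 1/(1 - f) = sum_k f^k, for a series f with f(0) = 0."""
    out = [1] + [0] * N
    power = [1] + [0] * N
    for _ in range(N):
        power = mul(power, f)
        out = [u + v for u, v in zip(out, power)]
    return out

F = [0] * (N + 1)
for _ in range(N + 1):
    F = [0] + Phi(F)[:N]  # one application of F -> x * Phi(F)

# [x^n] F(x) = number of plane trees with n vertices = Catalan(n - 1)
assert F[1:7] == [1, 1, 2, 5, 14, 42]
```

Each intermediate iterate has a combinatorial meaning of its own: after $r$ applications of $F\mapsto x\Phi(F)$ one obtains the weight generating function of plane trees of height less than $r$.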
Several important families of trees are covered by suitable choices of weights: - plane trees are obtained from the weight sequence with $w_j = 1$ for all $j$, - labelled trees correspond to weights given by $w_j = \frac{1}{j!}$, - and $d$-ary trees (where every vertex has either $d$ or no children) are obtained by setting $w_0 = w_d = 1$ and $w_j = 0$ for all other $j$. In the context of simply generated trees, it is natural to define the bivariate generating function $A_r(x,u)$ to be a weight generating function, i.e., $$A_r(x,u) = \sum_{{T}\in\mathcal{T}} w({T}) x^{|{T}|} u^{a_r({T})}.$$ As explicitly stated in Proposition \[prop:toll-function\], the combinatorial class $\mathcal{T}_r$ of trees of height less than $r$ is essential for deriving a functional equation for $A_r(x,u)$. Write $F_r(x)$ for the weight generating function associated with $\mathcal{T}_r$, defined in the same way as $F(x)$ in (\[eq:f\_def\]). Clearly, $F_1(x) = x$, since there is only one rooted tree of height $0$, which consists solely of the root. Moreover, via the decomposition underlying (\[eq:sg\_trees\]), we have $$\label{eq:iteration} F_r(x) = x \Phi(F_{r-1}(x))$$ for every $r > 1$. Now we are prepared to derive the aforementioned functional equation. \[prop:simply-generated:functional-equation\] The bivariate weight generating function $A_r(x,u)$ satisfies the functional equation $$\label{eq:simply-generated:functional-equation} A_r(x,u) = x\Phi(A_r(x,u)) + \Big( 1 - \frac{1}{u} \Big) F_r(xu).$$ We can express the sum over all trees ${T}$ in the definition of $A_r(x,u)$ as a sum over all possible root degrees $k$ and $k$-tuples of branches. 
In view of Proposition \[prop:toll-function\], this gives us $$\begin{aligned} A_r(x,u) & = \sum_{{T}\in\mathcal{T}\setminus\mathcal{T}_{r}} w({T}) x^{\abs{{T}}}u^{a_{r}({T})} + \sum_{{T}\in\mathcal{T}_{r}} w({T}) x^{\abs{{T}}} u^{\abs{{T}}}\\ & = \sum_{k \geq 0} w_k \sum_{{T}^{1}\in\mathcal{T}} \cdots \sum_{{T}^{k}\in\mathcal{T}} \Big(\prod_{j=1}^k w({T}^{j}) \Big) x^{1+|{T}^{1}|+\cdots+|{T}^{k}|} u^{a_r({T}^{1})+\cdots+a_r({T}^{k})}\\ & \quad + \sum_{{T}\in \mathcal{T}_r} w({T}) x^{|{T}|} \big(u^{|{T}|} - u^{|{T}|-1}\big) \\ & = x \sum_{k \geq 0} w_k \Big( \sum_{{T}\in\mathcal{T}} w({T}) x^{|{T}|} u^{a_r({T})} \Big)^k + \Big(1 - \frac{1}{u} \Big) \sum_{{T}\in \mathcal{T}_r} w({T}) (xu)^{|{T}|} \\ & = x\Phi(A_r(x,u)) + \Big( 1 - \frac{1}{u} \Big) F_r(xu). \qedhere\end{aligned}$$ Setting $u=1$ reduces this functional equation to (\[eq:sg\_trees\]), with $A_r(x,1) = F(x)$. The functional equation (\[eq:simply-generated:functional-equation\]) provides enough leverage to carry out a full asymptotic analysis of the behavior of $a_r({T}_{n})$ for simply generated trees. Parameter Analysis {#sec:simply-generated:analysis} ------------------ Now we use the functional equation to determine the mean and variance of $a_r$, which are obtained from the partial derivatives with respect to $u$, evaluated at $u=1$. To be more precise, if ${T}_n$ denotes a random (with respect to the probability distribution determined by the given weight sequence) simply generated tree of size $n$, then after normalization, the factorial moments $${\mathbb{E}}a_r({T}_n)^{\underline{k}} \coloneqq {\mathbb{E}}(a_r({T}_n) (a_r({T}_n) - 1) \cdots (a_r({T}_n) - k + 1))$$ can be extracted as the coefficient of $x^n$ in the partial derivative $\frac{\partial^k\,}{\partial u^k} A_r(x,u)\big|_{u=1}$. From there, expectation and variance are obtained in a straightforward way. From this point on, we make some reasonable assumptions on the weight sequence $(w_k)_{k\geq 0}$. In addition to $w_0 = 1$, we assume that there is a $k > 1$ with $w_k > 0$ to avoid trivial cases. 
Furthermore, we require that if $R > 0$ is the radius of convergence of the weight generating function $\Phi(t) = \sum_{k\geq 0} w_k t^k$, there is a unique positive $\tau$ (the *fundamental constant*) with $0 < \tau < R$ such that $\Phi(\tau) - \tau \Phi'(\tau) = 0$. This is to ensure that the singular behavior of $F(x)$ can be fully characterized (see, e.g., [@Flajolet-Sedgewick:ta:analy Section VI.7]). Let $r \in {\mathbb{Z}}_{\geq 1}$ be fixed, let $\mathcal{T}$ be a simply generated family of trees and let $\mathcal{T}_r \subset \mathcal{T}$ be the set of trees with height less than $r$. If ${T}_n$ denotes a random tree from $\mathcal{T}$ of size $n$ (with respect to the probability measure defined on $\mathcal{T}$), then for $n\to\infty$ the expected number of removed nodes when applying the “cutting leaves” procedure $r$ times to ${T}_n$ and the corresponding variance satisfy $$\label{eq:simply-generated:expectation-variance} {\mathbb{E}}a_r({T}_n) = \mu_r n + \frac{\rho \tau^2 F_r'(\rho) + 3\beta\tau F_r(\rho) - \alpha^2 F_r(\rho)}{2\tau^3} + O(n^{-1}), \quad\text{ and }\quad {\mathbb{V}}a_r({T}_n) = \sigma_r^2 n + O(1).$$ The constants $\mu_r$ and $\sigma_r^2$ are given by $$\label{eq:simply-generated:constants} \mu_r = \frac{F_r(\rho)}{\tau},\qquad \sigma_r^2 = \frac{4 \rho \tau^3 F_r'(\rho) - 4 \rho \tau^2 F_r(\rho)F_r'(\rho) + (2\tau^2 - \alpha^2) F_r(\rho)^2 - 2\tau^3 F_r(\rho)}{2\tau^4},$$ where $F_r(x)$ is the weight generating function corresponding to $\mathcal{T}_r$, $\rho$ is the radius of convergence of $F(x)$ and given by $\rho = \tau/\Phi(\tau)$, and the constants $\alpha$ and $\beta$ are given by $$\alpha = \sqrt{\frac{2\tau}{\rho \Phi''(\tau)}}, \qquad \beta = \frac{1}{\rho\Phi''(\tau)} - \frac{\tau \Phi'''(\tau)}{3\rho \Phi''(\tau)^2}.$$ For the sake of technical convenience, we are going to assume that $\Phi(t)$ is an aperiodic power series, meaning that the period $p$, i.e., the greatest common divisor of all indices $j$ for which 
$w_j\neq 0$, is 1. This implies (see [@Flajolet-Sedgewick:ta:analy Theorem VI.6]) that $F(x)$ has a unique square root singularity located at $\rho = \tau/\Phi(\tau)$, which makes some of our computations less tedious. However, all of our results also apply (mutatis mutandis) if this aperiodicity condition is not satisfied—with the restriction that then, $n-1$ has to be a multiple of the period $p$. First, we have $$\frac{\partial}{\partial u} A_r(x,u) = x \Phi'(A_r(x,u)) \frac{\partial}{\partial u} A_r(x,u) + \frac{1}{u^2} F_r(xu) + x \Big( 1 - \frac{1}{u} \Big) F_r'(xu),$$ so $$\frac{\partial}{\partial u} A_r(x,u) \Big|_{u=1} = \frac{F_r(x)}{1-x \Phi'(A_r(x,1))} = \frac{F_r(x)}{1-x \Phi'(F(x))}.$$ Analogously, we can use implicit differentiation on  to obtain $$x F'(x) = \frac{F(x)}{1-x \Phi'(F(x))},$$ so $$\frac{\partial}{\partial u} A_r(x,u) \Big|_{u=1} = \frac{x F'(x) F_r(x)}{F(x)}.$$ The second derivative is found in the same way: we obtain $$\frac{\partial^2}{\partial u^2} A_r(x,u) \Big|_{u=1} = \frac{2(x F_r'(x) - F_r(x))}{1-x \Phi'(F(x))} + \frac{x F_r(x)^2\Phi''(F(x))}{(1-x \Phi'(F(x)))^3}$$ by differentiating implicitly a second time. Again, this can be expressed in terms of the derivatives of $F$: $$\frac{\partial^2}{\partial u^2} A_r(x,u) \Big|_{u=1} = \Big( \frac{2F_r(x)^2}{F(x)^2} - \frac{2F_r(x)}{F(x)} + \frac{2x F_r'(x)}{F(x)} \Big) x F'(x) + \frac{x^2 F_r(x)^2 F''(x)}{F(x)^2} - \frac{2x^2 F_r(x)^2 F'(x)^2}{F(x)^3}.$$ By the assumptions made in this section, there is a positive real number $\tau$ that is smaller than the radius of convergence of $\Phi$ and satisfies the equation $\tau \Phi'(\tau) = \Phi(\tau)$. 
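These constants are easy to evaluate numerically for a concrete weight sequence. The following sketch is our own illustration (not part of the paper): for plane trees, $\Phi(t) = 1/(1-t)$, bisection on $\Phi(t) - t\Phi'(t)$ recovers the classical values $\tau = 1/2$ and $\rho = 1/4$, and iterating $F_r(x) = x\Phi(F_{r-1}(x))$ at $x = \rho$ gives the mean constant $\mu_r = F_r(\rho)/\tau$ from the theorem above.

```python
from math import sqrt

# Plane trees: w_j = 1 for all j, hence Phi(t) = 1/(1 - t).
Phi   = lambda t: 1 / (1 - t)
dPhi  = lambda t: 1 / (1 - t) ** 2
d2Phi = lambda t: 2 / (1 - t) ** 3
d3Phi = lambda t: 6 / (1 - t) ** 4

def fundamental_constant(lo=1e-9, hi=1 - 1e-9):
    """Bisection for tau with Phi(tau) - tau * Phi'(tau) = 0."""
    g = lambda t: Phi(t) - t * dPhi(t)
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

tau = fundamental_constant()            # -> 1/2 for plane trees
rho = tau / Phi(tau)                    # -> 1/4, radius of convergence of F
alpha = sqrt(2 * tau / (rho * d2Phi(tau)))
beta = 1 / (rho * d2Phi(tau)) - tau * d3Phi(tau) / (3 * rho * d2Phi(tau) ** 2)

def mu(r):
    """mu_r = F_r(rho) / tau, iterating F_r(x) = x * Phi(F_{r-1}(x)) at x = rho."""
    c = rho                             # F_1(x) = x, so F_1(rho) = rho
    for _ in range(r - 1):
        c = rho * Phi(c)
    return c / tau
```

For plane trees $F(x) = \big(1 - \sqrt{1-4x}\big)/2$, so the singular expansion terminates after the square-root term: the sketch returns $\alpha = 1/2$ and $\beta = 0$. Moreover one finds $\mu_r = r/(r+1)$ in this case, already hinting at the $1 - O(r^{-1})$ behavior derived later in this section.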
It is well known (see [@Flajolet-Sedgewick:ta:analy Section VI.7]) that in this case, $F(x)$ has a square root singularity at $\rho = \tau/\Phi(\tau) = 1/\Phi'(\tau)$, with singular expansion $$\label{eq:simply-generated:F-expansion} F(x) = \tau - \alpha \sqrt{1-x/\rho} + \beta (1-x/\rho) + O((1-x/\rho)^{3/2}).$$ Here, the coefficients $\alpha$ and $\beta$ are given by $$\label{eq:alpha} \alpha = \sqrt{\frac{2\tau}{\rho \Phi''(\tau)}}$$ and $$\beta = \frac{1}{\rho \Phi''(\tau)} - \frac{\tau \Phi'''(\tau)}{3\rho \Phi''(\tau)^2}$$ respectively. Note that in case more precise asymptotics are desired, further terms of the singular expansion can be computed easily. Due to our aperiodicity assumption, $\rho$ is the only singularity on $F$’s circle of convergence, and the conditions of singularity analysis (see [@Flajolet-Odlyzko:1990:singul] or [@Flajolet-Sedgewick:ta:analy Chapter VI], for example) are satisfied. Next we note that $F_r$ has greater radius of convergence than $F$. This follows from (\[eq:iteration\]) by induction on $r$: it is clear for $r=1$, and if $F_{r-1}$ is analytic at $\rho$, then so is $F_r$, since $F_{r-1}(\rho) < F(\rho) = \tau$, which is smaller than the radius of convergence of $\Phi$. This implies that $F_r$ has a Taylor expansion around $\rho$: $$F_r(x) = F_r(\rho) - \rho F_r'(\rho) (1 - x/\rho) + O((1-x/\rho)^2).$$ We find that $$\begin{aligned} \frac{\partial}{\partial u} A_r(x,u) \Big|_{u=1} = \frac{x F'(x) F_r(x)}{F(x)} & = \frac{F_r(\rho)}{\tau} \cdot (x F'(x)) + \frac{\rho \tau^2 F_r'(\rho) + 3\beta \tau F_r(\rho) - \alpha^2 F_r(\rho)}{2\tau^3} \cdot F(x) \\ & \quad + C_1 + C_2 (1-x/\rho) + O((1-x/\rho)^{3/2})\end{aligned}$$ for certain constants $C_1$ and $C_2$. The $n$th coefficient of the derivative $[x^n]\frac{\partial}{\partial u} A_r(x,u) \big|_{u=1}$ can now be extracted by means of singularity analysis. 
Normalizing the result by dividing by $[x^n] A_r(x,1) = [x^n] F(x)$ (again extracted by means of singularity analysis; the corresponding expansion is given in (\[eq:simply-generated:F-expansion\])) yields an asymptotic expansion for ${\mathbb{E}}a_r({T}_n)$. We find $$\label{eq:mean_sg} {\mathbb{E}}a_r({T}_n) = \frac{F_r(\rho)}{\tau} \cdot n + \frac{\rho \tau^2 F_r'(\rho) + 3\beta \tau F_r(\rho) - \alpha^2 F_r(\rho)}{2\tau^3} + O(n^{-1}).$$ Similarly, from $$\begin{aligned} \frac{\partial^2}{\partial u^2} A_r(x,u) \Big|_{u=1} &= \Big(\frac{2F_r(x)^2}{F(x)^2} - \frac{2F_r(x)}{F(x)} + \frac{2x F_r'(x)}{F(x)} \Big) x F'(x) + \frac{x^2 F_r(x)^2 F''(x)}{F(x)^2} - \frac{2x^2 F_r(x)^2 F'(x)^2}{F(x)^3} \\ &= \Big(\frac{F_r(\rho)}{\tau} \Big)^2 \cdot (x^2F''(x)+xF'(x)) \\ &\quad + \frac{4\rho \tau^3 F_r'(\rho) - 2\rho \tau^2 F_r(\rho) F_r'(\rho) + (2\tau^2 + 6\beta \tau - 3\alpha^2)F_r(\rho)^2 - 4\tau^3 F_r(\rho)}{2\tau^4} \cdot (xF'(x)) \\ &\quad + C_3 + O((1-x/\rho)^{1/2}),\end{aligned}$$ we can use singularity analysis to find an asymptotic expansion for the second factorial moment ${\mathbb{E}}a_r({T}_n)^{\underline{2}}$. Plugging the result and the expansion for the mean from (\[eq:mean\_sg\]) into the well-known identity $${\mathbb{V}}a_r({T}_n) = {\mathbb{E}}a_r({T}_n)^{\underline{2}} + {\mathbb{E}}a_r({T}_n) - ({\mathbb{E}}a_r({T}_n))^2$$ then yields $${\mathbb{V}}a_r({T}_n) = \frac{4 \rho \tau^3 F_r'(\rho) - 4 \rho \tau^2 F_r(\rho)F_r'(\rho) + (2\tau^2 - \alpha^2) F_r(\rho)^2 - 2\tau^3 F_r(\rho)}{2\tau^4} \cdot n + O(1). \qedhere$$ While this analysis provided us with a precise characterization of the mean and the variance of the number of deleted vertices, it would be interesting to have more information on how these quantities behave for a very large number of iterated reductions. The following proposition gives more details on the main contribution. 
\[prop:simply-generated:constants-asy\] For $r\to\infty$, the constants $\mu_r$ and $\sigma_r^2$ admit the asymptotic expansions $$\label{eq:simply-generated:constants-asy} \mu_r = 1 - \frac{2}{\rho\tau\Phi''(\tau)} r^{-1} + o(r^{-1}) \quad\text{ and }\quad \sigma_r^2 = \frac{1}{3\rho\tau\Phi''(\tau)} + o(1),$$ respectively. In order to obtain the behavior of $\mu_r$ and $\sigma_r^2$ for $r\to\infty$, we have to study the behavior of $c_r = F_r(\rho)$ and $d_r = F_r'(\rho)$ as $r \to \infty$. First, we have the recursion $$c_r = \rho \Phi(c_{r-1}).$$ We note that $c_r$ is increasing in $r$ (since the coefficients of $F_r(x)$ are all nondecreasing in $r$ in view of the combinatorial interpretation), and $c_r \to \tau$ as $r \to \infty$. By Taylor expansion around $\tau$, we obtain $$\tau - c_r = \rho \Phi(\tau) - \rho \Phi(c_{r-1}) = \rho \Phi'(\tau) (\tau - c_{r-1}) - \frac{\rho \Phi''(\tau)}{2} (\tau - c_{r-1})^2 + O((\tau - c_{r-1})^3),$$ and since $\rho \Phi'(\tau) = 1$, it follows that $$\frac{1}{\tau - c_r} = \frac{1}{\tau - c_{r-1}} + \frac{\rho \Phi''(\tau)}{2} + O(\tau - c_{r-1}).$$ Now we can conclude that $$\frac{1}{\tau - c_r} = \frac{\rho \Phi''(\tau)}{2} r + o(r),$$ so $$\mu_r = \frac{c_r}{\tau} = 1 - \frac{2}{\rho \tau \Phi''(\tau)} r^{-1} + o(r^{-1}).$$ Further terms can be derived by means of bootstrapping. Similarly, differentiating the identity $F_r(x) = x \Phi(F_{r-1}(x))$ gives us the recursion $$d_r = \rho \Phi'(c_{r-1}) d_{r-1} + \frac{c_r}{\rho}.$$ The sequence $d_r$ is increasing for the same reason $c_r$ is. Moreover, since $\rho \Phi'(c_{r-1}) < \rho \Phi'(\tau) = 1$, it follows from the recursion that $d_r = O(r)$. 
Now, we use Taylor expansion again to obtain $$\begin{aligned} d_r &= \rho \big(\Phi'(\tau) - \Phi''(\tau) (\tau-c_{r-1}) + O((\tau - c_{r-1})^2)\big) d_{r-1} + \frac{c_r}{\rho} \\ &= \big(\rho \Phi'(\tau) - \rho\Phi''(\tau) (\tau-c_{r-1}) + O((\tau - c_{r-1})^2) \big) d_{r-1} + \frac{c_r}{\rho} \\ &= \Big( 1 - \frac{2}{r} + o(r^{-1}) \Big) d_{r-1} + \frac{\tau}{\rho} + o(1). \end{aligned}$$ This can be rewritten as $r^2 d_r = (r-1)^2 d_{r-1} + \frac{\tau}{\rho} r^2 + o(r^2)$, which gives us $r^2 d_r = \frac{\tau}{3\rho} r^3 + o(r^3)$ and allows us to conclude that $$d_r = \frac{\tau}{3\rho} r + o(r).$$ Plugging the formulas for $c_r$ and $d_r$ into (\[eq:simply-generated:constants\]), we find that $$\sigma_r^2 = \frac{1}{3\rho \tau \Phi''(\tau)} + o(1). \qedhere$$ As a side effect of Proposition \[prop:simply-generated:constants-asy\], we can also observe that for sufficiently large $r$, the constant $\sigma_r^2$ is strictly positive. As a consequence, the parameter $a_r({T}_n)$ is asymptotically normally distributed in these cases. However, we can do even better: we can prove that $a_{r}({T}_{n})$ always admits a Gaussian limit law, except for an—in some sense—pathological case. \[prop:simply-generated:limit-law\] Let $\mathcal{T}$ be a simply generated family of trees and fix $r\in{\mathbb{Z}}_{\geq 1}$. Then the random variable $a_{r}({T}_{n})$ is asymptotically normally distributed, except in the case of $d$-ary trees when $r=1$. In all other cases we find that for $x\in{\mathbb{R}}$ we have $${\mathbb{P}}\Big(\frac{a_{r}({T}_{n}) - \mu_{r}n}{\sqrt{\sigma_{r}^{2} n}} \leq x\Big) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^{2}/2}~dt + O(n^{-1/2}).$$ Observe that as soon as we are able to prove that the variance of $a_{r}({T}_{n})$ is actually linear with respect to $n$, i.e., $\sigma_{r}^{2} \neq 0$, all conditions of [@Drmota:2009:random Theorem 2.23] hold and are easily checked, thus proving a Gaussian limit law. 
Our strategy for proving a linear lower bound for the variance relies on choosing two trees ${T}^{1}$, ${T}^{2}\in\mathcal{T}$ with $\abs{{T}^{1}} = \abs{{T}^{2}}$ such that $a_{r}({T}^{1}) \neq a_{r}({T}^{2})$, and $a_{r}({T}^{1}), a_{r}({T}^{2}) < \abs{{T}^{1}}$ (i.e., neither of the two is completely reduced after $r$ steps, and the number of vertices removed after $r$ steps differs between ${T}^{1}$ and ${T}^{2}$). While this is not possible in the case where $r=1$ and $\mathcal{T}$ is a family of $d$-ary trees (where the number of leaves, and thus the number of removed nodes when cutting the tree once, depends only on the tree size), such trees can always be found in all other cases. To be more precise, if $r = 1$ and $\mathcal{T}$ is not a $d$-ary family of trees, there have to be at least three different possibilities for the number of children, namely $0$, $d$, and $e$. Then, in a sufficiently large tree ${T}^{1}$, the number of inner nodes with $d$ children can be reduced by $e$, and the number of inner nodes with $e$ children can be increased by $d$ in order to obtain a tree ${T}^{2}$ of the same size with a different number of leaves, thus satisfying our conditions. For the case $r\geq 2$, observe that the problem above cannot arise, as cutting the leaves off some tree in $\mathcal{T}$ does not necessarily yield another tree in $\mathcal{T}$. Let $d$ be a positive integer for which the weight $w_{d}$ is positive (i.e., a node in a tree in $\mathcal{T}$ can have $d$ children). We choose ${T}^{1}$ to be the complete $d$-ary tree of height $r$. A second $d$-ary tree ${T}^{2}$ is then constructed by arranging the same number of internal vertices as a path and by attaching suitably many leaves. The handshaking lemma then guarantees that both trees have the same size; but $a_{r}({T}^{1})$ is obviously larger than $a_{r}({T}^{2})$. It is well-known (see e.g. 
[@Aldous:1991:asy-fringe-distributions] or [@Janson:2016:normality-add-func] for stronger results) that large trees (except for a negligible proportion) contain a linear (with respect to the size of the tree) number of copies of ${T}^{1}$ and ${T}^{2}$ as fringe subtrees. To be more precise, this means that there is a positive constant $c > 0$ such that the probability that a tree of size $n$ contains at least $cn$ copies of the patterns ${T}^{1}$ and ${T}^{2}$ is greater than $1/2$. Now, consider a large random tree ${T}$ in $\mathcal{T}$ and replace all occurrences of ${T}^{1}$ and ${T}^{2}$ by marked vertices. If $m$ denotes the number of marked vertices in the corresponding tree, then $m$ is of linear size with respect to the tree size $n$, except for a negligible proportion of trees. Given that, after replacing the patterns, the remaining tree contains $m$ marked nodes, the number of occurrences of ${T}^{1}$ in the original (random) tree follows a binomial distribution with size parameter $m$ and probability $p = w({T}^{1})/(w({T}^{1})+w({T}^{2})) \in (0,1)$ that only depends on the weights of the patterns. If we let $c_1$ and $c_2$ be the number of occurrences of ${T}^{1}$ and ${T}^{2}$ respectively, then we have $$a_{r}({T}) = c_1a_{r}({T}^{1}) + c_2a_{r}({T}^{2}) + A,$$ where $A$ only depends on the shape of the reduced tree with ${T}^{1}$ and ${T}^{2}$ replaced by marked vertices. Let $M$ denote the random variable modeling the number of marked nodes in the reduced tree obtained from a tree ${T}$ of size $n$. Then, via the law of total variance we find $${\mathbb{V}}a_{r}({T}_{n}) \geq {\mathbb{E}}({\mathbb{V}}(a_{r}({T}_{n}) | M)) \geq p(1-p){\mathbb{E}}M \geq p(1-p)\frac{c}{2} n.$$ The last inequality can be justified via the law of total expectation combined with the fact that the number of marked nodes in a tree with replaced patterns is at least $cn$ with probability greater than $1/2$. 
This proves that the variance of $a_{r}({T}_{n})$ has to be of linear order. Finally, in order to prove that the speed of convergence is $O(n^{-1/2})$, we replace the formulation of Hwang’s Quasi-Power Theorem without quantification of the speed of convergence (cf. [@Drmota:2009:random Theorem 2.22]) in the proof of [@Drmota:2009:random Theorem 2.23] with a quantified version (see [@Hwang:1998] or [@Heuberger-Kropf:2016:higher-dimen] for a generalization to higher dimensions). The following theorem summarizes the results of the asymptotic analysis in this section. \[thm:simply-generated\] Let $r\in{\mathbb{Z}}_{\geq 1}$ be fixed and $\mathcal{T}$ be a simply generated family of trees with weight generating function $\Phi$ and fundamental constant $\tau$, and set $\rho = \tau/\Phi(\tau)$. If ${T}_n$ denotes a random tree from $\mathcal{T}$ of size $n$ (with respect to the probability measure defined on $\mathcal{T}$), then for $n\to\infty$ the expected number of removed nodes when applying the “cutting leaves” procedure $r$ times to ${T}_n$ and the corresponding variance satisfy $${\mathbb{E}}a_r({T}_n) = \mu_r n + \frac{\rho \tau^2 F_r'(\rho) + 3\beta\tau F_r(\rho) - \alpha^2 F_r(\rho)}{2\tau^3} + O(n^{-1}), \quad\text{ and }\quad {\mathbb{V}}a_r({T}_n) = \sigma_r^2 n + O(1).$$ The constants $\mu_r$ and $\sigma_r^2$ are given by $$\mu_r = \frac{F_r(\rho)}{\tau},\qquad \sigma_r^2 = \frac{4 \rho \tau^3 F_r'(\rho) - 4 \rho \tau^2 F_r(\rho)F_r'(\rho) + (2\tau^2 - \alpha^2) F_r(\rho)^2 - 2\tau^3 F_r(\rho)}{2\tau^4},$$ with $$\alpha = \sqrt{\frac{2\tau}{\rho \Phi''(\tau)}}, \qquad \beta = \frac{1}{\rho\Phi''(\tau)} - \frac{\tau \Phi'''(\tau)}{3\rho \Phi''(\tau)^2}.$$ Furthermore, for $r\to\infty$ the constants $\mu_r$ and $\sigma_r^2$ behave like $$\mu_r = 1 - \frac{2}{\rho\tau\Phi''(\tau)} r^{-1} + o(r^{-1}) \quad\text{ and }\quad \sigma_r^2 = \frac{1}{3\rho\tau\Phi''(\tau)} + o(1).$$ Finally, if $r \geq 2$ or $\mathcal{T}$ is not a family of $d$-ary trees, then 
$a_r({T}_n)$ is asymptotically normally distributed, meaning that for $x\in{\mathbb{R}}$ we have $${\mathbb{P}}\Big(\frac{a_{r}({T}_{n}) - \mu_{r}n}{\sqrt{\sigma_{r}^{2} n}} \leq x\Big) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^{2}/2}~dt + O(n^{-1/2}).$$ Outlook {#sec:outlook} ======= Our approach for analyzing the “cutting leaves” reduction procedure on simply generated families of trees can be adapted to work for other families of trees as well. In this section, we describe two additional classes of rooted trees to which our approach is applicable and give qualitative results. Details on the analysis for these classes as well as quantitative results will be given in the full version of this extended abstract. The two additional classes of trees are *Pólya trees* and *noncrossing trees*. Pólya trees are unlabeled rooted trees where the ordering of the children is not relevant. Noncrossing trees, on the other hand, are special labeled trees that satisfy two conditions: - the root node has label $1$, - when arranging the vertices in a circle such that the labels are sequentially ordered, no two edges of the tree cross. Noncrossing trees obviously owe their name to the second property. Both classes of trees, Pólya trees as well as noncrossing trees, are illustrated in Figure \[fig:more-tree-classes\]. 
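The noncrossing condition has a simple arithmetic formulation that is convenient to check by machine: after sorting, two chords $\{a,b\}$ and $\{c,d\}$ of the circle cross exactly when $a < c < b < d$. The sketch below is our own illustration; the edge lists are hypothetical examples, not taken from the paper.

```python
from itertools import combinations

def crossing(e1, e2):
    """Two chords of a circle cross iff their endpoints interleave,
    i.e. a < c < b < d after sorting each pair and ordering the pairs."""
    (a, b), (c, d) = sorted([sorted(e1), sorted(e2)])
    return a < c < b < d

def is_noncrossing(edges):
    """Check a labeled tree, given as an edge list, for crossing edges."""
    return not any(crossing(e, f) for e, f in combinations(edges, 2))

# Hypothetical edge lists on circularly arranged vertices 1, 2, ...:
tree1 = [(1, 5), (5, 3), (3, 2), (3, 4), (5, 9), (9, 6), (6, 7), (6, 8)]
tree2 = [(1, 3), (3, 2), (2, 4)]   # the chords {1,3} and {2,4} interleave
```

Here `is_noncrossing(tree1)` is true, while `tree2` fails because the chords $\{1,3\}$ and $\{2,4\}$ cross.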
(Figure \[fig:more-tree-classes\]: on the left an unlabeled Pólya tree, on the right a noncrossing tree whose vertices $1,\ldots,9$ are arranged on a circle.) The basic principle in the analysis of both of these tree classes is the same: we leverage the recursive nature of the respective family of trees to derive a functional equation for $A_{r}(x,u)$. From there, similar techniques as in Section \[sec:simply-generated:analysis\] (i.e., implicit differentiation and propagation of the singular expansion of the basic generating function $F(x)$) can be used to obtain (arbitrarily precise) asymptotic expansions for the mean and the variance of the number of deleted nodes when cutting the tree $r$ times. Qualitatively, in both of these cases we can prove a theorem of the following nature. \[thm:outlook:qualitative\] Let $r\in{\mathbb{Z}}_{\geq 1}$ be fixed and $\mathcal{T}$ be either the family of Pólya trees or the family of noncrossing trees. 
If ${T}_{n}$ denotes a uniformly random tree from $\mathcal{T}$ of size $n$, then for $n\to\infty$ the expected number of removed nodes when applying the “cutting leaves” procedure $r$ times to ${T}_{n}$ and the corresponding variance satisfy $${\mathbb{E}}a_{r}({T}_{n}) = \mu_{r} n + O(1),\quad \text{ and }\quad {\mathbb{V}}a_{r}({T}_{n}) = \sigma_{r}^{2} n + O(1),$$ for explicitly known constants $\mu_{r}$ and $\sigma_{r}^{2}$. Furthermore, the number of deleted nodes $a_{r}({T}_{n})$ admits a Gaussian limit law. Note that more precise asymptotic expansions for the mean and the variance (with explicitly known constants) can also be computed. [^1]: The Iverson notation, as popularized in [@Graham-Knuth-Patashnik:1994], is defined as follows: $\iverson{\mathrm{expr}}$ evaluates to 1 if $\mathrm{expr}$ is true, and to 0 otherwise. [^2]: The underlying probability distribution will always be clear from context.
--- abstract: 'A unified treatment of the cohesive and conducting properties of metallic nanostructures in terms of the electronic scattering matrix is developed. A simple picture of metallic nanocohesion in which conductance channels act as delocalized chemical bonds is derived in the jellium approximation. Universal force oscillations of order $\varepsilon_F/\lambda_F$ are predicted when a metallic quantum wire is stretched to the breaking point, which are synchronized with quantized jumps in the conductance.' address: - '$\mbox{}^1$Institut de Physique Théorique, Université de Fribourg, CH-1700 Fribourg, Switzerland' - '$\mbox{}^2$Institut Romand de Recherche Numérique en Physique des Matériaux CH-1015 Lausanne' author: - 'C. A. Stafford,$^{1,*}$ D. Baeriswyl,$^1$ and J. Bürki$^{1,2}$' date: 'Received by Phys.Rev.Lett. 17 March 1997' title: Jellium model of metallic nanocohesion --- Cohesion in metals is due to the formation of bands, which arise from the overlap of atomic orbitals. In a metallic constriction with nanoscopic cross section, the transverse motion is quantized, leading to a finite number of subbands below the Fermi energy $\varepsilon_F$. A striking consequence of these discrete subbands is the phenomenon of conductance quantization [@quantization]. The cohesion in a metallic nanoconstriction must also be provided by these discrete subbands, which may be thought of as chemical bonds which are delocalized over the cross section. In this Letter, we confirm this intuitive picture of metallic nanocohesion using a simple jellium model. Universal force oscillations of order $\varepsilon_F/\lambda_F$ are predicted in metallic nanostructures exhibiting conductance quantization, where $\lambda_F$ is the Fermi wavelength. Our results are in quantitative agreement with the recent pioneering experiment of Rubio, Agraït, and Vieira [@nanoforce], who measured simultaneously the force and conductance during the formation and rupture of an atomic-scale Au contact. 
Similar experimental results have been obtained independently by Stalder and Dürig [@nanoforce2]. Quantum-size effects on the mechanical properties of metallic systems have previously been observed in ultrasmall metal clusters [@clusters], which exhibit enhanced stability for certain [*magic numbers*]{} of atoms. These magic numbers have been rather well explained in terms of a shell model based on the jellium approximation [@clusters]. The success of the jellium approximation in these closed nanoscopic systems motivates its application to open (infinite) systems, which are the subject of interest here. We investigate the conducting and mechanical properties of a nanoscopic constriction connecting two macroscopic metallic reservoirs. The natural framework in which to investigate such an open system is the scattering approach developed by Landauer [@landauer] and Büttiker [@condtheory]. Here, we extend the formalism of Ref. [@condtheory], which describes electrical conduction, to describe the mechanical properties of a confined electron gas as well. For definiteness, we consider a constriction of length $L$ in an infinitely long cylindrical wire of radius $R$, as shown in Fig. \[fig.geometry\]. We neglect electron-electron interactions, and assume the electrons to be confined along the $z$ axis by a hard-wall potential at $r=r(z)$. This model is considerably simpler than a self-consistent jellium calculation [@clusters], but should suffice to capture the essential physics of the problem. Outside the constriction, the Schrödinger equation is separable, and the scattering states can be written as $$\psi_{kmn}^{\pm}(\phi,r,z) = e^{\pm i kz+im\phi} J_m(\gamma_{mn}r/R), \label{inoutstates}$$ where the quantum numbers $\gamma_{mn}$ are the roots of the Bessel functions $J_m(\gamma_{mn})=0$. These scattering states may be grouped into subbands characterized by the quantum numbers $m$ and $n$, and we shall use the notation $\nu=(m,n)$. 
The energy of an electron in subband $\nu$ is $\varepsilon(k)=\varepsilon_{\nu}+ \hbar^2k^2/2m$, where $$\varepsilon_{\nu} = \frac{\hbar^2 \gamma_{\nu}^2}{2mR^2}. \label{e.nu}$$ The fundamental theoretical quantity is the scattering matrix of the constriction $S(E)$, which connects the incoming and outgoing scattering states. For a two-terminal device, such as that shown in Fig. \[fig.geometry\], $S(E)$ can be decomposed into four submatrices $S_{\alpha\beta}(E)$, $\alpha$, $\beta=1,2$, where 1 (2) indicates scattering states to the left (right) of the constriction. Each submatrix $S_{\alpha \beta}(E)$ is a matrix in the scattering channels $\nu\nu'$. In terms of the scattering matrix, the electrical conductance is given by [@condtheory] $$G = \frac{2e^2}{h} \int dE\, \frac{-df}{dE} \mbox{Tr}\left\{ S_{12}^{\dagger}(E) S_{12}(E)\right\}, \label{condformula}$$ where $f(E)=\{\exp[\beta(E-\mu)]+1\}^{-1}$ is the Fermi distribution function and a factor of 2 has been included to account for spin degeneracy. The grand canonical potential of the system is $$\Omega = -k_B T \int dE\, D(E) \ln\left(1+e^{-\beta(E-\mu)}\right), \label{gibbs1}$$ where the density of states in the constriction may be expressed in terms of the scattering matrix as [@Iopen; @partial.dos] $$D(E) = \frac{1}{2\pi i} \sum_{\alpha,\beta} \mbox{Tr} \left\{ S_{\alpha\beta}^{\dagger}(E) \frac{\partial S_{\alpha\beta}}{ \partial E} - S_{\alpha\beta}(E) \frac{\partial S_{\alpha\beta}^{\dagger}}{\partial E} \right\}. \label{dos}$$ Eqs. (\[condformula\]) to (\[dos\]) allow one to treat the conducting and mechanical properties of a confined electron gas on an equal footing, and provide the starting point for our calculation. We are interested in the mechanical properties of a metallic nanoconstriction in the regime of conductance quantization. 
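Equation (\[e.nu\]) gives a concrete way to count conduction channels: a subband $\nu = (m,n)$ lies below the Fermi energy precisely when $\gamma_{mn} < k_F R$. The following sketch is our own illustration, not from the paper; it counts these open channels (the conductance on a plateau, in units of $2e^2/h$), computing the Bessel zeros from the standard integral representation $J_m(x) = \frac{1}{\pi}\int_0^\pi \cos(m\theta - x\sin\theta)\,d\theta$ so that no external library is needed.

```python
from math import cos, sin, pi

def bessel_j(m, x, n=256):
    """J_m(x) for integer m via the 2*pi-periodic extension of the integral
    representation, evaluated with the midpoint rule (spectrally accurate)."""
    h = 2 * pi / n
    return sum(cos(m * (k + 0.5) * h - x * sin((k + 0.5) * h))
               for k in range(n)) / n

def bessel_zeros_below(m, cutoff, step=0.05):
    """All positive zeros gamma_{m,n} of J_m below `cutoff` (scan + bisection)."""
    zeros = []
    x, prev = step, bessel_j(m, step)
    while x < cutoff:
        cur = bessel_j(m, x + step)
        if prev * cur < 0:           # a sign change brackets a zero
            lo, hi = x, x + step
            for _ in range(50):
                mid = (lo + hi) / 2
                if bessel_j(m, lo) * bessel_j(m, mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            if lo < cutoff:
                zeros.append((lo + hi) / 2)
        x, prev = x + step, cur
    return zeros

def open_channels(kF_R):
    """Subbands nu = (m, n) with gamma_{mn} < kF*R; +-m are degenerate."""
    total, m = 0, 0
    while True:
        z = bessel_zeros_below(m, kF_R)
        if not z and m > 0:          # gamma_{m,1} grows with m, so we are done
            return total
        total += len(z) * (2 if m > 0 else 1)
        m += 1
```

For instance, `open_channels(3.0)` returns 1 (only $\gamma_{01} \approx 2.405$ lies below), the single-channel regime discussed at the end of the Letter.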
The necessary condition to have well-defined conductance plateaus in a three-dimensional constriction was shown by Torres, Pascual, and Sáenz [@torres] to be $(dr/dz)^2 \ll 1$. In this limit, Eqs. (\[condformula\]) to (\[dos\]) simplify considerably because one may employ the adiabatic approximation [@goldstein]. In the adiabatic limit, the transverse motion is separable from the motion parallel to the $z$ axis, so Eqs. (\[inoutstates\]) and (\[e.nu\]) remain valid in the region of the constriction, with $R$ replaced by $r(z)$. The channel energies thus become functions of $z$, $\varepsilon_{\nu}(z)=\hbar^2 \gamma_{\nu}^2/ 2mr(z)^2$. In this limit, the scattering matrices $S_{\alpha\beta}(E)$, $\alpha,\,\beta=1,\,2$ are diagonal in the channel indices, leading to an effective one-dimensional scattering problem. The condition $(dr/dz)^2 \ll 1$ and the requirement that the radius of the wire outside the constriction not be smaller than an atomic radius (i.e., $k_F R > 1$) automatically imply the validity of the WKB approximation. Since the energy differences between the transverse channels in an atomic-scale constriction are large compared to $k_B T$ at ambient temperature, we restrict consideration in the following to the case $T=0$. In the adiabatic approximation, the conductance becomes $$G=\frac{2e^2}{h}\sum_{\nu}T_{\nu}, \label{condform2}$$ where the transmission probability for channel $\nu$ may be calculated using a variant of the WKB approximation [@glazman; @brandbyge], which correctly describes the rounding of the conductance steps at threshold. The density of states in the constriction in the adiabatic approximation is $$D(E)=\frac{2}{\pi} \sum_{\nu} \frac{d\Theta_{\nu}}{dE}, \label{dnde}$$ where the total phase shift is given in the WKB approximation by $$\Theta_{\nu}(E)=\left(2m/\hbar^2\right)^{1/2} \int_0^L dz \, [E-\varepsilon_{\nu}(z)]^{1/2}, \label{theta}$$ the integral being restricted to the region where $\varepsilon_{\nu}(z)< E$. 
The grand canonical potential of the system is thus $$\Omega = -\frac{8\varepsilon_F}{3\lambda_F} \int_0^L dz \, \sum_{\nu} \mbox{}^{'} \left(1-\frac{\varepsilon_{\nu}(z)}{\varepsilon_F}\right)^{3/2}, \label{gibbs2}$$ the sum being over channels with $\varepsilon_{\nu}(z) < \varepsilon_F$. Under elongation, the tensile force is given by $F=-\partial \Omega/ \partial L$. It is easy to show that $F$ is invariant under a stretching of the geometry $r(z) \rightarrow r(\lambda z)$, i.e., $$F=\frac{\varepsilon_F}{\lambda_F} f(\Delta L/L_0, k_F R), \label{scaling1}$$ where $f(x,y)$ is a dimensionless function. Nonuniversal corrections to $F$ occur in very short constrictions, for which the adiabatic approximation breaks down. The leading-order correction to the integrand in Eq. (\[gibbs2\]) is $-(3\pi/64) k_F r(z) (dr/dz)^2$, leading to a relative error in $F$ of $\sim 2\sin^2(\theta/4)$, where $\theta$ is the opening angle of the constriction. Using a modified Sharvin equation [@torres] to estimate the diameter of the contact versus elongation for the experiment of Ref. [@nanoforce] indicates an opening angle $\theta \lesssim 45^{\circ}$, for which the nonuniversal corrections are $\lesssim 8\%$, justifying the above approach. Fig. \[fig.fcond\] shows the conductance and force of a metallic nanoconstriction as a function of the elongation, calculated from Eqs. (\[condform2\]) and (\[gibbs2\]). Here an ideal plastic deformation was assumed, i.e., the volume of the constriction was held constant [@plastic]. The correlations between the force and the conductance are striking: $|F|$ increases along the conductance plateaus, and decreases sharply when the conductance drops. The constriction becomes unstable when the last conductance channel is cut off. Some transverse channels are quite closely spaced, and in these cases the individual conductance plateaus \[[*e.g.*]{}, $G/(2e^2/h)=14$, 15, 19, 21\] and force oscillations are difficult to resolve. Fig.
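To make Eq. (\[gibbs2\]) and the constant-volume constraint concrete, the following sketch computes $F=-\partial\Omega/\partial L$ for an idealized *uniform* wire stretched at constant volume. The uniform geometry and the truncated channel spectrum (circular-waveguide Bessel zeros with azimuthal degeneracy) are simplifying assumptions, so this illustrates only the force scale $\varepsilon_F/\lambda_F$ and the channel cutoff, not the full curves of Fig. \[fig.fcond\].

```python
import numpy as np

# Hedged sketch of Eq. (gibbs2) in units eps_F = lambda_F = 1 (so k_F = 2*pi):
# Omega = -(8/3) * L * sum'_nu (1 - eps_nu/eps_F)^(3/2) for a uniform wire of
# radius R(L) = sqrt(V/(pi*L)) (ideal plastic deformation at constant volume),
# with eps_nu/eps_F = (gamma_nu/(k_F R))^2.  The gamma list is illustrative.
GAMMAS = np.array([2.405, 3.832, 3.832, 5.136, 5.136, 5.520])

def omega(L, V):
    R = np.sqrt(V / (np.pi * L))                 # constant-volume cylinder
    x = (GAMMAS / (2.0 * np.pi * R)) ** 2        # eps_nu / eps_F
    return -(8.0 / 3.0) * L * np.sum(np.clip(1.0 - x, 0.0, None) ** 1.5)

def force(L, V, dL=1e-6):
    """Tensile force F = -dOmega/dL, in units eps_F/lambda_F."""
    return -(omega(L + dL, V) - omega(L - dL, V)) / (2.0 * dL)
```

Stretching the wire (increasing $L$ at fixed $V$) shrinks $R$, pushes channels above $\varepsilon_F$ one by one, and $\Omega\to 0$ once the last channel is cut off, which is where the constriction becomes unstable.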
\[fig.fcond\] is remarkably similar to the experimental results of Refs.  and , both qualitatively and quantitatively. Inserting the value $\varepsilon_F/\lambda_F \simeq 1.7\,\mbox{nN}$ for Au, we see that both the overall scale of the force for a given value of the conductance and the heights of the last two force oscillations are in quantitative agreement with the data shown in Fig. 1 of Ref. . We wish to emphasize that the calculation of $F$ presented in our Fig. \[fig.fcond\] contains no adjustable parameters [@parameters]. The increase of $|F|$ along the conductance plateaus and the rapid decrease at the conductance steps were described in Ref. [@nanoforce] as “elastic” and “yielding” stages, respectively. With our intuitive picture of a conductance channel as a delocalized metallic bond, it is natural to interpret these elastic and yielding stages as the stretching and breaking of these bonds. The fluctuations in $F$ due to the discrete transverse channels may be thought of as arising from finite-size corrections to the surface tension $\sigma$. However, as in the case of universal conductance fluctuations [@condfluct], it is more instructive to consider the extensive quantity $F$ itself, rather than the intensive quantity $\sigma$. Approximating the sum in Eq. (\[gibbs2\]) by an integral and keeping the leading-order corrections, one obtains $$\Omega=\omega V + \sigma S - \frac{2\varepsilon_F}{3\lambda_F} L + \delta \Omega, \label{fsize}$$ where $V$ is the volume of the system, $S$ is the surface area, $\omega=-2\varepsilon_F k_F^3/15\pi^2$ is the macroscopic free energy density, and $\sigma=\varepsilon_F k_F^2/16\pi$ is the macroscopic surface energy. The remaining term $\delta \Omega$ is a quantum correction due to the discrete transverse channels, and may be either positive or negative.
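The macroscopic terms in Eq. (\[fsize\]) can be checked with a short computation. The constant-volume cylinder below is an illustrative geometry (not the smooth $r(z)$ of the figures), again in units $\varepsilon_F=\lambda_F=1$.

```python
import math

# Hedged sketch of the smooth part of the force: for a constant-volume
# cylinder, S = 2*pi*R*L = 2*sqrt(pi*V*L), so dS/dL = sqrt(pi*V/L).
# With eps_F = lambda_F = 1 (k_F = 2*pi), the macroscopic surface energy
# sigma = eps_F * k_F^2 / (16*pi) = pi/4.
def macroscopic_force(L, V):
    sigma = math.pi / 4.0
    dS_dL = math.sqrt(math.pi * V / L)
    # -sigma * dS/dL + 2*eps_F/(3*lambda_F), i.e. the non-fluctuating terms
    return -sigma * dS_dL + 2.0 / 3.0
```

The surface-tension term dominates and sets the overall negative (cohesive) value of $F$; elongation at fixed volume weakens it as $1/\sqrt{L}$, giving the smooth background on top of which $\delta F$ oscillates.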
Under an ideal plastic deformation, the volume of the system is unchanged, and the tensile force is $$F=-\sigma \frac{\partial S}{\partial L} + \frac{2\varepsilon_F}{3\lambda_F} + \delta F, \label{fexp}$$ where $\delta F = -\partial (\delta\Omega)/\partial L$. The first term in Eq. (\[fexp\]) is the contribution to the force due to the macroscopic surface tension. This is plotted as a dashed line in Fig. \[fig.fcond\], for comparison. The macroscopic surface tension determines the overall slope of $F$. The quantum corrections to $F$ due to the discrete transverse channels consist of a constant term plus the fluctuating term $\delta F$. Fig. \[fig.fosc\] shows $\delta F$ for three different geometries and for values of $k_F R$ from 6 to 1200, plotted versus the corrected Sharvin conductance [@torres] $$G_s = \frac{k_F^2 A_{\rm min} - k_F C_{\rm min}}{4\pi}, \label{sharvin}$$ where $A_{\rm min}$ and $C_{\rm min}$ are the area and circumference of the constriction at its narrowest point. $G_s$ gives a smooth approximation to $G$. As shown in Fig. \[fig.fosc\](a), the force oscillations obey the approximate scaling relation $$\delta F(\Delta L/L_0,k_F R) \simeq \frac{\varepsilon_F}{\lambda_F} \mbox{Y}(G_s), \label{scaling2}$$ where $Y$ is a dimensionless scaling function which is independent of the precise geometry $r(z)$. Eq. (\[scaling2\]) indicates that the force fluctuations, like the conductance, are dominated by the contribution from the narrowest part of the constriction, of radius $R_{\rm min}$. The scaling relation (\[scaling2\]) breaks down when $R_{\rm min}/R \gtrsim 0.8$. Fig. \[fig.fosc\] shows that the amplitude of the force fluctuations persists essentially unchanged to very large values of $G_s$. It was found to be $$\Delta Y=\left(\overline{Y^2}-\overline{Y}^2\right)^{1/2} \sim 0.3 \label{dY}$$ for $0<G_s\leq 10^4$. 
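For a circular cross section, the corrected Sharvin conductance of Eq. (\[sharvin\]) reduces to the closed form $G_s = (k_FR_{\rm min})^2/4 - (k_FR_{\rm min})/2$ in units of $2e^2/h$. A minimal helper:

```python
import math

# Corrected Sharvin conductance of Eq. (sharvin) for a circular cross
# section: A_min = pi*R_min^2 and C_min = 2*pi*R_min, so
# G_s = (k_F^2 A_min - k_F C_min)/(4*pi) = (k_F R_min)^2/4 - (k_F R_min)/2.
def sharvin_circle(kF_Rmin):
    """G_s in units of 2e^2/h for a circular constriction."""
    A = math.pi * kF_Rmin ** 2      # k_F^2 * A_min in units of (k_F R)^2
    C = 2.0 * math.pi * kF_Rmin     # k_F * C_min in units of k_F R
    return (A - C) / (4.0 * math.pi)
```

This is the smooth approximation against which the oscillating part $\delta F$ is plotted in Fig. \[fig.fosc\].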
The detailed functional form of $Y(G_s)$, like the distribution of widths of the conductance plateaus, depends on the sequence of quantum numbers $\gamma_{\nu}$, which is determined by the shape of the cross section. However, the amplitude of the force fluctuations $\Delta Y$ was found to be the same for both circular and square cross sections. Both these geometries are integrable, and hence have Poissonian distributions of transverse modes. It is clearly of interest to investigate the force fluctuations for nonintegrable cross sections, with non-Poissonian level statistics. The experiments of Refs.  observed well-defined conductance steps, but found no clear evidence of conductance quantization for $G/(2e^2/h)>4$ [@agrait]. Deviations of the conductance plateaus from integer values in metallic point contacts are likely to be due to backscattering from imperfections in the lattice or irregularities in the shape of the constriction [@brandbyge]. We find that such disorder-induced coherent backscattering leads to noise-like fine structure [@sds.he] in the conductance steps and force oscillations, with a reduction of the conductance on the plateaus, but no shift of the overall force oscillations [@cas]. Our prediction of universal force oscillations is consistent with the experiments of Refs.  and , which found force oscillations with an amplitude comparable to our theoretical prediction for $G/(2e^2/h)$ up to 60. Molecular dynamics simulations by Landman [*et al.*]{} [@landman], Todorov and Sutton [@todorov], and Brandbyge [*et al.*]{} [@brandbyge] have suggested that the conductance steps and force oscillations observed in Refs.  and may be due to a sequence of abrupt atomic rearrangements. While the discreteness of the ionic background is not included in the jellium model, our results nevertheless suggest that such atomic rearrangements may be caused by the breaking of the extended metallic bonds formed by each conductance channel. 
However, it should be emphasized that our prediction of universal force fluctuations of order $\varepsilon_F/\lambda_F$ is not consistent with the simulations of Refs.  and , which predict force fluctuations which increase with increasing contact area. This discrepancy may arise because we consider the equilibrium deformation of a system with extended electronic wavefunctions, while Refs.  use a purely local interatomic potential and a fast, nonequilibrium deformation [@nonequilibrium]. In conclusion, we have presented a simple jellium model of metallic nanocohesion in which conductance channels act as delocalized metallic bonds. This model predicts universal force oscillations of order $\varepsilon_F/\lambda_F$ in metallic nanostructures in the regime of conductance quantization, and is able to explain quantitatively recent experiments on the mechanical properties of nanoscopic metallic contacts [@nanoforce; @nanoforce2]. The formalism developed here based on the electronic scattering matrix should be applicable to a wide variety of problems in the rapidly evolving field of nanomechanics. We thank Urs Dürig for helpful discussions in the early stages of this work, and for providing us with his results prior to publication. We have also profited from collaboration with Jean-Luc Barras and Michael Dzierzawa. This work was supported in part by Swiss National Foundation grant \# 4036-044033. Current address: Fakultät für Physik, Albert-Ludwigs-Universität, Hermann-Herder-Str. 3, D-79104 Freiburg, Germany. N. Garcia and J. L. Costa-Krämer, Europhys. News [**27**]{}, 89 (1996), and references therein. C. Rubio, N. Agraït, and S. Vieira, Phys. Rev. Lett. [**76**]{}, 2302 (1996). A. Stalder and U. Dürig, Appl. Phys. Lett. [**68**]{}, 637 (1996); Probe Microscopy (in press). W. A. de Heer, Rev. Mod. Phys. [**65**]{}, 611 (1993); M. Brack, [*ibid.*]{} [**65**]{}, 677 (1993). R. Landauer, IBM J. Res. Dev. [**1**]{}, 223 (1957); Phil. Mag. [**21**]{}, 863 (1970). M. 
Büttiker, in [*Nanostructured Systems*]{}, M. Reed ed., p. 191 (Academic Press, New York, 1992). E. Akkermans, A. Auerbach, J. E. Avron, and B. Shapiro, Phys. Rev. Lett. [**66**]{}, 76 (1991). V. Gasparian, T. Christen, and M. Büttiker, Phys. Rev. A [**54**]{}, 4022 (1996). J. A. Torres, J. I. Pascual, and J. J. Sáenz, Phys. Rev. B [**49**]{}, 16581 (1994). H. Goldstein, [*Classical Mechanics*]{}, pp. 531-540 (Addison Wesley, 1980). L. I. Glazman, G. B. Lesovik, D. E. Khmel’nitskii, and R. I. Shekhter, JETP Lett. [**48**]{}, 238 (1988). M. Brandbyge [*et al.*]{}, Phys. Rev. B [**52**]{}, 8499 (1995). Such a constraint is natural in the jellium model, but it may be more apt to describe elongation than compression in real systems. The force for a given value of the conductance is essentially independent of the asymptotic radius $R$ and of the shape of the constriction \[see Fig. \[fig.fosc\](a)\]. P. A. Lee and T. V. Ramakrishnan, Rev. Mod. Phys. [**57**]{}, 287 (1985). N. Agraït, private communication. S. Das Sarma and Song He, Int. J. Mod. Phys. B [**7**]{}, 3375 (1993). C. A. Stafford (unpublished). U. Landman, W. D. Luedtke, N. A. Burnham, and R. J. Colton, Science [**248**]{}, 454 (1990); U. Landman, W. D. Luedtke, B. E. Salisbury, and R. L. Whetten, Phys. Rev. Lett. [**77**]{}, 1362 (1996). T. N. Todorov and A. P. Sutton, Phys. Rev. Lett. [**70**]{}, 2138 (1993); Phys. Rev. B [**54**]{}, R14235 (1996). Deformation rates of order 1 m/s were used in the simulations of Refs. , some five orders of magnitude faster than in the experiments of Refs. .
--- author: - | Csaba Balázs$^1$, Tianjun Li$^{2,3}$, Fei Wang$^1$ and Jin Min Yang$^2$\ $^1$ School of Physics, Monash University, Melbourne Victoria 3800, Australia\ $^2$ Key Laboratory of Frontiers in Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, P. R. China\ $^3$ George P. and Cynthia W. Mitchell Institute for Fundamental Physics, Texas A$\&$M University, College Station, TX 77843, USA title: '$SU(7)$ Unification of $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$' --- Introduction ============ The standard model (SM) of electroweak interactions, based on the spontaneously broken $SU(2)_L{\times}U(1)_Y$ gauge symmetry, has been extremely successful in describing phenomena below the weak scale. However, the SM leaves some theoretical and aesthetic questions unanswered, two of which are the origin of parity violation and the smallness of neutrino masses. Both of these questions can be addressed in the left-right model based on the $SU(2)_L\times SU(2)_R\tm U(1)_{B-L}$ gauge symmetry [@mohapatra]. The supersymmetric extension of this model [@susylr] is especially intriguing since it automatically preserves R-parity. This can lead to a low energy theory without baryon number violating interactions after R-parity is spontaneously broken. However, in such left-right models parity invariance and the equality of the $SU(2)_L$ and $SU(2)_R$ gauge couplings are ad hoc and have to be imposed by hand. Only in Grand Unified Theories (GUTs) [@su5; @so10] can the equality of the two $SU(2)$ gauge couplings be naturally guaranteed through gauge coupling unification. Novel attempts at the unification of the left-right symmetries have been proposed in the literature, such as the $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ [@nandi; @fei1; @shafi] or $SU(3)_C\tm SU(4)_W$ [@fei2] models. In these attempts, the equality of the left-right gauge couplings and the parity of the left-right model are understood through partial unification.
In this work, we propose to embed the $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ partially unified model into an $SU(7)$ GUT. Unfortunately, the doublet-triplet splitting problem exists in various GUT models. An elegant solution to this is to invoke a higher dimensional space-time and to break the GUT symmetry by boundary conditions such as orbifold projection. Orbifold GUT models for $SU(5)$ were proposed in [@Kawamura:1999nj; @Kawamura:2000ev; @Kawamura:2000ir] and widely studied thereafter in [@at; @Hall:2001pg; @Kobakhidze:2001yk; @Hebecker:2001wq; @Hebecker:2001jb; @Li:2001qs; @Li:2001wz; @fei3; @fei4]. The embedding of the supersymmetric GUT group into the Randall-Sundrum (RS) model [@rs] with a warped extra dimension is especially interesting since it has a four-dimensional (4D) conformal field theory (CFT) interpretation [@nomura2; @nomura3]. By assigning different symmetry breaking boundary conditions to the two fixed points, the five-dimensional (5D) theory is interpreted to be the dual of a 4D technicolor-like theory or a composite gauge symmetry model. It is desirable to introduce supersymmetry in warped space-time [@tonysusy] because we can not only stabilize the gauge hierarchy by supersymmetry but also set the supersymmetry breaking scale by warping. It is well known that supersymmetry (SUSY) can be broken by selecting proper boundary conditions in the higher dimensional theory. For example, 5D ${\cal N}=1$ supersymmetry, which amounts to ${\cal N}=2$ supersymmetry in 4D, can be broken to 4D ${\cal N}=1$ supersymmetry by orbifold projection. Various mechanisms can be used to break the remaining ${\cal N}=1$ supersymmetry. One intriguing possibility is the recently proposed conformal supersymmetry breaking mechanism [@yanagida1; @yanagida2] in vector-like gauge theories, which can be embedded into a semi-direct gauge mediation model. Such a semi-direct gauge mediation model can be very predictive, having only one free parameter.
It is interesting to recast it in a warped extra dimension via the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence [@maldacena], and to use it to break the remaining supersymmetry. This paper is organized as follows. In Section \[sec-1\], as a warm up, we discuss the SUSY $SU(7)$ orbifold GUT model and its symmetry breaking chains in a flat extra dimension. In Section \[sec-2\], we present the SUSY $SU(7)$ GUT model with a warped extra dimension and its 4D CFT dual interpretation. In Section \[sec-3\] we consider gauge coupling unification in the RS background. In Section \[sec-4\] we discuss the AdS/CFT dual of the semi-direct gauge mediation model in the conformal window in vector-like gauge theories. Section \[sec-5\] contains our conclusions. SUSY $SU(7)$ Unification in a Flat Extra Dimension {#sec-1} ================================================== We consider ${\cal M}_4{\tm} S^1/Z_2$, the 5D space-time comprising the Minkowski space ${\cal M}_4$ with coordinates $x_{\mu}$ and the orbifold $S^1/Z_2$ with coordinate $y \eqv x_5$. The orbifold $S^1/Z_2$ is obtained from $S^1$ by modding out by the equivalence relation $$Z_5:\ y \sim -y~.$$ There are two inequivalent 3-branes located at $y=0$ and $y=\pi R$, denoted by $O$ and $O^{\pr}$, respectively. The 5D ${\cal N}=1$ supersymmetric gauge theory has 8 real supercharges, corresponding to ${\cal N}=2$ SUSY in 4D. The vector multiplet contains a vector boson $A_M$ ($M=0, 1, 2, 3, 5$), two Weyl gauginos $\lambda_{1,2}$, and a real scalar $\sigma$. From the 4D ${\cal N}=1$ point of view, it contains a vector multiplet $V(A_{\mu}, \lambda_1)$ and a chiral multiplet $\Sigma((\sigma+iA_5)/\sqrt 2, \lambda_2)$, which transform in the adjoint representation of the gauge group.
The 5D hypermultiplet contains two complex scalars $\phi$ and $\phi^c$ and a Dirac fermion $\Psi$, and can be decomposed into two 4D chiral multiplets $\Phi(\phi, \psi \equiv \Psi_R)$ and $\Phi^c(\phi^c, \psi^c \equiv \Psi_L)$, which are conjugates of each other under gauge transformations. The general action for the gauge fields and their couplings to the bulk hypermultiplet $\Phi$ is [@nima; @nima2] $$\begin{aligned} S&=&\int{d^5x}\frac{1}{k g^2} {\rm Tr}\left[\frac{1}{4}\int{d^2\theta} \left(W^\alpha W_\alpha+{\rm H. C.}\right) \right.\nonumber\\&&\left. +\int{d^4\theta}\left((\sqrt{2}\partial_5+ {\bar \Sigma }) e^{-V}(-\sqrt{2}\partial_5+\Sigma )e^V+ \partial_5 e^{-V}\partial_5 e^V\right)\right] \nonumber\\&& +\int{d^5x} \left[ \int{d^4\theta} \left( {\Phi}^c e^V {\bar \Phi}^c + {\bar \Phi} e^{-V} \Phi \right) \right.\nonumber\\&&\left. + \int{d^2\theta} \left( {\Phi}^c (\partial_5 -{1\over {\sqrt 2}} \Sigma) \Phi + {\rm H. C.} \right)\right]~.~\, \label{VD-Lagrangian}\end{aligned}$$ We introduce the orbifold projections $$Z_5:\ x_5\rightarrow -x_5~, \qquad T_5:\ x_5\rightarrow x_5+2\pi R_5~,$$ and use them to impose the following boundary conditions on the vector and hypermultiplets in terms of the fundamental representation: $$\begin{aligned} V(-x_5)&=&Z_5\, V(x_5)\, Z_5~, \qquad \Sigma(-x_5)=-Z_5\, \Sigma(x_5)\, Z_5~, \nonumber\\ \Phi(-x_5)&=&\eta_\Phi\, Z_5\, \Phi(x_5)~, \qquad \Phi^c(-x_5)=-\eta_\Phi\, Z_5\, \Phi^c(x_5)~, \end{aligned}$$ and $$\begin{aligned} V(x_5+2\pi R_5)&=&T_5\, V(x_5)\, T_5~, \qquad \Sigma(x_5+2\pi R_5)=T_5\, \Sigma(x_5)\, T_5~, \nonumber\\ \Phi(x_5+2\pi R_5)&=&\zeta_\Phi\, T_5\, \Phi(x_5)~, \qquad \Phi^c(x_5+2\pi R_5)=\zeta_\Phi\, T_5\, \Phi^c(x_5)~, \end{aligned}$$ with $\eta_\Phi=\pm1$ and $\zeta_\Phi=\pm1$. The 5D ${\cal N}=1$ supersymmetry, which corresponds to 4D ${\cal N}=2$ SUSY, reduces to 4D ${\cal N}=1$ supersymmetry after the $Z_5$ projection. It is well known that we can have different gauge symmetries at the two fixed points by assigning different boundary conditions. We can rewrite $(Z_5,T_5)$ in terms of $(Z_5,Z_6)$ by introducing the transformation $$Z_6=T_5\,Z_5~,$$ which gives $$Z_6:\ y+\pi R \rightarrow -y+\pi R~.$$
Then the massless zero modes can preserve different gauge symmetries, which are obtained by assigning proper $(Z_5,Z_6)$ boundary conditions at the two fixed points. In our setup, as a warm up, we consider an $SU(7)$ gauge symmetry in the 5D bulk of ${\cal M}_4\tm S^1/Z_2$. This allows the following different symmetry breaking possibilities.

-   Case I: $$Z_5 = I_{4,-3}~, \qquad Z_6 = I_{1,-6}~,$$ where $I_{a,-b}$ denotes the diagonal matrix with the first $a$ entries $1$ and the last $b$ entries $-1$. These boundary conditions break the $SU(7)$ gauge symmetry down to $SU(4)_W\tm SU(3)_C\tm U(1)_{B-L}$ at the fixed point $y=0$, to $SU(6)\tm U(1)\tm U(1)_{X}$ at the fixed point $y=\pi R_5$, and preserve $SU(3)_C\tm SU(3)_L\tm U(1) \tm U(1)_X$ in the low energy 4D theory.

-   Case II: $$Z_5 = I_{4,-3}~, \qquad Z_6 = I_{2,-5}~,$$ which break the gauge symmetry $SU(7)$ to $SU(4)_W\tm SU(3)_C\tm U(1)_{B-L}$ at the fixed point $y=0$, to $SU(5)\tm SU(2)\tm U(1)_{X}$ at the fixed point $y=\pi R_5$, and preserve $SU(3)_C\tm SU(2)_L\tm SU(2)_R\tm U(1)_{B-L}\tm U(1)_X$ in the low energy 4D theory.

-   Case III: $$Z_5 = I_{4,-3}~, \qquad Z_6 = I_{3,-4}~,$$ which break $SU(7)$ to $SU(4)_W\tm SU(3)_C\tm U(1)_{B-L}$ at the fixed point $y=0$, to $SU(3)\tm SU(4)\tm U(1)_{X}$ at the fixed point $y=\pi R_5$, and preserve $SU(3)_C\tm SU(3)_L \tm U(1) \tm U(1)_X$ in the low energy 4D theory.

-   Case IV: $$Z_5 = I_{1,-6}~, \qquad Z_6 = I_{2,-5}~,$$ which break $SU(7)$ to $SU(6)\tm U(1)$ at the fixed point $y=0$ and to $SU(5)\tm SU(2)\tm U(1)_{X}$ at the fixed point $y=\pi R_5$, preserving $SU(5) \tm U(1) \tm U(1)_X$ in the low energy 4D theory.

-   Case V: $$Z_5 = I_{1,-6}~, \qquad Z_6 = I_{4,-3}~,$$ which break $SU(7)$ to $SU(6)\tm U(1)$ at the fixed point $y=0$, to $SU(3)_C\tm SU(4)\tm U(1)_{X}$ at the fixed point $y=\pi R_5$, and preserve $SU(3)_C \tm SU(3)_L \tm U(1)$ in the low energy 4D theory.
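The unbroken symmetry in each case can be cross-checked by counting the $SU(7)$ generators that commute with both parities; the sketch below does this for Cases I and II. This is a group-theoretic consistency check added here for illustration, not part of the original analysis.

```python
# Hedged consistency check: a generator survives the orbifold projection
# iff it commutes with both Z5 and Z6, i.e. iff it is block diagonal with
# respect to the simultaneous (Z5, Z6) eigenspaces.  The commutant of the
# pair inside SU(7) is S(prod_i U(n_i)), of dimension sum_i n_i^2 - 1.
def unbroken_dim(z5, z6):
    blocks = {}
    for s5, s6 in zip(z5, z6):
        blocks[(s5, s6)] = blocks.get((s5, s6), 0) + 1
    return sum(n * n for n in blocks.values()) - 1
```

Case I gives $18 = \dim[SU(3)\tm SU(3)\tm U(1)\tm U(1)]$ and Case II gives $16 = \dim[SU(3)\tm SU(2)\tm SU(2)\tm U(1)\tm U(1)]$, matching the low energy groups quoted above.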
We will not discuss these various symmetry breaking chains in detail; we simply note that several interesting low energy theories can be embedded into a 5D $SU(7)$ gauge theory. To construct a realistic theory, we must also introduce the proper matter content. The simplest possibility for introducing matter in this scenario is to localize it at the fixed-point branes and to fit it into multiplets of the gauge symmetry preserved on the given brane. Bulk fermions which are $SU(7)$ invariant are possible in case $Z_5$ or $Z_6$ is trivial. For the most general boundary conditions, bulk fermions do not always lead to realistic low energy matter content. In our case, the motivation for the $SU(7)$ gauge symmetry is the unification of $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$. Thus, we will discuss in detail this symmetry breaking chain and the matter content of this scenario. The compactification of gauge symmetry in flat and warped extra dimensions shares many common features. Consequently, we will concentrate on the orbifold breaking of $SU(7)$ in a warped extra dimension and discuss its AdS/CFT interpretation. The flat extra dimension results can be obtained by taking the AdS curvature radius to infinity. SUSY $SU(7)$ Unification in Warped Extra Dimension {#sec-2} ================================================== We consider the AdS$_5$ space warped on $S^1/Z_2$ with $SU(7)$ bulk gauge symmetry. The AdS metric can be written as $$ds^2=e^{-2\sigma}\eta_{\mu\nu}dx^{\mu}dx^{\nu}+dy^2~, \label{RS}$$ where $\sigma=k|y|$, $1/k$ is the AdS curvature radius, and $y$ is the coordinate in the extra dimension with the range $0\leq y\leq\pi R$. Here we assume that the warp factor $e^{-k\pi R}$ scales ultra-violet (UV) masses to TeV. As noted in [@susyads0], in AdS space the different fields within the same supersymmetric multiplets acquire different masses.
The action for bulk vector multiplets $(V_M,\la^i,\Sigma)$ and hypermultiplets $(H^i,\Psi)$ can be written as [@susy-ads; @susy1-pomarol-tony; @susy2-pomarol-tony] $$S_5=-\int d^4x\, dy\, \sqrt{-g}\,{\cal L}_5~,$$ with supersymmetry preserving mass terms for vector multiplets $$\begin{aligned} m^2_{\Sigma}&=&-4k^2+2\sigma''~, \nonumber\\ m_{\lambda}&=&\frac{1}{2}\,\sigma'~, \end{aligned}$$ and for hypermultiplets $$\begin{aligned} m^2_{H^{1,2}}&=&\left(c^2\pm c-\frac{15}{4}\right)k^2 +\left(\frac{3}{2}\mp c\right)\sigma''~, \nonumber\\ m_{\Psi}&=&c\,\sigma'~. \end{aligned}$$ We introduce the generic notation $$\begin{aligned} m_{\phi}^2&=& ak^2+b\,\sigma''~, \nonumber\\ m_{\psi}&=&c\,\sigma'~, \end{aligned}$$ for the AdS mass terms of bosons ($\phi$) and fermions ($\psi$), with $$\begin{aligned} \sigma'&=&k\,\epsilon(y)~, \nonumber\\ \sigma''&=&2k\left[\delta(y)-\delta(y-\pi R)\right]~, \end{aligned}$$ where the step function is defined as $\epsilon(y)= +1~(-1)$ for positive (negative) $y$. With this notation, we can parametrize the bulk mass terms for vector multiplets as $$a=-4~, \qquad b=2~, \qquad c=\frac{1}{2}~,$$ and for hypermultiplets as $$a=c^2\pm c-\frac{15}{4}~, \qquad b=\frac{3}{2}\mp c~.$$ The parameter $c$ controls the zero mode wave function profiles [@susy1-pomarol-tony; @tony-les]. When $c>1/2$, the massless modes will be localized towards the $y=0$ (UV) brane. The larger the value of $c$, the stronger the localization. On the other hand, when $c<1/2$, the zero modes will be localized towards the $y=\pi R$ (IR) boundary. Kaluza-Klein (KK) modes localized near the IR brane, according to the AdS/CFT dictionary, correspond dominantly to CFT bound states. The $SU(7)$ gauge symmetry can be broken into $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ by the Higgs mechanism or by boundary conditions. The spontaneous breaking of $SU(7)$ would lead to the doublet-triplet (D-T) splitting problem. Thus, it is advantageous to break the gauge symmetry via boundary conditions, which elegantly eliminates the D-T splitting problem. We choose the following boundary conditions in terms of the $Z_5$ and $Z_6$ parities $$\begin{aligned} Z_5&=&{\rm diag}\left(+1,+1,+1,-1,-1,-1,-1\right)~, \nonumber\\ Z_6&=&{\rm diag}\left(+1,+1,+1,+1,+1,+1,+1\right)~, \end{aligned}$$ which break $SU(7)$ to $SU(3)_C\tm SU(4)_W \tm U(1)_{B-L}$ at the fixed point $y=0$.
The parity assignments of the $SU(7)$ vector supermultiplets in terms of $(Z_5,Z_6)$ are $$\begin{aligned} V^g({\bf 48}) &=&V^{++}_{({\bf 8,1})_0}\oplus V^{++}_{({\bf 1,15})_0}\oplus V^{++}_{({\bf 1,1})_0}\oplus V^{-+}_{({\bf 3,\bar{4}})_{7/3}}\oplus V^{-+}_{({\bf \bar{3},4})_{-7/3}}~, \nonumber\\ \Sigma^g({\bf 48})&=&\Sigma^{--}_{({\bf 8,1})_0}\oplus\Sigma^{--}_{({\bf 1,15})_0}\oplus \Sigma^{--}_{({\bf 1,1})_0}\oplus\Sigma^{+-}_{({\bf 3,\bar{4}})_{7/3}}\oplus\Sigma^{+-}_{({\bf \bar{3},4})_{-7/3}}~, \end{aligned}$$ where the lower indices show the $SU(3)_C\tm SU(4)_W \tm U(1)_{B-L}$ quantum numbers. After KK decomposition, only the ${\cal N}=1$ SUSY $SU(3)_C\tm SU(4)_W \tm U(1)_{B-L}$ components of the vector multiplet $V^g$ have zero modes. Kaluza-Klein modes, which have warped masses of order $M_{UV} e^{-k\pi R}\sim {\rm TeV}$, are localized towards the symmetry preserving IR brane, so they are approximately $SU(7)$ symmetric. It is also possible to break the gauge symmetry on the $y=\pi R$ brane (by interchanging $Z_5$ and $Z_6$), which will have a different 4D CFT dual description in contrast to the previous case. If $SU(7)$ breaks to $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ on the IR brane, the dual description is a technicolor-like theory in which $SU(7)$ is broken to $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ by strong dynamics at the TeV scale. In case the gauge symmetry is broken on the UV brane, the dual description is a theory with $SU(7)$ global symmetry and a weakly interacting $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ gauge group at the UV scale. The IR brane with a spontaneously broken conformal symmetry respects the $SU(7)$ gauge group. In this scenario, the $SU(7)$ gauge symmetry is composite (emergent), which is similar to the rishon model [@seiberg-rishon]. In order to reproduce the correct Weinberg angle $\sin^2\theta_W$, it is in general not advantageous to break the GUT symmetry on the IR brane because, from the AdS/CFT correspondence, the running of the gauge couplings is $SU(7)$ invariant down to the TeV scale.
It is possible to strictly localize matter on the UV brane, which preserves the $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ gauge symmetry. However, in this case we will not get a prediction for the weak mixing angle because we lack the absolute normalization factor of the hypercharge of the SM particles.[^1] Thus it is preferable to place matter in $SU(7)$ multiplets in the 5D bulk, so that it can be approximately localized towards the UV brane by introducing bulk mass terms. In this case, the $U(1)_{B-L}$ charges of matter are quantized according to the $SU(7)$ multiplet assignments, and we could understand the observed electric charge quantization of the Universe. We arrange the quark supermultiplets into the symmetric $SU(7)$ representations ${\bf 28},\overline{\bf 28}$ and the lepton multiplets into the ${\bf 7},\bar{\bf 7}$ representations, which decompose under $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ as $$\begin{aligned} QX({\bf 28})_a&=&({\bf 6,1})_{8/3}\oplus({\bf 3,4})_{1/3}\oplus({\bf 1,10})_{-2}~, \nonumber\\ \overline{QX}({\bf \overline{28}})_a&=&({\bf \bar{6},1})_{-8/3}\oplus({\bf \bar{3},\bar{4}})_{-1/3}\oplus({\bf 1,\overline{10}})_{2}~, \nonumber\\ LX({\bf 7})_a&=&({\bf 3,1})_{4/3}\oplus({\bf 1,4})_{-1}~, \nonumber\\ \overline{LX}({\bf \bar{7}})_a&=&({\bf \bar{3},1})_{-4/3}\oplus({\bf 1,\bar{4}})_{1}~, \end{aligned}$$ with the subscript $a$ being the family index. To obtain zero modes for the chiral quark and lepton multiplets, we assign the following $(Z_5,Z_6)$ parities to them $$\begin{aligned} QX({\bf 28})_a&=&({\bf 6,1})^{-,+}_{8/3}\oplus({\bf 1,10})^{-,+}_{-2}\oplus({\bf 3,4})^{+,+}_{1/3}~, \nonumber\\ \overline{QX}({\bf \overline{28}})_a&=&({\bf \bar{6},1})^{-,+}_{-8/3}\oplus({\bf 1,\overline{10}})^{-,+}_{2}\oplus({\bf \bar{3},\bar{4}})^{+,+}_{-1/3}~, \nonumber\\ LX({\bf 7})_a&=&({\bf 3,1})^{-,+}_{4/3}\oplus({\bf 1,4})^{+,+}_{-1}~, \nonumber\\ \overline{LX}({\bf \bar{7}})_a&=&({\bf \bar{3},1})^{-,+}_{-4/3}\oplus({\bf 1,\bar{4}})^{+,+}_{1}~. \end{aligned}$$ Parity assignments for the conjugate fields $\Phi^c$ are opposite to those for $\Phi$.
Because of the unification of $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ into $SU(7)$, we can determine the normalization of the $U(1)_{B-L}$ charge from the matter sector charge assignments in the fundamental representation of $SU(7)$, $$Q_{B-L}={\rm diag}\left(4/3,\, 4/3,\, 4/3,\,-1,\,-1,\,-1,\,-1\right)~.$$ Then, from the relation of the gauge couplings $$g_{B-L}\, Q_{B-L}=g_7\, T^{B-L}~,$$ we obtain $$g_{B-L}=\sqrt{\f{3}{56}}\, g_7~.$$ Here we normalize the $SU(7)$ generators as $${\rm Tr}\left(T^aT^b\right)=\f{1}{2}\,\delta^{ab}~.$$ Thus the tree-level weak mixing angle is predicted to be $$\sin^2\theta_W=\f{3}{20}=0.15~.$$ In the SUSY $SU(7)$ unification scenario above, with the quark content fitting in the ${\bf 28},{\bf \overline{28}}$ dimensional representations, there is no ordinary proton decay problem related to heavy gauge boson exchanges (D-type operators) or dimension-five operators, as can be seen from the charge assignments of the $SU(7)$ matter multiplets. Contributions from dimension-four operators of the form $\la_{ijk}(QX)_i(\overline{LX})_j(\overline{LX})_k+ \tl{\la}_{ijk}(\overline{QX})_i({LX})_j({LX})_k$ can be forbidden by R-parity. However, it is also possible to fit the quark sector in the ${\bf 21},{\bf \overline{21}}$ dimensional representations of $SU(7)$ instead of the ${\bf 28},{\bf \overline{28}}$ dimensional representations. Then dangerous IR-brane localized dimension-five F-type operators of the form $$W=\f{1}{M}\int d^2\theta \left[ \la_{1ijkl}(QX)_i(QX)_j(QX)_k(LX)_l+ \la_{2ijkl}(\overline{QX})_i(\overline{QX})_j(\overline{QX})_k(\overline{LX})_l\right]$$ can be introduced. Such dimension-five operators can arise from a diagram involving the coupling of matter to the ${\bf 35},\overline{\bf 35}$ Higgs multiplets and the insertion of $\mu$-term like mass terms for such Higgs fields. If matter is localized towards the IR brane, then the suppression scale $M$ is of order TeV, and this results in rapid proton decay. Since the profile of the zero modes for bulk matter with $c\gtrsim 1/2$ is $$\psi_+^{(0)}\sim e^{-\left(c-\f{1}{2}\right)ky}~,$$ we could assign bulk mass terms with $c\gtrsim 1/2$ to matter to suppress the decay rates.
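The tree-level prediction $\sin^2\theta_W = 3/20$ can be verified numerically from the trace formula $\sin^2\theta_W = {\rm Tr}(T_{3L}^2)/{\rm Tr}(Q_{\rm em}^2)$ evaluated on the fundamental ${\bf 7}$. The embedding $Q_{\rm em}=T_{3L}+T_{3R}+(B-L)/2$ and the placement of the $SU(2)_{L,R}$ blocks inside $SU(4)_W$ below are assumptions consistent with the left-right decomposition ${\bf 4}\to({\bf 2},{\bf 1})\oplus({\bf 1},{\bf 2})$.

```python
import numpy as np

# Hedged cross-check of sin^2(theta_W) = Tr(T3L^2)/Tr(Q_em^2) = 3/20 on
# the SU(7) fundamental 7 = (3,1)_{4/3} + (1,4)_{-1}.  The electric
# charge Q_em = T3L + T3R + (B-L)/2 and the SU(2)_{L,R} placement in the
# SU(4)_W quadruplet are assumptions for this illustration.
B_minus_L = np.diag([4/3, 4/3, 4/3, -1, -1, -1, -1])
T3L = np.diag([0, 0, 0, 0.5, -0.5, 0, 0])   # SU(2)_L block of SU(4)_W
T3R = np.diag([0, 0, 0, 0, 0, 0.5, -0.5])   # SU(2)_R block of SU(4)_W
Q_em = T3L + T3R + 0.5 * B_minus_L

sin2_thetaW = np.trace(T3L @ T3L) / np.trace(Q_em @ Q_em)
```

One finds $Q_{\rm em}={\rm diag}(2/3,2/3,2/3,0,-1,0,-1)$ and $\sin^2\theta_W = (1/2)/(10/3) = 3/20$, reproducing the text; the same traces give ${\rm Tr}(Q_{B-L}^2)=28/3$, which fixes the $g_{B-L}$ normalization.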
Then, for an IR brane localized dimension-five operator, we require the wave function suppression $$e^{-\sum_{i}\left(c_i-\f{1}{2}\right)k\pi R} \label{Proton-Bound}$$ to be small enough to satisfy the proton decay bounds [@Harnik:2004yp]. However, such requirements lead to difficulties in obtaining natural Yukawa couplings. So we consider only the case with the quark sector fitting in the ${\bf 28}$ and ${\bf \overline{28}}$ dimensional representations. There are several ways to introduce Yukawa couplings. Orbifold GUTs are well known to solve the D-T splitting problem by assigning appropriate boundary conditions to the bulk Higgs fields. Thus, it is also possible to introduce bulk Higgs fields in our scenario. Since $$\begin{aligned} {\bf 7}\otimes\bar{\bf 7}&=&{\bf 1}\oplus{\bf 48}~, \nonumber\\ {\bf 7}\otimes{\bf 7}&=&{\bf 21}\oplus{\bf 28}~, \nonumber\\ {\bf 28}\otimes\overline{\bf 28}&=&{\bf 1}\oplus{\bf 48}\oplus\cdots~, \end{aligned}$$ we can introduce bulk Higgses $\Sigma,\tl{\Sigma}$ in the $SU(7)$ adjoint representation ${\bf 48}$, $\Delta_1,\Delta_2$ in the $SU(7)$ symmetric representations ${\bf 28},\overline{\bf 28}$, and an $SU(7)$ singlet Higgs $S$ to construct $SU(7)$ gauge invariant Yukawa couplings. We impose the following boundary conditions on the bulk Higgs fields $$\begin{aligned} \Sigma,\tl{\Sigma}({\bf 48}) &=&({\bf 8,1})^{-,+}_0\oplus({\bf 1,15})^{+,+}_0\oplus ({\bf 1,1})^{-,+}_0\oplus({\bf 3,\bar{4}})^{-,+}_{7/3}\oplus({\bf \bar{3},4})^{-,+}_{-7/3}~, \nonumber\\ \Delta_1({\bf 28})&=&({\bf 6,1})^{-,+}_{8/3}\oplus({\bf 1,10})^{+,+}_{-2}\oplus({\bf 3,4})^{-,+}_{1/3}~, \nonumber\\ \Delta_2(\overline{\bf 28})&=&({\bf \bar{6},1})^{-,+}_{-8/3}\oplus({\bf 1,\overline{10}})^{+,+}_{2}\oplus({\bf \bar{3},\bar{4}})^{-,+}_{-1/3}~, \nonumber\\ S({\bf 1})&=&({\bf 1,1})^{+,+}_0~. \end{aligned}$$ In the orbifold projection above, we choose the most general boundary conditions [@csaki1; @csaki2; @csaki3] to eliminate unwanted zero modes. The same results can be obtained from naive orbifolding by introducing the relevant heavy brane mass terms (on the UV brane) to change the Neumann boundary conditions into Dirichlet ones. The surviving zero modes then give the Higgs content required in a 4D SUSY $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ theory[^2].
Bulk Yukawa couplings can be introduced as
$$\begin{aligned}
S_{5D}&=&\int d^4x\,dy\,\sqrt{-G}\sum_{i,j=1,2,3}\left[\tl{y}_{1ij}^{QX}(QX)^i\Sigma(\overline{QX})^j+\tl{y}_{1ij}^{LX}(LX)^i\Sigma(\overline{LX})^j\right.\nn\\
&&+\tl{y}_{2ij}^{QX}(QX)^iS(\overline{QX})^j+\tl{y}_{2ij}^{LX}(LX)^iS(\overline{LX})^j+\tl{y}_{3ij}^{LX}(\overline{LX})^i\Delta_1(\overline{LX})^j\nn\\
&&\left.+\tl{y}_{4ij}^{LX}({LX})^i\Delta_2({LX})^j\right].\end{aligned}$$
Then at low energies, after the heavy KK modes are projected out, the effective 4D Yukawa couplings are
$$\begin{aligned}
S_{4D}=\int d^4x &&\sum_{i,j=1,2,3}\left[y_{1ij}^QQ_L^i\Sigma_1(Q_L^c)^j+y_{1ij}^LL_L^i\Sigma_1(L_L^c)^j +y_{2ij}^QQ_L^iS(Q_L^c)^j\right.\nn\\&&+\left.y_{2ij}^LL_L^iS(L_L^c)^j+y_{ij}^{N^c}(L_L^c)^i\Delta (L_L^c)^j+y_{ij}^{N}(L_L)^i\overline\Delta (L_L)^j\right].\end{aligned}$$
Here we denote the $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ multiplet ${\bf(3,4)_{1/3}}$ by $Q_L$, the ${\bf (\bar{3},\bar{4})_{-1/3}}$ by $Q_L^c$, the ${\bf(1,4)_{-1}}$ by $L_L$, the ${\bf (1,\bar{4})_{1}}$ by $L_L^c$, the $({\bf 1,15})_0$ by $\Sigma_1$, the $({\bf 1,10})_2$ by $\Delta$, the $({\bf 1,\overline{10}})_{-2}$ by $\overline{\Delta}$, and the ${\bf (1,1)_0}$ by $S$. As indicated in Ref. [@fei1], such Yukawa interactions are necessary in the 4D theory to give acceptable low-energy spectra. The SM fermion mass and mixing hierarchies, which are related to the coefficients of the 4D Yukawa couplings, can be understood from the overlaps of the wave function profiles [@susy1-pomarol-tony; @AJ]. The profiles of the bulk Higgs fields can also be determined from their bulk mass terms. In our scenario, the bulk Higgses other than the ${\bf 35}$ multiplet are not responsible for proton decay and can be localized anywhere (for instance towards the IR brane, to generate a large enough hierarchy). For simplicity, we can set the zero modes of the bulk Higgs profiles to be flat, with the mass terms $c_\Sigma=c_{\tl{\Sigma}}=c_S=c_{\Delta_i}=c_{H}=1/2$.
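The profile-overlap mechanism for hierarchies can be made concrete numerically. In the sketch below, the exponential form $y\sim (e^{-k\pi R})^{\,c_X+c_{\overline{X}}+c_H-3/2}$ and the sample offsets are illustrative assumptions; with $e^{-k\pi R}\simeq 10^{-16}$, each small shift of the bulk masses away from $1/2$ converts into a definite power of $10^{-16}$:

```python
eps = 1e-16  # warp factor e^{-k pi R}

def yukawa_suppression(c_X, c_Xbar, c_H=0.5):
    """Leading exponential overlap factor (eps)^(c_X + c_Xbar + c_H - 3/2)."""
    return eps ** (c_X + c_Xbar + c_H - 1.5)

# third generation: all c's at 1/2 gives an O(1) Yukawa
y3 = yukawa_suppression(0.5, 0.5)

# hypothetical first-generation offsets c = 1/2 + 1/16 give a 10^{-2} suppression
y1 = yukawa_suppression(0.5 + 1/16, 0.5 + 1/16)
print(y3, y1)
```

This is the Froggatt-Nielsen-like bookkeeping: mass ratios between generations come out as integer or fractional powers of the single small number $e^{-k\pi R}$.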
The low-energy Yukawa coupling coefficients appearing in the previous expressions are of order
$$y_{ij}\sim 4\pi\,\tl{y}^{QX,LX}_{ij}\sqrt{k}\,\prod_i\sqrt{\f{1-2c_i}{e^{-2(c_i-\f{1}{2})k\pi R}-1}}\; e^{-\left(c_{Xi}+c_{\overline{X}i}+c_{H}-\f{3}{2}\right)k\pi R},$$
which can generate the required mass hierarchy and CKM mixing via the Froggatt-Nielsen mechanism [@FN]. We can, for example, choose
$$\begin{aligned}
c_1^{QX}&=& c_1^{\overline{QX}}=\f{1}{2}+\cdots,\qquad c_1^{LX}= c_1^{\overline{LX}}=\f{1}{2}+\cdots,\\
c_2^{QX}&=& c_2^{\overline{QX}}=\f{1}{2}+\cdots,\qquad c_2^{LX}= c_2^{\overline{LX}}=\f{1}{2}+\cdots,\\
c_3^{QX}&=& c_3^{\overline{QX}}=\f{1}{2},\qquad\qquad c_3^{LX}= c_3^{\overline{LX}}=\f{1}{2},\end{aligned}$$
where the ellipses denote small positive offsets chosen to reproduce the observed hierarchy. Here we use the facts that $e^{-k\pi R} \simeq {\rm TeV}/k\simeq {\cal O}(10^{-16})$ and $\tl{y}^{QX,LX}_{ij}\sqrt{k}\sim {\cal O}(1)$. Moreover, it is obvious from the charge assignments in the matter sector that there are no unwanted mass relations in our scenario, such as $m_{\mu}:m_{e}=m_s:m_d$, of the kind that appear in $SU(5)$ GUTs.

Gauge Coupling Unification in SUSY SU(7) Unification {#sec-3}
====================================================

The Lagrangian relevant for the low-energy gauge interactions has the following form:
$$\label{gauge-beta} S=\int d^4x \int_{0}^{\pi R}dy\left[-\f{1}{4g_5^2}F^{aMN}F_{MN}^a-\delta(y)\f{1}{4g_{0}^2}F_{\mu\nu}^aF^{a\mu\nu}-\delta(y-\pi R)\f{1}{4g_{\pi}^2}F_{\mu\nu}^aF^{a\mu\nu}\right],$$
where $g_5$ is the dimensionful gauge coupling in the 5D bulk, and $g_{0}$ and $g_{\pi}$ are the gauge couplings on the $y=0$ and $y=\pi R$ branes, respectively. The brane kinetic terms are necessary counterterms for the loop corrections of the gauge field propagator. In AdS space there are several tree-level mass scales, which are related as
$$M_{KK}\lesssim k\lesssim M_*.$$
Thus, the 4D tree-level gauge couplings can be written as [@nomura2]
$$\f{1}{g_a^2(\mu)}=\f{\pi R}{g_5^2}+\f{1}{g_0^2}+\f{1}{g_\pi^2}+\tl{\Delta}(\mu,Q),$$
where the first three terms contain the tree-level gauge couplings, and $\tl{\Delta}(\mu,Q)$ represents the one-loop corrections.
The explicit dependence on the subtraction scale cancels that of the running boundary couplings in such a way that the quantity $g^2_a(\mu)$ is independent of the renormalization scale. We assume that the bulk and brane gauge groups become strongly coupled at the 5D Planck scale $M_{5D}=\Lambda$ with ()(e\^[-k R]{}) ,      ()(1) . The GUT breaking effects at the fixed points are very small compared to bulk GUT symmetry preserving effects. Thus, we can split the contributions to the gauge couplings into symmetry preserving and symmetry breaking pieces ()&=&()+()+(e\^[-k R]{})+$$\tl{\Delta}(\mu,\Lambda)+b_0^a\ln\f{\Lambda}{\mu}+b_{\pi }^a\ln\f{\Lambda e^{-k \pi R}}{\mu}$$ ,\ &&([SU(7)  symmetric]{})+$$\tl{\Delta}(\mu,\Lambda)+b_0^a\ln\f{\Lambda}{\mu}+b_{\pi }^a\ln\f{\Lambda e^{-k \pi R}}{\mu}$$ ,\ &&([SU(7)  symmetric]{})+(,) . The general expression for $\Delta(\mu,\Lambda)$ was calculated in [@CKS]. The contributions from the vector multiplets are ( \_[U(1)\_[B-L]{}]{}\ \_[SU(3)\_C]{}\ \_[SU(4)\_W]{})\_V=([SU(7)  symmetric]{})+$\bea{c}0\\-9\\-12\\\eea$() ,with &&T(V\_[++]{})=( 0, 3, 4) \ &&T(V\_[-+]{})=( 7, 4, 3) , for $U(1)_{B-L}$, $SU(3)_C$, and $SU(4)_W$, respectively. Here we normalize the $U(1)_{B-L}$ gauge coupling according to $g_{B-L}^2=3g_7^2/14$ . We can use the facts $c^{QX}_i,c^{LX}_i\geq1/2$ to simplify the matter contributions to \_M(,k)&=&T(H\_[++]{})$$\ln\left(\f{k}{\mu}\right)-c_H\ln\left(\f{k}{T}\right)$$+c\_HT(H\_[+-]{})()\ &-&c\_HT(H\_[-+]{})()+T(H\_[–]{})$$\ln\left(\f{k}{\mu}\right)-(1+c_H)\ln\left(\f{k}{T}\right)$$ . Thus the contributions from the bulk matter hypermultiplets are ( \_[U(1)\_[B-L]{}]{}\ \_[SU(3)\_C]{}\ \_[SU(4)\_W]{})\_H=([SU(7)  symmetric]{})+$\bea{c}\f{12}{7}\\12\\12\\\eea$()  ,with &&T(H\_[++]{})|\_[H+H\^c]{}\^m=( , 12, 12) ,\ &&T(H\_[-+]{})|\_[H+H\^c]{}\^m=(, 18, 18) . 
The contributions from the bulk Higgs hypermultiplets include two ${\bf 48}$ dimensional representations, the ${\bf 28}$ and $\overline{\bf 28}$ dimensional representations, and possibly one singlet.[^3] The contributions from the bulk Higgs hypermultiplets are
$$\left(\begin{array}{c} \Delta_{U(1)_{B-L}}\\ \Delta_{SU(3)_C}\\ \Delta_{SU(4)_W}\end{array}\right)_M=({SU(7)~{\rm symmetric}})+\left(\begin{array}{c}\f{30}{7}\\~0\\~14\end{array}\right)\ln\left(\f{k}{\mu}\right),$$
with
$$\begin{aligned}
&&T(H_{++})|_{H+H^c}^h=\left(\f{30}{7},~0,~14\right),\\
&&T(H_{-+})|_{H+H^c}^h=\left(~\cdot~,~23,~9\right).\end{aligned}$$
Thus, the total contribution to the RGE running of the three gauge couplings is
$$\f{1}{g_a^2(\mu)}=({SU(7)~{\rm symmetric}})+\Delta_a,$$
with
$$\left(\begin{array}{c} \Delta_{U(1)_{B-L}}\\ \Delta_{SU(3)_C}\\ \Delta_{SU(4)_W}\end{array}\right)=({SU(7)~{\rm symmetric}})+\left(\begin{array}{c}~6\\~3\\~14\end{array}\right)\ln\left(\f{k}{\mu}\right).$$
We summarize the supermultiplets in the SUSY SU(7) GUT model that contribute to the running of the three gauge couplings above $M_{\tl U}$ as follows:

- Gauge: $V^g(\bf 48),\Sigma^g({\bf 48})$.

- Matter: $QX_a({\bf 28}),\overline{QX}_a({\bf \overline{28}}),LX_a({\bf 7}),\overline{LX}_a({\bf \bar{7}})$  ($a=1,2,3$).

- Higgs: $\Sigma({\bf 48}),\tl{\Sigma}({\bf 48}),\Delta_1({\bf 28}),\Delta_2(\overline{\bf 28})$.

We can also consider the following symmetry-breaking chain for the partial unification group $SU(4)_W\tm U(1)_{B-L}$:
$$SU(4)_W\tm U(1)_{B-L}\to SU(2)_L\tm SU(2)_R\tm U(1)_Z\tm U(1)_{B-L}\to SU(2)_L\tm U(1)_Y.$$
Detailed discussions of this symmetry-breaking chain can be found in our previous work [@fei1]. Assuming that the left-right scale, which is typically the $SU(2)_R$ gauge boson mass scale $M_R$, is higher than that of the soft SUSY mass parameters $M_S$, the RG running of the gauge couplings below the $SU(4)_W\tm U(1)_{B-L}$ partial unification scale $M_{\tl{U}}$ is calculated as follows.

- For $M_Z<E<M_S$, the $U(1)_Y$, $SU(2)_L$, and $SU(3)_C$ beta functions are given by the two-Higgs-doublet extension of the SM:
$$(b_1,b_2,b_3)=\left(~7,-3,-7\right).$$

- For $M_S<E<M_R$, the $U(1)_Y$, $SU(2)_L$, and $SU(3)_C$ beta functions are given by
$$(b_1,b_2,b_3)=\left(12,~2,-3\right).$$
- For $M_R<E<M_{\tl{U}}$, the ${\sqrt{2}}\,U(1)_Z$, $\sqrt{\f{14}{3}}\,U(1)_{B-L}$, $SU(2)_L=SU(2)_R$, and $SU(3)_C$ beta functions are given by
$$(b_0,b_1,b_2,b_3)=\left(22,~6,~16,~3\right).$$

In our calculation the mirror fermions are fitted into $SU(4)_W$ multiplets and acquire masses of order $M_R$. The $\tl{\Sigma}({\bf 15})$ Higgs fields decouple at scales below $M_{\tl{U}}$ [@fei1]. We can calculate the $SU(7)$ unification scale once we know the $SU(4)_W\tm U(1)_{B-L}$ partial unification scale $M_{\tl{U}}$, which can be determined from the coupling of $U(1)_Z$ at $M_R$. Here we simply treat $M_{\tl{U}}$ as a free parameter. At the weak scale our inputs are [@PDG]
$$\begin{aligned}
M_Z&=&91.1876\pm0.0021~{\rm GeV},\\
\sin^2\theta_W(M_Z)&=&0.2312\pm0.0002,\\
\alpha^{-1}_{em}(M_Z)&=&127.906\pm0.019,\\
\alpha_3(M_Z)&=&0.1187\pm0.0020,\end{aligned}$$
which fix the numerical values of the standard $U(1)_Y$ and $SU(2)_L$ couplings at the weak scale:
$$\begin{aligned}
\alpha_1(M_Z)&=&\f{\alpha_{em}(M_Z)}{\cos^2\theta_W}=(98.3341)^{-1},\\
\alpha_2(M_Z)&=&\f{\alpha_{em}(M_Z)}{\sin^2\theta_W}=(29.5718)^{-1}.\end{aligned}$$
The RGE running of the gauge couplings reads
$$\f{dg_i}{d\ln E}=\f{b_i}{16\pi^2}\,g_i^3,$$
where $E$ is the energy scale and the $b_i$ are the beta function coefficients. Our numerical results (see Fig. 1) show that successful unification of the three gauge couplings is only possible for small $M_R\lesssim 500$ GeV and relatively high $M_{\tl{U}}$. For example, if we choose $M_S=200$ GeV, $M_R=400$ GeV and $M_{\tl U}=2.0\tm 10^6$ GeV, we obtain successful $SU(7)$ unification at
$$M_U=9.0\tm10^6~{\rm GeV},$$
with unified coupling $\alpha_U^{-1}\simeq 4.65$. Such a low $M_R$ may be disfavored by electroweak [@MRbounds1] and flavor precision bounds [@MRbounds2]. In general, with additional matter and Higgs content (for example, additional bulk ${\bf 7},\overline{\bf 7}$ messenger fields), the low-$M_R$ requirement for gauge coupling unification can be relaxed. Moreover, the symmetry breaking of $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ leads to a non-minimal left-right model [@fei1], so that a relatively low $M_R$ can be consistent with flavor precision bounds [@MRbounds2].
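The weak-scale inputs above determine $\alpha_1^{-1}(M_Z)$ and $\alpha_2^{-1}(M_Z)$, and the one-loop running is linear in $\ln E$. A minimal numerical sketch (the beta coefficients and thresholds follow the lists above; the helper function assumes the standard sign convention, in which $\alpha^{-1}$ grows with energy for negative $b$):

```python
import math

# weak-scale inputs
MZ, sin2W, aem_inv, a3 = 91.1876, 0.2312, 127.906, 0.1187

a1_inv = aem_inv * (1 - sin2W)   # alpha_1^{-1}(M_Z), about 98.33
a2_inv = aem_inv * sin2W         # alpha_2^{-1}(M_Z), about 29.57
a3_inv = 1 / a3

def run(alpha_inv, b, E0, E1):
    """One-loop running: alpha^{-1}(E1) = alpha^{-1}(E0) - b/(2 pi) ln(E1/E0)."""
    return alpha_inv - b / (2 * math.pi) * math.log(E1 / E0)

# e.g. alpha_3 from M_Z up to M_S = 200 GeV with b_3 = -7 (two-Higgs-doublet SM regime)
a3_inv_MS = run(a3_inv, -7, MZ, 200.0)
print(round(a1_inv, 4), round(a2_inv, 4), round(a3_inv_MS, 3))
```

Chaining `run` across the successive thresholds $M_S$, $M_R$, $M_{\tl U}$ with the piecewise beta coefficients reproduces the kind of running shown in Fig. 1.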
The choice of $M_{\tl U}=2.0\tm 10^6$ GeV generates a hierarchy between the weak scale and the partial unification scale. Lacking knowledge of the $U(1)_Z$ gauge coupling strength above the SUSY left-right scale, the mild hierarchy between the partial unification scale $M_{\tl U}$ and the SUSY left-right scale can be seen as a consequence of the logarithmic running of the various gauge couplings. It follows from the AdS/CFT correspondence that the $SU(7)$ unification in the RS model is also a successful 4D unification. Our $SU(7)$ model is vector-like and thus anomaly free. The 5D theory in the bulk is also anomaly free, because the theories on the UV brane (a $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ theory [@fei1]) and on the IR brane are non-anomalous [@nima3].

![\[spectrum\] One-loop relative running of the three gauge couplings in the SUSY SU(7) GUT model. Here the $SU(4)_W$ gauge coupling (above $M_{\tl U}$) is identified with the $SU(2)_L$ gauge coupling. The $U(1)_{B-L}$ gauge coupling strength at the left-right scale $M_R$ is determined by the $U(1)_Y$ and $SU(2)_L$ gauge couplings. Due to the discontinuity between the $U(1)_{B-L}$ and $U(1)_Y$ gauge couplings at $M_R$, we do not show the $U(1)_Y$ running below $M_R$ in this figure.](fig1.eps){width="5in"}

Supersymmetry Breaking and Semi-direct Gauge Mediation {#sec-4}
======================================================

The orbifold projection reduces the 5D ${\cal N}=1$ supersymmetry, which amounts to 4D ${\cal N}=2$ SUSY, to 4D ${\cal N}=1$ supersymmetry. We need to break the remaining ${\cal N}=1$ supersymmetry to reproduce the SM matter and gauge content. One interesting possibility is to use the predictive conformal supersymmetry breaking proposed for vector-like gauge theories [@yanagida1; @yanagida2]. Conformal supersymmetry breaking in a vector-like theory can be embedded into a semi-direct gauge mediation model [@semi-direct] by identifying a subgroup of the flavor group with the unifying group of the SM.
Supersymmetry Breaking in the Conformal Window
----------------------------------------------

The setup of conformal supersymmetry breaking in a vector-like theory involves an ${\cal N}=1$ $SU(N_c)$ gauge theory with $N_Q<N_c$ quarks $Q_i,\tl{Q}_i~(i=1,\cdots,N_Q)$ in the fundamental and anti-fundamental representations, and $N_Q\tm N_Q$ gauge singlets $S_i^j$. Messenger fields $P_a,\tl{P}_a~(a=1,\cdots,N_P)$ with mass $m$ are also introduced to promote the model to a superconformal theory. The total number of flavors satisfies $3N_c/2< N_Q+N_P <3N_c$. The superpotential reads
$$W=\la\,{\rm Tr}(S Q \tl{Q})+m P\tl{P},$$
with ${\rm Tr}(S Q \tl{Q})= S_i^j Q^i \tl{Q}_j$. When the mass parameter $m$ can be neglected, the theory has an infrared fixed point. When the $S_i^j$ develop vacuum expectation values (VEVs), $Q_i$ and $\tl{Q}_i$ can be integrated out. Because $N_Q<N_c$, the theory has a runaway vacuum when all quark fields are integrated out. Such a runaway vacuum can be stabilized by quantum corrections to the Kähler potential, which leads to dynamical supersymmetry breaking. The conformal gauge mediation model is especially predictive because $m$ is its only free parameter.

The AdS/CFT Dual of Seiberg Duality in the Conformal Region and Semi-Direct Gauge Mediation
-------------------------------------------------------------------------------------------

The AdS/CFT correspondence [@maldacena] indicates that the compactification of Type IIB string theory on $AdS_5\tm S^5$ is dual to ${\cal N}=4$ super Yang-Mills theory. The duality implies a relation between the AdS radius $R$ and $g_{YM}^2N=g_sN$, namely $R^4=4\pi g_s N l_s^4$ in string units $l_s$. The source of an operator on the CFT side corresponds to the boundary value of a bulk field on the gravity side. The generating function of the conformal theory is identified with the gravitational action in terms of the boundary value $\phi_0$:
$$\left\langle\exp\left(-\int d^4x\, \phi_0\,{\cal O}\right)\right\rangle_{CFT}=\exp\left(-S_{grav}[\phi_0]\right).$$
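The flavor-counting conditions above are simple inequalities; a quick sketch with the example values $N_c=4$, $N_Q=3$, $N_P=5$ that appear later in the text:

```python
def conformal_window_ok(Nc, NQ, NP):
    """Check 3 Nc / 2 < NQ + NP < 3 Nc (the superconformal window of SQCD)."""
    NF = NQ + NP
    return 1.5 * Nc < NF < 3 * Nc

def runaway_after_integrating_out(Nc, NQ):
    """NQ < Nc: once the quarks are massive, the dynamical superpotential runs away."""
    return NQ < Nc

Nc, NQ, NP = 4, 3, 5
print(conformal_window_ok(Nc, NQ, NP), runaway_after_integrating_out(Nc, NQ))
```

Both conditions hold for this choice: the theory with the messengers included sits in the conformal window, while with only $N_Q<N_c$ light quarks it develops the runaway that quantum Kähler corrections must stabilize.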
The AdS/CFT correspondence can be extended to the statement that any 5D gravitational theory on $AdS_5$ is holographically dual to some strongly coupled, possibly large-$N$, 4D CFT [@nima-ads; @rattazzi]. The metric of a slice of $AdS_5$ can be written as
$$ds^2=\left(\f{L}{z}\right)^2\left(\eta_{\mu\nu}dx^\mu dx^\nu +dz^2\right),$$
which is related to the RS metric [@rs] by
$$z=Le^{y/L},$$
with $L=1/k$ the AdS radius. According to the AdS/CFT dictionary [@tony-les; @perez; @pomarol-fermion], the RG scale $\mu$ is related to the fifth coordinate by $\mu=1/z$. We introduce the bulk gauge symmetry $SU(N_F)\tm SU(N_P)\tm SU(N_Q)\tm U(1)_R$, with $SU(N_P)\tm SU(N_Q)\tm U(1)_R$ the global symmetry of the 4D theory. The gauge symmetry $SU(N_F)$ is broken to $SU(N_c)\tm SU(N_F-N_c)$ at the boundary. The matter content consists of $N_F$ chiral multiplets in the fundamental and $N_F$ chiral multiplets in the anti-fundamental representation of $SU(N_F)$. We require the boundary conditions to yield $N_F$ chiral multiplets in both the fundamental and anti-fundamental representations of $SU(N_c)$ at the UV brane, and $N_F$ chiral multiplets in both the fundamental and anti-fundamental representations of $SU(N_F-N_c)$ at the IR brane. These boundary conditions can be realized by choosing the projection matrices at $y=0$ and $y=\pi R$ as
$$P_1(y=0)=P_2(y=\pi R)={\rm diag}\left(\mathbb{1}_{N_c},-\mathbb{1}_{N_F-N_c}\right).$$
Thus, in terms of $SU(N_c)\tm SU(N_F-N_c)$ quantum numbers, the field parities and projections are
$$\begin{aligned}
Q({N_F})&=&(N_c,1)_{++}\oplus(1,N_F-N_c)_{--},\\
Q^c(\overline{N_F})&=&(\overline{N_c},1)_{--}\oplus(1,\overline{N_F-N_c})_{++},\\
P(N_F)&=&(N_c,1)_{++}\oplus(1,N_F-N_c)_{--},\\
P^c(\overline{N_F})&=&(\overline{N_c},1)_{--}\oplus(1,\overline{N_F-N_c})_{++},\\
S(1)&=&(1,1)_{++},\qquad S^c(1,1)=(1,1)_{--}.\end{aligned}$$
For $Q(N_F)$ and $P(N_F)$ with $c \gg 1/2$, the $(N_c,1)$ multiplets (denoted by $Q,P$) are fully localized towards the UV brane, while the $(1,\overline{N_F-N_c})$ multiplets (denoted by ${q},{p}$, corresponding to $Q^c,P^c$ respectively) are strictly localized towards the IR brane.
The bulk zero modes localized towards the UV brane correspond to elementary fields. So, in the conformal supersymmetry breaking setting, we have the fundamental fields $Q_i,\tl{Q}_i,P,\tl{P}$, and we can introduce their interactions on the UV brane:
$$W|_{UV}=\la\,{\rm Tr}(SQ\tl{Q})+mP\tl{P}.$$
The presence of the additional gauge symmetry $SU(N_F)$ is required by the anomaly matching of $SU(N_c)$ and $SU(N_F-N_c)$ in the Seiberg duality. Anomaly matching in the Seiberg duality is equivalent to anomaly inflow from the Chern-Simons terms of the 5D bulk, which give opposite contributions on the two boundaries [@AB]. According to the setup of the conformal supersymmetry breaking scenario, we require the theory to enter a superconformal region when the masses of $P$ and $\tl{P}$ can be neglected. To ensure that the theory is superconformal in a certain energy interval, and to be predictive, we need to determine the exact gauge beta functions. In the 5D picture, we can determine the beta functions by calculating the variation of the gauge couplings with respect to the fifth-dimensional coordinate. The gauge couplings are obtained by calculating the correlation functions of the conserved currents. Then, from the 5D gauge coupling running [@CKS], we can obtain the dependence on the fifth dimension by replacing $k\pi R$ with $\ln(z/L)=- \ln(\mu L)$. In this way, we obtain the following leading contributions:
$$\begin{aligned}
b_a&=&\left[\f{3}{2}T_a(V_{++})+\f{3}{2}T_a(V_{+-})-\f{3}{2}T_a(V_{-+})-\f{3}{2}T_a(V_{--})\right]\nn\\
&&-\left[(1-c_H)T_a(H_{++})+c_HT_a(H_{+-})-c_HT_a(H_{-+})+(1+c_H)T_a(H_{--})\right].\end{aligned}$$
To determine the bulk couplings, we consider the $SU(N_F)$ gauge symmetry on the UV brane, on the IR brane, and in the bulk. Then, by matching the beta function in the dual description,
$$b=-3\,T({\rm adj})=-3N_F,$$
we obtain the bulk gauge coupling contribution, which is proportional to $N_F$.
In our case, with a bulk gauge group $SU(N_F)$ and a gauge group $SU(N_c)$ on the UV and IR branes, for the $SU(N_c)$ gauge couplings we have
$$T(V_{++})=N_c,\qquad T(V_{--})=N_F-N_c,\qquad T_a(H_{++})=\f{1}{2}.$$
The leading contributions are[^4]
$$b_a=-3N_c+(1-C_P)N_P+(1-C_Q)N_Q.$$
The sub-leading contributions to the gauge couplings depend on $\ln\ln\mu$ and correct the beta functions by a term proportional to $-T_a(V_{++})$. This expression is valid at the two-loop level, as can be seen by comparing with the NSVZ formula [@NSVZ]
$$\mu\f{d}{d\mu}\f{8\pi^2}{g^2}=\f{3N_c-N_Q(1-\gamma_Q)-N_P(1-\gamma_P)}{1-\f{g^2N_c}{8\pi^2}},$$
upon identifying
$$\gamma_P=C_P,\qquad \gamma_Q=C_Q.$$
Via the AdS/CFT correspondence, the bulk mass is related to the conformal dimension of the operator ${\cal O}$ that couples to $p$-forms [@witten]:
$$(\Delta+p)(\Delta+p-4)=m^2.$$
Since the anomalous dimensions $\gamma_P$ and $\gamma_Q$ are determined by the superconformal invariance of the boundary theory, we can obtain the bulk mass terms for the $P$ and $Q$ hypermultiplets. We can obtain the scaling dimension of the 4D superconformal theory via the R-symmetry charge assignments:
$$\Delta=\f{3}{2}R_{sc}.$$
The $U(1)_R$ symmetry of the superconformal theory on the UV brane is determined by the $a$-maximization technique [@amax], with $a$ defined via the 't Hooft anomalies of the superconformal R-charge,
$$a=\f{3}{32}\left(3\,{\rm Tr}\, R^3-{\rm Tr}\, R\right),$$
and the R-charge being a combination of an arbitrarily chosen R-charge $R_0$ and the other $U(1)$ charges:
$$R=R_0+\sum_i c_iQ_i.$$
This value is the same as the one obtained in [@CGM-09051764]. For example, with $N_c=4$, $N_Q=3$, $N_P=5$, it gives [@CGM-09051764]
$$\Delta_S=1.48,\qquad \Delta_Q=0.765.$$
In the 4D picture, the RG fixed point requires $\gamma_S^*+2\gamma_Q^*=0$ because of the superconformal nature of the theory. From the AdS/CFT point of view, the spontaneous breaking of the CFT originates from the IR brane. In the limit $c_Q \gg 1/2$ ($c_{\tl{Q}} \ll -1/2$), the $q$ and $\tl{q}$ fields are localized on the IR brane, which means that they are composites of the strongly interacting CFT.
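Assuming the quoted values $1.48$ and $0.765$ are the scaling dimensions $\Delta_S$ and $\Delta_Q$, they can be checked against the marginality of the superpotential ${\rm Tr}(SQ\tl{Q})$, which requires $\Delta_S+2\Delta_Q=3$ (equivalently $\gamma_S^*+2\gamma_Q^*=0$, with $\Delta=1+\gamma$ for these chiral superfields), and against $\Delta=\f{3}{2}R$:

```python
Delta_S, Delta_Q = 1.48, 0.765

# marginality of the superpotential: the dimensions of S Q Qtilde must sum to 3
total = Delta_S + 2 * Delta_Q

# anomalous dimensions, gamma = Delta - 1 for a chiral superfield (free value Delta = 1)
gamma_S, gamma_Q = Delta_S - 1, Delta_Q - 1
fixed_point = gamma_S + 2 * gamma_Q

# superconformal R-charges via Delta = (3/2) R; the superpotential must carry R = 2
R_S, R_Q = 2 * Delta_S / 3, 2 * Delta_Q / 3
print(total, fixed_point, R_S + 2 * R_Q)
```

Within the rounding of the quoted values, all three consistency conditions hold.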
The UV brane interaction can be promoted to a bulk Yukawa coupling between the bulk hypermultiplets $S$ and $Q,\tl{Q}$,
$$S=\int d^4x\, dy\, d^2\theta\,\tl{\la}_b\, S Q\tl{Q},$$
which, after projection, gives the IR-brane coupling
$$S=\int d^4x\, dy\, d^2\theta\,\delta(y-\pi R)\,\tl{\la}\, S \tl{q}q.$$
Thus, we can anticipate interactions of the form $\tl{\la} S \tl{q}q$ on the IR brane. If $q$ and $\tl{q}$ are not strongly localized, they are mixtures of composite and elementary particles. The coupling of $S$ to $q,\tl{q}$ will also lead to a coupling between $S$ and CFT operators ${\cal O}$ at the boundary. This can also be seen if we completely localize $q$ and $\tl{q}$. The hypermultiplet $S$ at the UV boundary is a source of conformal operators. With $c=1/2$ for $S$, the mixing of the CFT states ($SU(N_F-N_c)$ singlets) and $S$ is marginal.[^5] According to the AdS/CFT interpretation[^6], they correspond to the Seiberg dual superpotential with couplings of the form
$$W= \tl{\la}\, S\tl{q}q+\omega\, S{\cal O}.$$
The coefficients $\tl{\la}$ and $\omega$ can be determined by the AdS/CFT correspondence via two-point correlation functions. We simply match to the standard Seiberg duality result, giving
$$\tl{\la}=\f{1}{\mu},\qquad \omega=\la.$$
Here $\mu$ can be defined in the context of SQCD, where the beta function coefficients of the magnetic ($\tl{b}$) and electric ($b$) theories and their respective dynamical transmutation scales $\tl{\Lambda}$ and ${\Lambda}$ are related by
$$\Lambda^b\,\tl{\Lambda}^{\tl{b}}=(-1)^{N_F-N_c}\mu^{b+\tl{b}}.$$
In the dual description, the fields related to $P$ and $\tl{P}$ are integrated out during the RGE running from $z_{UV}^{-1}$ down to $z_{IR}^{-1}$ if the mass parameter satisfies $z_{UV}^{-1}>m>z_{IR}^{-1}$. Thus we anticipate that $\tl{p}$ and $p$ do not appear as massless fields on the IR brane. This can also be understood by observing that adding only the UV mass terms spoils the zero-mode solutions. So the original zero modes $\tl{p}$ and $p$, which are localized towards the IR brane, are no longer massless and will not appear in the dual superpotential.
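The scale-matching relation ties together the one-loop coefficients of the electric and magnetic theories. A small sketch; the values $N_c=4$, $N_F=8$ are assumed here only for the arithmetic:

```python
def beta_coeffs(Nc, NF):
    """One-loop coefficients of electric SU(Nc) and magnetic SU(NF - Nc) SQCD."""
    b = 3 * Nc - NF               # electric theory
    b_dual = 3 * (NF - Nc) - NF   # magnetic theory
    return b, b_dual

Nc, NF = 4, 8
b, b_dual = beta_coeffs(Nc, NF)
# Lambda^b * Lambda_dual^b_dual = (-1)^(NF-Nc) mu^(b + b_dual), and b + b_dual = NF
print(b, b_dual, b + b_dual)
```

The identity $b+\tl{b}=N_F$ holds for any $N_c$, which is why the power of $\mu$ on the right-hand side is independent of how the flavors are split between the two descriptions.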
This AdS/CFT interpretation of the Seiberg duality is valid in the IR region for $3N_c/2<N_F<3N_c$, where the theory is strongly coupled. If the mass parameter is small, $m<z_{IR}^{-1}$, then it appears as a small perturbation on the UV brane. We can then promote the mass parameter $m$ to a bulk field $L$, with $L(z_0)=m$, and introduce bulk Yukawa couplings between $L$ and the $\tl{P},P$ hypermultiplets. Similarly to the case of $\tl{Q}$ and $Q$, the dual description on the IR brane has the form[^7]
$$\begin{aligned}
W&\sim&\tl{\la}\left(S\tl{q}q+ L\tl{p}p\right)+ \omega_1 L\,{\cal O}_1+\omega S{\cal O}\nn\\
&\sim&\tl{\la}\left(S\tl{q}q+ L\tl{p}p\right)+m L+\omega S {\cal O},\end{aligned}$$
with the coefficients, again, determined by matching to the Seiberg duality. Here we require that the conformal symmetry be spontaneously broken by $\langle {\cal O}_1 \rangle \neq0$. After integrating out the fields $S$ and ${\cal O}$, such that
$$\langle S\rangle=0,\qquad {\cal O}=-\f{\tl{\la}}{\omega}\,\tl{q}q,$$
we can see that the F-term of $L$,
$$-F_{L}^{\da}=m+\tl{\la}\,\tl{p}p,$$
is non-vanishing (by the rank conditions [@semi-direct]), which indicates that SUSY is broken. It was pointed out in [@semi-direct] that SUSY breaking by the F-term VEV of $L$ can cause some problems, such as a low-energy Landau pole and vanishing gaugino masses, if we identify the flavor symmetry with the SM gauge group. Thus it is preferable to study the case $z_{UV}^{-1}>m>z_{IR}^{-1}$, where we can integrate out the fields related to $P$ and $\tl{P}$. Neglecting the additional contributions from $\tl{P}$ and $P$, the 5D action is [@pomarol]
$$\begin{aligned}
S&=&\int d^4x\,dy\left[\int d^4\theta\,\f{1}{2}(T+T^\da)\, e^{-(T+T^\da)\sigma}\left(S^\da e^{-V}S+S^c e^V S^{c\da}+(S\leftrightarrow Q,\tl{Q})\right)\right.\nn\\
&&+\int d^2\theta\, e^{-3T\sigma}S^c\left(\pa_5-\f{1}{\sqrt{2}}\chi-\left(\f{3}{2}-c\right)T\sigma^\pr\right)S+{\rm h.c.}+(S\leftrightarrow Q,\tl{Q})\nn\\
&&\left.+\,W_0\,\delta(y)+e^{-3T\sigma}W_{\pi R}\,\delta(y-\pi R)\right],\end{aligned}$$
where $T$ is the radion supermultiplet
$$T=R+iB_5+\theta\Psi_R^5+\theta^2 F_{\tl{S}},$$
$B_5$ is the fifth component of the graviphoton, $\Psi_R^5$ is the fifth component of the right-handed gravitino, and $F_{\tl{S}}$ is a complex auxiliary field. After the lowest component of the radion acquires a VEV, we can rescale the fields $(S, S^c)$ accordingly.
Neglecting the gauge sector, for the F-terms of $S$ and $S^c$ we have -F\_[S]{}\^&=&$$-\pa_5+(\f{1}{2}+c_S)k\epsilon(y)$$S\^c+\_bQ\ &+&(y)Q+(y-R)e\^[-2kR]{}$\f{1}{\mu}\tl{q}q+\la {\cal O}$ , \ -F\_[S\^c]{}\^&=&$$\pa_5-(\f{1}{2}-c_S)k\epsilon(y)$$S , while for the $Q$ and $\tl{Q}$ fields -F\_[Q]{}\^&=&$$-\pa_5+(\f{1}{2}+c_Q)k\epsilon(y)$$Q\^c+\_bS\ &+&(y)S+(y-R)e\^[-2kR]{} S  , \ -F\_[Q\^c]{}\^&=&$$\pa_5-(\f{1}{2}-c_Q)k\epsilon(y)$$Q . The solutions for $S$, $Q$, and $\tl{Q}$ are S(y)&=&C\_Se\^[(-c\_S)k|y|]{} ,\ Q(y)&=&C\_[Q]{}e\^[(-c\_Q)k|y|]{} ,\ (y)&=&C\_[Q]{}e\^[(-c\_[Q]{})k|y|]{} , with the boundary conditions C\_S=S ,     C\_Q=Q ,     Qe\^[(-c\_[Q]{})kR]{}=q , and $c_S=1/2$ for $S$. Substituting the previous expressions into the flatness conditions we can see that, except for the boundary terms, the solutions for $S^c$ and $Q^c$ are S\^c(y)&=&(y)e\^[(-c\_S-c\_Q-c\_[Q]{})k|y|]{} ,\ Q\^c(y)&=&(y)e\^[(-c\_S-c\_Q-c\_[Q]{})k|y|]{}  . The boundary conditions determine the SUSY relations S\^c(y=0)&=&Q ,     S\^c(y=R)=q+\ Q\^c(y=0)&=&S  ,     Q\^c(y=R)=S [q]{} . Substituting back into the previous solutions, we find that the $F_{S}^{\da}$ and $F_Q^\da$ flatness conditions cannot be satisfied at the same time. So supersymmetry is broken in this scenario. This conclusion agrees with the conjecture of [@yanagida2] for a vanishing $S$ VEV. The non-vanishing F-term VEV of $S$, which has an R-charge $2N_c/N_Q-2\neq 0$, breaks the R-symmetry spontaneously. Thus, gaugino masses are not prohibited. Sfermion masses can be generated by the operator which arises from integrating out the messengers $P$ and $\tl{P}$ K&\~&-$\f{g_{SM}^2}{16\pi^2}$\^2d\^4Tr(S\^S)(\^) ,which gives m\_\^2\~&$\f{g_{SM}^2}{16\pi^2}$\^2(F\_S\^F\_S) . Gaugino masses can be generated by an anti-instanton induced operator [@yanagida2] c\_2d\^4$\f{1}{16\pi^2}$ Tr(S\^S)(|[D]{}\^2 S\^)W\_aW\^a where $\Lambda_L^{\da}$ is the holomorphic dynamical scale below the thresholds of $P$ and $\tl{P}$. 
The gaugino masses, which arise at order
$$m_{\rm gaugino}\sim c_2\,\f{g_{SM}^2}{16\pi^2}\,(F_S^\da F_S)\,(F_S^\da)^{N_Q},$$
up to the appropriate powers of the holomorphic dynamical scale $\Lambda_L$, are not too small, because the gauge couplings are large [@yanagida2].

Conclusion {#sec-5}
==========

In this paper, we propose the SUSY $SU(7)$ unification of the $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ model. Such a unification scenario admits rich symmetry-breaking chains in a five-dimensional orbifold. We study in detail the breaking of SUSY $SU(7)$ into $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ by boundary conditions in a Randall-Sundrum background, and its AdS/CFT interpretation. We find that successful gauge coupling unification can be achieved in our scenario. Gauge unification favors low left-right and unification scales, with tree-level $\sin^2\theta_W=0.15$. We use the AdS/CFT dual of the conformal supersymmetry breaking scenario to break the remaining ${\cal N}=1$ supersymmetry. We employ AdS/CFT to reproduce the NSVZ formula and obtain the structure of the Seiberg duality in the strong coupling region $\f{3}{2}N_c<N_F<3N_c$. We show that supersymmetry is indeed broken in the conformal supersymmetry breaking scenario with a vanishing singlet vacuum expectation value.

We thank the referee for useful suggestions. This research was supported in part by the Australian Research Council under project DP0877916 (CB and FW), by the National Natural Science Foundation of China under grant Nos. 10821504, 10725526 and 10635030, by the DOE grant DE-FG03-95-Er-40917, and by the Mitchell-Heep Chair in High Energy Physics.

[99]{} R. N. Mohapatra, J. C. Pati, Phys. Rev. D11, 566 (1975). R. N. Mohapatra, Phys. Rev. D34, 3457 (1986); A. Font, L. E. Ibanez, F. Quevedo, Phys. Lett. B228, 79 (1989); R. Kuchimanchi, R. N. Mohapatra, Phys. Rev. D48, 4352 (1993). H. Georgi, S. L. Glashow, Phys. Rev. Lett. 32, 438 (1974); S. Dimopoulos, H. Georgi, Nucl. Phys. B193, 150 (1981). H. Georgi, in Particles and Fields (1975); H. Fritzsch, P. Minkowski, Ann. Phys. 93, 193 (1975). I. Gogoladze, Y.
Mimura, S. Nandi, Phys. Lett. B560, 204 (2003). T. Li, F. Wang and J. M. Yang, Nucl. Phys.  B [**820**]{}, 534 (2009). Q. Shafi, Z. Tavartkiladze, hep-ph/0108247. C. Balazs, T. Li, F. Wang and J. M. Yang, JHEP [**0909**]{}, 015 (2009). Y. Kawamura, Prog. Theor. Phys.  [**103**]{}, 613 (2000) \[arXiv:hep-ph/9902423\]. Y. Kawamura, Prog. Theor. Phys.  [**105**]{}, 999 (2001) \[arXiv:hep-ph/0012125\]. Y. Kawamura, Prog. Theor. Phys.  [**105**]{}, 691 (2001) \[arXiv:hep-ph/0012352\]. G. Altarelli and F. Feruglio, Phys. Lett. B [**511**]{}, 257 (2001) \[arXiv:hep-ph/0102301\]. L. J. Hall and Y. Nomura, Phys. Rev. D [**64**]{}, 055003 (2001) \[arXiv:hep-ph/0103125\]. A. B. Kobakhidze, Phys. Lett. B [**514**]{}, 131 (2001) \[arXiv:hep-ph/0102323\]. A. Hebecker and J. March-Russell, Nucl. Phys. B [**613**]{}, 3 (2001). A. Hebecker and J. March-Russell, Nucl. Phys. B [**625**]{}, 128 (2002). T. Li, Phys. Lett.  B [**520**]{}, 377 (2001). T. Li, Nucl. Phys.  B [**619**]{}, 75 (2001). C. Balazs, Z. Kang, T. Li, F. Wang and J. M. Yang, JHEP [**1002**]{}, 096 (2010). C. Balazs, T. Li, D. V. Nanopoulos and F. Wang, arXiv:1006.5559 \[hep-ph\]. L. Randall, R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999). W. D. Goldberger, Y. Nomura, D. R. Smith, Phys. Rev. D 67, 075021 (2003). Y. Nomura, D. Tucker-Smith, B. Tweedie, Phys. Rev. D 71, 075004 (2005). T. Gherghetta,\[hep-ph/0601213\],\[1008.2570\]. K.-I. Izawa, F. Takahashi, T. T. Yanagida, K. Yonekura, Phys. Rev. D80:085017 (2009). T. T. Yanagida and K. Yonekura, Phys. Rev.  D [**81**]{}, 125017 (2010). J. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998). N. Arkani-Hamed, M. Schmaltz, Phys. Rev. D 61, 033005 (2000). N. Arkani-Hamed, L. Hall, D. Smith, N. Weiner, Phys. Rev. D63 (2001) 056003; N. Arkani-Hamed, T. Gregoire, J. Wacker, JHEP 0203 (2002) 055. P. K. Townsend, Phys. Rev.  [**D15**]{} (1977) 2802; S. Deser and B. Zumino, Phys. Rev. Lett.  [**38**]{} (1977) 1433. E. Shuster, Nucl. Phys. [**B554**]{} (1999) 198. T. 
Gherghetta and A. Pomarol, Nucl. Phys. B586, 141(2000). T. Gherghetta and A. Pomarol, Phys. Rev. D 67, 085018 (2003). T. Gherghetta, arXiv:hep-ph/0601213. R. Harnik, D. T. Larson, H. Murayama and M. Thormeier, Nucl. Phys.  B [**706**]{}, 372 (2005). C. Csaki, C. Grojean, L. Pilo, J. Terning, Phys. Rev. Lett. 92:101802 (2004). C. Csaki, C. Grojean, J. Hubisz, Y. Shirman, J. Terning, Phys. Rev. D70:015012 (2004). C. Csaki, J. Hubisz and P. Meade, arXiv:hep-ph/0510275. C. Amsler [*et al.*]{} \[Particle Data Group\], Phys. Lett.  B[**667**]{}, 1 (2008). Jens Erler, Paul Langacker, Shoaib Munir, Eduardo Rojas, JHEP 0908:017(2009). Yue Zhang, Haipeng An, Xiangdong Ji, Rabindra N. Mohapatra, Nucl. Phys. B[**802**]{},247 (2008). N. Arkani-Hamed, A. G. Cohen, H. Georgi, Phys. Lett. B[**516**]{}, 395-402 (2001). H. Harari and N. Seiberg, Phys. Lett. B98 269 (1982); Nucl. Phys. B[**204**]{}, 141 (1982). A. Hebecker, J. March-Russell, Phys. Lett. B 541, 338-345 (2002). S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Phys. Lett. B [**428**]{}, 105 (1998). E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998). N. Arkani-Hamed, M. Porrati and L. Randall, JHEP 0108, 017 (2001). R. Rattazzi and A. Zaffaroni, JHEP 0104, 021 (2001). M. Perez-Victoria, JHEP 0105, 064 (2001). K. Agashe, A. Delgado, Phys. Rev. D 67, 046003 (2003) R. Contino and A. Pomarol, JHEP 0411, 058 (2004). C. D. Froggatt, H. B. Nielsen, Nucl. Phys. B 147, 277 (1979). M. Ibe, Y. Nakayama and T. T. Yanagida, Phys. Lett. B 649, 292 (2007); N. Seiberg, T. Volansky, B. Wecht, JHEP 0811, 004 (2008). K. Choi, I.-W. Kim, W. Y. Song, Nucl. Phys. B687, 101-123 (2004). K. A. Intriligator and B. Wecht, Nucl. Phys. B 667, 183 (2003). V.A. Novikov, M.A. Shifman, A.I. Vainshtein, and V.I. Zakharov, Nucl. Phys. B 229, 381 (1983). K. I. Izawa, F. Takahashi, T. T. Yanagida, K. Yonekura, Phys. Rev. D80, 085017 (2009). S. Abel and F. Brummer, JHEP [**1005**]{}, 070 (2010). D. Marti and A. Pomarol, Phys. Rev. D[**64**]{}, 105025 (2001). 
[^1]: The relative normalization within the matter sector is determined by anomaly cancellation requirements. [^2]: We can also eliminate the bulk singlet Higgs field $S$ and choose the boundary conditions so that the $SU(3)_C\tm SU(4)_W\tm U(1)_{B-L}$ singlet comes from (projections of) bulk Higgs hypermultiplets $\Sigma({\bf 48})$. An additional $\Sigma_2({\bf 1,15})_0$ from $\tl{\Sigma}$ is required to break $SU(4)_W\tm U(1)_{B-L}$ to $SU(2)_L\tm SU(2)_R\tm U(1)_Z\tm U(1)_{B-L}$. [^3]: We can also add Higgs fields in the ${\bf 35}$ and $\overline{\bf 35}$ representations with flat profiles and impose the following boundary conditions &=&([**1,1**]{})\_4\^[-,+]{}([**1,|[4]{}**]{})\_[-3]{}\^[+,+]{} ([**|[3]{}, [4]{}**]{})\_[5/3]{}\^[+,+]{}([**[3]{},[6]{}**]{})\_[-2/3]{}\^[-,+]{} ,\ &=&([**1,1**]{})\_[-4]{}\^[-,+]{}([**1, [4]{}**]{})\_3\^[+,+]{} ([**[3]{}, |[4]{}**]{})\_[-5/3]{}\^[+,+]{}([**|[3]{},|[6]{}**]{})\_[2/3]{}\^[-,+]{} . Then the beta functions receive additional contributions: &&T(H\_[++]{})|\_[H+H\^c]{}\^h=( , 4, 4) ,\ &&T(H\_[-+]{})|\_[H+H\^c]{}\^h=(, 6, 6) . [^4]: The matter contributions are valid for $c_{++}>1/2$. For $c_{++}\leq1/2$, $1-c_P$ in front of $N_P$ is replaced by $c_P$. [^5]: The mixing is important for $|c|\leq1/2$ but marginal for $c=1/2$. [^6]: From the AdS/CFT dictionary [@pomarol-fermion] we can see that the operator ${\cal O}$ is dynamical appearing in the low energy superpotential. [^7]: In the presence of $P$ and $\tl{P}$ there are also terms of the form $(K\tl{q}p+M\tl{p}q)/\mu$, which is similar to the case of $S$ with $\la=0$.
--- abstract: | Continuous-time branching processes describe the evolution of a population whose individuals generate a random number of children according to a birth process. Such branching processes can be used to understand preferential attachment models in which the birth rates are linear functions. We are motivated by citation networks, where power-law citation counts are observed as well as aging in the citation patterns. To model this, we introduce fitness and age-dependence in these birth processes. The multiplicative fitness moderates the rate at which children are born, while the aging is integrable, so that individuals receive a [*finite*]{} number of children in their lifetime. We show the existence of a limiting degree distribution for such processes. In the preferential attachment case, where fitness and aging are absent, this limiting degree distribution is known to have power-law tails. We show that the limiting degree distribution has exponential tails for bounded fitnesses in the presence of integrable aging, while the power-law tail is restored when integrable aging is combined with fitnesses having unbounded support and at most exponential tails. In the absence of integrable aging, such processes are explosive. author: - Alessandro Garavaglia - Remco van der Hofstad - Gerhard Woeginger bibliography: - 'biblio2.bib' title: ' **** ' --- Introduction {#sec01-intr} ============ Preferential attachment models (PAMs) aim to describe dynamical networks. As for many real-world networks, PAMs present power-law degree distributions that arise directly from the dynamics, and are not artificially imposed as, for instance, in configuration models or inhomogeneous random graphs.
PAMs were first proposed by Albert and Barabási [@ABrB], who defined a random graph model where, at every discrete time step, a new vertex is added with one or more edges, that are attached to existing vertices with probability proportional to the degrees, i.e., $${\mathbb{P}}\left(\mbox{vertex}~(n+1)~\mbox{is attached to vertex}~i\mid \mbox{graph at time}~n\right)\propto D_i(n),$$ where $D_i(n)$ denotes the degree of a vertex $i\in\{1,\ldots,n\}=[n]$ at time $n$. In general, the dependence of the attachment probabilities on the degree can be through a [ *preferential attachment function*]{} of the degree, also called [*preferential attachment weights*]{}. Such models are called PAMs with [*general weight function*]{}. According to the asymptotics of the weight function $w(\cdot)$, the limiting degree distribution of the graph can behave rather differently. There is an enormous body of literature showing that PAMs present power-law decay in the limiting degree distribution precisely when the weight function is affine, i.e., it is a constant plus a linear function. See e.g., [@vdH1 Chapter 8] and the references therein. In addition, these models show the so-called [*old-get-richer*]{} effect, meaning that the vertices of highest degrees are the vertices present early in the network formation. An extension of this model is called preferential attachment models with a [*random number of edges*]{} [@Dei], where new vertices are added to the graph with a different number of edges according to a fixed distribution, and again power-law degree sequences arise. A generalization that also gives younger vertices the chance to have high degrees is given by PAMs with [*fitness*]{} as studied in [@der2014],[@der16]. Borgs et al. [@Borgs] present a complete description of the limiting degree distribution of such models, with different regimes according to the distribution of the fitness, using [*generalized Pólya urns*]{}.
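As a quick illustration of the attachment rule above, the following sketch (ours, not part of the original model specification; the function name and parameters are our own) grows a preferential attachment tree in Python. Keeping a list with one entry per edge endpoint makes uniform sampling from that list equivalent to picking a vertex with probability proportional to its degree $D_i(n)$.

```python
import random

def pa_tree(n, seed=0):
    """Grow a preferential attachment tree on n vertices.

    Vertex j attaches to an existing vertex i with probability
    proportional to the current degree D_i, as in the rule above.
    """
    rng = random.Random(seed)
    # 'ends' holds one entry per edge endpoint, so a uniform draw
    # from it is a degree-proportional draw of a vertex.
    ends = [0, 1]        # start from the single edge {0, 1}
    degree = [1, 1]
    for new in range(2, n):
        target = rng.choice(ends)
        ends += [new, target]
        degree[target] += 1
        degree.append(1)
    return degree

deg = pa_tree(10_000)
# A tree on n vertices has n-1 edges, each contributing 2 to the total degree.
assert sum(deg) == 2 * (10_000 - 1)
```

Sorting `deg` confirms the old-get-richer effect: the largest degrees are typically carried by the earliest vertices.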
An interesting variant of a multi-type PAM is investigated in [@Rosen], where the author considers PAMs where the fitnesses are not i.i.d. across the vertices, but are sampled according to distributions depending on the fitnesses of the ancestors. This work is motivated by [*citation networks*]{}, where vertices denote papers and the directed edges correspond to citations. For such networks, other models using preferential attachment schemes and adaptations of them have been proposed mainly in the physics literature. Aging effects, i.e., considering the [*age of a vertex*]{} in its likelihood to obtain children, have been extensively considered as the starting point to investigate their dynamics [@WaMiYu], [@WangYu], [@Hajra], [@Hajra2], [@Csardi]. Here the idea is that old papers are less likely to be cited than new papers. Such aging has been observed in many citation network datasets and makes PAMs with weight functions depending only on the degree ill-suited for them. As mentioned above, such models could more aptly be called [*old-get-richer*]{} models, i.e., in general [*old*]{} vertices have the highest degrees. In citation networks, instead, papers with many citations appear all the time. Barabási, Wang and Song [@BarWang] investigate a model that incorporates these effects. On the basis of empirical data, they suggest a model where the aging function follows a lognormal distribution with paper-dependent parameters, and the preferential attachment function is the identity. In [@BarWang], the fitness function is estimated rather than taken to be i.i.d., as in the more classical approach. Hazoglou, Kulkarni, Skiena and Dill [@Hazo] propose similar dynamics for citation evolution, but only consider the presence of aging and cumulative advantage without fitness. Tree models, arising when new vertices are added with only one edge, have been analyzed in [@Athr], [@Athr2], [@RudValko], [@Rudas] and lead to continuous-time branching processes (CTBP).
The degree distributions in tree models show the same qualitative behavior as in the non-tree setting, while their analysis is much simpler. Motivated by this and the wish to understand the qualitative behavior of PAMs with general aging and fitness, the starting point of our model is the CTBP or tree setting. Such processes have been intensively studied, due to their applications in other fields, such as biology. Detailed and rigorous analysis of CTBPs can be found in [@athrBook], [@Jagers], [@Nerman], [@RudValko], [@Athr], [@Athr2], [@Bhamidi]. A CTBP consists of individuals, whose children are born according to certain birth processes, these processes being i.i.d. across the individuals in the population. The birth processes $(V_t)_{t\geq0}$ are defined in terms of point or jump processes on ${\mathbb{N}}$ [@Jagers], [@Nerman], where the birth times of children are the jump times of the process, and the number of children of an individual at time $t\in{\mathbb{R}}^+$ is given by $V_t$. ![Number of publications per year (logarithmic Y axis).[]{data-label="fig-numberpublic"}](001_number_publications.pdf){width="90.00000%"} ![Log-log plot of the in-degree distribution tail in citation networks.[]{data-label="fig-tailtogether"}](002_degree_distribution.pdf){width="98.00000%"} In the literature, the CTBPs are used as a technical tool to study PAMs [@Athr2], [@RudValko], [@Rosen]. Indeed, the CTBP at the $n$th birth time follows the same law as the PAM consisting of $n$ vertices. In [@Athr2], [@RudValko], the authors prove an embedding theorem between branching processes and preferential attachment trees, and give a description of the degree distribution in terms of the asymptotic behavior of the weight function $w(\cdot)$. In particular, a power-law degree distribution is present in the case of (asymptotically) linear weight functions [@Rudas].
In the sub-linear case, instead, the degree distribution is [*stretched-exponential*]{}, while in the super-linear case it collapses, in the sense that one of the first vertices will receive all the incoming new edges after a certain step [@OliSpe05]. Due to the apparent exponential growth of the number of nodes in citation networks, we view the continuous-time process as the real network, which deviates from the usual perspective. Because of their motivating role in this paper, we next discuss the empirical properties of citation networks in detail. Citation networks data {#sec-citnet} ---------------------- We analyze the Web Of Science database, focusing on three different fields of science: [*Probability and Statistics*]{} (PS), [*Electrical Engineering*]{} (EE) and [*Biotechnology and Applied Microbiology*]{} (BT). We first point out some characteristics of citation networks that we wish to replicate in our models. Real-world citation networks possess five main characteristics: 1. In Figure \[fig-numberpublic\], we see that the number of scientific publications grows exponentially in time. While this is quite prominent in the data, it is unclear how this exponential growth arises. This could either be due to the fact that the number of journals that are listed in Web Of Science grows over time, or that journals contain more and more papers. 2. In Figure \[fig-tailtogether\], we notice that these datasets have empirical power-law citation distributions. Thus, most papers attract few citations, but the amount of variability in the number of citations is rather substantial. We are also interested in the dynamics of the citation distribution of the papers published in a given year, as time proceeds. This can be observed in Figure \[fig-dunamycpowerlaw\].
We see a [*dynamical power law*]{}, meaning that at any time the degree distribution is close to a power law, but the exponent changes over time (and in fact decreases, which corresponds to heavier tails). When time grows quite large, the power-law exponent approaches a fixed value. 3. In Figure \[fig-randomsample\], we see that the majority of papers stop receiving citations after some time, while a few others keep being cited for longer times. This inhomogeneity in the evolution of node degrees is not present in classical PAMs, where the degree of [*every*]{} fixed vertex grows as a positive power of the graph size. Figure \[fig-randomsample\] shows that the number of citations of papers published in the same year can be rather different, and the majority of papers actually stop receiving citations quite soon. In particular, after a first increase, the average increment of citations decreases over time (see Figure \[fig-average\_degree\_increment\]). We observe a difference in this aging effect between the PS dataset and the other two datasets, due to the fact that in PS, scientists tend to cite older papers than in EE or BT. Nevertheless, the average increment of citations received by papers in different years tends to decrease over time for all three datasets. 4. Figure \[fig-lindep\] shows the linear dependence between the past number of citations of a paper and the future ones. Each plot represents the average number of citations received by papers published in 1984 in the years 1993, 2006 and 2013 according to the initial number of citations in the same year. At least for low values of the starting number of citations, we see that the average number of citations received during a year grows linearly. This suggests that the attractiveness of a paper depends on the past number of citations through an affine function. 5. A last characteristic that we observe is the lognormal distribution of the age of cited papers.
In Figure \[fig-ageCited\], we plot the distribution of cited papers, looking at references made by papers in different years. We have used a 20-year time window in order to compare different citing years. Notice that this lognormal distribution seems to be very similar within different years, and the shape is similar over different fields. Let us now explain how we translate the above empirical characteristics into our model. First, CTBPs grow exponentially over time, as observed in citation networks. Secondly, the aging present in citation networks, as seen both in Figures \[fig-randomsample\] and \[fig-average\_degree\_increment\], suggests that citation rates become smaller for large times, in such a way that typical papers stop receiving citations at some (random) point in time. The hardest characteristic to explain is the power-law degree sequence. For this, we note that citations of papers are influenced by many [*external factors*]{} that affect the attractiveness of papers (the journal, the authors, the topic,…). Since this cannot be quantified explicitly, we introduce another source of randomness in our birth processes that we call [*fitness*]{}. This appears in the form of multiplicative factors of the attractiveness of a paper, and for lack of better knowledge, we take these factors to be i.i.d. across papers, as often assumed in the literature. These assumptions are similar in spirit to the ones by Barabási et al. [@BarWang], which were also motivated by citation data, and we formalize and extend their results considerably. In particular, we give the precise conditions under which power-law citation counts are observed in this model. Our main goal is to define CTBPs with both aging and random fitness that keep having a power-law decay in the in-degree distribution. Before discussing our model in detail in Section \[sec-mainres\], we present the heuristic ideas behind it as well as the main results of this paper.
Our main contribution {#sec-maincontribution} --------------------- The crucial point of this work is to show that it is possible to obtain power-law degree distributions in preferential attachment trees where the birth process [*is not just depending on an asymptotically linear weight sequence*]{}, in the presence of [*integrable aging*]{} and [*fitness*]{}. Let us now briefly explain how these two effects change the behavior of the degree distribution. #### **Integrable aging and affine preferential attachment without fitness.** In the presence of aging but without fitness, we show that the aging effect substantially slows down the birth process. In the case of affine weights, aging destroys the power-law of the stationary regime, generating a limiting distribution that consists of a power law with exponential truncation. We prove this under reasonable conditions on the underlying aging function (see Lemma \[Lem-adaptLap-age\]). #### **Integrable aging and super-linear preferential attachment without fitness.** Since the aging destroys the power-law of the affine PA case, it is natural to ask whether the combination of integrable aging and [*super-linear*]{} weights restores the power-law limiting degree distribution. Theorem \[th-explosive\] states that this is not the case, as super-linear weights imply explosiveness of the branching process, which is clearly unrealistic in the setting of citation networks (here, we call a weight sequence $k\mapsto f_k$ [*super-linear*]{} when $\sum_{k\geq 1} 1/f_k<\infty$). This result is quite general, because it holds for [*any*]{} integrable aging function. Due to this, it is impossible to obtain power-laws from super-linear preferential attachment weights. This suggests that (apart from slowly-varying functions), affine preferential attachment weights have the strongest possible growth, while maintaining exponential (and thus, in particular, non-explosive) growth. 
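The dichotomy above can be checked numerically. Since the waiting time between the $k$th and $(k+1)$st birth is exponential with rate $f_k$, the expected explosion time of a pure birth process is $\sum_k 1/f_k$. The sketch below (an illustration of ours under these stated assumptions, not code from the paper) contrasts the super-linear weights $f_k=(k+1)^2$, whose series converges to $\pi^2/6$, with the affine weights $f_k=k+1$, whose harmonic series diverges.

```python
import math

def expected_explosion_time(f, kmax):
    """Partial sum of E[explosion time] = sum_k 1/f_k for a pure birth
    process whose k -> k+1 waiting time is exponential with rate f_k."""
    return sum(1.0 / f(k) for k in range(kmax))

# Super-linear weights f_k = (k+1)^2: the series converges (Basel sum),
# so the process produces infinitely many children in finite expected time.
superlinear = expected_explosion_time(lambda k: (k + 1) ** 2, 10**6)
assert abs(superlinear - math.pi**2 / 6) < 1e-3

# Affine weights f_k = k+1: the harmonic series diverges (non-explosive).
affine = expected_explosion_time(lambda k: k + 1, 10**6)
assert affine > 13   # ~ log(10^6) + Euler-Mascheroni, still growing
```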
#### **Integrable aging and affine preferential attachment with unbounded fitness.** In the case of aging and fitness, the asymptotic behavior of the limiting degree distribution is rather involved. We estimate the asymptotic decay of the limiting degree distribution with affine weights in Proposition \[prop-pkasym\_fitage\]. With the example fitness classes analyzed in Section \[sec-fitexamples\], we prove that power-law tails are possible in the setting of aging and fitness, at least when the fitness has a roughly exponential tail. So far, PAMs with fitness required the support of the fitness distribution to be [*bounded*]{}. The addition of aging allows the support of the fitness distribution to be unbounded, a feature that seems reasonable to us in the context of citation networks. Indeed, the relative attractiveness of one paper compared to another one can be enormous, which is inconsistent with a bounded fitness distribution. While we do not know precisely what the necessary and sufficient conditions are on the aging and the fitness distribution to ensure a power-law degree distribution, our results suggest that affine PA weights with integrable aging and fitnesses with at most an exponential tail in general do so, a feature that was not observed before. #### **Dynamical power laws.** In the case of fitness with exponential tails, we further observe that the number of citations of a paper of age $t$ has a power-law distribution with an exponent that depends on $t$. We call this a [*dynamical power law*]{}, and it is a possible explanation of the dynamical power laws observed in citation data (see Figure \[fig-dunamycpowerlaw\]). #### **Universality.** An interesting and highly relevant observation in this paper is that the limiting degree distribution of preferential attachment trees with aging and fitness shows a high amount of [*universality*]{}.
Indeed, for integrable aging functions, the dependence on the precise choice of the aging function seems to be minor, except for the total integral of the aging function. Further, the dependence on fitness is quite robust as well. Our model and main results {#sec-mainres} ========================== In this paper we introduce the effect of aging and fitness in ${\mathrm{CTBP}}$ populations, giving rise to directed trees. Our model is motivated by the study of [*citation networks*]{}, which can be seen as directed graphs. Trees are the simplest case in which we can see the effects of aging and fitness. Previous work has shown that PAMs can be obtained from PA trees by collapsing, and their general degree structure can be quite well understood from those in trees. For example, PAMs with fixed out-degree $m\geq 2$ can be defined through a collapsing procedure, where a vertex in the multigraph is formed by $m\in{\mathbb{N}}$ vertices in the tree (see [@vdH1 Section 8.2]). In this case, the limiting degree distribution of the PAM preserves the structure of the tree case ([@vdH1 Section 8.4], [@Bhamidi Section 5.7]). This explains the relevance of the tree case results for the study of the effect of aging and fitness in PAMs. It could be highly interesting to prove this rigorously. Our CTBP model {#se-mainres} -------------- CTBPs represent a population made of individuals producing children independently of each other, according to i.i.d. copies of a birth process on ${\mathbb{N}}$. We present the general theory of CTBPs in Section \[sec-generalth\], where we define such processes in detail and we refer to general results that are used throughout the paper. In general, considering a birth process $(V_t)_{t\geq0}$ on ${\mathbb{N}}$, every individual in the population has an i.i.d. copy of the process $(V_t)_{t\geq0}$, and the number of children of individual $x$ at time $t$ is given by the value of the process $V^x_t$.
We consider birth processes defined by a sequence of weights $(f_k)_{k\in{\mathbb{N}}}$ describing the birth rates. Here, the time between the $k$th and the $(k+1)$st jump is exponentially distributed with parameter $f_k$. The behavior of the whole population is determined by this sequence. The fundamental theorem for the CTBPs that we study is Theorem \[th-expogrowth\] quoted in Section \[sec-generalth\]. It states that, under some hypotheses on the birth process $(V_t)_{t\geq0}$, the population grows exponentially in time, which nicely fits the exponential growth of scientific publications as indicated in Figure \[fig-numberpublic\]. Further, using a so-called [*random vertex characteristic*]{} as introduced in [@Jagers], a complete class of properties of the population can be described, such as the fraction of individuals having $k$ children, as we investigate in this paper. The two main properties are stated in Definitions \[def-supercr\] and \[def-malthus\], and are called [*supercritical*]{} and [*Malthusian*]{} properties. These properties require that there exists a positive value $\alpha^*$ such that $${\mathbb{E}}\left[V_{T_{\alpha^*}}\right]=1, \quad \quad \mbox{ and }\quad \quad -\left.\frac{d}{d\alpha}{\mathbb{E}}\left[V_{T_{\alpha}}\right]\right|_{\alpha=\alpha^*}<\infty,$$ where $T_\alpha$ denotes an exponentially distributed random variable with rate $\alpha$ independent of the process $(V_t)_{t\geq0}$. The unique value $\alpha^*$ that satisfies both conditions is called the [*Malthusian parameter*]{}, and it describes the exponential growth rate of the population size. The aim is to investigate the ratio $$\frac{\mbox{number of individuals with}~k~\mbox{children at time}~t}{\mbox{size total population at time}~t}.$$ According to Theorem \[th-expogrowth\], this ratio converges almost surely to a deterministic limiting value $p_k$. 
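The Malthusian condition ${\mathbb{E}}[V_{T_{\alpha^*}}]=1$ can be evaluated numerically via the standard identity ${\mathbb{E}}[V_{T_\alpha}]=\sum_{k\geq0}\prod_{i=0}^{k}f_i/(\alpha+f_i)$: each factor is the probability that one more Exp($f_i$) waiting time fits inside the independent Exp($\alpha$) window. The following sketch, ours and purely illustrative, recovers the known value $\alpha^*=2$ for the Yule-type rates $f_k=k+1$.

```python
def mean_children(alpha, f, kmax=100_000):
    """E[V_{T_alpha}] = sum_{k>=0} prod_{i=0}^{k} f_i/(alpha+f_i)
    for a stationary birth process with rates f_k, truncated at kmax."""
    total, prod = 0.0, 1.0
    for k in range(kmax):
        prod *= f(k) / (alpha + f(k))
        total += prod
    return total

def malthusian(f, lo=1e-9, hi=50.0, tol=1e-6):
    """Bisect for the root of E[V_{T_alpha}] = 1; the mean is
    decreasing in alpha, so bisection applies."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_children(mid, f) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For f_k = k+1 one can check E[V_{T_alpha}] = 1/(alpha-1), so alpha* = 2.
alpha_star = malthusian(lambda k: k + 1)
assert abs(alpha_star - 2.0) < 1e-3
```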
The sequence $(p_k)_{k\in{\mathbb{N}}}$, which we refer to as the limiting degree distribution of the CTBP (see Definition \[def-limitdistr\]), is given by $$p_k = {\mathbb{E}}\left[{\mathbb{P}}\left(V_u=k\right)_{u=T_{\alpha^*}}\right].$$ The starting idea of our model of citation networks is that, given the history of the process up to time $t$, [$$\label{for-heuristic} \mbox{the rate of an individual of age}~t~\mbox{with}~k~\mbox{children to generate a new child is}~Yf_kg(t),$$]{} where $f_k$ is a non-decreasing PA function of the degree, $g$ is an integrable function of time, and $Y$ is a positive random variable called fitness. Therefore, the likelihood to generate children increases by having many children and/or a high fitness, while it is reduced by age. Recalling Figure \[fig-lindep\], we assume that the PA function $f$ is affine, so $f_k = ak+b$. In terms of a PA scheme, this implies $${\mathbb{P}}\left(\mbox{a paper cites another with past}~k~\mbox{citations}~|~\mbox{past}\right) \approx \frac{n(k) (ak+b)}{A},$$ where $n(k)$ denotes the number of papers with $k$ past citations, and $A$ is the normalization factor. Such behavior has already been observed by Redner [@Redner3] and Barabási et al. [@BarJeoNed]. We assume throughout the paper that the aging function $g$ is integrable. In fact, we start from the observation that the age of cited papers is lognormally distributed (recall Figure \[fig-ageCited\]). By normalizing such a distribution by the average increment in the number of citations of papers in the selected time window, we identify a universal function $g(t)$. Such a function can be approximated by a lognormal shape of the form $$g(t) \approx c_1{\mathrm{e}}^{-c_2(\log(t+1)-c_3)^2},$$ where $c_1$, $c_2$ and $c_3$ are field-dependent parameters.
In particular, from the procedure used to define $g(t)$, we observe that $$g(t)\approx \frac{\mbox{number of references to year}~t}{\mbox{number of papers of age}~t}~ \frac{\mbox{total number of papers considered}}{\mbox{total number of references considered}},$$ which means in terms of PA mechanisms that $${\mathbb{P}}\left(\mbox{a paper cites another of age}~t~|~\mbox{past}\right)\approx \frac{n(t)g(t)}{B},$$ where $B$ is the normalization factor, while this time $n(t)$ is the number of papers of age $t$. This suggests that the citing probability depends on age through a lognormal aging function $g(t)$, which is integrable. This is one of the main assumptions in our model, as we discuss in Section \[sec-maincontribution\]. It is known from the literature ([@Rudas], [@RudValko], [@Athr]) that CTBPs show power-law limiting degree distributions when the infinitesimal jump rates depend only on a sequence $(f_k)_{k\in{\mathbb{N}}}$ that is asymptotically [*linear*]{}. Our main aim is to investigate whether power-laws can also arise in branching processes that include aging and fitness. The results are organized as follows. In Section \[sec-res-aging\], we discuss the results for CTBPs with aging in the absence of fitness. In Section \[sec-res-aging-fitness\], we present the results with aging and fitness. In Section \[sec-res-aging-fitness-exp\], we specialize to fitness distributions with exponential tails, where we show that the limiting degree distribution is a power law with a [*dynamic*]{} power-law exponent. Results with aging without fitness {#sec-res-aging} ---------------------------------- In this section, we focus on aging in PA trees in the absence of fitness. The aging process can then be viewed as a time-changed stationary birth process (see Definition \[def-statnonfit\]).
A stationary birth process is a stochastic process $(V_t)_{t\geq0}$ such that, for $h$ small enough, $${\mathbb{P}}\left(V_{t+h}=k+1 \mid V_t=k\right) = f_kh+o(h).$$ In general, we assume that $k\mapsto f_k$ is increasing. The [*affine case*]{} arises when $f_k = ak+b$ with $a,b>0$. By our observations in Figure \[fig-lindep\], as well as related works ([@Redner3], [@BarJeoNed]), the affine case is a reasonable approximation for the attachment rates in citation networks. For a stationary birth process $(V_t)_{t\geq0}$, under the assumption that it is supercritical and Malthusian, the limiting degree distribution $(p_k)_{k\in{\mathbb{N}}}$ of the corresponding branching process is given by [$$\label{deg-distr-PA-tree} p_k = \frac{\alpha^*}{\alpha^* + f_k}\prod_{i=0}^{k-1}\frac{f_i}{\alpha^* + f_i}.$$]{} For a more detailed description, we refer to Section \[sec-stat-nonfit\]. Branching processes defined by stationary processes (with no aging effect) have a so-called [*old-get-richer*]{} effect. As this is not what we observe in citation networks (recall Figure \[fig-randomsample\]), we want to introduce [*aging*]{} in the reproduction process of individuals. The aging process arises by adding age-dependence in the infinitesimal transition probabilities: \[def-nonstatbirth\] Consider a non-decreasing PA sequence $(f_k)_{k\in{\mathbb{N}}}$ of positive real numbers and an aging function $g\colon {\mathbb{R}}^+\rightarrow{\mathbb{R}}^+$. We call a stochastic process $(N_t)_{t\geq 0}$ an [*aging birth process*]{} (without fitness) when 1. $N_0=0$, and $N_t\in{\mathbb{N}}$ for all $t\geq0$; 2. $N_t\leq N_s$ for every $t\leq s$; 3. for fixed $k\in{\mathbb{N}}$ and $t\geq0$, as $h\rightarrow0$, $${\mathbb{P}}\left(N_{t+h}=k+1 \mid N_t=k\right) = f_k g(t)h + o(h).$$ Aging processes are time-rescaled versions of the corresponding stationary process defined by the same sequence $(f_k)_{k\in{\mathbb{N}}}$.
In particular, for any $t\geq0$, $N_t$ has the same distribution as $V_{G(t)}$, where $G(t) = \int_0^tg(s)ds$. In general, we assume that the aging function is [*integrable*]{}, which means that $G(\infty) := \int_0^\infty g(s)ds<\infty$. This implies that the number of children of a single individual in its entire lifetime has distribution $V_{G(\infty)}$, which is finite in expectation. In terms of citation networks, this assumption is reasonable since we do not expect papers to receive an infinite number of citations ever (recall Figure \[fig-average\_degree\_increment\]). Instead, for the stationary process $(V_t)_{t\geq0}$ in Definition \[def-statnonfit\], we have that ${\mathbb{P}}$-a.s. $V_t\rightarrow\infty$, so that also the aging process diverges ${\mathbb{P}}$-a.s. when $G(\infty) = \infty$. For aging processes, the main result is the following theorem, proven in Section \[sec-existence\]. In its statement, we rely on the Laplace transform of a function. For a precise definition of this notion, we refer to Section \[sec-generalth\]: \[th-limitdist-nonstat\] Consider an integrable aging function and a PA sequence $(f_k)_{k\in{\mathbb{N}}}$. Denote the corresponding aging birth process by $(N_t)_{t\geq0}$. Then, assuming that $(N_t)_{t\geq0}$ is supercritical and Malthusian, the limiting degree distribution of the branching process ${\boldsymbol{N}}$ defined by the birth process $(N_t)_{t\geq0}$ is given by [$$\label{for-nonstatdist} p_k = \frac{\alpha^*}{\alpha^*+f_k\hat{\mathcal{L}}^g(k,\alpha^*)}\prod_{i=0}^{k-1}\frac{f_i\hat{\mathcal{L}}^g(i,\alpha^*)} {\alpha^*+f_{i}\hat{\mathcal{L}}^g(i,\alpha^*)},$$]{} where $\alpha^*$ is the Malthusian parameter of ${\boldsymbol{N}}$. 
Here, the sequence of coefficients $(\hat{\mathcal{L}}^g(k,\alpha^*))_{k\in{\mathbb{N}}}$ appearing there is given by [$$\label{for-lkratio} \hat{\mathcal{L}}^g(k,\alpha^*) = \frac{\mathcal{L}({\mathbb{P}}\left(N_\cdot=k\right)g(\cdot))(\alpha^*)} {\mathcal{L}({\mathbb{P}}\left(N_\cdot=k\right))(\alpha^*)},$$]{} where, for $h\colon {\mathbb{R}}^+\rightarrow{\mathbb{R}}$, $\mathcal{L}(h(\cdot))(\alpha)$ denotes the Laplace transform of $h$.\ Further, considering a fixed individual in the branching population, the total number of children in its entire lifetime is distributed as $V_{G(\infty)}$, where $G(\infty)$ is the $L^1$-norm of $g$. The limiting degree distribution maintains a product structure as in the stationary case. Unfortunately, the analytic expression for the probability distribution $(p_k)_{k\in{\mathbb{N}}}$ given by the previous theorem is not explicit. In the stationary case, the formula reduces to the simple product expression given above. In general, the asymptotics of the coefficients $(\hat{\mathcal{L}}^g(k,\alpha^*))_{k\in{\mathbb{N}}}$ is unclear, since it depends on both the aging function $g$ and the PA weight sequence $(f_k)_{k\in{\mathbb{N}}}$ itself in an intricate way. In particular, we have no explicit expression for the above ratio of Laplace transforms, except in special cases. In this type of birth process, the cumulative advantage given by $(f_k)_{k\in{\mathbb{N}}}$ and the aging effect given by $g$ cannot be separated from each other. Numerical examples in Figure \[fig-distr-agepower\] show how aging destroys the power-law degree distribution. In each of the two plots, the limiting degree distribution of a stationary process with affine PA weights gives a power-law degree distribution, while the process with two different integrable aging functions does not.
In the examples, we have used $g(t) = {\mathrm{e}}^{-\lambda t}$ and $g(t) = (1+t)^{-\lambda}$ for some $\lambda>1$, and we observe the insensitivity of the limiting degree distribution with respect to $g$. The distribution given by Theorem \[th-limitdist-nonstat\] can be seen as the limiting degree distribution of a CTBP defined by preferential attachment weights $(f_k\hat{\mathcal{L}}^g(k,\alpha^*))_{k\in{\mathbb{N}}}$. This suggests that $f_k\hat{\mathcal{L}}^g(k,\alpha^*)$ is not asymptotically linear in $k$. In Section \[sec-examples-age\], we investigate the two examples in Figure \[fig-distr-agepower\], showing that the limiting degree distribution has exponential tails, a fact that we know in general just as an upper bound (see Lemma \[lem-exp-tails-aging-bd-fitness\]). In order to apply the general CTBP result in Theorem \[th-expogrowth\] below, we need to prove that an aging process $(N_t)_{t\geq0}$ is supercritical and Malthusian. We show in Section \[sec-existence\] that, for an integrable aging function $g$, the corresponding process is supercritical if and only if [$$\label{cond-eg_inf1} \lim_{t\rightarrow\infty}{\mathbb{E}}\left[V_{G(t)}\right] = {\mathbb{E}}\left[V_{G(\infty)}\right]>1.$$]{} This condition heuristically suggests that the process $(N_t)_{t\geq0}$ has a Malthusian parameter if and only if the expected number of children in the entire lifetime of a fixed individual is larger than one, which seems quite reasonable. In particular, such a result follows from the fact that if $g$ is integrable, then the Laplace transform is always finite for every $\alpha>0$. In other words, since $N_{T_{\alpha^*}}$ has the same distribution as $V_{G(T_{\alpha^*})}$, ${\mathbb{E}}[N_{T_{\alpha^*}}]$ is always bounded by ${\mathbb{E}}[V_{G(\infty)}]$. This implies that $G(\infty)$ cannot be too small, as otherwise the Malthusian parameter would not exist, and the CTBP would die out ${\mathbb{P}}$-a.s.
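For the concrete rates $f_k = k+1$ one has ${\mathbb{E}}[V_t] = {\mathrm{e}}^t-1$, so the supercriticality condition above reads $G(\infty) > \log 2$. The following Monte Carlo sketch (ours; the choice $g(t)={\mathrm{e}}^{-t}$, with $G(\infty)=1$, is just an example) checks this numerically.

```python
import math, random

def yule_at(T, rng):
    """Sample V_T for the stationary process with rates f_k = k+1:
    the waiting time between the k-th and (k+1)-st birth is Exp(k+1)."""
    t, k = 0.0, 0
    while True:
        t += rng.expovariate(k + 1)
        if t > T:
            return k
        k += 1

rng = random.Random(42)
G_inf = 1.0   # L1-norm of the example aging function g(t) = e^{-t}
sample = [yule_at(G_inf, rng) for _ in range(20_000)]
mean = sum(sample) / len(sample)

# For f_k = k+1, E[V_t] = e^t - 1, so supercriticality E[V_{G(inf)}] > 1
# is equivalent to G(inf) > log 2.
assert abs(mean - (math.e - 1)) < 0.1
assert G_inf > math.log(2)   # this aging function keeps the CTBP supercritical
```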
The aging effect obviously slows down the birth process, and makes the limiting degree distribution have exponential tails for affine preferential attachment weights. One may wonder whether the power-law degree distribution could be restored when $(f_k)_{k\in{\mathbb{N}}}$ grows super-linearly instead. Here, we say that a sequence of weights $(f_k)_{k\in{\mathbb{N}}}$ grows super-linearly when $\sum_{k\geq1}1/f_k<\infty$ (see Definition \[def-superlin\]). In the super-linear case, however, the branching process is [*explosive*]{}, i.e., for every individual the probability of generating an infinite number of children in finite time is $1$. In this situation, the Malthusian parameter does not exist, since the Laplace transform of the process is always infinite. One could ask whether, by using an integrable aging function, this explosive behavior is destroyed. The answer to this question is given by the following theorem: \[th-explosive\] Consider a stationary process $(V_t)_{t\geq0}$ defined by super-linear PA weights $(f_k)_{k\in{\mathbb{N}}}$. For any aging function $g$, the corresponding non-stationary process $(N_t)_{t\geq0}$ is explosive. The proof of Theorem \[th-explosive\] is rather simple, and is given in Section \[sec-aging-gen-PA\]. We investigate the case of affine PA weights $f_k = ak+b$ in more detail in Section \[sec-adaptedLap\]. Under a hypothesis on the regularity of the integrable aging function, in Proposition \[prop-pkage\_asym\], we give the asymptotic behavior of the corresponding limiting degree distribution. In particular, as $k\rightarrow\infty$, $$p_k = C_1\frac{\Gamma(k+b/a)}{\Gamma(k+1)}{\mathrm{e}}^{-C_2k}\mathcal{G}(k,g)(1+o(1)),$$ for some positive constants $C_1,C_2$. The term $\mathcal{G}(k,g)$ is a function of $k$, the aging function $g$ and its derivative. The precise behavior of this term depends crucially on the aging function.
Apart from this, we notice that aging generates an exponential term in the distribution, which explains the two examples in Figure \[fig-distr-agepower\]. In Section \[sec-examples-age\], we prove that the two limiting degree distributions in Figure \[fig-distr-agepower\] indeed have exponential tails. Results with aging and fitness {#sec-res-aging-fitness} ------------------------------ The analysis of birth processes becomes harder when we also consider fitness. First of all, we define the birth process with aging and fitness as follows: \[def-nonstatfit\] Consider a birth process $(V_t)_{t\geq0}$. Let $g\colon {\mathbb{R}}^+\rightarrow{\mathbb{R}}^+$ be an aging function, and $Y$ a positive random variable. The process $M_t := V_{YG(t)}$ is called a birth process with [*aging and fitness*]{}. Definition \[def-nonstatfit\] implies that the infinitesimal jump rates of the process $(M_t)_{t\geq0}$ are as in , so that the birth probabilities of an individual depend on the PA weights, the age of the individual and on its fitness. Assuming that the process $(M_t)_{t\geq0}$ is supercritical and Malthusian, we can prove the following theorem: \[th-degagefit\] Consider a process $(M_t)_{t\geq0}$ with integrable aging function $g$, fitnesses that are i.i.d. across the population, and assume that it is supercritical and Malthusian with Malthusian parameter $\alpha^*$. 
Then, the limiting degree distribution for the corresponding branching process is given by $$p_k = {\mathbb{E}}\left[\frac{\alpha^*}{\alpha^*+f_kY\hat{\mathcal{L}}(k,\alpha^*,Y)}\prod_{i=0}^{k-1} \frac{f_{i}Y\hat{\mathcal{L}}(i,\alpha^*,Y)}{\alpha^*+f_iY\hat{\mathcal{L}}(i,\alpha^*,Y)}\right].$$ For a fixed individual, the distribution $(q_k)_{k\in{\mathbb{N}}}$ of the number of children it generates over its entire lifetime is given by $$q_k = {\mathbb{P}}\left(V_{YG(\infty)}=k\right).$$ Similarly to Theorem \[th-limitdist-nonstat\], the sequence $(\hat{\mathcal{L}}(k,\alpha^*,Y))_{k\in{\mathbb{N}}}$ is given by $$\hat{\mathcal{L}}(k,\alpha^*,Y) = \left(\frac{\mathcal{L}({\mathbb{P}}\left(V_{uG(\cdot)}=k\right)g(\cdot))(\alpha^*)} {\mathcal{L}({\mathbb{P}}\left(V_{uG(\cdot)}=k\right))(\alpha^*)}\right)_{u=Y},$$ where again $\mathcal{L}(h(\cdot))(\alpha)$ denotes the Laplace transform of a function $h$. Notice that in this case, due to the presence of the fitness $Y$, this sequence is no longer deterministic but random. We still have the product structure for $(p_k)_{k\in{\mathbb{N}}}$ as in the stationary case, but now we have to average over the fitness distribution. We point out that Theorem \[th-limitdist-nonstat\] is a particular case of Theorem \[th-degagefit\], when we consider $Y\equiv 1$. We state the two results as separate theorems for clarity of presentation. We prove Theorem \[th-degagefit\] in Section \[sec-pf-aging-fitness-gen\]. In Section \[sec-aging-gen-PA\] we show how Theorem \[th-limitdist-nonstat\] can be obtained from Theorem \[th-degagefit\], and in particular how Condition is obtained from the analogous Condition stated below for general fitness distributions. With affine PA weights, in Proposition \[prop-pkasym\_fitage\], we can identify the asymptotics of the limiting degree distribution we obtain. This is proved by techniques similar to those used in the aging-only case, although the result cannot be stated as simply.
In particular, we prove $$p_k =\frac{\Gamma(k+b/a)}{\Gamma(b/a)\Gamma(k+1)}\frac{2\pi}{\sqrt{\mathrm{det}(kH_k(t_k,s_k))}} {\mathrm{e}}^{-k\Psi_k(t_k,s_k)}{\mathbb{P}}\left(\mathcal{N}_1\geq -t_k,\mathcal{N}_2\geq -s_k\right)(1+o(1)),$$ where the function $\Psi_k(t,s)$ depends on the aging function, the density $\mu$ of the fitness and $k$. The point $(t_k,s_k)$ is the absolute minimum of $\Psi_k(t,s)$, $H_k(t,s)$ is the Hessian matrix of $\Psi_k(t,s)$, and $(\mathcal{N}_1,\mathcal{N}_2)$ is a bivariate normal vector with covariance matrix related to $H_k(t,s)$. We do not know the necessary and sufficient conditions for the existence of such a minimum $(t_k,s_k)$. However, in Section \[sec-fitexamples\], we consider two examples where we can apply this result, and we show that it is possible to obtain power-laws for them. In the case of aging and fitness, the supercriticality condition in is replaced by the analogous condition that [$$\label{cond-eg_inf2} {\mathbb{E}}\left[V_{YG(t)}\right]<\infty \quad\mbox{for every }t\geq0 \quad \quad \mbox{and} \quad \lim_{t\rightarrow\infty}{\mathbb{E}}\left[V_{YG(t)}\right]>1.$$]{} Borgs et al. [@Borgs] and Dereich [@der16], [@der2014] prove results on stationary CTBPs with fitness. In these works, the authors investigate models with affine dependence on the degree and bounded fitness distributions. This is necessary since unbounded distributions with affine weights are explosive and thus [*do not have a Malthusian parameter*]{}. We refer to Section \[sec-fitconditions\] for a more precise discussion of the conditions on fitness distributions. In the case of integrable aging and fitness, it is possible to consider affine PA weights, even with unbounded fitness distributions, as exemplified by .
In particular, for $f_k = ak+b$, $${\mathbb{E}}[V_t] = \frac{b}{a}\left({\mathrm{e}}^{at}-1\right).$$ As a consequence, Condition can be written as [$$\label{for-AgeFitLap} \forall t\geq0 \quad {\mathbb{E}}\left[{\mathrm{e}}^{aYG(t)}\right]<\infty \quad\quad \mbox{and}\quad\quad \lim_{t\rightarrow\infty}{\mathbb{E}}\left[{\mathrm{e}}^{aYG(t)}\right]>1+ \frac ab.$$]{} The expected value ${\mathbb{E}}\left[{\mathrm{e}}^{aYG(t)}\right]$ is the moment generating function of $Y$ evaluated at $aG(t)$. In particular, a necessary condition to have a Malthusian parameter is that the moment generating function is finite on the interval $[0,aG(\infty))$. As a consequence, denoting ${\mathbb{E}}[{\mathrm{e}}^{sY}]$ by $\varphi_Y(s)$, we have effectively moved from the condition of having bounded distributions to the condition [$$\label{for-varpYcond} \varphi_Y(x)<+\infty \quad \mbox{on}\quad [0,aG(\infty)),\quad\quad \mbox{and}\quad \lim_{x\rightarrow aG(\infty)}\varphi_Y(x)>\frac{a+b}{b}.$$]{} Condition is weaker than assuming a bounded distribution for the fitness $Y$, which means we can consider a larger class of distributions for the aging and fitness birth processes. Particularly for citation networks, it seems reasonable to have unbounded fitnesses, as the relative popularity of papers varies substantially. Dynamical power-laws for exponential fitness and integrable aging {#sec-res-aging-fitness-exp} ----------------------------------------------------------------- In Section \[sec-fitexamples\] we introduce three different classes of fitness distributions, for which we give the asymptotics for the limiting degree distribution of the corresponding ${\mathrm{CTBP}}$. The first class is called [*heavy-tailed*]{}. Recalling , any distribution $Y$ in this class satisfies, for any $t>0$, [$$\label{def-powerlawfit} \varphi_Y(t) = {\mathbb{E}}\left[{\mathrm{e}}^{tY}\right] = +\infty.$$]{} These distributions have a tail that is thicker than exponential.
For instance, power-law distributions belong to this first class. Similarly to unbounded distributions in the stationary regime, such distributions generate [*explosive*]{} birth processes, independently of the choice of integrable aging function. The second class is called [*sub-exponential*]{}. The density $\mu$ of a distribution $Y$ in this class satisfies [$$\label{def-subexpfitness} \forall ~\beta>0, \quad \quad \lim_{s\rightarrow+\infty}\mu(s){\mathrm{e}}^{\beta s}=0.$$]{} An example of this class is the density $\mu(s) = C{\mathrm{e}}^{-\theta s^{1+\varepsilon}}$, for some $\varepsilon,C,\theta>0$. For such a density, we show in Proposition \[prop-subexpfit\] that the corresponding limiting degree distribution has a thinner tail than a power-law. The third class is called [*general-exponential*]{}. The density $\mu$ of a distribution $Y$ in this class is of the form [$$\label{def-generalexpfit} \mu(s) = Ch(s){\mathrm{e}}^{-\theta s},$$]{} where $h(s)$ is a twice differentiable function such that $h'(s)/h(s)\rightarrow0$ and $h''(s)/h(s)\rightarrow0$ as $s\rightarrow\infty$, and $C$ is a normalization constant. For instance, exponential and Gamma distributions belong to this class. From , we know that in order to obtain a non-explosive process, it is necessary to consider the exponential rate $\theta>aG(\infty)$. We will see that the limiting degree distribution obeys a power law whenever $\theta>aG(\infty)$, with tails becoming thinner as $\theta$ increases. For a distribution in the general exponential class, as proven in Proposition \[prop-expfit\_general\], the limiting degree distribution of the corresponding ${\mathrm{CTBP}}$ has a power-law term, with slowly-varying corrections given by the aging function $g$ and the function $h$. We do not state Propositions \[prop-expfit\_general\] and \[prop-subexpfit\] here, as these need notation and results from Section \[sec-adaptedLap\].
For this reason, we only state the result for the special case of a purely exponential fitness distribution: \[th-degexpfitness\] Let the fitness $Y$ be exponentially distributed with parameter $\theta$, and let $g$ be an integrable aging function. Assume that the corresponding birth process $(M_t)_{t\geq0}$ is supercritical and Malthusian. Then, the limiting degree distribution $(p_k)_{k\in{\mathbb{N}}}$ of the corresponding CTBP ${\boldsymbol{M}}$ is $$p_k = {\mathbb{E}}\left[\frac{\theta}{\theta+f_kG(T_{\alpha^*})}\prod_{i=0}^{k-1}\frac{f_iG(T_{\alpha^*})}{\theta+f_iG(T_{\alpha^*})}\right].$$ The distribution $(q_k)_{k\in{\mathbb{N}}}$ of the number of children of a fixed individual in its entire lifetime is given by $$q_k = \frac{\theta}{\theta+G(\infty) f_k}\prod_{i=0}^{k-1}\frac{G(\infty) f_i}{\theta+G(\infty) f_i}.$$ Using exponential fitness makes the computation of the Laplace transform and the limiting degree distribution easier. We refer to Section \[sec-expfitness\] for the precise proof. In particular, the sequence defined in Corollary \[th-degexpfitness\] is very similar to the limiting degree distribution of a stationary process with a bounded fitness. Let $(\xi^Y_t)_{t\geq0}$ be a birth process with PA weights $(f_k)_{k\in{\mathbb{N}}}$ and fitness $Y$ with bounded support. As proved in [@der2014 Corollary 2.8], and as we show in Section \[sec-fitconditions\], the limiting degree distribution of the corresponding branching process, assuming that $(\xi^Y_t)_{t\geq0}$ is supercritical and Malthusian, has the form $$p_k = {\mathbb{E}}\left[\frac{\alpha^*}{\alpha^* +Yf_k}\prod_{i=0}^{k-1}\frac{Yf_i}{\alpha^* + Yf_i}\right] = {\mathbb{P}}\left(\xi^Y_{T_{\alpha^*}} = k\right).$$ We notice the similarities with the limiting degree sequence given by Corollary \[th-degexpfitness\]. When $g$ is integrable, the random variable $G(T_{\alpha^*})$ has bounded support.
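The product in Corollary \[th-degexpfitness\] telescopes: writing $r_k=\prod_{i<k}G(\infty)f_i/(\theta+G(\infty)f_i)$, one has $q_k=r_k-r_{k+1}$, so the $q_k$ sum to one. A quick numerical check (the values $\theta=2$, $G(\infty)=1.5$ and $f_k=k+1$ are our illustrative choices):

```python
# illustrative values (our choice): theta = 2, G(inf) = 1.5, f_k = k + 1
theta, G_inf = 2.0, 1.5
f = lambda k: k + 1.0

# q_k = theta/(theta + G_inf f_k) * prod_{i<k} G_inf f_i/(theta + G_inf f_i),
# accumulated iteratively; the running product r_k makes the sum telescope
total, r = 0.0, 1.0
for k in range(5000):
    total += theta / (theta + G_inf * f(k)) * r
    r *= G_inf * f(k) / (theta + G_inf * f(k))

print(total)  # ~ 1: a fixed individual has a.s. finitely many lifetime children
```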
In particular, we can rewrite the sequence of Corollary \[th-degexpfitness\] as $$p_k = {\mathbb{P}}\left(\xi^{G(T_{\alpha^*})}_{T_{\theta}}=k\right).$$ As a consequence, the limiting degree distribution of the process $(M_t)_{t\geq0}$ equals that of a stationary process with fitness $G(T_{\alpha^*})$ and Malthusian parameter $\theta$. In the case where $Y$ has exponential distribution and the PA weights are affine, we can also investigate the occurrence of [*dynamical power laws*]{}. In fact, for such a process $(M_t)_{t\geq0}$, the exponential distribution of $Y$ leads to [$$\begin{split} P_k[M](t) = {\mathbb{P}}\left(M_t=k\right) & = \frac{\theta}{\theta+f_kG(t)}\prod_{i=0}^{k-1}\frac{f_iG(t)}{\theta+f_iG(t)}\\ & = \frac{\theta}{aG(t)}\frac{\Gamma(b/a+\theta/(aG(t)))}{\Gamma(b/a)}\frac{\Gamma(k+b/a)}{\Gamma(k+b/a+ 1+\theta/(aG(t)))}. \label{pkt-formula} \end{split}$$]{} Here, $M_t$ describes the number of children of an individual of age $t$. In other words, $({\mathbb{P}}(M_t=k))_{k\in{\mathbb{N}}}$ is a distribution such that, as $k\rightarrow\infty$, $$P_k[M](t) = {\mathbb{P}}\left(M_t = k\right) = C(t)\,k^{-(1+\theta/(aG(t)))}(1+o(1)),$$ for some constant $C(t)>0$. This means that for every time $t\geq0$, the random variable $M_t$ has a power-law distribution with exponent $\tau(t) = 1+\theta/(aG(t))>2$. In particular, for every $t\geq0$, $M_t$ has finite expectation. We call this behavior, where the power-law exponent varies with the age of the individuals, a [*dynamical power law*]{}. This occurs not only in the case of pure exponential fitness, but in general for every distribution as in , as shown in Proposition \[prop-expfit\_general\] below. Further, we see that when $t\rightarrow \infty$, the dynamical power-law exponent coincides with the power-law exponent of the entire population.
Indeed, the limiting degree distribution equals [$$\label{for-expfit-pk} p_k = {\mathbb{E}}\left[\frac{\theta}{aG(T_{\alpha^*})}\frac{\Gamma(b/a+\theta/(aG(T_{\alpha^*})))}{\Gamma(b/a)} \frac{\Gamma(k+b/a)}{\Gamma(k+b/a+1+\theta/(aG(T_{\alpha^*})))}\right].$$]{} In Figure \[fig-pwrlwtime\], we show a numerical example of the dynamical power-law for a process with exponential fitness distribution and affine weights. When time increases, the power-law exponent monotonically decreases to the limiting exponent $\tau\equiv \tau(\infty)>2$, which means that the limiting distribution still has finite first moment. Note the similarity to the case of citation networks in Figure \[fig-dunamycpowerlaw\]. When $t\rightarrow\infty$, the power-law exponent converges, and also $M_t$ converges in distribution to a limiting random variable $M_\infty$ with distribution [$$\label{exp-fitness-entire} q_k = {\mathbb{P}}\left(M_\infty=k\right) = \frac{\theta}{aG(\infty)}\frac{\Gamma(b/a+\theta/(aG(\infty)))}{\Gamma(b/a)}\frac{\Gamma(k+b/a)}{\Gamma(k+b/a+ 1+\theta/(aG(\infty)))}.$$]{} $M_\infty$ has a power-law distribution, where the power-law exponent is $$\tau = \lim_{t\rightarrow\infty}\tau(t) = 1+ \theta/(aG(\infty))>2.$$ In particular, since $\tau>2$, a fixed individual has a finite expected number of children even over its entire lifetime, unlike in the stationary case with affine weights. In terms of citation networks, this type of process predicts that papers do not receive an infinite number of citations after they are published (recall Figure \[fig-average\_degree\_increment\]). Figure \[fig-distr-agepower\] shows the effect of aging on the stationary process with affine weights, where the power-law is lost due to the aging effect. Thus, aging [*slows down*]{} the stationary process, and it is not possible to create the amount of high-degree vertices that are present in power-law distributions.
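The dynamical power law can be checked numerically from the product formula for $P_k[M](t)$: the log-log slope of $k\mapsto P_k[M](t)$ should approach $-\tau(t)=-(1+\theta/(aG(t)))$. A sketch with illustrative parameters $\theta=2$, $a=b=1$ and the fixed value $G(t)=1/2$ (all our choices):

```python
import math

# illustrative parameters (our choice): exponential fitness rate theta,
# affine weights f_k = a*k + b, and one fixed value c = G(t)
theta, a, b, c = 2.0, 1.0, 1.0, 0.5

def P(k):
    # P(M_t = k) = theta/(theta + f_k G(t)) * prod_{i<k} f_i G(t)/(theta + f_i G(t))
    r = theta / (theta + (a * k + b) * c)
    for i in range(k):
        r *= (a * i + b) * c / (theta + (a * i + b) * c)
    return r

# log-log slope between k = 1000 and k = 2000 estimates the power-law exponent
est = -math.log(P(2000) / P(1000)) / math.log(2)
tau_t = 1 + theta / (a * c)  # predicted dynamical exponent, here 5
print(est, tau_t)
```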
Fitness can [*speed up*]{} the aging process to gain high-degree vertices, so that the power-law distribution is restored. This is shown in Figure \[fig-stat-nonstat-fit\], where aging is combined with exponential fitness for the same aging functions as in Figure \[fig-distr-agepower\]. In the stationary case, it is not possible to use unbounded distributions for the fitness to obtain a Malthusian process if the PA weights $(f_k)_{k\in{\mathbb{N}}}$ are affine. In fact, using unbounded distributions, the expected number of children at exponential time $T_{\alpha}$ is not finite [*for any*]{} $\alpha>0$, i.e., the branching process is [*explosive*]{}. The aging effect allows us to relax the condition on the fitness, and the restriction to bounded distributions is relaxed to a condition on the moment generating function of the fitness. Conclusion and open problems {#sec-struc} ---------------------------- #### **Beyond the tree setting.** In this paper, we only consider the [*tree setting*]{}, which is clearly unrealistic for citation networks. However, the analysis of PAMs suggests that the qualitative features of the degree distribution for PAMs are identical to those in the tree setting. Proving this remains an open problem that we hope to address in future work. Should this indeed be the case, then we could summarize our findings in the following simple way: The power-law tail distribution of PAMs is destroyed by integrable aging, and cannot be restored either by super-linear weights or by adding bounded fitnesses. However, it [*is*]{} restored by [*unbounded*]{} fitnesses with at most an exponential tail. Some of these results are example-based, while the existence of the limiting degree distribution is proved in general. #### **Structure of the paper.** The present paper is organized as follows. In Section \[sec-generalth\], we quote general results on CTBPs, in particular Theorem \[th-expogrowth\] that we use throughout our proofs.
In Section \[sec-stat-nonfit\], we describe known properties of the stationary regime. In Section \[sec-fitconditions\], we briefly discuss the Malthusian parameter, focusing on conditions on fitness distributions to obtain supercritical processes. In Section \[sec-existence\], we prove Theorems \[th-explosive\] and \[th-degagefit\], and we show how Theorem \[th-limitdist-nonstat\] is a particular case of Theorem \[th-degagefit\]. In Section \[sec-laplaceSection\] we specialize to the case of affine PA weights, giving precise asymptotics. General theory of Continuous-Time Branching Processes {#sec-generalth} ===================================================== General set-up of the model {#sec-gen-set-up} --------------------------- In this section we present the general theory of continuous-time branching processes ($\mathrm{CTBPs}$). In such models, individuals produce children according to i.i.d. copies of the same birth process. We now define birth processes in terms of point processes: A [*point process*]{} $\xi$ is a random variable from a probability space $(\Omega,\mathcal{A},{\mathbb{P}})$ to the space of integer-valued measures on ${\mathbb{R}}^+$. A point process $\xi$ is defined by a sequence of positive real-valued random variables $(T_{k})_{k\in{\mathbb{N}}}$. With a slight abuse of notation, we denote the density of the point process $\xi$ by $$\xi(dt) = \sum_{k\in{\mathbb{N}}}\delta_{T_k}(dt),$$ where $\delta_x(dt)$ is the delta measure in $x$, and the random measure $\xi$ evaluated on $[0,t]$ as $$\xi(t) = \xi([0,t]) = \sum_{k\in{\mathbb{N}}}{\mathbbm{1}}_{[0,t]}(T_k).$$ We suppose throughout the paper that $T_k<T_{k+1}$ with probability 1 for every $k\in{\mathbb{N}}$.
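As a minimal illustration of this construction (the rates $f_k=k+1$ and the seed are our illustrative choices, not from the paper), the jump times $(T_k)_{k\in{\mathbb{N}}}$ can be sampled as partial sums of independent exponential waiting times, and $\xi([0,t])$ simply counts the jump times up to $t$:

```python
import bisect
import random

random.seed(42)

# jump times of an integer-valued process with rates f_k = k + 1 (illustrative):
# T_{k+1} - T_k ~ Exp(f_k), so T is a strictly increasing sequence
T, s = [], 0.0
for k in range(100):
    s += random.expovariate(k + 1)
    T.append(s)  # T[k] holds the (k+1)-st jump time

def xi(t):
    # xi([0, t]) = number of jump times lying in [0, t]
    return bisect.bisect_right(T, t)

print(xi(0.0), xi(T[0]), xi(T[-1]))  # 0, 1, 100
```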
\[rem-pointproc\] Equivalently, considering a sequence $(T_k)_{k\in{\mathbb{N}}}$ (where $T_0=0$) of nonnegative real-valued random variables, such that $T_k< T_{k+1}$ with probability $1$, we can define $$\xi(t) = \xi([0,t]) = k \quad\quad\mbox{when}\quad\quad t\in[T_k,T_{k+1}).$$ We will often define a point process from the jump-times sequence of an integer-valued process $(V_t)_{t\geq 0}$. For instance, consider $(V_t)_{t\geq0}$ as a Poisson process, and denote $T_k =\inf\{t>0\mbox{ : }V_t\geq k\}$. Then we can use the sequence $(T_k)_{k\in{\mathbb{N}}}$ to define a point process $\xi$. The point process defined from the jump times of a process $(V_t)_{t\geq0}$ will be denoted by $\xi_V$. We now introduce some notation before giving the definition of ${\mathrm{CTBP}}$. We denote the set of individuals in the population using Ulam-Harris notation for trees. The set of individuals is $$\mathcal{N} = \bigcup_{n\in{\mathbb{N}}}{\mathbb{N}}^n.$$ For $x\in{\mathbb{N}}^n$ and $k\in{\mathbb{N}}$ we denote the $k$-th child of $x$ by $xk\in{\mathbb{N}}^{n+1}$. This construction is well known, and has been used in other works on branching processes (see [@Jagers], [@Nerman], [@RudValko] for more details). We are now ready to define our branching process: \[def-brproc\] Given a point process $\xi$, we define the ${\mathrm{CTBP}}$ associated to $\xi$ as the pair of a probability space $$(\Omega,\mathcal{A},{\mathbb{P}}) = \prod_{x\in\mathcal{N}}\left(\Omega_x,\mathcal{A}_x,{\mathbb{P}}_x\right),$$ and an infinite set $(\xi^x)_{x\in\mathcal{N}}$ of i.i.d. copies of the process $\xi$. We will denote the branching process by ${\boldsymbol{\xi}}$. Throughout the paper, we will define point processes in terms of jump times of processes $(V_t)_{t\geq0}$. In order to keep the notation light, we will denote branching processes defined by point processes given by jump times of the process $V_t$ by ${\boldsymbol{V}}$.
To be precise, by ${\boldsymbol{V}}$ we denote a probability space as in Definition \[def-brproc\] and an infinite set of measures $(\xi_V^x)_{x\in\mathcal{N}}$, where $\xi_V$ is the point process defined by the process $V$. According to Definition \[def-brproc\], a branching process is a pair of a probability space and a sequence of random measures. It is possible though to define an [*evolution*]{} of the branching population. At time $t=0$, our population consists only of the root, denoted by ${\varnothing}$. Whenever an individual $x$ gives birth to its $k$-th child at time $t$, i.e., $\xi^x(t)=k$ while $\xi^x(t-)=k-1$, we start the process $\xi^{xk}$. Formally: We define the sequence of birth times for the process ${\boldsymbol{\xi}}$ as $\tau^\xi_{\varnothing}=0$, and for $x\in\mathcal{N}$, $$\tau^\xi_{xk} = \tau^\xi_x+\inf\left\{s\geq 0\mbox{ : } \xi^x(s)\geq k\right\}.$$ In this way we have defined the set of individuals, their birth times and the processes according to which they reproduce. We still need a way to count how many individuals are alive at a certain time $t$. \[charact\] A [*random characteristic*]{} is a real-valued process $\Phi\colon \Omega\times{\mathbb{R}}\rightarrow{\mathbb{R}}$ such that $\Phi(\omega,s)=0$ for any $s<0$, and such that, for every $s\geq 0$, $\Phi(\omega,s)$ is bounded and depends on $\omega$ only through the birth process of the individual and the birth processes of its children. An important example of a random characteristic is obtained by the function ${\mathbbm{1}}_{{\mathbb{R}}^+}(s)$, which measures whether the individual has been born by time $s$. Another example is ${\mathbbm{1}}_{{\mathbb{R}}^+}(s){\mathbbm{1}}_{\{k\}}(\xi(s))$, which measures whether the individual has been born by time $s$ and has exactly $k$ children at that time.
For each individual $x\in\mathcal{N}$, $\Phi_x(\omega,s)$ denotes the value of $\Phi$ evaluated on the progeny of $x$, regarding $x$ as ancestor, when the age of $x$ is $s$. In other words, $\Phi_x(\omega,s)$ is the evaluation of $\Phi$ on the tree rooted at $x$, ignoring the rest of the population. If we do not specify the individual $x$, then we assume that $\Phi = \Phi_{\varnothing}$. We use random characteristics to describe the properties of the branching population. Consider a random characteristic $\Phi$ as in Definition \[charact\]. We define the evaluated branching processes with respect to $\Phi$ at time $t\in{\mathbb{R}}^+$ as $${\boldsymbol{\xi}}_t^\Phi = \sum_{x\in\mathcal{N}}\Phi_x(t-\tau^\xi_x).$$ The meaning of the evaluated branching process is clear when we consider the random characteristic $\Phi(t) = {\mathbbm{1}}_{{\mathbb{R}}^+}(t)$, for which $${\boldsymbol{\xi}}_t^{{\mathbbm{1}}_{{\mathbb{R}}^+}} = \sum_{x\in\mathcal{N}}({\mathbbm{1}}_{{\mathbb{R}}^+})_x(t-\tau^\xi_x),$$ which is the number of $x\in\mathcal{N}$ such that $t-\tau^\xi_x\geq 0$, i.e., the total number of individuals already born up to time $t$. Another characteristic that we consider in this paper is, for $k\in{\mathbb{N}}$, $\Phi_k(t) = {\mathbbm{1}}_{\{k\}}(\xi_{t})$, for which $${\boldsymbol{\xi}}_t^{\Phi_k} = \sum_{x\in\mathcal{N}}{\mathbbm{1}}_{\{k\}}\left(\xi^x_{t-\tau^\xi_x}\right)$$ is the number of individuals with $k$ children at time $t$. As known from the literature, the properties of the branching process are determined by the behavior of the point process $\xi$. First of all, we need to introduce some notation. Consider a function $f:{\mathbb{R}}^+\rightarrow{\mathbb{R}}$. 
We denote the Laplace transform of $f$ by $$\mathcal{L}(f(\cdot))(\alpha) = \int_0^\infty {\mathrm{e}}^{-\alpha t}f(t)dt.$$ With a slight abuse of notation, if $\mu$ is a positive measure on ${\mathbb{R}}^+$, then we denote $$\mathcal{L}(\mu(d\cdot))(\alpha) = \int_0^\infty {\mathrm{e}}^{-\alpha t}\mu(dt).$$ We use the Laplace transform to analyze the point process $\xi$: \[def-supercr\] Consider a point process $\xi$ on ${\mathbb{R}}^+$. We say $\xi$ is [*supercritical*]{} when there exists $\alpha^*>0$ such that $$\mathcal{L}({\mathbb{E}}\xi(d\cdot))(\alpha^*) = \int_0^\infty {\mathrm{e}}^{-\alpha^* t}{\mathbb{E}}\xi(dt) =\sum_{k\in{\mathbb{N}}}{\mathbb{E}}\left[\int_0^\infty {\mathrm{e}}^{-\alpha^* t}\delta_{T_k}(dt)\right] = \sum_{k\in{\mathbb{N}}}{\mathbb{E}}\left[{\mathrm{e}}^{-\alpha^* T_k}\right]=1.$$ We call $\alpha^*$ the [*Malthusian parameter*]{} of the process $\xi$. We point out that ${\mathbb{E}}\xi(d\cdot)$ is an abuse of notation to denote the density of the [*averaged*]{} measure ${\mathbb{E}}[\xi([0,t])]$. A second fundamental property for the analysis of branching processes is the following: \[def-malthus\] Consider a supercritical point process $\xi$, with Malthusian parameter $\alpha^*$. 
The process $\xi$ is [*Malthusian*]{} when $$\left.-\frac{d}{d\alpha}\left(\mathcal{L}({\mathbb{E}}\xi(d\cdot))\right)(\alpha)\right|_{\alpha^*} = \int_0^\infty t{\mathrm{e}}^{-\alpha^* t}{\mathbb{E}}\xi(dt) = \sum_{k\in{\mathbb{N}}}{\mathbb{E}}\left[T_k{\mathrm{e}}^{-\alpha^* T_k}\right]<\infty.$$ We denote [$$\label{for-tildealpha} \tilde{\alpha} = \inf\left\{\alpha>0 ~:~ \mathcal{L}\left({\mathbb{E}}\xi(d\cdot)\right)(\alpha)<\infty\right\},$$]{} and we will also assume that the process satisfies the condition [$$\label{for-larger1} \lim_{\alpha\searrow\tilde{\alpha}}\mathcal{L}\left({\mathbb{E}}\xi(d\cdot)\right)(\alpha)>1.$$]{} Integrating by parts, it is possible to show that, for a point process $\xi_V$, $$\mathcal{L}\left({\mathbb{E}}\xi_V(d\cdot)\right)(\alpha) = {\mathbb{E}}\left[V_{T_\alpha}\right],$$ where $T_\alpha$ is an exponentially distributed random variable with parameter $\alpha$, independent of the process $(V_t)_{t\geq0}$. Heuristically, the Laplace transform of a point process $\xi_V$ is the expected number of children born by an exponentially distributed time $T_\alpha$. In this case the Malthusian parameter is the exponential rate $\alpha^*$ such that by time $T_{\alpha^*}$, on average, exactly one child has been born. These two conditions are required to prove the main result on branching processes that we rely upon: \[th-expogrowth\] Consider the point process $\xi$, and the corresponding branching process ${\boldsymbol{\xi}}$. Assume that $\xi$ is supercritical and Malthusian with parameter $\alpha^*$, and suppose that there exists $\bar{\alpha}<\alpha^*$ such that $$\int_0^\infty {\mathrm{e}}^{-\bar{\alpha}t}{\mathbb{E}}\xi(dt)<\infty.$$ Then 1. there exists a random variable $\Theta$ such that as $t\rightarrow\infty$, [$$\label{th-expogrowth-f1} {\mathrm{e}}^{-\alpha^*t}{\boldsymbol{\xi}}^{{\mathbbm{1}}_{{\mathbb{R}}^+}}_t\stackrel{{\mathbb{P}}-as}{\longrightarrow}\Theta;$$]{} 2.
for any two random characteristics $\Phi$ and $\Psi$, [$$\label{th-expogrowth-f2} \frac{{\boldsymbol{\xi}}^{\Phi}_t}{{\boldsymbol{\xi}}^{\Psi}_t}\stackrel{{\mathbb{P}}-as}{\longrightarrow}\frac{\mathcal{L}({\mathbb{E}}[\Phi(\cdot)])(\alpha^*)}{\mathcal{L}({\mathbb{E}}[\Psi(\cdot)])(\alpha^*)}.$$]{} This result is stated in [@RudValko Theorem A], which is a weaker version of [@Nerman Theorem 6.3]. Formula implies that, ${\mathbb{P}}$-a.s., the population size grows exponentially with time. It is relevant though to give a description of the distribution of the random variable $\Theta$: \[th-W\] Under the hypotheses of Theorem \[th-expogrowth\], if [$$\label{for-xlogx} {\mathbb{E}}\left[\mathcal{L}(\xi(d\cdot))(\alpha^*)\log^+\left(\mathcal{L}(\xi(d\cdot))(\alpha^*)\right)\right]<\infty,$$]{} then, on the event $\{{\boldsymbol{\xi}}^{{\mathbbm{1}}_{{\mathbb{R}}^+}}_t\rightarrow\infty\}$, i.e., on the event that the branching population keeps growing in time, the random variable $\Theta$ in is positive with probability 1, and ${\mathbb{E}}[\Theta]=1$. Otherwise, $\Theta=0$ with probability 1. Condition is called the $(\mathrm{xlogx})$ condition. This result is proven in [@Jagers Theorem 5.3], and it is the CTBP equivalent of the Kesten-Stigum theorem for Galton-Watson processes ([@kes Theorem 1.1]). Formula says that the ratio between the evaluation of the branching process with two different characteristics converges ${\mathbb{P}}$-a.s. to a constant that depends only on the two characteristics involved.
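This a.s. convergence of ratios can be illustrated by simulating a whole CTBP event by event. For $f_k=k+1$ (so that $\alpha^*=2$, as recalled in the next subsection), the fraction of individuals with no children should approach $\alpha^*\mathcal{L}({\mathbb{P}}(\xi(\cdot)=0))(\alpha^*)=2\int_0^\infty{\mathrm{e}}^{-2t}{\mathrm{e}}^{-t}dt=2/3$. The stopping population size and the seed in this sketch are our illustrative choices.

```python
import heapq
import random

random.seed(7)

f = lambda k: k + 1  # PA weights with a = b = 1, for which alpha* = 2

children = [0]  # children[x] = current number of children of individual x
events = [(random.expovariate(f(0)), 0)]  # (next birth time, parent id)

N = 30000  # population size at which we stop (illustrative choice)
while len(children) < N:
    t, x = heapq.heappop(events)
    children[x] += 1  # x gives birth at time t
    child = len(children)
    children.append(0)
    # reschedule x at its new rate, and schedule the first birth of the child
    heapq.heappush(events, (t + random.expovariate(f(children[x])), x))
    heapq.heappush(events, (t + random.expovariate(f(0)), child))

frac0 = children.count(0) / len(children)
print(frac0)  # close to p_0 = 2 L(P(xi(.) = 0))(2) = 2/3
```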
In particular, if we consider, for $k\in{\mathbb{N}}$, $$\begin{array}{ccc} \displaystyle \Phi(t) = {\mathbbm{1}}_{\{k\}}(\xi_t),& \mbox{ and }& \displaystyle \Psi(t) = {\mathbbm{1}}_{{\mathbb{R}}^+}(t), \end{array}$$ then Theorem \[th-expogrowth\] gives [$$\label{rem-ratiochar} \frac{{\boldsymbol{\xi}}^{\Phi}_t}{{\boldsymbol{\xi}}^{{\mathbbm{1}}_{{\mathbb{R}}^+}}_t}\stackrel{{\mathbb{P}}-as}{\longrightarrow}\alpha^*\mathcal{L}({\mathbb{P}}\left(\xi(\cdot)=k\right))(\alpha^*),$$]{} since $\mathcal{L}({\mathbb{E}}[{\mathbbm{1}}_{{\mathbb{R}}^+}(\cdot)])(\alpha^*) = 1/\alpha^*$. The ratio in the previous formula is the fraction of individuals with $k$ children in the whole population: \[def-limitdistr\] The sequence $(p_k)_{k\in{\mathbb{N}}}$, where $$p_k = \alpha^*\mathcal{L}({\mathbb{P}}\left(\xi(\cdot)=k\right))(\alpha^*) = \alpha^*\int_0^\infty {\mathrm{e}}^{-\alpha^* t}{\mathbb{P}}\left(\xi(t)=k\right)dt$$ is the [*limiting degree distribution*]{} for the branching process ${\boldsymbol{\xi}}$. The aim of the following sections will be to study when point processes satisfy the conditions of Theorem \[th-expogrowth\], in order to analyze the limiting degree distribution in Definition \[def-limitdistr\]. Stationary birth processes with no fitness {#sec-stat-nonfit} ------------------------------------------ In this section we present the theory of birth processes that are stationary and have deterministic rates. This is relevant since the definition of aging processes starts with a stationary process. In particular, we give a description of the affine case, which plays a central role in the present work: \[def-statnonfit\] Consider a non-decreasing sequence $(f_k)_{k\in{\mathbb{N}}}$ of positive real numbers. A [*stationary non-fitness birth process*]{} is a stochastic process $(V_t)_{t\geq0}$ such that 1. $V_0=0$, and $V_t\in{\mathbb{N}}$ for all $t\in{\mathbb{R}}^+$; 2. $V_t\leq V_s$ for every $t\leq s$; 3.
for $h$ small enough, [$$\label{form-jumpprobstat} {\mathbb{P}}\left(V_{t+h}=k+1 \mid V_t=k\right) = f_kh + o(h), ~\mbox{and for}~j\geq 2,~ {\mathbb{P}}\left(V_{t+h}=k+j \mid V_t=k\right) = o(h).$$]{} We denote the jump times by $(T_k)_{k\in{\mathbb{N}}}$, i.e., $$T_k = \inf\left\{t\geq 0 \mbox{ : } V_t\geq k\right\}.$$ We denote the point process corresponding to $(V_t)_{t\geq0}$ by $\xi_V$. In this case, $(V_t)_{t\geq0}$ is a pure birth Markov process, and for every $k\in{\mathbb{N}}$, $T_{k+1}-T_k$ has an exponential law with parameter $f_k$, independent of $(T_{h+1}-T_h)_{h=0}^{k-1}$. It is possible to show the following proposition: \[prop-eqdiffV\] Consider a stationary non-fitness birth process $(V_t)_{t\geq 0}$. Denote, for every $k\in{\mathbb{N}}$, ${\mathbb{P}}(V_t=k)=P_k[V](t)$. Then [$$\label{for-probfunct1} P_0[V](t) = \mathrm{exp}\left(-f_0t\right),$$]{} and, for $k\geq 1$, [$$\label{for-probfunct2} P_k[V](t) = f_{k-1}\mathrm{exp}\left(-f_k t\right)\int_0^t\mathrm{exp}\left(f_kx\right)P_{k-1}[V](x)dx.$$]{} For a proof, see [@athrBook Chapter 3, Section 2]. From the jump times, it is easy to compute the explicit expression for the Laplace transform of $\xi_V$ as $$\mathcal{L}({\mathbb{E}}\xi_V(d\cdot))(\alpha)=\sum_{k\in{\mathbb{N}}}{\mathbb{E}}\left[\int_0^\infty {\mathrm{e}}^{-\alpha t}\delta_{T_k}(dt)\right] = \sum_{k\in{\mathbb{N}}}{\mathbb{E}}\left[{\mathrm{e}}^{-\alpha T_k}\right] = \sum_{k\in{\mathbb{N}}}\prod_{i=0}^{k-1}\frac{f_i}{\alpha+f_i},$$ since every $T_k$ can be seen as a sum of independent exponential random variables with parameters given by the sequence $(f_k)_{k\in{\mathbb{N}}}$. Assuming now that $\xi_V$ is supercritical and Malthusian with parameter $\alpha^*$, we have the explicit expression for the limit distribution $(p_k)_{k\in{\mathbb{N}}}$, given by .
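For the case $f_k=k+1$ this limit distribution is fully explicit: iterating Proposition \[prop-eqdiffV\] gives ${\mathbb{P}}(V_t=k)={\mathrm{e}}^{-t}(1-{\mathrm{e}}^{-t})^k$, and $\alpha^*=2$, so $p_k=\alpha^*\int_0^\infty {\mathrm{e}}^{-\alpha^* t}\,{\mathbb{P}}(V_t=k)\,dt$ can be evaluated numerically and compared with the affine closed form $4/((k+1)(k+2)(k+3))$ of the next subsection. A sketch (the quadrature grid is our choice):

```python
import math

alpha = 2.0  # Malthusian parameter for f_k = k + 1 (a = b = 1)

def p_numeric(k, T=40.0, n=50000):
    # p_k = alpha * int_0^infty e^{-alpha t} P(V_t = k) dt via the trapezoidal
    # rule on [0, T]; for f_k = k + 1, P(V_t = k) = e^{-t} (1 - e^{-t})^k
    g = lambda t: math.exp(-alpha * t) * math.exp(-t) * (1.0 - math.exp(-t)) ** k
    h = T / n
    return alpha * h * ((g(0.0) + g(T)) / 2 + sum(g(i * h) for i in range(1, n)))

for k in range(3):
    print(k, p_numeric(k), 4 / ((k + 1) * (k + 2) * (k + 3)))
```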
An analysis of the behavior of the limit distribution of branching processes is presented in [@Athr] and [@Rudas], where the authors prove that $(p_k)_{k\in{\mathbb{N}}}$ has a power-law tail only if the sequence of rates $(f_k)_{k\in{\mathbb{N}}}$ is asymptotically linear with respect to $k$. \[prop-V-linear\] Consider the sequence $f_k = ak+b$. Then: 1. for every $\alpha\in{\mathbb{R}}^+$, $$\mathcal{L}({\mathbb{E}}\xi_V(d\cdot))(\alpha) =\frac{\Gamma(\alpha/a+b/a)}{\Gamma(b/a)}\sum_{k\in{\mathbb{N}}}\frac{\Gamma(k+b/a)}{\Gamma(k+b/a+\alpha/a)} = \frac{b}{\alpha-a}.$$ 2. The Malthusian parameter is $\alpha^*=a+b$, and $\tilde{\alpha} = a$, where $\tilde{\alpha}$ is defined as in . 3. The derivative of the Laplace transform is $$-\frac{b}{(\alpha-a)^2},$$ which is finite whenever $\alpha>a$; 4. The process $(V_t)_{t\geq0}$ satisfies the $(\mathrm{xlogx})$ condition . The proof can be found in [@RudValko Theorem 2], or [@Athr Theorem 2.6]. For affine PA weights $(f_k)_{k\in{\mathbb{N}}} = (ak+b)_{k\in{\mathbb{N}}}$, the Malthusian parameter $\alpha^*$ exists. Since $\alpha^* = a+b$, the limiting degree distribution of the branching process ${\boldsymbol{V}}$ is given by [$$\label{for-degnormale} p_k = (1+b/a)\frac{\Gamma(1+2b/a)}{\Gamma(b/a)}\frac{\Gamma\left(k+b/a\right)}{\Gamma\left(k+b/a+2+b/a\right)}.$$]{} Notice that $p_k$ has a power-law decay with exponent $\tau = 2+\frac ba$. Branching processes of this type are related to PAM, also called the Barabási-Albert model ([@ABrB]). This model shows the so-called [*old-get-richer*]{} effect. Clearly this is not true for real-world citation networks. In Figure \[fig-average\_degree\_increment\], we notice that, on average, the increment of citations received by old papers is smaller than that of younger papers. In other words, old papers tend to be cited less and less over time.
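To make the limiting degree distribution concrete, the sketch below (with the assumed example values $a=b=1$, so that $\alpha^*=2$ and $\tau=3$) evaluates $p_k$ both via the defining product $p_k = \frac{\alpha^*}{\alpha^*+f_k}\prod_{i=0}^{k-1}\frac{f_i}{\alpha^*+f_i}$ and via the Gamma closed form, checks normalization, and observes the power-law decay numerically; `lgamma` is used to avoid overflow for large $k$.

```python
import math

a, b = 1.0, 1.0                   # affine weights f_k = a*k + b (illustrative)
alpha_star = a + b                # Malthusian parameter for affine weights
f = lambda k: a * k + b

def p_product(k):
    # p_k = alpha*/(alpha* + f_k) * prod_{i<k} f_i/(alpha* + f_i)
    prod = 1.0
    for i in range(k):
        prod *= f(i) / (alpha_star + f(i))
    return alpha_star / (alpha_star + f(k)) * prod

def p_gamma(k):
    # closed form, evaluated in log-space to avoid Gamma overflow for large k
    r = b / a
    logv = (math.lgamma(1 + 2 * r) - math.lgamma(r)
            + math.lgamma(k + r) - math.lgamma(k + r + 2 + r))
    return (1 + r) * math.exp(logv)

total = sum(p_gamma(k) for k in range(2000))       # should be close to 1
ratio = p_gamma(1000) * 1000.0 ** (2 + b / a)      # roughly constant: tau = 2 + b/a
print(p_product(5), p_gamma(5), total, ratio)
```

For $a=b=1$ the closed form reduces to $p_k = 4/((k+1)(k+2)(k+3))$, so the rescaled value `ratio` is close to $4$.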
The Malthusian parameter {#sec-fitconditions} ------------------------ The existence of the Malthusian parameter is a necessary condition to have a branching process growing at exponential rate. In particular, the Malthusian parameter does not exist in two cases: when the process is subcritical and grows slower than exponential, or when it is explosive. In the first case, the branching population might either die out or grow indefinitely with positive probability, but slower than at exponential rate. In the second case, the population size explodes in finite time with probability one. In both cases, the behavior of the branching population is different from what we observe in citation networks (Figure \[fig-numberpublic\]). For this reason, we focus on supercritical processes, i.e., on the case where the Malthusian parameter exists. Denote by $(V_t)_{t\geq0}$ a stationary birth process defined by PA weights $(f_k)_{k\in{\mathbb{N}}}$. In general, we assume $f_k\rightarrow\infty$. Denote the sequence of jump times by $(T_k)_{k\in{\mathbb{N}}}$. As noted in Section \[sec-stat-nonfit\], the Laplace transform of a birth process $(V_t)_{t\geq0}$ is given by $$\mathcal{L}({\mathbb{E}}V(d\cdot))(\alpha) = {\mathbb{E}}\left[\sum_{k\in{\mathbb{N}}}{\mathrm{e}}^{-\alpha T_k}\right] = {\mathbb{E}}\left[V_{T_{\alpha}}\right] = \sum_{k\in{\mathbb{N}}}\prod_{i=0}^{k-1}\frac{f_i}{\alpha+f_i}.$$ This expression comes from the fact that, in the stationary regime, $T_k$ is the sum of $k$ independent exponential random variables. We can write $$\sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-\sum_{i=0}^{k-1}\log\left(1+\frac{\alpha}{f_i}\right)\right) = \sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-\alpha\sum_{i=0}^{k-1}\frac{1}{f_i}(1+o(1))\right).$$ The behavior of the Laplace transform depends on the asymptotic behavior of the PA weights. We now define the terminology we use: \[def-superlin\] Consider a PA weight sequence $(f_k)_{k\in{\mathbb{N}}}$.
We say that the PA weights are [*superlinear*]{} if $\sum_{i=0}^{\infty}1/f_i<\infty$. As a general example, consider $f_k = ak^q+b$, where $q>0$. In this case, the sequence is affine when $q=1$, superlinear when $q>1$ and sublinear when $q<1$. When the weights are superlinear, since $C = \sum_{i=0}^{\infty}1/f_i<\infty$, we have [$$\label{for-superliBound} \sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-\alpha\sum_{i=0}^{k-1}\frac{1}{f_i}(1+o(1))\right)\geq \sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-\alpha C\right) = +\infty.$$]{} This holds for every $\alpha>0$. As a consequence, the Laplace transform ${\mathcal{L}}({\mathbb{E}}V(d\cdot))(\alpha)$ is always infinite, and there exists no Malthusian parameter. In particular, if we write $T_\infty = \lim_{k\rightarrow\infty}T_k$, then $T_\infty<\infty$ a.s. This means that the birth process $(V_t)_{t\geq0}$ explodes in finite time. When the weights are at most linear, the bound in does not hold anymore. In fact, consider as an example the affine weights $f_k = ak+b$. We have that $\sum_{i=0}^{k-1}\frac{1}{f_i} = (1/a)\log k(1+o(1))$. As a consequence, the Laplace transform can be written as [$$\label{for-linBound} \sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-\frac{\alpha}{a}\log k(1+o(1))\right) = \sum_{k\in{\mathbb{N}}} k^{-\frac{\alpha}{a}}(1+o(1)).$$]{} In this case, the Laplace transform is finite for $\alpha>a$. For the sublinear case, for which $\sum_{i=0}^{k-1}1/f_i = Ck^{(1-q)}(1+o(1))$, we obtain $$\sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-C\alpha k^{1-q}\right).$$ This sum is finite for any $\alpha>0$. We can now introduce fitness in the stationary process: \[rem-laplacemonotone\] Consider the process $(V_t)_{t\geq0}$ defined by the sequence of PA weights $(f_k)_{k\in{\mathbb{N}}}$ as in Section \[sec-stat-nonfit\]. For $u\in{\mathbb{R}}^+$ we denote by $(V^u_t)_{t\geq0}$ the process defined by the sequence $(uf_k)_{k\in{\mathbb{N}}}$.
It is easy to show that $$\mathcal{L}({\mathbb{E}}\xi_{V^u}(d\cdot))(\alpha) = \mathcal{L}({\mathbb{E}}\xi_V(d\cdot))(\alpha/u).$$ The behavior of the degree sequence of $(V^u_t)_{t\geq0}$ is the same as that of the process $(V_t)_{t\geq0}$. Remark \[rem-laplacemonotone\] shows a sort of monotonicity of the Laplace transform with respect to the sequence $(f_k)_{k\in{\mathbb{N}}}$. This is very useful to describe the Laplace transform of a birth process with fitness, which we define now: \[def-birthfitness\] Consider a birth process $(V_t)_{t\geq0}$ defined by a sequence of weights $(f_k)_{k\in{\mathbb{N}}}$. Let $Y$ be a positive random variable. We call the process $(V^Y_t)_{t\geq0}$, defined by the random sequence of weights $(Y f_k)_{k\in{\mathbb{N}}}$, a [*stationary fitness birth process*]{}; i.e., conditionally on $Y$, $${\mathbb{P}}\left(V^Y_{t+h} = k+1 \mid V^Y_t = k, Y\right) = Y f_k h+ o(h).$$ By Definition \[def-birthfitness\], it is obvious that the properties of the process $(V^Y_t)_{t\geq0}$ are related to the properties of $(V_t)_{t\geq0}$. Since we consider a random fitness $Y$ independent of the process $(V_t)_{t\geq0}$, from Remark \[rem-laplacemonotone\] it follows that [$$\label{for-condFIT-1} {\mathcal{L}}({\mathbb{E}}V^Y(d\cdot))(\alpha) = {\mathbb{E}}\left[\mathcal{L}({\mathbb{E}}\xi_{V^u}(d\cdot))(\alpha)_{u=Y}\right] = {\mathbb{E}}\left[\sum_{k\in{\mathbb{N}}}\prod_{i=0}^{k-1}\frac{Yf_i}{\alpha+Yf_i}\right].$$]{} For affine weights the fitness distribution needs to be bounded, as discussed in Section \[sec-res-aging-fitness-exp\]. In this section we give a qualitative explanation of this fact. Consider the sum in the expectation on the right-hand side of .
We can rewrite the sum as [$$\label{for-condFIT-2} \sum_{k\in{\mathbb{N}}}\prod_{i=0}^{k-1}\frac{Yf_i}{\alpha+Yf_i} = \sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-\sum_{i=0}^{k-1}\log\left(1+\frac{\alpha}{Yf_i}\right)\right)=\sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-\frac{\alpha}{Y}\sum_{i=0}^{k-1}\frac{1}{f_i}(1+o(1))\right).$$]{} The result depends sensitively on the asymptotic behavior of the PA weights. In particular, a necessary condition for the existence of the Malthusian parameter is that the sum in is finite on an interval of the type $(\tilde{\alpha},+\infty)$. In other words, since the Laplace transform is a decreasing function (when finite), we need to prove the existence of a minimum value $\tilde{\alpha}$ such that it is finite for every $\alpha>\tilde{\alpha}$. Using in , we just need to find a value $\alpha$ such that the right-hand side of equals 1. In the case of affine weights $f_k = ak+b$, we have $\sum_{i=0}^{k-1}\frac{1}{f_i} = C\log k(1+o(1))$, for a constant $C$. As a consequence, is equal to [$$\label{for-condFIT-4} {\mathbb{E}}\left[\sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-C\frac{\alpha}{Y}\log k\right)\right] = {\mathbb{E}}\left[\sum_{k\in{\mathbb{N}}} k^{-C\alpha/Y}\right].$$]{} The sum inside the last expectation is finite only on the event $\{Y<C\alpha\}$. If $Y$ has an unbounded distribution, then for every value of $\alpha>0$ we have that $\{Y\geq C\alpha\}$ is an event of positive probability. As a consequence, for every $\alpha>0$, the Laplace transform of the birth process $(V^Y_t)_{t\geq0}$ is infinite, which means there exists no Malthusian parameter. This is why a bounded fitness distribution is necessary to have a Malthusian parameter using affine PA weights. The situation is different in the case of sublinear weights. For example, consider $f_k = (1+k)^q$, where $q\in(0,1)$. Then, the difference from the affine case is that now $\sum_{i=0}^{k-1}1/f_i = Ck^{1-q}(1+o(1))$.
Using this in , we obtain $${\mathbb{E}}\left[\sum_{k\in{\mathbb{N}}}\mathrm{exp}\left(-C\frac{\alpha}{Y}k^{(1-q)}\right)\right].$$ In this case, since both $\alpha$ and $Y$ are always positive, the last sum is finite with probability $1$, and the expectation might be finite under appropriate moment assumptions on $Y$. Assume now that the fitness $Y$ satisfies the necessary conditions, so that the process $(V_t^Y)_{t\geq0}$ is supercritical and Malthusian with parameter $\alpha^*$. We can evaluate the limiting degree distribution. Conditioning on $Y$, the Laplace transform of ${\mathbb{E}}\xi_{V^Y}(dx)$ is $$\sum_{k\in{\mathbb{N}}}\prod_{i=0}^{k-1}\frac{Y f_i}{\alpha+Y f_i},$$ so, as a consequence, the limiting degree distribution of the branching process is [$$\label{for-pkfitstat} p_k = {\mathbb{E}}\left[\frac{\alpha^*}{\alpha^*+Yf_k}\prod_{i=0}^{k-1}\frac{Y f_i}{\alpha^*+Y f_i}\right].$$]{} It is possible to see that the right-hand side of is similar to the distribution of the simpler case with no fitness given by . We still have a product structure for the limit distribution, but in the fitness case it has to be averaged over the fitness distribution. This result is similar to [@der2014 Theorem 2.7, Corollary 2.8]. Considering affine weights $f_k = ak+b$, we can rewrite as $$p_k = {\mathbb{E}}\left[\frac{\alpha^*}{aY}\frac{\Gamma(b/a+\alpha^*/(aY))}{\Gamma(b/a)} \frac{\Gamma(k+b/a)}{\Gamma(k+b/a+1 +\alpha^*/(aY))}\right].$$ Asymptotically in $k$, the argument of the expectation in the previous expression decays as a power law with random exponent $\tau(Y) = 1+\alpha^*/(aY)$. Averaging over the fitness distribution, it is then possible to obtain, for example, power laws with logarithmic corrections (see e.g. [@Bhamidi Corollary 32]).
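As an illustration of the fitness formula for $p_k$ above (not of any specific model from the literature), one can take the assumed example $f_k=k+1$ and a bounded fitness $Y\sim U[1/2,1]$. By the scaling of Remark \[rem-laplacemonotone\] together with the affine closed form of Proposition \[prop-V-linear\], the Laplace transform with fitness is ${\mathbb{E}}[bY/(\alpha-aY)]$ for $\alpha>a\,\mathrm{ess\,sup}\,Y$, so $\alpha^*$ can be found by bisection, after which the averaged product formula can be checked to be a probability distribution. The $Y$-expectation is approximated by a midpoint rule:

```python
import math

a, b = 1.0, 1.0
f = lambda k: a * k + b
# midpoint grid approximating Y ~ Uniform[1/2, 1]
ys = [0.5 + 0.5 * (j + 0.5) / 100 for j in range(100)]

def mean_over_Y(fun):
    return sum(fun(y) for y in ys) / len(ys)

def laplace(alpha):
    # E[b*Y / (alpha - a*Y)]: closed form of E[sum_k prod_i Y f_i/(alpha + Y f_i)]
    return mean_over_Y(lambda y: b * y / (alpha - a * y))

# Bisection for the Malthusian parameter: laplace is decreasing on (a*y_max, inf).
lo, hi = a * 1.0 + 1e-9, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if laplace(mid) > 1.0 else (lo, mid)
alpha_star = 0.5 * (lo + hi)

def p(k):
    # p_k = E[ alpha*/(alpha* + Y f_k) * prod_{i<k} Y f_i/(alpha* + Y f_i) ]
    def integrand(y):
        prod = 1.0
        for i in range(k):
            prod *= y * f(i) / (alpha_star + y * f(i))
        return alpha_star / (alpha_star + y * f(k)) * prod
    return mean_over_Y(integrand)

total = sum(p(k) for k in range(200))
print(alpha_star, total)     # alpha* close to 1.55; total close to 1
```

Note that the normalization $\sum_k p_k = 1$ holds for each fixed fitness value by a telescoping argument, so it is preserved after averaging; the quantity that does depend on $Y$ is the local power-law exponent.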
Existence of limiting distributions {#sec-existence} =================================== In this section, we give the proof of Theorems \[th-limitdist-nonstat\], \[th-explosive\] and \[th-degagefit\], proving that the branching processes defined in Section \[sec-mainres\] do have a limiting degree distribution. As mentioned, we start by proving Theorem \[th-degagefit\], and then explain how Theorem \[th-limitdist-nonstat\] follows as a special case. Before proving the result, we need some remarks on the processes we consider. Birth processes with aging alone and with aging and fitness are defined in Definitions \[def-nonstatbirth\] and \[def-nonstatfit\], respectively. Consider then a process with aging and fitness $(M_t)_{t\geq0}$ as in Definition \[def-nonstatfit\]. Let $(T_k)_{k\in{\mathbb{N}}}$ denote the sequence of birth times, i.e., $$T_k = \inf\left\{t\geq 0 \colon M_t\geq k\right\}.$$ It is an immediate consequence of the definition that, for every $k\in{\mathbb{N}}$, [$$\label{for-scaledtime} {\mathbb{P}}\left(T_k\leq t\right) = {\mathbb{P}}\left(\bar{T}_k\leq YG(t)\right),$$]{} where $(\bar{T}_k)_{k\in{\mathbb{N}}}$ is the sequence of birth times of a stationary birth process $(V_t)_{t\geq0}$ defined by the same PA function $f$. Consider then the sequence of functions $(P_k[V](t))_{k\in{\mathbb{N}}}$ associated with the stationary process $(V_t)_{t\geq0}$ defined by the same sequence of weights $(f_k)_{k\in{\mathbb{N}}}$ (see Proposition \[prop-eqdiffV\]). As a consequence, for every $k\in{\mathbb{N}}$, ${\mathbb{P}}(M_t=k)={\mathbb{E}}[P_k[V](YG(t))]$, and the same holds for an aging process just considering $Y\equiv 1$. Formula implies that the aging process is the stationary process with a deterministic time-change given by $G(t)$. A process with aging and fitness is the stationary process with a random time-change given by $YG(t)$. Assume now that $g$ is integrable, i.e., $\lim_{t\rightarrow\infty}G(t)=G(\infty)<\infty$.
Using we can describe the limiting degree distribution $(q_k)_{k\in{\mathbb{N}}}$ of a fixed individual in the branching population, i.e., the distribution of $N_\infty$ (or $M_\infty$), the total number of children an individual generates in its entire lifetime. In fact, for every $k\in{\mathbb{N}}$, [$$\label{for-limitG} \lim_{t\rightarrow\infty}{\mathbb{P}}\left(N_t = k\right) =\lim_{t\rightarrow\infty}P_k[V](G(t)) = {\mathbb{P}}\left(V_{G(\infty)}=k\right),$$]{} which means that $N_\infty$ has the same distribution as $V_{G(\infty)}$. With fitness, $$\lim_{t\rightarrow\infty}{\mathbb{P}}\left(M_t = k\right) =\lim_{t\rightarrow\infty}{\mathbb{E}}[P_k[V](YG(t))] = {\mathbb{P}}\left(V_{YG(\infty)}=k\right).$$ For example, in the case of aging only, this is rather different from the stationary case, where the number of children of a fixed individual diverges as the individual gets old (see e.g. [@Athr Theorem 2.6]). Proof of Theorem \[th-degagefit\] {#sec-pf-aging-fitness-gen} --------------------------------- Birth processes with continuous aging effect and fitness are defined in Definition \[def-nonstatfit\]. We now identify conditions on the fitness distribution to have a Malthusian parameter: \[lem-lapAGEFIT\] Consider a stationary process $(V_t)_{t\geq0}$, an integrable aging function $g$ and a random fitness $Y$. Assume that ${\mathbb{E}}[V_t]<\infty$ for every $t\geq0$. Then the process $(V_{YG(t)})_{t\geq0}$ is supercritical if and only if Condition holds, i.e., $${\mathbb{E}}\left[V_{YG(t)}\right]<\infty \quad\mbox{for every }t\geq0 \quad \quad \mbox{and} \quad \lim_{t\rightarrow\infty}{\mathbb{E}}\left[V_{YG(t)}\right]>1.$$ For the if part, we need to prove that $$\lim_{\alpha\rightarrow0^+}{\mathbb{E}}\left[V_{YG(T_{\alpha})}\right]>1 \quad \quad \mbox{and}\quad\quad \lim_{\alpha\rightarrow\infty}{\mathbb{E}}\left[V_{YG(T_{\alpha})}\right]=0.$$ As before, $(\bar{T}_k)_{k\in{\mathbb{N}}}$ are the jump times of the process $(V_{G(t)})_{t\geq0}$.
Then $${\mathbb{E}}\left[V_{YG(T_{\alpha})}\right] = \sum_{k\in{\mathbb{N}}}{\mathbb{E}}\left[{\mathrm{e}}^{-\alpha \bar{T}_k/Y}\right].$$ When $\alpha\rightarrow0$, we have ${\mathbb{E}}\left[{\mathrm{e}}^{-\alpha \bar{T}_k/Y}\right] \rightarrow {\mathbb{P}}\left(\bar{T}_k/Y<\infty\right)$. Now, $$\sum_{k\in{\mathbb{N}}}{\mathbb{P}}\left(\bar{T}_k/Y<\infty\right) = \lim_{t\rightarrow\infty}\sum_{k\in{\mathbb{N}}}{\mathbb{P}}\left(\bar{T}_k/Y\leq t\right) = \lim_{t\rightarrow\infty}{\mathbb{E}}\left[V_{YG(t)}\right]>1.$$ For $\alpha\rightarrow\infty$, $$\int_0^\infty \alpha {\mathrm{e}}^{-\alpha t}{\mathbb{E}}\left[V_{YG(t)}\right]dt = \int_0^\infty {\mathrm{e}}^{-u}{\mathbb{E}}\left[V_{YG(u/\alpha)}\right]du.$$ When $\alpha\rightarrow\infty$ we have ${\mathbb{E}}\left[V_{YG(u/\alpha)}\right]\rightarrow0$. Then, fix $\alpha_0>0$ such that ${\mathbb{E}}\left[V_{YG(u/\alpha)}\right]<1$ for every $\alpha>\alpha_0$. As a consequence, ${\mathrm{e}}^{-u}{\mathbb{E}}\left[V_{YG(u/\alpha)}\right]\leq {\mathrm{e}}^{-u}$ for any $\alpha>\alpha_0$. By dominated convergence, $$\lim_{\alpha\rightarrow\infty}\int_0^\infty \alpha {\mathrm{e}}^{-\alpha t}{\mathbb{E}}\left[V_{YG(t)}\right]dt=0.$$ Now suppose Condition does not hold. This means that ${\mathbb{E}}[V_{YG(t_0)}]= +\infty$ for some $t_0\in[0,G(\infty))$ or $\lim_{t\rightarrow\infty}{\mathbb{E}}[V_{YG(t)}]\leq 1$. If the first condition holds, then there exists $t_0\in(0,G(\infty))$ such that ${\mathbb{E}}\left[V_{YG(t)}\right]=+\infty$ for every $t\geq t_0$ (recall that ${\mathbb{E}}\left[V_{YG(t)}\right]$ is an increasing function of $t$). As a consequence, for every $\alpha>0$, we have ${\mathbb{E}}\left[V_{YG(T_{\alpha})}\right]=+\infty$, which means that the process is explosive. If the second condition holds, then for every $\alpha>0$ the Laplace transform of the process is strictly less than $1$, which means there exists no Malthusian parameter.
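Returning to the time-change representation of Section \[sec-existence\], the identification $N_\infty \stackrel{d}{=} V_{G(\infty)}$ is easy to test by simulation. In the sketch below (all choices illustrative) we take Yule-type weights $f_k=k+1$ (the case $a=b=1$) and aging $g(t)={\mathrm{e}}^{-t}$, so that $G(\infty)=1$; the total offspring count then equals the number of stationary jumps before time $G(\infty)$, whose law is the geometric ${\mathbb{P}}(V_1=k)={\mathrm{e}}^{-1}(1-{\mathrm{e}}^{-1})^k$, the $a=b=1$ case of the closed form used in Section \[sec-adaptedLap\].

```python
import math
import random

random.seed(7)
f = lambda k: k + 1.0        # affine weights with a = b = 1 (Yule process)
G_inf = 1.0                  # total mass of the aging function g(t) = e^{-t}

def total_children():
    # N_infty = V_{G(inf)}: number of stationary jump times falling before G(inf)
    t, k = 0.0, 0
    while True:
        t += random.expovariate(f(k))   # Exp(f_k) waiting time in state k
        if t > G_inf:
            return k
        k += 1

n = 50_000
counts = {}
for _ in range(n):
    c = total_children()
    counts[c] = counts.get(c, 0) + 1

emp0 = counts.get(0, 0) / n                       # empirical P(N_inf = 0)
mean = sum(k * v for k, v in counts.items()) / n  # empirical E[N_inf]
print(emp0, math.exp(-1.0), mean, math.e - 1.0)
```

The empirical mass at $0$ matches ${\mathrm{e}}^{-1}$ and the empirical mean matches ${\mathrm{e}}-1$, in contrast to the stationary case, where the offspring count diverges with age.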
Lemma \[lem-lapAGEFIT\] gives a weaker condition on the distribution $Y$ than requiring it to be bounded. Now, we want to investigate the degree distribution of the branching process, assuming that the process $(M_t)_{t\geq0}$ is supercritical and Malthusian. Denote the Malthusian parameter by $\alpha^*$. The above allows us to complete the proof of Theorem \[th-degagefit\]: We start from [$$\label{for-start-pkfit} p_k = {\mathbb{E}}\left[P_k[V](YG(T_{\alpha^*}))\right].$$]{} Conditioning on $Y$ and integrating by parts in the integral given by the expectation in gives $$-f_kY\int_0^\infty{\mathrm{e}}^{-\alpha^*t}P_k[V](YG(t))g(t)dt + f_{k-1}Y\int_0^\infty{\mathrm{e}}^{-\alpha^*t}P_{k-1}[V](YG(t))g(t)dt.$$ Now, we define [$$\label{for-hatL-fit} \hat{\mathcal{L}}(k,\alpha^*,Y) = \left(\frac{\mathcal{L}({\mathbb{P}}\left(V_{uG(\cdot)}=k\right)g(\cdot))(\alpha^*)}{\mathcal{L}({\mathbb{P}}\left(V_{uG(\cdot)}=k\right))(\alpha^*)}\right)_{u=Y}.$$]{} Notice that $(\hat{\mathcal{L}}(k,\alpha^*,Y))_{k\in{\mathbb{N}}}$ is a sequence of random variables.
Multiplying both sides of the equation by $\alpha^*$, on the right-hand side we have $$-f_kY\hat{\mathcal{L}}(k,\alpha^*,Y){\mathbb{E}}\left[P_k[V](uG(T_{\alpha^*}))\right]_{u=Y}+ f_{k-1}Y\hat{\mathcal{L}}(k-1,\alpha^*,Y){\mathbb{E}}\left[P_{k-1}[V](uG(T_{\alpha^*}))\right]_{u=Y},$$ while on the left-hand side we have $$\alpha^* {\mathbb{E}}\left[P_k[V](uG(T_{\alpha^*}))\right]_{u=Y}.$$ As a consequence, [$$\label{for-fitrecur} {\mathbb{E}}\left[P_k[V](uG(T_{\alpha^*}))\right]_{u=Y} = \frac{f_{k-1}Y\hat{\mathcal{L}}(k-1,\alpha^*,Y)}{\alpha^*+f_kY\hat{\mathcal{L}}(k,\alpha^*,Y)}{\mathbb{E}}\left[P_{k-1}[V](uG(T_{\alpha^*}))\right]_{u=Y}.$$]{} We start from $p_0$, which is given by $${\mathbb{E}}\left[P_0[V](uG(T_{\alpha^*}))\right]_{u=Y} = \frac{\alpha^*}{\alpha^*+f_{0}Y\hat{\mathcal{L}}(0,\alpha^*,Y)}.$$ Recursively using , we obtain $${\mathbb{E}}\left[P_k[V](uG(T_{\alpha^*}))\right]_{u=Y} = \frac{\alpha^*}{\alpha^*+f_kY\hat{\mathcal{L}}(k,\alpha^*,Y)} \prod_{i=0}^{k-1} \frac{f_{i}Y\hat{\mathcal{L}}(i,\alpha^*,Y)}{\alpha^*+f_iY\hat{\mathcal{L}}(i,\alpha^*,Y)}.$$ Taking expectations on both sides gives $$p_k = {\mathbb{E}}\left[\frac{\alpha^*}{\alpha^*+f_kY\hat{\mathcal{L}}(k,\alpha^*,Y)}\prod_{i=0}^{k-1} \frac{f_{i}Y\hat{\mathcal{L}}(i,\alpha^*,Y)}{\alpha^*+f_iY\hat{\mathcal{L}}(i,\alpha^*,Y)}\right].$$ The sequence $(\hat{\mathcal{L}}(k,\alpha^*,Y))_{k\in{\mathbb{N}}}$ thus creates a relation among the sequence of weights, the aging function and the fitness distribution, showing that these three ingredients are deeply intertwined. Proof of Theorems \[th-limitdist-nonstat\] and \[th-explosive\] {#sec-aging-gen-PA} --------------------------------------------------------------- As mentioned, Theorem \[th-limitdist-nonstat\] follows immediately by considering $Y\equiv 1$. The proof in fact is the same, since we can express the probabilities ${\mathbb{P}}(N_t=k)$ as a function of the stationary process $(V_t)_{t\geq0}$ defined by the same PA function $f$.
Condition immediately follows from Condition . In fact, considering $Y\equiv 1$, Condition becomes [$$\label{cond-eg_inf2-NEW} {\mathbb{E}}\left[V_{G(t)}\right]<\infty \quad\mbox{for every }t\geq0 \quad \quad \mbox{and} \quad \lim_{t\rightarrow\infty}{\mathbb{E}}\left[V_{G(t)}\right]>1.$$]{} The first inequality is in general true for the type of stationary process we consider (for instance with $f$ affine). The second inequality is exactly Condition . The expression of the sequence $(\hat{{\mathcal{L}}}^g(k,\alpha^*))_{k\in{\mathbb{N}}}$ is simpler than in the general case given in . In fact, in , the sequence $(\hat{\mathcal{L}}(k,\alpha^*,Y))_{k\in{\mathbb{N}}}$ is actually a sequence of random variables. In the case of aging alone, $$\hat{{\mathcal{L}}}^g(k,\alpha^*) = \frac{\mathcal{L}({\mathbb{P}}\left(V_{G(\cdot)}=k\right)g(\cdot))(\alpha^*)}{\mathcal{L}({\mathbb{P}}\left(V_{G(\cdot)}=k\right))(\alpha^*)},$$ which is a deterministic sequence. Notice that $\hat{{\mathcal{L}}}^g(k,\alpha^*)=1$ when $g(t)\equiv 1$, so that $G(t)=t$ for every $t\in{\mathbb{R}}^+$ and there is no aging, and we retrieve the stationary process $(V_t)_{t\geq0}$. Unfortunately, the explicit expression of the coefficients $(\hat{\mathcal{L}}^g(k,\alpha^*))_{k\in{\mathbb{N}}}$ is not easy to find, even though they are deterministic. Theorem \[th-explosive\], which states that even if $g$ is integrable, the aging does not affect the explosive behavior of a birth process with superlinear weights, is a direct consequence of : Consider a birth process $(V_t)_{t\geq0}$, defined by a sequence of superlinear weights $(f_k)_{k\in{\mathbb{N}}}$ (in the sense of Definition \[def-superlin\]), and an integrable aging function $g$. Then, for every $t>0$, $${\mathbb{P}}\left(N_t=\infty\right) = {\mathbb{P}}\left(V_{G(t)}=\infty\right)>0.$$ Since this holds for every $t>0$, the process $(N_t)_{t\geq0}$ is explosive.
As a consequence, for any $\alpha>0$, ${\mathbb{E}}\left[N_{T_\alpha}\right]=\infty$, which means that there exists no Malthusian parameter. Affine weights and adapted Laplace method {#sec-laplaceSection} ========================================= Aging and no fitness {#sec-adaptedLap} -------------------- In this section, we consider affine PA weights, i.e., we consider $f_k = ak+b$. The main aim is to identify the asymptotic behavior of the limiting degree distribution of the branching process with aging. Consider a stationary process $(V_t)_{t\geq0}$, where $f_k = ak+b$. Then, for any $t\geq0$, it is possible to show by induction and the recursions in and that [$$\label{for-Pktstat} P_k[V](t) = {\mathbb{P}}\left(V_t = k\right) = \frac{1}{\Gamma(b/a)}\frac{\Gamma(k+b/a)}{\Gamma(k+1)}{\mathrm{e}}^{-bt}\left(1-{\mathrm{e}}^{-at}\right)^k.$$]{} We omit the proof of . As a consequence, since the corresponding aging process is $(V_{G(t)})_{t\geq0}$, the limiting degree distribution is given by [$$\label{for-pkAdap} p_k = \frac{\Gamma(k+b/a)}{\Gamma(b/a)\Gamma(k+1)}\int_0^\infty \alpha^*{\mathrm{e}}^{-\alpha^* t}{\mathrm{e}}^{-bG(t)}\left(1-{\mathrm{e}}^{-aG(t)}\right)^kdt.$$]{} We can obtain an immediate upper bound for $p_k$; in fact, $$p_k = \frac{\Gamma(k+b/a)}{\Gamma(b/a)\Gamma(k+1)}\int_0^\infty \alpha^*{\mathrm{e}}^{-\alpha^* t}{\mathrm{e}}^{-bG(t)}\left(1-{\mathrm{e}}^{-aG(t)}\right)^kdt\leq \frac{\Gamma(k+b/a)}{\Gamma(b/a)\Gamma(k+1)}(1-{\mathrm{e}}^{-aG(\infty)})^k,$$ which implies that the distribution $(p_k)_{k\in{\mathbb{N}}}$ has at most an exponential tail. A more precise analysis is hard. Instead, we give an asymptotic approximation by adapting the Laplace method for integrals to our case.
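Before turning to the Laplace method, the closed form for $P_k[V](t)$ above can be sanity-checked against the recursion of Proposition \[prop-eqdiffV\] by numerical quadrature; the sketch below uses the illustrative values $a=1$, $b=2$ and a simple trapezoidal rule.

```python
import math

a, b = 1.0, 2.0                     # illustrative affine weights f_k = a*k + b
f = lambda k: a * k + b

def P_closed(k, t):
    # Gamma(k + b/a) / (Gamma(b/a) k!) * e^{-b t} * (1 - e^{-a t})^k
    r = b / a
    c = math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1))
    return c * math.exp(-b * t) * (1.0 - math.exp(-a * t)) ** k

def P_recursed(k, t, n=20_000):
    # f_{k-1} e^{-f_k t} * int_0^t e^{f_k x} P_{k-1}(x) dx via the trapezoidal rule
    h = t / n
    vals = [math.exp(f(k) * j * h) * P_closed(k - 1, j * h) for j in range(n + 1)]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return f(k - 1) * math.exp(-f(k) * t) * integral

t = 1.3
checks = {k: (P_closed(k, t), P_recursed(k, t)) for k in (1, 2, 5)}
norm = sum(P_closed(k, t) for k in range(400))   # negative binomial law: sums to 1
print(checks, norm)
```

Note that for fixed $t$ the closed form is a negative binomial distribution with parameters $b/a$ and ${\mathrm{e}}^{-at}$, which is why the normalization check sums to $1$.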
The Laplace method states that, for a twice-differentiable function $\Psi$ with a unique absolute minimum at $x_0\in(a,b)$, as $k\rightarrow\infty$, [$$\label{for-laplaceTheor} \int_a^b {\mathrm{e}}^{-k\Psi(x)}dx=\sqrt{\frac{2\pi}{k\Psi''(x_0)}}{\mathrm{e}}^{-k\Psi(x_0)}(1+o(1)).$$]{} Here the interval $[a,b]$ may be unbounded. The idea behind this result is that, when $k\gg 1$, the major contribution to the integral comes from a neighborhood of $x_0$ where ${\mathrm{e}}^{-k\Psi(x)}$ is maximized. The integral in is not directly of the type . Defining [$$\label{for-Psidefinition} \Psi_k(t) := \frac{\alpha^*}{k}t+\frac{b}{k}G(t)-\log\left(1-{\mathrm{e}}^{-aG(t)}\right),$$]{} we can rewrite the integral in as [$$\label{forIkdef} I(k):=\int_0^\infty \alpha^* {\mathrm{e}}^{-k\Psi_k(t)}dt.$$]{} The derivative of the function $\Psi_k(t)$ is [$$\label{for-PSider} \Psi_k'(t) = \frac{\alpha^*}{k}+\frac{b}{k}g(t)-\frac{ag(t){\mathrm{e}}^{-aG(t)}}{1-{\mathrm{e}}^{-aG(t)}}.$$]{} In particular, if there exists a minimum $t_k$, then it depends on $k$. In this framework, we cannot directly apply the Laplace method. We now show that a result similar to still applies in our case: \[Lem-adaptLap-age\] Consider $\alpha,a,b>0$. Let the integrable aging function $g$ be such that 1. for every $t\geq0$, $0<g(t)\leq A<\infty$; 2. $g$ is differentiable on ${\mathbb{R}}^+$, and $g'$ is finite almost everywhere; 3. there exists a positive constant $B<\infty$ such that $g(t)$ is decreasing for $t\geq B$; 4. assume that the solution $t_k$ of $\Psi_k'(t)=0$, for $\Psi_k'(t)$ as in , is unique, and $g'(t_k)<0$.
Then, for $\sigma_k^2 = (k\Psi_k''(t_k))^{-1}$, there exists a constant $C$ such that, as $k\rightarrow\infty$, $$I(k) = C\sqrt{2\pi\sigma_k^2}{\mathrm{e}}^{-k\Psi_k(t_k)}\left(\frac{1}{2}+{\mathbb{P}}\left(\mathcal{N}(0,\sigma_k^2)\geq t_k\right)\right)(1+o(1)),$$ where $\mathcal{N}(0,\sigma_k^2)$ denotes a normal distribution with zero mean and variance $\sigma_k^2$. Since Lemma \[Lem-adaptLap-age\] is an adapted version of the classical Laplace method, we move the proof to Appendix \[sec-appendix\]. We can use the result of Lemma \[Lem-adaptLap-age\] to prove: \[prop-pkage\_asym\] Consider the affine PA weights $f_k=ak+b$, an integrable aging function $g$, and denote the limiting degree distribution of the corresponding branching process by $(p_k)_{k\in{\mathbb{N}}}$. Then, under the hypotheses of Lemma \[Lem-adaptLap-age\], there exists a constant $C>0$ such that, as $k\rightarrow\infty$, [$$\label{for-pkage_asym} p_k = \frac{\Gamma(k+b/a)}{\Gamma(k+1)}\left(Cg(t_k)-\frac{g'(t_k)}{g(t_k)}\right)^{1/2}{\mathrm{e}}^{-\alpha^* t_k}(1-{\mathrm{e}}^{-aG(\infty)})^kD_k(g)(1+o(1)),$$]{} where $$D_k(g) = \frac{1}{2}+\frac{1}{2\sqrt{\pi}}\int_{-C_k(g)}^{C_k(g)}{\mathrm{e}}^{-\frac{u^2}{2}}du,$$ and $C_k(g) = t_k\left(Cg(t_k)-\frac{g'(t_k)}{g(t_k)}\right)^{1/2}$. Aging and fitness case {#sec-adapLapFit} ---------------------- In this section, we investigate the asymptotic behavior of the limiting degree distribution of a CTBP, in the case of affine PA weights. The method we use is analogous to that in Section \[sec-adaptedLap\]. We assume that the fitness $Y$ is absolutely continuous with respect to the Lebesgue measure, and we denote its density function by $\mu$. 
The limiting degree distribution of this type of branching process is given by [$$\label{for-pkagefit_integ} p_k = {\mathbb{P}}\left(V_{YG(T_{\alpha^*})}=k\right) = \frac{\Gamma(k+b/a)}{\Gamma(b/a)\Gamma(k+1)}\int_{{\mathbb{R}}^+\times {\mathbb{R}}^+}\alpha^*{\mathrm{e}}^{-\alpha^*t}\mu(s){\mathrm{e}}^{-bsG(t)}\left(1-{\mathrm{e}}^{-asG(t)}\right)^k dsdt.$$]{} We immediately see that the degree distribution has exponential tails when the fitness distribution is bounded: \[lem-exp-tails-aging-bd-fitness\] When there exists $\gamma$ such that $\mu([0,\gamma])=1$, i.e., the fitness has bounded support, then [$$p_k\leq \frac{\Gamma(k+b/a)}{\Gamma(b/a)\Gamma(k+1)} \left(1-{\mathrm{e}}^{-a \gamma G(\infty)}\right)^k.$$]{} In particular, $p_k$ has exponential tails. This is immediate from , bounding $1-{\mathrm{e}}^{-asG(t)}\leq 1-{\mathrm{e}}^{-a\gamma G(\infty)}$ on the support of $\mu$. As in the situation with aging only, the explicit solution of the integral in may be hard to find. We again have to adapt the Laplace method to estimate the asymptotic behavior of the integral. We write [$$\label{for-psi_k_bivar} I(k) := \int_{{\mathbb{R}}^+\times {\mathbb{R}}^+}{\mathrm{e}}^{-k\Psi_k(t,s)}dsdt,$$]{} where [$$\label{forPSi2def} \Psi_k(t,s) := \frac{\alpha^*}{k}t+\frac{b}{k}sG(t)-\frac{1}{k}\log\mu(s)-\log(1-{\mathrm{e}}^{-saG(t)}).$$]{} As before, we want to minimize the function $\Psi_k$. We state here the lemma: \[lem-adapLapfitAge\] Let $\Psi_k(t,s)$ be as in . Assume that 1. $g$ satisfies the assumptions of Lemma \[Lem-adaptLap-age\]; 2. $\mu$ is twice differentiable on ${\mathbb{R}}^+$; 3. there exists a constant $B'>0$ such that, for every $s\geq B'$, $\mu$ is monotonically decreasing; 4. $(t_k,s_k)$ is the unique point where both partial derivatives are zero; 5. $(t_k,s_k)$ is the absolute minimum for $\Psi_k(t,s)$; 6. the Hessian matrix $H_k(t_k,s_k)$ of $\Psi_k(t,s)$ evaluated at $(t_k,s_k)$ is positive definite.
Then, $$I(k) = {\mathrm{e}}^{-k\Psi_k(t_k,s_k)}\frac{2\pi}{\sqrt{\mathrm{det}(kH_k(t_k,s_k))}}{\mathbb{P}}\left(\mathcal{N}_1(k)\geq -t_k,\mathcal{N}_2(k)\geq -s_k\right)(1+o(1)),$$ where $(\mathcal{N}_1(k),\mathcal{N}_2(k)) := \mathcal{N}({\boldsymbol{0}},(kH_k(t_k,s_k))^{-1})$ is a bivariate normally distributed vector and ${\boldsymbol{0}} = (0,0)$. The proof of Lemma \[lem-adapLapfitAge\] can be found in Appendix \[sec-app-agefit\]. Using Lemma \[lem-adapLapfitAge\] we can describe the limiting degree distribution $(p_k)_{k\in{\mathbb{N}}}$: \[prop-pkasym\_fitage\] Consider affine PA weights $f_k = ak+b$, an integrable aging function $g$ and a fitness distribution density $\mu$. Assume that the corresponding branching process is supercritical and Malthusian. Under the hypotheses of Lemma \[lem-adapLapfitAge\], the limiting degree distribution $(p_k)_{k\in{\mathbb{N}}}$ of the corresponding ${\mathrm{CTBP}}$ satisfies $$p_k = \frac{k^{b/a-1}}{\Gamma(b/a)}\frac{2\pi}{\sqrt{\mathrm{det}(kH_k(t_k,s_k))}}{\mathrm{e}}^{-k\Psi_k(t_k,s_k)}{\mathbb{P}}\left(\mathcal{N}_1\geq -t_k,\mathcal{N}_2\geq -s_k\right)(1+o(1)).$$ Three classes of fitness distributions {#sec-fitexamples} -------------------------------------- Proposition \[prop-pkasym\_fitage\] in Section \[sec-adapLapFit\] gives the asymptotic behavior of the limiting degree distribution of a ${\mathrm{CTBP}}$ with integrable aging and fitness. Lemma \[lem-adapLapfitAge\] requires conditions under which the function $\Psi_k(t,s)$ as in has a unique minimum point denoted by $(t_k,s_k)$. In this section we consider the three different classes of fitness distributions that we have introduced in Section \[sec-res-aging-fitness-exp\]. For the heavy-tailed class, i.e., for distributions with tail thicker than exponential, there is nothing to prove. In fact, immediately implies that the corresponding processes are explosive.
For the other two cases, we apply Proposition \[prop-pkasym\_fitage\], giving the precise asymptotic behavior of the limiting degree distributions of the corresponding ${\mathrm{CTBPs}}$. Propositions \[prop-expfit\_general\] and \[prop-subexpfit\] contain the results on the general exponential and sub-exponential classes, respectively. The proofs of these propositions are moved to Appendix \[sec-proof-3class\]. \[prop-expfit\_general\] Consider a general exponential fitness distribution as in . Let $(M_t)_{t\geq0}$ be the corresponding birth process. Denote the unique minimum point of $\Psi_k(t,s)$ as in by $(t_k,s_k)$. Then 1. for every $t\geq0$, $M_t$ has a dynamical power law with exponent $ \tau(t) = 1+\frac{\theta}{aG(t)}$; 2. the asymptotic behavior of the limiting degree distribution $(p_k)_{k\in{\mathbb{N}}}$ is given by $$p_k= {\mathrm{e}}^{-\alpha^* t_k}h(s_k)\left(\tilde{C}-\alpha^*\frac{g'(t_k)}{g(t_k)}\right)^{-1/2}k^{-(1+\theta/(aG(\infty)))}(1+o(1)),$$ where the power-law term has exponent $\tau = 1+\theta/(aG(\infty))$; 3. the distribution $(q_k)_{k\in{\mathbb{N}}}$ of the total number of children of a fixed individual has a power-law behavior with exponent $\tau = 1+\theta/(aG(\infty))$. By it is necessary to consider the exponential rate $\theta>aG(\infty)$ to obtain a non-explosive process. In particular, this implies that, for every $t\geq0$, $\tau(t)$, as well as $\tau$, are strictly larger than $2$. As a consequence, the three distributions $(P_k[M](t))_{k\in{\mathbb{N}}}$, $(p_k)_{k\in{\mathbb{N}}}$ and $(q_k)_{k\in{\mathbb{N}}}$ have finite first moment. Increasing the value of $\theta$ leads to power-law distributions with exponent larger than $3$, and hence finite variance. A second observation is that, independently of the aging function $g$, the point $s_k$ is of order $\log k$. In particular, this has two consequences. First, the correction to the power law given by $h(s_k)$ is a power of $\log k$.
This is because $h'(s)/h(s)\rightarrow0$ as $s\rightarrow\infty$. Second, the power-law term $k^{-(1+\theta/(aG(\infty)))}$ arises from $\mu(s_k)$. This means that the exponential term in the fitness distribution $\mu$ not only is necessary to obtain a non-explosive process, but also generates the power law. The third observation is that the behavior of the three distributions $(P_k[M](t))_{k\in{\mathbb{N}}}$, $(p_k)_{k\in{\mathbb{N}}}$ and $(q_k)_{k\in{\mathbb{N}}}$ depends on the integrability of the aging function, [*but only marginally on its precise shape*]{}. In fact, the contribution of the aging function $g$ to the exponent of the power law is given only by the value $G(\infty)$. The other terms that depend directly on the shape of $g$ are ${\mathrm{e}}^{-\alpha ^* t_k}$ and the ratio $g'(t_k)/g(t_k)$. The ratio $g'/g$ does not contribute for any function $g$ whose decay is in between power law and exponential. The term ${\mathrm{e}}^{-\alpha ^* t_k}$ depends on the behavior of $t_k$, which can be seen as roughly $g^{-1}(1/\log k)$. For any function between power law and exponential, ${\mathrm{e}}^{-\alpha^* t_k}$ is asymptotic to a power of $\log k$. The last observation is that every distribution in the general exponential class shows a [*dynamical power law*]{}, as for the pure exponential distribution studied in Section \[sec-expfitness\]. The pure exponential distribution is the special case $h(s)\equiv 1$. Interestingly, $\tau$ does not depend on the choice of $h(s)$, but only on the exponential rate $\theta>aG(\infty)$. In particular, Proposition \[prop-expfit\_general\] proves that the limiting degree distributions of the two examples in Figure \[fig-pwrlwtime\] have power-law decay. We move to the class of sub-exponential fitness. We show that the power law is lost due to the absence of a pure exponential term. 
We prove the result using densities of the form [$$\label{def-subfit-2} \mu(s) = C{\mathrm{e}}^{-\theta s^{1+\varepsilon}},$$]{} for $\varepsilon>0$, $\theta>0$ and $C$ the normalization constant. The result is the following: \[prop-subexpfit\] Consider a sub-exponential fitness distribution as in . Let $(M_t)_{t\geq0}$ be the corresponding birth process. Denote the minimum point of $\Psi_k(t,s)$ as in by $(t_k,s_k)$. Then 1. for every $t\geq0$, $M_t$ satisfies $${\mathbb{P}}\left(M_t=k\right) = k^{-1}(\log k)^{-\varepsilon/2}{\mathrm{e}}^{-\frac{\theta}{(aG(t))^{1+\varepsilon}} (\log k)^{1+\varepsilon}}(1+o(1));$$ 2. the limiting degree distribution $(p_k)_{k\in{\mathbb{N}}}$ of the ${\mathrm{CTBP}}$ has asymptotic behavior given by $$p_k= {\mathrm{e}}^{-\alpha^* t_k}k^{-1}\left(C_1-s_k^\varepsilon\frac{g'(t_k)}{g(t_k)}\right)^{-1/2}{\mathrm{e}}^{-\frac{\theta}{(aG(\infty))^{1+\varepsilon}} (\log k)^{1+\varepsilon}}(1+o(1));$$ 3. the distribution $(q_k)_{k\in{\mathbb{N}}}$ of the total number of children of a fixed individual satisfies $$q_k = k^{-1}(\log k)^{-\varepsilon/2}{\mathrm{e}}^{-\frac{\theta}{(aG(\infty))^{1+\varepsilon}} (\log k)^{1+\varepsilon}}(1+o(1)).$$ In Proposition \[prop-subexpfit\] the distributions $(P_k[M](t))_{k\in{\mathbb{N}}}$, $(p_k)_{k\in{\mathbb{N}}}$ and $(q_k)_{k\in{\mathbb{N}}}$ decay faster than a power law. This is due to the fact that a sub-exponential tail for the fitness distribution does not allow the presence of sufficiently many individuals in the branching population whose fitness value is sufficiently high to restore the power law. In this case, we have that $s_k$ is roughly $c_1\log k-c_2\log\log k$. Hence, as a first approximation, $s_k$ is still of logarithmic order. The power-law term is lost because there is no pure exponential term in the distribution $\mu$. In fact, in this case $\mu(s_k)$ generates the dominant term ${\mathrm{e}}^{-\theta (\log k)^{1+\varepsilon}}$. 
The case of exponentially distributed fitness: Proof of Corollary \[th-degexpfitness\] {#sec-expfitness} -------------------------------------------------------------------------------------- The case when the fitness $Y$ is exponentially distributed turns out to be simpler. In this section, denote the fitness by $T_\theta$, where $\theta$ is the parameter of the exponential distribution. First of all, we investigate the Laplace transform of the process. In fact, we can write $${\mathbb{E}}\left[M_{T_\alpha}\right] = \int_0^\infty\theta{\mathrm{e}}^{-\theta s}{\mathbb{E}}\left[V_{sG(T_\alpha)}\right]ds,$$ which is the Laplace transform, evaluated in $\theta$, of the stationary process $(V_{sG(T_\alpha)})_{s\geq0}$ with bounded fitness $G(T_\alpha)$. As a consequence, $${\mathbb{E}}\left[M_{T_\alpha}\right] = \sum_{k\in{\mathbb{N}}}{\mathbb{E}}\left[\prod_{i=0}^{k-1}\frac{f_iG(T_\alpha)}{\theta+f_iG(T_\alpha)}\right].$$ Suppose that there exists a Malthusian parameter $\alpha^*$. This means that, for fixed $(f_k)_{k\in{\mathbb{N}}}$, $g$ and $\theta$, $\alpha^*$ is the unique value such that ${\mathbb{E}}\left[M_{T_{\alpha^*}}\right]=1$. As a consequence, if we fix $(f_k)_{k\in{\mathbb{N}}}$, $g$ and $\alpha^*$, $\theta$ is the unique value such that $$\sum_{k\in{\mathbb{N}}}{\mathbb{E}}\left[\prod_{i=0}^{k-1}\frac{f_iG(T_\alpha)}{\theta+f_iG(T_\alpha)}\right]=1.$$ Therefore $\theta$ is the Malthusian parameter of the process $(V_{sG(T_\alpha)})_{s\geq0}$. We are now ready to prove Corollary \[th-degexpfitness\]: We can write ${\mathbb{P}}\left(M_t = k\right) = {\mathbb{P}}\left(V_{T_\theta G(t)}=k\right)$, which means that we have to evaluate the Laplace transform of ${\mathbb{P}}\left(V_{sG(t)}=k\right)$ in $\theta$. Using this, the first part follows immediately by simple calculations. For the second part, we just need to take the limit as $t\rightarrow\infty$. For the sequence $(p_k)_{k\in{\mathbb{N}}}$, the result is immediate since $p_k = {\mathbb{E}}[P_k[M](T_{\alpha^*})]$. 
The case of affine PA weights $f_k = ak+b$ is particularly nice. As already mentioned in Section \[sec-mainres\], the process $(M_t)_{t\geq0}$ has a power-law distribution at every $t\in{\mathbb{R}}^+$ and follows immediately. Further, and follow directly. Limiting distribution with aging effect, no fitness =================================================== In this section, we analyze the limiting degree distribution $(p_k)_{k\in{\mathbb{N}}}$ of CTBPs with aging but no fitness. In Section \[sec-app-age\] we prove the adapted Laplace method for the general asymptotic behavior of $p_k$. In Section \[sec-examples-age\] we consider some examples of aging functions $g$, giving the asymptotics for the corresponding distributions. Proofs of Lemma \[Lem-adaptLap-age\] and Proposition \[prop-pkage\_asym\] {#sec-app-age} ------------------------------------------------------------------------- First of all, we show that $t_k$ is actually a minimum. In fact, $$\lim_{t\rightarrow 0}\frac{d}{dt}\Psi_k(t) = -\infty,\quad\quad \mbox{and}\quad \lim_{t\rightarrow\infty}\frac{d}{dt}\Psi_k(t) = \frac{\alpha}{k}>0.$$ As a consequence, $t_k$ is a minimum. Then, [$$\label{for-gtkasymp} \lim_{k\rightarrow\infty}g(t_k)\left(\frac{\alpha^*}{k}\frac{1-{\mathrm{e}}^{-aG(\infty)}}{a{\mathrm{e}}^{-aG(\infty)}}\right)^{-1} = \lim_{k\rightarrow\infty}g(t_k)\frac{ak}{\alpha^*({\mathrm{e}}^{aG(\infty)}-1)}= 1.$$]{} In particular, $g(t_k)$ is of order $1/k$. Then, since $t_k$ is the actual minimum, and $g$ is monotonically decreasing for $t\geq B$, [$$\label{for-Phi2} \Psi_k''(t_k) = \frac{b}{k}g'(t_k)+g(t_k)^2 \frac{a^2{\mathrm{e}}^{-aG(t_k)}(2-{\mathrm{e}}^{-aG(t_k)})}{(1-{\mathrm{e}}^{-aG(t_k)})^2}-g'(t_k)\frac{a{\mathrm{e}}^{-aG(t_k)}}{1-{\mathrm{e}}^{-aG(t_k)}}>0.$$]{} We use the fact that we are evaluating the second derivative in the point $t_k$ where the first derivative is zero. 
This means $$g(t_k)\frac{a{\mathrm{e}}^{-aG(t_k)}}{1-{\mathrm{e}}^{-aG(t_k)}}= \frac{\alpha}{k}+\frac{b}{k}g(t_k).$$ We use this in to obtain [$$\label{for-Phi3} \begin{split} k\Psi_k''(t_k) & = bg'(t_k)+g(t_k)\frac{a(2-{\mathrm{e}}^{-aG(t_k)})}{1-{\mathrm{e}}^{-aG(t_k)}}\left(\alpha+bg(t_k)\right)-\frac{g'(t_k)}{g(t_k)}\left(\alpha+bg(t_k)\right)\\ & = g(t_k)\frac{a(2-{\mathrm{e}}^{-aG(t_k)})}{1-{\mathrm{e}}^{-aG(t_k)}}\left(\alpha+bg(t_k)\right)-\alpha\frac{g'(t_k)}{g(t_k)}. \end{split}$$]{} Now, we use a Taylor expansion of $\Psi_k(t)$ around $t_k$ in the integral in . Since we use the expansion around $t_k$, which is the minimum of $\Psi_k(t)$, the first derivative of $\Psi_k$ is zero. As a consequence, we have $$I(k) = \int_0^\infty {\mathrm{e}}^{-k\left(\Psi_k(t_k)+\frac{1}{2}\Psi_k''(t_k)(t-t_k)^2+o((t-t_k)^2)\right)}dt.$$ First of all, notice that the contribution of the terms with $|t-t_k|\gg 1$ is negligible. In fact, we have [$${\mathrm{e}}^{-k\Psi_k(t)}\leq {\mathrm{e}}^{-\alpha^* t}(1-{\mathrm{e}}^{-aG(\infty)})^k,$$]{} which means that such terms are exponentially small, so we can ignore them. Now we make a change of variable $u = t-t_k$. 
Then $$I(k) = \int_{-t_k}^\infty {\mathrm{e}}^{-k\left(\Psi_k(t_k)+\frac{1}{2}\Psi_k''(t_k)u^2+o(u^2)\right)}du.$$ In particular, since the term ${\mathrm{e}}^{-k\Psi_k(t_k)}$ does not depend on $u$, we can write $$I(k) = {\mathrm{e}}^{-k\Psi_k(t_k)}\int_{-t_k}^\infty {\mathrm{e}}^{-k\left(\frac{1}{2}\Psi_k''(t_k)u^2+o(u^2)\right)}du.$$ We use the notation $k\Psi_k''(t_k)= \frac{1}{\sigma_k^2}$, which means we can rewrite the integral as $${\mathrm{e}}^{-k \Psi_k(t_k)}\sqrt{2\pi\sigma_k^2}\int_{-\infty}^{t_k}\frac{1}{\sqrt{2\pi\sigma_k^2}}{\mathrm{e}}^{-\frac{u^2}{2\sigma_k^2}}du = {\mathrm{e}}^{-k\Psi_k(t_k)}\sqrt{2\pi\sigma_k^2}{\mathbb{P}}\left(\mathcal{N}(0,\sigma_k^2)\leq t_k\right).$$ Since the distribution $\mathcal{N}(0,\sigma_k^2)$ is symmetric with respect to $0$, for every $k\in{\mathbb{N}}$, [$$\label{for-prNterm} {\mathbb{P}}\left(\mathcal{N}(0,\sigma_k^2)\leq t_k\right) = \frac{1}{2}\left[1+\frac{1}{\sqrt{2\pi}}\int_{-t_k/\sigma_k}^{t_k/\sigma_k}{\mathrm{e}}^{-\frac{u^2}{2}}du\right].$$]{} The behavior of the above integral depends on the ratio $t_k/\sigma_k$; the bracketed integral term is bounded between $0$ and $1$. As a consequence, the term ${\mathbb{P}}\left(\mathcal{N}(0,\sigma_k^2)\leq t_k\right)$ is bounded between $1/2$ and $1$. Using Lemma \[Lem-adaptLap-age\], we can prove Proposition \[prop-pkage\_asym\]: Recall that $\sigma^2_k = (k\Psi_k''(t_k))^{-1}$. Using , the fact that $g$ is bounded almost everywhere, and $g'(t_k)<0$, we can write [$$\label{for-var} k\Psi_k''(t_k)= \alpha\left(\frac{a(2-{\mathrm{e}}^{-aG(\infty)})}{1-{\mathrm{e}}^{-aG(\infty)}}g(t_k)-\frac{g'(t_k)}{g(t_k)}\right)(1+o(1)).$$]{} Notice that the term $g(t_k)-\frac{g'(t_k)}{g(t_k)}$ is always strictly positive, since $g(t)$ is decreasing and $t_k\rightarrow\infty$ as $k\rightarrow\infty$. As a consequence, up to a multiplicative constant, we can replace the term $\sqrt{2\pi\sigma^2_k}$ by $\left(Cg(t_k)-\frac{g'(t_k)}{g(t_k)}\right)^{-1/2}$, for $C=\frac{a(2-{\mathrm{e}}^{-aG(\infty)})}{1-{\mathrm{e}}^{-aG(\infty)}}$. 
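As a parenthetical numerical check of this one-dimensional Laplace step, the sketch below uses a toy quadratic $\Psi(t)=(t-t_k)^2$ with illustrative values $k=100$ and $t_k=0.1$, chosen so that the boundary factor ${\mathbb{P}}(\mathcal{N}(0,\sigma_k^2)\leq t_k)$ is visibly smaller than $1$:

```python
import math

# Toy check of  int_0^inf e^{-k Psi(t)} dt
#   ~ e^{-k Psi(t_k)} * sqrt(2*pi*sigma_k^2) * P(N(0, sigma_k^2) <= t_k)
# on the quadratic Psi(t) = (t - t_k)^2, where the formula is exact.
k, tk = 100.0, 0.1
sigma2 = 1.0 / (2.0 * k)            # sigma_k^2 = 1/(k Psi''(t_k)), Psi'' = 2

def Phi(x):                         # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

h, num = 1e-4, 0.0                  # midpoint rule on (0, 2)
for i in range(20000):
    t = (i + 0.5) * h
    num += math.exp(-k * (t - tk)**2) * h

approx = math.sqrt(2.0 * math.pi * sigma2) * Phi(tk / math.sqrt(sigma2))
ratio = num / approx
```

Here $t_k/\sigma_k\approx 1.41$, so the boundary factor is about $0.92$ rather than $1$, and the two sides still agree because the toy $\Psi$ is exactly quadratic.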
We also have that $${\mathrm{e}}^{-k\Psi_k(t_k)} = \mathrm{exp}\left[-\alpha^*t_k -bG(t_k)+k\log\left(1-{\mathrm{e}}^{-aG(t_k)}\right)\right]= {\mathrm{e}}^{-\alpha^*t_k}(1-{\mathrm{e}}^{-aG(\infty)})^k(1+o(1)),$$ since $G(t_k)$ converges to $G(\infty)$. For the term in , it is easy to show that it is asymptotic to $D_k(g)$. This completes the proof. Examples of aging functions {#sec-examples-age} --------------------------- In this section, we analyze three examples of aging functions, in order to give examples of the limiting degree distribution of the branching process. We consider affine weights $f_k = ak+b$, and three different aging functions: $$g(t) = {\mathrm{e}}^{-\lambda t}, \quad\quad g(t) = (1+t)^{-\lambda}, \quad \quad\mbox{and}\quad \quad g(t) = \lambda_1{\mathrm{e}}^{-\lambda_2(\log (t+1)-\lambda_3)^2}.$$ We assume that in every case the aging function $g$ is integrable, so we consider $\lambda>0$ for the exponential case, $\lambda>1$ for the power-law case and $\lambda_1,\lambda_2,\lambda_3>0$ for the lognormal case. We assume that $g$ satisfies Condition in order to have a supercritical process. We now apply to these three examples, giving their asymptotics. In general, we approximate $t_k$ with the solution of, for $c_1 = \frac{a{\mathrm{e}}^{-aG(\infty)}}{1-{\mathrm{e}}^{-aG(\infty)}}$, [$$\label{for-tkappr} \frac{\alpha^*}{k}+\frac{b}{k}g(t)-c_1g(t) =0.$$]{} We start by considering the exponential case $g(t) = {\mathrm{e}}^{-\lambda t}$. In this case, from we obtain that, ignoring constants, [$$\label{for-expage-1} t_k = \log k(1+o(1)).$$]{} As we expected, $t_k\rightarrow\infty$. We now use , which gives a bound on $\sigma_k^2$ in terms of $g$ and its derivatives. 
As a consequence, $$\left(g(t_k)-\frac{g'(t_k)}{g(t_k)}\right)^{-1/2} = \left({\mathrm{e}}^{-\lambda t_k}+\lambda\right)^{-1/2}\sim \lambda^{-1/2}(1+o(1)).$$ Looking at ${\mathrm{e}}^{-k\Psi_k(t_k)}$, it is easy to compute that, with $t_k$ as in , up to a multiplicative constant, $$\mathrm{exp}\left[-\alpha^* \log k- bG(t_k)+k\log (1-{\mathrm{e}}^{-aG(t_k)})\right]= k^{-\alpha^*}{\mathrm{e}}^{-C_2k}(1+o(1)),$$ where $C_2 = -\log(1-{\mathrm{e}}^{-aG(\infty)})>0$. Since $t_k/\sigma_k\rightarrow\infty$, then ${\mathbb{P}}\left(\mathcal{N}(0,\sigma_k^2)\leq t_k\right)\rightarrow 1$, so that $$p_k =\frac{\Gamma(k+b/a)}{\Gamma(b/a)}\frac{1}{\Gamma(k+1)}C_1 k^{-\alpha^*}{\mathrm{e}}^{-C_2 k}(1+o(1)),$$ which means that $p_k$ has an exponential tail with power-law corrections. We now apply the same result to the power-law aging function, so $g(t) = (1+t)^{-\lambda}$, and $G(t) = \frac{1}{\lambda-1}\left(1-(1+t)^{1-\lambda}\right)$. In this case $$(1+t_k) = \left(\frac{\alpha^*}{c_1k}\right)^{-1/\lambda}(1+o(1)).$$ We use again , so $$\left(g(t_k)-\frac{g'(t_k)}{g(t_k)}\right)^{-1/2} = \left(\frac{\alpha^*}{c_1k}+\lambda\left(\frac{\alpha^*}{c_1k}\right)^{1/\lambda}\right)^{-1/2}\sim k^{1/(2\lambda)}(1+o(1)).$$ In conclusion, $$p_k = \frac{\Gamma(k+b/a)}{\Gamma(b/a)}\frac{1}{\Gamma(k+1)}k^{1/(2\lambda)}{\mathrm{e}}^{-\alpha^*\left(\frac{\alpha^*}{c_1k}\right)^{-1/\lambda}-C_2k}(1+o(1)),$$ which means that in this case we also have a power law with exponential truncation. 
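As an illustration, the defining equation for $t_k$ can also be solved numerically. The sketch below (illustrative parameter values, not taken from the paper) applies bisection to the exponential aging case and confirms the logarithmic growth of $t_k$ in $k$:

```python
import math

# Bisection solve of  alpha/k + (b/k) g(t) - c1 g(t) = 0  for the exponential
# aging function g(t) = e^{-lambda t}; parameter values are illustrative.
# The solution grows like (1/lambda) * log k.
lam, b, alpha, c1 = 2.0, 1.0, 1.0, 0.5

def g(t):
    return math.exp(-lam * t)

def t_k(k):
    f = lambda t: alpha/k + (b/k)*g(t) - c1*g(t)   # increasing in t, root is t_k
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:          # g(t) still too large -> increase t
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

growth = t_k(1e6) - t_k(1e3)      # should be close to log(1000)/lambda
```

Multiplying $k$ by $1000$ shifts $t_k$ by roughly $\log(1000)/\lambda$, in line with $t_k = \log k\,(1+o(1))$ up to constants.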
In the case of the lognormal aging function, implies that $$[\log(t_k+1)-\lambda_3]^2 \approx \frac{1}{\lambda_2}\log\left(\frac{c_1}{\alpha^*}k\right).$$ By we can say that $$\left(g(t_k)-\frac{g'(t_k)}{g(t_k)}\right)=\left(\lambda_1\log\left(\frac{c_1}{\alpha^*}k\right)+ 2\lambda_2\frac{\log(t_k+1)}{t_k+1}\right)(1+o(1))= \lambda_1\log\left(\frac{c_1}{\alpha^*}k\right)(1+o(1)).$$ We conclude then, for some constant $C_3>0$, $$p_k = \frac{\Gamma(k+b/a)}{\Gamma(b/a)}\frac{1}{\Gamma(k+1)}\left(\lambda_1\log\left(\frac{c_1}{\alpha^*}k\right)\right)^{1/2}{\mathrm{e}}^{-\alpha^* {\mathrm{e}}^{(\log (\frac{c_1}{\alpha^*}k))^{1/2}}} {\mathrm{e}}^{-C_3 k}(1+o(1)).$$ Limiting distribution with aging and fitness {#sec-appendix} ============================================ In this section, we consider birth processes with aging and fitness. We prove Lemma \[lem-adapLapfitAge\], used in the proof of Proposition \[prop-pkasym\_fitage\]. Then we give examples of limiting degree distributions for different aging functions and exponentially distributed fitness. Proofs of Lemma \[lem-adapLapfitAge\] and Proposition \[prop-pkasym\_fitage\] {#sec-app-agefit} ----------------------------------------------------------------------------- We use again a second-order Taylor expansion of the function $\Psi_k(t,s)$ centered in $(t_k,s_k)$, where the first order partial derivatives are zero. 
As a consequence we write $$\mathrm{exp}\left[-k\Psi_k(t,s)\right] = \mathrm{exp}\left[- k\Psi_k(t_k,s_k)-\frac{1}{2}{\boldsymbol{x}}^T \left(kH_k(t_k,s_k)\right){\boldsymbol{x}}+o(||{\boldsymbol{x}}||^2)\right] ,$$ where $${\boldsymbol{x}} = \left[\begin{array}{c} t-t_k\\ s-s_k\end{array}\right],\quad\quad \mbox{and} \quad \quad H_k(t_k,s_k) = \left[ \begin{array}{cc} \displaystyle\frac{{\partial}^2\Psi_k}{{\partial}t^2}(t_k,s_k) & \displaystyle\frac{{\partial}^2\Psi_k}{{\partial}s{\partial}t} (t_k,s_k) \\ & \\ \displaystyle\frac{{\partial}^2\Psi_k}{{\partial}s{\partial}t} (t_k,s_k) & \displaystyle\frac{{\partial}^2\Psi_k}{{\partial}s^2}(t_k,s_k) \end{array}\right].$$ As for the proof of Lemma \[Lem-adaptLap-age\], we start by showing that we can ignore the terms where $||{\boldsymbol{x}}||^2\gg1$. In fact, $${\mathrm{e}}^{-k\Psi_k(t,s)} \leq \mathrm{exp}\left(-\alpha^*t-bsG(t)+k\log(\mu(s))\right).$$ Since $\mu$ is a probability density, $\mu(s)<1$ for $s\gg 1$. As a consequence, $\log(\mu(s))<0$, which means that the above bound is exponentially decreasing whenever $t$ and $s$ are very large. As a consequence, we can ignore the contribution given by the terms where $|t-t_k|\gg1$ and $|s-s_k|\gg1$. The term ${\mathrm{e}}^{- k\Psi_k(t_k,s_k)}$ is independent of $t$ and $s$, so we do not consider it in the integral. Writing $u= t-t_k$ and $v= s-s_k$, we can write $$\int_{{\mathbb{R}}^+\times{\mathbb{R}}^+}{\mathrm{e}}^{- \frac{1}{2}{\boldsymbol{x}}^T (kH_k(t_k,s_k)){\boldsymbol{x}}}dsdt = \int_{-t_k}^\infty\int_{-s_k}^\infty {\mathrm{e}}^{- \frac{1}{2}{\boldsymbol{y}}^T (kH_k(t_k,s_k)){\boldsymbol{y}}}dudv,$$ where this time ${\boldsymbol{y}}^T = [u~ v]$. 
As a consequence, [$$\label{for-I_kasympFit} \int_{-t_k}^\infty\int_{-s_k}^\infty {\mathrm{e}}^{- \frac{1}{2}{\boldsymbol{y}}^T (kH_k(t_k,s_k)){\boldsymbol{y}}}dudv = \frac{2\pi}{\sqrt{\mathrm{det}(kH_k(t_k,s_k))}} {\mathbb{P}}\left(\mathcal{N}_1(k)\geq -t_k,\mathcal{N}_2(k)\geq -s_k\right),$$]{} provided that the covariance matrix $(kH_k(t_k,s_k))^{-1}$ is positive definite. As a consequence, we can use to obtain that, for the corresponding limiting degree distribution of the branching process $(p_k)_{k\in{\mathbb{N}}}$, as $k\rightarrow\infty$, $$p_k=\frac{\Gamma(k+b/a)}{\Gamma(b/a)}\frac{1}{\Gamma(k+1)}{\mathrm{e}}^{- k\Psi_k(t_k,s_k)}\frac{2\pi}{\sqrt{\mathrm{det}(kH_k(t_k,s_k))}} {\mathbb{P}}\left(\mathcal{N}_1(k)\geq -t_k,\mathcal{N}_2(k)\geq -s_k\right)(1+o(1)).$$ This result holds if the point $(t_k,s_k)$ is the absolute minimum of $\Psi_k$, and the Hessian matrix is positive definite at $(t_k,s_k)$. The Hessian matrix of $\Psi_k(t,s)$ {#sec-app-hessian} ----------------------------------- First of all, we need to find a point $(t_k,s_k)$ which is the solution of the system $$\begin{aligned} &\displaystyle\frac{{\partial}\Psi_k}{{\partial}t} &= \displaystyle \frac{\alpha^*}{k}+\frac{b}{k}sg(t)-\frac{sag(t){\mathrm{e}}^{-saG(t)}}{1-{\mathrm{e}}^{-saG(t)}} = 0 \label{for-psipart_t},\\ &\displaystyle\frac{{\partial}\Psi_k}{{\partial}s} &=\displaystyle \frac{b}{k}G(t)-\frac{1}{k}\frac{\mu'(s)}{\mu(s)}-\frac{aG(t){\mathrm{e}}^{-saG(t)}}{1-{\mathrm{e}}^{-saG(t)}}=0 \label{for-psipart_s}.\end{aligned}$$ Denote the solution by $(t_k,s_k)$. 
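As a sketch, the displayed stationarity conditions can be checked by finite differences, writing $\Psi_k$ in the form consistent with these partial derivatives (up to an additive constant) and choosing an illustrative aging function, fitness density, and hypothetical parameter values:

```python
import math

# Finite-difference check of the stationarity conditions. Psi_k is taken in the
# form consistent with the displayed partial derivatives, with the illustrative
# choices g(t) = e^{-t}, mu(s) = theta*e^{-theta*s}, and hypothetical a, b,
# alpha (playing the role of alpha^*), theta, k.
a, b, alpha, theta, k = 1.0, 1.0, 1.0, 2.0, 10.0
g = lambda t: math.exp(-t)
G = lambda t: 1.0 - math.exp(-t)            # integral of g
mu = lambda s: theta * math.exp(-theta * s)

def Psi(t, s):
    return (alpha*t + b*s*G(t) - math.log(mu(s))) / k \
           - math.log(1.0 - math.exp(-s*a*G(t)))

def dPsi_dt(t, s):                           # formula from the text
    E = math.exp(-s*a*G(t))
    return alpha/k + (b/k)*s*g(t) - s*a*g(t)*E/(1.0 - E)

def dPsi_ds(t, s):                           # formula from the text
    E = math.exp(-s*a*G(t))
    dmu = -theta * mu(s)                     # mu'(s) for the exponential density
    return (b/k)*G(t) - dmu/(k*mu(s)) - a*G(t)*E/(1.0 - E)

t, s, h = 1.0, 2.0, 1e-6
fd_t = (Psi(t+h, s) - Psi(t-h, s)) / (2*h)   # central differences
fd_s = (Psi(t, s+h) - Psi(t, s-h)) / (2*h)
err = max(abs(fd_t - dPsi_dt(t, s)), abs(fd_s - dPsi_ds(t, s)))
```

The same finite-difference comparison can be repeated for the second derivatives below.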
Then [$$\begin{split} \frac{{\partial}^2\Psi_k}{{\partial}t^2} & = \frac{b}{k}s_kg'(t_k)+g(t_k)^2 \frac{s_k^2a^2{\mathrm{e}}^{-as_kG(t_k)}}{(1-{\mathrm{e}}^{-as_kG(t_k)})^2}-g'(t_k)\frac{as_k{\mathrm{e}}^{-as_kG(t_k)}}{1-{\mathrm{e}}^{-as_kG(t_k)}},\\ \frac{{\partial}^2\Psi_k}{{\partial}s^2} & = -\frac{1}{k}\frac{\mu''(s_k)\mu(s_k)-\mu'(s_k)^2}{\mu(s_k)^2}+ \frac{a^2G(t_k)^2{\mathrm{e}}^{-s_kaG(t_k)}}{(1-{\mathrm{e}}^{-s_kaG(t_k)})^2},\\ \frac{{\partial}^2\Psi_k}{{\partial}s{\partial}t} & = \frac{b}{k}g(t_k) +\left(1-\frac{1}{as_k}\right)\left(\frac{b}{k}G(t_k)-\frac{1}{k}\frac{\mu'(s_k)}{\mu(s_k)}\right)\left(\frac{\alpha^*}{k}+\frac{b}{k}s_kg(t_k)\right). \end{split}$$]{} From and we know [$$\label{for-substit} \begin{split} \frac{\alpha^*}{k}+\frac{b}{k}s_kg(t_k) & = \frac{s_kag(t_k){\mathrm{e}}^{-s_kaG(t_k)}}{1-{\mathrm{e}}^{-s_kaG(t_k)}},\\ \frac{b}{k}G(t_k)-\frac{1}{k}\frac{\mu'(s_k)}{\mu(s_k)}& = \frac{aG(t_k){\mathrm{e}}^{-s_kaG(t_k)}}{1-{\mathrm{e}}^{-s_kaG(t_k)}}. \end{split}$$]{} Using in the expressions for the second derivatives, [$$\begin{split} \frac{{\partial}^2\Psi_k}{{\partial}t^2} & = \frac{b}{k}s_kg'(t_k) + \frac{as_kg(t_k)}{(1-{\mathrm{e}}^{-as_kG(t_k)})}\left(\frac{\alpha}{k}+\frac{b}{k}s_kg(t_k)\right)-\frac{g'(t_k)}{g(t_k)}\left(\frac{\alpha}{k}+\frac{b}{k}s_kg(t_k)\right)\\ & = \frac{as_kg(t_k)}{(1-{\mathrm{e}}^{-as_kG(t_k)})}\left(\frac{\alpha}{k}+\frac{b}{k}s_kg(t_k)\right)-\frac{\alpha}{k}\frac{g'(t_k)}{g(t_k)}, \end{split}$$]{} [$$\begin{split} \frac{{\partial}^2\Psi_k}{{\partial}s^2} & =-\frac{1}{k}\frac{\mu''(s_k)\mu(s_k)-\mu'(s_k)^2}{\mu(s_k)^2}+\frac{aG(t_k)}{1-{\mathrm{e}}^{-as_kG(t_k)}}\left(\frac{b}{k}G(t_k)-\frac{1}{k}\frac{\mu'(s_k)}{\mu(s_k)}\right)\\ & = -\frac{1}{k}\frac{\mu''(s_k)}{\mu(s_k)}+\frac{1}{k}\left(\frac{\mu'(s_k)}{\mu(s_k)}\right)^2-\frac{1}{k}\frac{\mu'(s_k)}{\mu(s_k)}\frac{aG(t_k)}{1-{\mathrm{e}}^{-as_kG(t_k)}}+\frac{1}{k}\frac{abG(t_k)^2}{1-{\mathrm{e}}^{-as_kG(t_k)}}. 
\end{split}$$]{} In conclusion, the matrix $kH_k(t_k,s_k)$ is given by [$$\label{for-hessian-element} \begin{split} \left(kH_k(t_k,s_k)\right)_{1,1} & = \frac{as_kg(t_k)}{(1-{\mathrm{e}}^{-as_kG(t_k)})}\left(\alpha+bs_kg(t_k)\right)-\alpha\frac{g'(t_k)}{g(t_k)};\\ \left(kH_k(t_k,s_k)\right)_{2,2} & = -\frac{\mu''(s_k)}{\mu(s_k)}+\left(\frac{\mu'(s_k)}{\mu(s_k)}\right)^2-\frac{\mu'(s_k)}{\mu(s_k)}\frac{aG(t_k)}{1-{\mathrm{e}}^{-as_kG(t_k)}}+\frac{abG(t_k)^2}{1-{\mathrm{e}}^{-as_kG(t_k)}};\\ \left(kH_k(t_k,s_k)\right)_{2,1}& = bg(t_k) +\left(1-\frac{1}{as_k}\right)\left(\frac{b}{k}G(t_k)-\frac{1}{k}\frac{\mu'(s_k)}{\mu(s_k)}\right)\left(\alpha^*+bs_kg(t_k)\right). \end{split}$$]{} We point out that, solving in terms of $s$, it follows that [$$\label{for-solution_s} s = \frac{1}{aG(t)}\log\left(1+k\frac{aG(t)}{bG(t)-\frac{\mu'(s)}{\mu(s)}}\right).$$]{} As a consequence, [$$\label{for-sgt-asym} s_kg(t_k) = -\alpha^* G(t_k)\frac{\mu(s_k)}{\mu'(s_k)}.$$]{} We use , and the expressions for the elements of the Hessian matrix given in for the examples in Section \[sec-append-exfit\]. We also use the formulas of this section in the proof of Propositions \[prop-expfit\_general\] and \[prop-subexpfit\] given in Section \[sec-proof-3class\]. Examples of aging functions {#sec-append-exfit} --------------------------- Here we give examples of limiting degree distributions. We consider the same three examples of aging functions we considered in Section \[sec-examples-age\], so $$g(t) = {\mathrm{e}}^{-\lambda t}, \quad\quad g(t) = (1+t)^{-\lambda}, \quad \quad\mbox{and}\quad \quad g(t) = \lambda_1{\mathrm{e}}^{-\lambda_2(\log (t+1)-\lambda_3)^2}.$$ We consider exponentially distributed fitness, so $\mu(s) = \theta{\mathrm{e}}^{-\theta s}$. In order to have a supercritical and Malthusian process, we can rewrite Condition for exponentially distributed fitness as $aG(\infty)<\theta<(a+b)G(\infty)$. In general, we identify the minimum point $(t_k,s_k)$, then use . 
For all three examples, replacing $G(t)$ by $G(\infty)$ and using , it holds that $$s_k \approx \frac{1}{aG(\infty)}\log\left(k\frac{aG(\infty)}{bG(\infty)+\theta}\right),$$ and $s_kg(t_k)\approx \alpha^* G(\infty)/\theta$. For the exponential aging function, using , it follows that ${\mathrm{e}}^{\lambda t_k}\approx \log k$. In this case, since $g'(t)/g(t) = -\lambda$, the conclusion is that, ignoring the constants, $$p_k = k^{-(1+\lambda\theta/a)}(\log k)^{-\alpha^*/\lambda}(1+o(1)).$$ For the inverse-power aging function $t_k\approx (\log k)^{1/\lambda}$, which implies (ignoring again the constants) that $$p_k= k^{-(1+(\lambda-1)\theta/a)}{\mathrm{e}}^{-\alpha^*(\log k)^{1/\lambda}}(1+o(1)),$$ where we recall that, for $g$ being integrable, $\lambda>1$. For the lognormal case, $$t_k\approx {\mathrm{e}}^{\left(\log\log k\right)^{1/2}},$$ which means that $$p_k = k^{-(1+\theta/(aG(\infty)))}{\mathrm{e}}^{-\alpha^* {\mathrm{e}}^{\left(\log\log k\right)^{1/2}}}(1+o(1)).$$ Proof of propositions \[prop-expfit\_general\] and \[prop-subexpfit\] {#sec-proof-3class} ===================================================================== In the present section, we prove Propositions \[prop-expfit\_general\] and \[prop-subexpfit\]. These proofs are applications of Proposition \[prop-pkasym\_fitage\], and mainly consist of computations. In the proofs of the two propositions, we often refer to Appendix \[sec-app-hessian\] for expressions regarding the Hessian matrix of $\Psi_k(t,s)$ as in . Proof of Proposition \[prop-expfit\_general\] --------------------------------------------- We start by proving the existence of the dynamical power law. 
We already know that [$$\label{for-prop-fitexp-1} {\mathbb{P}}\left(M_t=k\right) = \frac{\Gamma(k+b/a)}{\Gamma(b/a)\Gamma(k+1)}\int_0^ \infty \mu(s){\mathrm{e}}^{-bsG(t)}\left(1-{\mathrm{e}}^{-asG(t)}\right)^kds.$$]{} We write [$$\label{for-jk-1} J(k) = \int_0^\infty {\mathrm{e}}^{-k\psi_k(s)}ds,$$]{} where [$$\label{for-psik-classes} \psi_k(s) = \frac{bG(t)}{k}s-\frac{1}{k}\log(\mu(s))-\log\left(1-{\mathrm{e}}^{-asG(t)} \right).$$]{} In order to give asymptotics on $J(k)$ as in , we can use a Laplace method similar to the one used in the proof of Lemma \[Lem-adaptLap-age\], but the analysis is simpler since in this case $\psi_k(s)$ is a function of only one variable. The idea is again to find a minimum point $s_k$ for $\psi_k(s)$, and to use Taylor expansion inside the integral, so $$\psi_k(s) = \psi_k(s_k)+\frac{1}{2}{\psi''}_k(s_k)(s-s_k)^2+o((s-s_k)^2).$$ We can ignore the contribution of the terms where $(s-s_k)^2\gg 1$, since ${\mathrm{e}}^{-k\psi_k(s)}\leq {\mathrm{e}}^{-bsG(t)}$, so that the error is at most exponentially small. 
As a consequence, [$$J(k) = \sqrt{\frac{2\pi}{k\psi_k''(s_k)}}{\mathrm{e}}^{-k\psi_k(s_k)}(1+o(1)).$$]{} The minimum $s_k$ is a solution of [$$\frac{d\psi_k(s)}{ds} = \frac{bG(t)}{k}-\frac{1}{k}\frac{\mu'(s)}{\mu(s)}- \frac{aG(t){\mathrm{e}}^{-saG(t)}}{1-{\mathrm{e}}^{-asG(t)}}=0.$$]{} In particular, $s_k$ satisfies the following equality, which is similar to : [$$s_k = \frac{1}{aG(t)}\log\left(1+k\frac{aG(t)}{bG(t)-\mu'(s_k)/\mu(s_k)} \right).$$]{} When $\mu(s) = Ch(s){\mathrm{e}}^{-\theta s}$, [$$\frac{\mu'(s)}{\mu(s)} = \frac{h'(s){\mathrm{e}}^{-\theta s}-\theta h(s){\mathrm{e}}^{-\theta s }}{h(s){\mathrm{e}}^{-\theta s}} = -\theta\left(1-\frac{h'(s)}{\theta h(s)}\right)\approx -\theta.$$]{} In particular, this implies [$$s_k= \frac{1}{aG(t)}\log\left(1+k\frac{aG(t)}{bG(t)+\theta}\right)(1+o(1)).$$]{} Similarly to the element $\left(kH_k(t_k,s_k)\right)_{2,2}$ in , [$$k\frac{d^2\psi_k(s_k)}{ds^2} = -\frac{\mu''(s_k)}{\mu(s_k)}+\left(\frac{\mu'(s_k)}{\mu(s_k)}\right)^2-\frac {\mu'(s_k)}{\mu(s_k)}\frac{aG(t)}{1-{\mathrm{e}}^{-as_kG(t)}}+\frac{abG(t)^2}{1-{\mathrm{e}}^{-as_kG(t)}}.$$]{} For the general exponential class, the ratio satisfies $$\frac{\mu''(s_k)}{\mu(s_k)} = \frac{h''(s_k)}{h(s_k)}-2\theta\frac{h'(s_k)}{h(s_k)}+\theta^2.$$ As a consequence, $k\frac{d^2\psi_k(s)}{ds^2}$ converges to a positive constant, which means that $s_k$ is an actual minimum. Then $J(k) = c_1{\mathrm{e}}^{-k\psi_k(s_k)}(1+o(1))$. Using this in and ignoring the constants, [$$\begin{split} {\mathbb{P}}\left(M_t=k\right) & = \frac{\Gamma(k+b/a)}{\Gamma(b/a)\Gamma(k+1)}{\mathrm{e}}^{-s_kbG(t)}\mu(s_k)(1+o(1))\\ & =k^{-1}k^{-b/a} k^{b/a}h(s_k)k^{-\theta/(aG(t))}(1+o(1))= h(s_k)k^{-(1+\theta/(aG(t)))}(1+o(1)), \end{split}$$]{} which is a power-law distribution with exponent $\tau(t) = 1+\theta/(aG(t))$, and minor corrections given by $h(s_k)$. This holds for every $t\geq0$. 
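For the pure exponential fitness $\mu(s) = \theta{\mathrm{e}}^{-\theta s}$ (the case $h\equiv1$), this power law can be checked directly from the integral representation of ${\mathbb{P}}(M_t=k)$. The sketch below uses the illustrative choices $a=b=G(t)=1$ and $\theta=2$, for which the predicted exponent is $\tau(t)=3$:

```python
import math

# Numerical check of  P(M_t = k) ~ k^{-(1 + theta/(a G(t)))}  for pure
# exponential fitness. With a = b = G(t) = 1 and theta = 2 the predicted
# power-law exponent is tau(t) = 3.
a, b, Gt, theta = 1.0, 1.0, 1.0, 2.0

def prob(k):
    # Gamma(k+b/a)/(Gamma(b/a) Gamma(k+1)) * int mu(s) e^{-bsG} (1-e^{-asG})^k ds
    pref = math.exp(math.lgamma(k + b/a) - math.lgamma(b/a) - math.lgamma(k + 1.0))
    h, total = 1e-3, 0.0
    for i in range(40000):                    # midpoint rule, s in (0, 40)
        s = (i + 0.5) * h
        total += theta*math.exp(-theta*s) * math.exp(-b*s*Gt) \
                 * (1.0 - math.exp(-a*s*Gt))**k * h
    return pref * total

# Log-log slope between k = 100 and k = 1000 estimates the exponent tau(t).
slope = math.log(prob(100.0) / prob(1000.0)) / math.log(10.0)
```

With these parameters the integral reduces to a Beta function, $4/((k+1)(k+2)(k+3))$, so the fitted slope sits very close to $3$ already at moderate $k$.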
In particular, considering $G(\infty)$ instead of $G(t)$, with the same argument we can also prove that the distribution of the total number of children obeys a power-law tail with exponent $\tau(\infty) = 1+\theta/(aG(\infty))$. We now prove the result on the limiting distribution $(p_k)_{k\in{\mathbb{N}}}$ of the ${\mathrm{CTBP}}$, for which we directly apply Proposition \[prop-pkasym\_fitage\], using the analysis on the Hessian matrix given in Section \[sec-app-hessian\]. First of all, from it follows that [$$\label{for-proof-genexp1} s_k= \frac{1}{aG(t_k)}\log\left(1+k\frac{aG(t_k)}{bG(t_k)+\theta}\right)(1+o(1)),$$]{} and by [$$\label{for-proof-genexp2} s_kg(t_k)\stackrel{k\rightarrow\infty}{\longrightarrow}\alpha\frac{G(\infty)}{\theta}.$$]{} For the Hessian matrix, using and in , for any integrable aging function $g$ we have $$\left(kH_k(t_k,s_k)\right)_{2,2} = C_2+o(1)>0,\quad \quad \mbox{and}\quad \quad \left(kH_k(t_k,s_k)\right)_{2,1}= o(1),$$ but $\left(kH_k(t_k,s_k)\right)_{1,1}$ behaves according to $g'(t_k)/g(t_k)$. If this ratio is bounded, then $\left(kH_k(t_k,s_k)\right)_{1,1}= C_1+o(1)>0$, while $\left(kH_k(t_k,s_k)\right)_{1,1}\rightarrow\infty$ whenever $g'(t_k)/g(t_k)$ diverges. In both cases, $(t_k,s_k)$ is a minimum. In particular, again ignoring the multiplicative constants and using and in the definition of $\Psi_k(t,s)$, the limiting degree distribution of the ${\mathrm{CTBP}}$ is asymptotic to [$$\label{for-asympt_expfit-precise} k^{-(1+\theta/(aG(t_k)))}h(s_k){\mathrm{e}}^{-\alpha^* t_k}\left(\tilde{C}-\alpha^*\frac{g'(t_k)}{g(t_k)}\right)^{-1/2},$$]{} where the term $\left(\tilde{C}-\alpha^*\frac{g'(t_k)}{g(t_k)}\right)^{-1/2}$, which comes from the determinant of the Hessian matrix, behaves differently according to the aging function. With this, the proof of Proposition \[prop-expfit\_general\] is complete. 
Proof of Proposition \[prop-subexpfit\] --------------------------------------- This proof is identical to the proof of Proposition \[prop-expfit\_general\], but this time we consider a sub-exponential distribution. First, we look at the distribution of the birth process at a fixed time $t\geq0$. We define $\psi_k(s)$ and $J(k)$ as in and . We use again , so $$s_k = \frac{1}{aG(t)}\log\left(1+k\frac{aG(t)}{bG(t)-\mu'(s_k)/\mu(s_k)} \right).$$ In this case, we have [$$\label{for-subexp-proof-1} \frac{\mu'(s)}{\mu(s)} = -\theta(1+\varepsilon)s^\varepsilon.$$]{} Then $s_k$ satisfies [$$s_k = \frac{1}{aG(t)}\log\left(1+k\frac{aG(t)}{bG(t)+\theta(1+\varepsilon)s_k^\varepsilon}\right).$$]{} By substitution, it is easy to check that $s_k$ is approximately $c_1\log k-c_2\log\log k = c_1\log k\left(1-\frac{c_2}{c_1}\frac{\log\log k}{\log k}\right)$, for some positive constants $c_1$ and $c_2$. This means that, as a first-order approximation, $s_k$ is still of logarithmic order. Then, [$$\label{for-subexp-proof-2} \frac{\mu''(s)}{\mu(s)} = \theta^2(1+\varepsilon)^2s^{2\varepsilon}-\theta(1+\varepsilon)\varepsilon s^{\varepsilon-1}.$$]{} Using and , we can write [$$\begin{split} k\frac{d^2\psi_k(s)}{ds^2} = &\theta(1+\varepsilon)\varepsilon s_k^{\varepsilon-1}+\theta(1+\varepsilon)s_k^\varepsilon\frac{aG(t)}{1-{\mathrm{e}}^{-as_kG(t)}}+\frac{abG(t)^2}{1-{\mathrm{e}}^{-as_kG(t)}}\\ =& \theta(1+\varepsilon)\varepsilon s_k^{\varepsilon-1}+\theta(1+\varepsilon)\frac{s_k^{\varepsilon}}{k} (bG(t)+\theta(1+\varepsilon)s_k^\varepsilon)\\ &+ \frac{bG(t)}{k}(bG(t)+\theta(1+\varepsilon)s_k^\varepsilon). \end{split}$$]{} The dominant term is $c_1s_k^{\varepsilon}$, for some constant $c_1$. This means $k\frac{d^2\psi_k(s)}{ds^2}$ is of order $(\log k)^{\varepsilon}$. 
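The fixed-point characterization of $s_k$ above can be iterated numerically. The sketch below (illustrative parameter values, not taken from the paper) converges in a few steps and lands at the predicted logarithmic scale:

```python
import math

# Fixed-point iteration for s_k in the sub-exponential case,
#   s = (1/(a G)) log(1 + k a G / (b G + theta (1+eps) s^eps)),
# with illustrative parameters. The iterate settles near c1*log k - c2*log log k.
a, b, Gt, theta, eps, k = 1.0, 1.0, 1.0, 1.0, 0.5, 1.0e6

def step(s):
    return (1.0/(a*Gt)) * math.log(1.0 + k*a*Gt/(b*Gt + theta*(1.0+eps)*s**eps))

s = 1.0
for _ in range(100):                # contraction: |step'(s)| ~ eps/s << 1
    s = step(s)
residual = abs(s - step(s))
```

The map is a strong contraction here (its derivative is of order $\varepsilon/s$), so the iteration stabilizes to machine precision well within the 100 steps, at a value slightly below $\log k$.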
Now, [$$\begin{split} J(k) &= \left(k\frac{d^2\psi_k(s)}{ds^2}\right)^{-1/2}C{\mathrm{e}}^{-bG(t)s_k-\theta s_k^{1+\varepsilon}+k\log(1-{\mathrm{e}}^{-aG(t)s_k})}(1+o(1)) \\ & = (\log k)^{-\varepsilon/2}k^{-b/a}{\mathrm{e}}^{-\frac{\theta}{(aG(t))^{1+\varepsilon}} (\log k)^{1+\varepsilon}}(1+o(1)). \end{split}$$]{} As a consequence, [$${\mathbb{P}}\left(M_t = k\right) = k^{-1}(\log k)^{-\varepsilon/2}{\mathrm{e}}^{-\frac{\theta}{(aG(t))^{1+\varepsilon}} (\log k)^{1+\varepsilon}}(1+o(1)),$$]{} which is not a power-law distribution. Again using similar arguments, we show that the limiting degree distribution of the ${\mathrm{CTBP}}$ does not show a power-law tail. In this case $$s_k = \frac{1}{aG(t_k)}\log\left(1+k\frac{aG(t_k)}{bG(t_k)+\theta(1+\varepsilon)s_k^\varepsilon}\right),$$ and $$s_kg(t_k) = \frac{\alpha G(t_k)}{\theta(1+\varepsilon)s_k^\varepsilon}= \frac{\alpha G(t_k)}{\log^\varepsilon k}(1+o(1))\rightarrow0.$$ The Hessian matrix elements are [$$\begin{split} (kH_k(t_k,s_k))_{1,1} &= \frac{a\alpha^2 G(t_k)}{s_k^\varepsilon}-a\alpha \frac{g'(t_k)}{g(t_k)}+o(1),\\ (kH_k(t_k,s_k))_{2,2} &=\theta(1+\varepsilon)\varepsilon s_k^{\varepsilon-1}+\theta s_k^\varepsilon aG(\infty)+abG(\infty)^2+o(1),\\ (kH_k(t_k,s_k))_{1,2} &= o(1). \end{split}$$]{} This implies that [$$\mathrm{det}\left(kH_k(t_k,s_k)\right) = C_1-s_k^\varepsilon\frac{g'(t_k)}{g(t_k)}+o(1)>0.$$]{} As a consequence, $(t_k,s_k)$ is an actual minimum. Then using the definition of $\Psi_k(t,s)$, [$$p_k={\mathrm{e}}^{-\alpha^* t_k}k^{-1+b/a}k^{-b/a}\mu(s_k)\sim {\mathrm{e}}^{-\alpha^* t_k}k^{-1}\left(C_1-s_k^\varepsilon\frac{g'(t_k)}{g(t_k)}\right)^{-1/2}{\mathrm{e}}^{-\frac{\theta}{(aG(\infty))^{1+\varepsilon}} (\log k)^{1+\varepsilon}}(1+o(1)).$$]{} This completes the proof. [**Acknowledgments.**]{} We are grateful to Nelly Litvak and Shankar Bhamidi for discussions on preferential attachment models and their applications, and Vincent Traag and Ludo Waltman from CWTS for discussions about citation networks as well as the use of Web of Science data. 
This work is supported in part by the Netherlands Organisation for Scientific Research (NWO) through the Gravitation [Networks]{} grant 024.002.003. The work of RvdH is further supported by the Netherlands Organisation for Scientific Research (NWO) through VICI grant 639.033.806.
--- abstract: 'Maneuvering a general 2-trailer with a car-like tractor in backward motion is a task that requires significant skill to master and is arguably one of the most complicated tasks a truck driver has to perform. This paper presents a path planning and path-following control solution that can be used to automatically plan and execute difficult parking and obstacle avoidance maneuvers by combining backward and forward motion. A lattice-based path planning framework is developed in order to generate kinematically feasible and collision-free paths and a path-following controller is designed to stabilize the lateral and angular path-following error states during path execution. To estimate the vehicle states needed for control, a nonlinear observer is developed which only utilizes information from sensors that are mounted on the car-like tractor, making the system independent of additional trailer sensors. The proposed path planning and path-following control framework is implemented on a full-scale test vehicle and results from simulations and real-world experiments are presented.' author: - | Oskar Ljungqvist$^{\dagger*}$, Niclas Evestedt$^{\ddagger}$, Daniel Axehill$^{\dagger}$, Marcello Cirillo$^{\diamond}$ and Henrik Pettersson$^{\diamond}$\ \ \ \ \ bibliography: - 'root.bib' title: 'A path planning and path-following control framework for a general 2-trailer with a car-like tractor' --- Introduction ============ A massive interest in intelligent and fully autonomous transport solutions has been seen from industry in recent years as technology in this area has advanced. The predicted productivity gains and the relatively simple implementation have made controlled environments such as mines, harbors, airports, etc., interesting areas for commercial launch of such systems. In many of these applications, tractor-trailer systems are used for transportation and therefore require fully automated control.
Reversing a semitrailer with a car-like tractor is known to be a task that requires a lot of training to master, and an inexperienced driver usually encounters problems even when performing simple tasks, such as reversing straight backwards. To help the driver in such situations, trailer assist systems have been developed and released to the passenger car market [@werling2014reversing; @hafner2017control]. These systems enable the driver to easily control the semitrailer’s curvature through a control knob. An even greater challenge arises when reversing a general 2-trailer (G2T) with a car-like tractor. As seen in Figure \[j1:fig:truck\_scania\], this system is composed of three interconnected vehicle segments: a front-wheel steered tractor, an off-axle hitched dolly and an on-axle hitched semitrailer. The word general refers to the fact that the connections between the vehicle segments are of mixed hitching types [@altafini1998general]. Compared to a single semitrailer, the dolly introduces an additional degree of freedom into the system, making it very difficult to stabilize the semitrailer and the joint angles in backward motion. ![The full-scale test vehicle that is used as a research platform. The car-like tractor is a modified version of a Scania R580 6x4 tractor.[]{data-label="j1:fig:truck_scania"}](truck_scania_bright.png){width="1\linewidth"} A daily challenge that many truck drivers encounter is to perform a reverse maneuver in, *e.g.*, a parking lot or a loading/off-loading site. In such scenarios, the vehicle is said to operate in an unstructured environment because no clear driving path is available. To perform a parking maneuver, the driver typically needs to plan the maneuver multiple steps ahead, which often involves a combination of driving forwards and backwards. For an inexperienced driver, these maneuvers can be both time-consuming and mentally exhausting.
To aid the driver in such situations, this work presents a motion planning and path-following control framework for a G2T with a car-like tractor that targets unstructured environments. It is shown through several experiments that the framework can be used to automatically perform complex maneuvers in different environments. The framework can be used as a driver assist system to relieve the driver from performing complex tasks, or as part of a motion planning and feedback control layer within an autonomous system architecture. The motion planner is based on the state-lattice motion planning framework [@Cirillo2017; @CirilloIROS2014; @pivtoraiko2009differentially], which has been tailored for this specific application in our previous work in [@LjungqvistIV2017]. The lattice planner efficiently computes kinematically feasible and collision-free motion plans by combining a finite number of precomputed motion segments. During online planning, challenging parking and obstacle avoidance maneuvers can be constructed by deploying efficient graph search algorithms [@arastar]. To execute the motion plan, a path-following controller based on our previous work in [@Ljungqvist2016CDC] is used to stabilize the lateral and angular path-following error states during the execution of the planned maneuver. Finally, a nonlinear observer based on an extended Kalman filter (EKF) is proposed to obtain full state information of the system. At the request of our commercial partner, and since multiple trailers are usually switched between during daily operation, the observer is developed so that it only uses information from sensors that are mounted on the tractor. The proposed path planning and path-following control framework summarizes and extends our previous work in [@Ljungqvist2016CDC; @LjungqvistIV2017; @LjungqvistACC2018].
Here, the complete system is implemented on a full-scale test vehicle and results from both simulations and real-world experiments are presented to demonstrate its performance. To the best of the authors’ knowledge, this paper presents the first path planning and path-following control framework for a G2T with a car-like tractor that is implemented on a full-scale test vehicle. The remainder of the paper is structured as follows. In Section \[j1:sec:systemArchitecture\], the responsibility of each module in the path planning and path-following control framework is briefly explained and an overview of related work is provided. In Section \[j1:sec:Modeling\], the kinematic vehicle model of the G2T with a car-like tractor and the problem formulations are presented. In Sections \[j1:sec:MotionPlanner\] and \[j1:sec:Controller\], the lattice-based path planner and the path-following controller are explained, respectively. In Section \[j1:sec:stateEstimation\], the nonlinear observer that is used for state estimation is presented. Implementation details are covered in Section \[j1:sec:implementation\] and simulation results as well as results from real-world experiments are presented in Section \[j1:sec:Results\]. A discussion is provided in Section 9 and the paper is concluded in Section \[j1:sec:conclusions\], which summarizes the contributions and discusses directions for future work. Background and related work {#j1:sec:systemArchitecture} =========================== The full system is built from several modules and a simplified system architecture is illustrated in Figure \[j1:fig:sys\_arch\], where the integration and design of state estimation, path planning and path-following control are considered the main contributions of this work. Below, the task of each module is briefly explained and, for clarity, related work for each module is given individually.
Perception and localization {#j1:sec:loc} --------------------------- The objective of the perception and localization layer is to provide the planning and control layer with a consistent representation of the surrounding environment and an accurate estimate of where the tractor is located in the world. A detailed description of the perception layer is outside the scope of this paper, but a brief introduction is given for clarity. Precomputed maps and onboard sensors on the car-like tractor (RADARs, LIDARs, a global positioning system (GPS), inertial measurement units (IMUs) and cameras) are used to construct an occupancy grid map [@occupancyGridMap] that gives a probabilistic representation of drivable and non-drivable areas. Dynamic objects are also detected and tracked, but they are not considered in this work. Standard localization techniques are then used to obtain an accurate position and orientation estimate of the car-like tractor within the map [@skog2009; @levinson2011towards; @montemerlo2008junior]. Together, the occupancy grid map and the tractor’s position and orientation provide the environmental representation in which motion planning and control is performed. State estimation ---------------- To control the G2T with a car-like tractor, accurate and reliable estimates of the semitrailer’s position and orientation as well as the two joint angles of the system need to be obtained. An ideal approach would be to place sensors at each hitch connection to directly measure each joint angle [@hafner2017control; @evestedtLjungqvist2016; @michalek2014highly] and to equip the semitrailer with a localization system similar to the tractor’s (*e.g.*, IMU and a high precision GPS). However, commercial trailers are often exchanged between tractors and a high-performance navigation system is very expensive, making it an undesirable solution for general applications.
Furthermore, no standardized communication protocol between different trailer and tractor manufacturers exists. ![A schematic illustration of the proposed system architecture, where the blue subsystems (motion planning, path-following control and state estimation) are considered in this work.[]{data-label="j1:fig:sys_arch"}](architecture_proposal_blockscheme.pdf){width="0.7\linewidth"} Different techniques for estimating the joint angle for a tractor with a semitrailer and for a car with a trailer using wide-angle cameras are reported in [@CameraSolSaxe] and [@caup2013video], respectively. In [@CameraSolSaxe], an image bank with images taken at different joint angles is first generated and is then used during execution to match against the current camera image. Once a match is found, the corresponding joint angle is given by the matched image in the image bank. The work in [@caup2013video] exploits symmetry of the trailer’s drawbar in images to estimate the joint angle between a car and the trailer. In [@Fuchs2016trailerEst], markers with known locations are placed on the trailer’s body and then tracked with a camera to estimate the joint angles of a G2T with a car-like tractor. The proposed solution is tested on a small-scale vehicle in a lab environment. Even though camera-based joint angle estimation would be possible to utilize in practice, it is unclear how it would perform in different lighting conditions, *e.g.*, during nighttime. The concept for joint angle estimation used in this work was first implemented on a full-scale test vehicle as part of the master’s thesis [@Patrik2016] supervised by the authors of this work. Instead of using a rear-view camera, a LIDAR sensor is mounted in the rear of the tractor. The LIDAR sensor is mounted such that the body of the semitrailer is visible in the generated point cloud for a wide range of joint angles.
The semitrailer’s body is assumed to be rectangular and by iteratively running the random sample consensus (RANSAC) algorithm [@fischler1981random], the visible edges of the semitrailer’s body can be extracted from the point cloud. Virtual measurements of the orientation of the semitrailer and the lateral position of the midpoint of its front with respect to the tractor are then constructed utilizing known geometric properties of the vehicle. These virtual measurements, together with information about the position and orientation of the tractor, are used as observations in an EKF for state estimation. In [@Daniel2018], the proposed iterative RANSAC algorithm is benchmarked against deep-learning techniques that compute the estimated joint angles directly from the LIDAR’s point cloud or from camera images. That work concludes that for trailers with rectangular bodies, the LIDAR and iterative RANSAC solution outperforms the other tested methods in terms of accuracy and robustness, which makes it a natural choice for state estimation in this work. Motion planning --------------- Motion planning for car-like vehicles is a difficult problem due to the vehicle’s nonholonomic constraints and the non-convex environment the vehicle is operating in [@lavalle2006planning]. Motion planning for tractor-trailer systems is even more challenging due to the vehicle’s complex dynamics, its relatively large dimensional state-space and its unstable joint angle dynamics in backward motion. The standard $N$-trailer (SNT), which only allows on-axle hitching, is differentially flat and can be converted into chained form when the position of the axle of the last trailer is used as the flat output [@sordalen1993conversion]. This property of the SNT is explored in [@Murray1991; @tilbury1995trajectory] to develop efficient techniques for local trajectory generation.
In [@tilbury1995trajectory], simulation results for the one and two trailer cases are presented, but obstacles as well as state and input constraints are omitted. A well-known issue with flatness-based trajectory generation is that it is hard to incorporate constraints and to minimize a general performance measure while computing the motion plan. Some of these issues are handled in [@sekhavat1997multi], where a motion planner for unstructured environments with obstacles for the S2T is proposed. In that work, the motion planning problem is split into two phases: first, a holonomic path that violates the vehicle’s nonholonomic constraints is generated, and it is then iteratively replaced with a kinematically feasible path by converting the system into chained form. A similar hierarchical motion planning scheme is proposed in [@hillary] for a G1T robot. An important observation is that most of the approaches presented above only consider the SNT case with on-axle hitching, even though most practical applications involve both on-axle and off-axle hitching. The off-axle hitching makes the system dynamics for the general $N$-trailer (GNT) much more complicated [@altafini1998general]. Since the GNT with a car-like tractor is neither differentially flat nor feedback equivalent to chained form when $N\geq 2$ [@rouchon1993flatness], the approaches presented above are not applicable. To include the G2T with a car-like tractor, we presented a probabilistic motion planning approach in [@evestedtLjungqvist2016planning]. Even though the motion planner presented in that work is capable of solving several hard problems, it lacks the completeness and optimality guarantees that are given by the approach developed in this work. Motion planning algorithms that belong to the lattice-based family can guarantee resolution optimality and completeness [@pivtoraiko2009differentially].
In contrast to probabilistic methods, a lattice-based motion planner requires a regular discretization of the vehicle’s state-space and is constrained to a precomputed set of feasible motions which, combined, can connect two discrete vehicle states. The precomputed motions are called motion primitives and can be generated offline by solving several optimal control problems (OCPs). This implies that the vehicle’s nonholonomic constraints have already been considered offline, and what remains during online planning is a search over the set of precomputed motions. Due to its deterministic nature and real-time capabilities, lattice-based motion planning has been used with great success on various robotic platforms [@pivtoraiko2009differentially; @BOSSDarpa; @CirilloIROS2014; @LjungqvistCDC2018; @oliveira2018combining] and is therefore the chosen motion planning strategy for this work. Other deterministic motion planning algorithms rely on input-space discretization [@dolgov2010path; @Beyersdorfer2013tractortrailer] in contrast to state-space discretization. A model of the vehicle is used during online planning to simulate the system for certain time durations, using constant or parametrized control signals. In general, the constructed motions do not end up at specified final states. This implies that the search graph becomes irregular, which results in an exponentially exploding frontier during online planning [@pivtoraiko2009differentially]. To resolve this, the state-space is often divided into cells, where a cell is only allowed to be explored once. A motion planning algorithm that uses input-space discretization is the hybrid A$^*$ [@dolgov2010path]. In [@Beyersdorfer2013tractortrailer], a similar motion planner is proposed to generate feasible paths for a G1T with a car-like tractor with active trailer steering. A drawback with motion planning algorithms that rely on input-space discretization is that they lack completeness and optimality guarantees.
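To make the search step concrete, the following is a minimal, self-contained sketch of lattice-based planning: a handful of invented motion primitives on a four-heading grid (stand-ins for the OCP-generated primitives described above) are combined by Dijkstra's algorithm. The primitives, costs and obstacle model are illustrative only and are unrelated to those used for the G2T vehicle.

```python
import heapq

# Hypothetical motion primitives on a 4-heading grid lattice:
# for each heading, a list of (dx, dy, new_heading, cost).  Real
# primitives are precomputed offline by solving OCPs.
PRIMS = {
    0: [(1, 0, 0, 1.0), (1, 1, 1, 1.5), (1, -1, 3, 1.5)],
    1: [(0, 1, 1, 1.0), (-1, 1, 2, 1.5), (1, 1, 0, 1.5)],
    2: [(-1, 0, 2, 1.0), (-1, -1, 3, 1.5), (-1, 1, 1, 1.5)],
    3: [(0, -1, 3, 1.0), (1, -1, 0, 1.5), (-1, -1, 2, 1.5)],
}

def lattice_search(start, goal, obstacles):
    """Dijkstra over (x, y, heading) lattice states using PRIMS.

    Returns the cost of a cheapest primitive sequence from start to
    goal avoiding the obstacle cells, or None if none is found.
    """
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, s = heapq.heappop(pq)
        if s == goal:
            return d
        if d > dist.get(s, float("inf")):
            continue  # stale queue entry
        x, y, h = s
        for dx, dy, nh, c in PRIMS[h]:
            n = (x + dx, y + dy, nh)
            if (n[0], n[1]) in obstacles:
                continue
            nd = d + c
            if nd < dist.get(n, float("inf")):
                dist[n] = nd
                heapq.heappush(pq, (nd, n))
    return None
```

In the real planner the primitives encode full vehicle state transitions and the search uses a heuristic (A$^*$), but the mechanics are the same: online planning reduces to graph search over precomputed motions.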
Moreover, input-space discretization is in general not applicable to unstable systems, unless the online simulations are performed in closed loop with a stabilizing feedback controller [@evestedtLjungqvist2016planning]. A problem with lattice-based approaches is the curse of dimensionality, *i.e.*, exponential complexity in the dimension of the state-space and in the number of precomputed motions. In [@LjungqvistIV2017], we circumvented this problem and developed a real-time capable lattice-based motion planner for a G2T with a car-like tractor. By discretizing the state-space of the vehicle such that the precomputed motions always move the vehicle from and to a circular equilibrium configuration, the dimension of the state lattice remained sufficiently low and made real-time use of classical graph search algorithms tractable. Even though the dimension of the discretized state-space is limited, the motion planner was shown to efficiently solve difficult and practically relevant motion planning problems. In this work, the approach in [@LjungqvistIV2017] is extended by better connecting the cost functional used in the motion primitive generation with the cost function used in the online motion planning problem. Additionally, the objective functional in backward motion is adjusted such that it reflects the difficulty of executing a maneuver. To avoid maneuvers in backward motion that in practice have a large risk of leading to a jack-knife state, a quadratic penalty on the two joint angles is included in the cost functional. Path-following control ---------------------- During the past decades, a large number of feedback control techniques for different tractor-trailer systems, for both forward and backward motion, have been proposed.
The different control tasks include path-following control (see *e.g.*, [@sampei1995arbitrary; @altafini2002hybrid; @astolfi2004path; @Cascade-nSNT; @bolzern1998path]), trajectory-tracking and set-point control (see *e.g.*, [@CascadeNtrailer; @divelbiss1997trajectory; @michalek2018forward; @SamsonChainedform1995]). Here, the focus will be on related path-following control solutions. For the SNT, its flatness property can be used to design path-following controllers based on feedback linearization [@sampei1995arbitrary] or by converting the system into chained form [@SamsonChainedform1995]. The G1T with a car-like tractor is still differentially flat using a certain choice of flat outputs [@rouchon1993flatness]. However, the flatness property does not hold when $N\geq 2$. In [@bolzern1998path], this issue is circumvented by introducing a simplified reference vehicle which has equivalent stationary behavior but different transient behavior. Similar concepts have also been proposed in [@virtualMorales2013; @pushing2010]. Input-output linearization techniques are used in [@altafini2003path] to stabilize the GNT around paths with constant curvature, where the path-following controller minimizes the sum of the lateral offsets to the nominal path. The proposed approach is, however, limited to forward motion since the introduced zero-dynamics become unstable in backward motion. A closely related approach is presented in [@minimumSweep], where the objective of the path-following controller is to minimize the swept path of a G1T with a car-like tractor along paths in backward and forward motion. Tractor-trailer vehicles that have purely off-axle hitched trailers are referred to as non-standard N-trailers (nSNT) [@CascadeNtrailernonmin; @chung2011backward]. For these systems, scalable cascade-like path-following control techniques are presented in [@michalek2014highly; @Cascade-nSNT].
Compared to many other path-following control approaches, these controllers do not need to find the closest distance to the nominal path, and the complexity of the feedback controllers scales well with an increasing number of trailers. By introducing artificial off-axle hitches, the proposed controller can also be used for the GNT case [@Cascade-nSNT]. However, as experimental results illustrate, the path-following controller becomes sensitive to measurement noise when an off-axle distance approaches zero. A hybrid linear quadratic (LQ) controller is proposed in [@hybridcontrol2001] to stabilize the G2T with a car-like tractor around different equilibrium configurations corresponding to straight lines and circles, and a survey of control techniques for tractor-trailer systems can be found in [@david2014control]. Inspired by [@hybridcontrol2001], a cascade control approach for stabilizing the G2T with a car-like tractor in backward motion around piecewise linear reference paths is proposed in [@evestedtLjungqvist2016]. An advantage of this approach is that it can handle arbitrary reference paths that are not necessarily kinematically feasible. However, if a more detailed reference path with full state information is available, this method only uses a subset of the available information and the control accuracy might be reduced. A similar approach for path tracking is also proposed in [@rimmer2017implementation] for reversing a G2T with a car-like tractor. Most of the path-following approaches presented above consider the problem of following a path defined in the position and orientation of the last trailer’s axle. In this work, the nominal path obtained from the path planner is composed of full state information as well as nominal control signals. Furthermore, in a motion planning and path-following control architecture, it is crucial that all nominal vehicle states are followed to avoid collisions with surrounding obstacles.
To utilize all information in the nominal path, we presented a state-feedback controller with feedforward action in [@Ljungqvist2016CDC]. The proposed path-following controller is proven to stabilize the path-following error dynamics for the G2T with a car-like tractor in backward motion around an arbitrary path generated from a set of kinematically feasible paths. The advantage of this approach is that the nominal path satisfies the system dynamics, making it, in theory, possible to follow exactly. However, the stability result developed in [@Ljungqvist2016CDC] fails to guarantee stability in continuous time for motion plans that combine forward and backward motion segments [@LjungqvistACC2018]. In [@LjungqvistACC2018], we proposed a solution to this problem by exploiting the fact that a lattice planner combines a finite number of precomputed motion segments. Based on this, a framework is proposed for analyzing the behavior of the path-following error, for designing the path-following controller, and for potentially imposing restrictions on the lattice planner to guarantee that the path-following error is bounded and decays towards zero. The same framework is used in this work, where results from real-world experiments on a full-scale test vehicle are also presented. Kinematic vehicle model and problem formulations {#j1:sec:Modeling} ================================================ The G2T with a car-like tractor considered in this work is schematically illustrated in Figure \[j1:fig:schematic\_model\_description\]. This system has a positive off-axle connection between the car-like tractor and the dolly and an on-axle connection between the dolly and the semitrailer.
The state vector $x=\begin{bmatrix} x_3 & y_3 & \theta_3 & \beta_3 & \beta_2\end{bmatrix}^T\in\mathbb R^5$ is used to represent a configuration of the vehicle, where $(x_3,y_3)$ is the position of the center of the semitrailer’s axle, $\theta_3$ is the orientation of the semitrailer, $\beta_3$ is the joint angle between the semitrailer and the dolly and $\beta_2$ is the joint angle between the dolly and the car-like tractor[^1]. The length $L_3$ represents the distance between the axle of the semitrailer and the axle of the dolly, $L_2$ is the distance between the axle of the dolly and the off-axle hitching connection at the car-like tractor, $M_1>0$ is the length of the positive off-axle hitching, and $L_1$ denotes the wheelbase of the car-like tractor. The car-like tractor is front-wheel steered and assumed to have perfect Ackerman geometry. The control signals to the system are the steering angle $\alpha$ and the longitudinal velocity $v$ of the rear axle of the car-like tractor. A recursive formula derived from nonholonomic and holonomic constraints for the GNT vehicle is presented in [@altafini1998general].
Applying the formula to this specific G2T with a car-like tractor results in the following vehicle model [@altafini2002hybrid]: \[j1:eq:model\_global\_coord\] $$\begin{aligned} \dot{x}_3 &= v \cos \beta_3 C_1(\beta_2,\tan\alpha/L_1) \cos \theta_3, \label{eq:model1}\\ \dot{y}_3 & = v \cos \beta_3 C_1(\beta_2,\tan\alpha/L_1) \sin \theta_3, \label{eq:model2}\\ \dot{\theta}_3 & = v \frac{\sin \beta_3 }{L_3} C_1(\beta_2,\tan\alpha/L_1), \label{eq:model3}\\ \dot{\beta}_3 & =v \left( \frac{1}{L_2}\left(\sin\beta_2 - \frac{M_1}{L_1}\cos\beta_2\tan \alpha \right) - \frac{\sin\beta_3}{L_3}C_1(\beta_2,\tan\alpha/L_1)\right), \label{eq:model4}\\ \dot{\beta}_2 &= v \left(\frac{\tan\alpha}{L_1} - \frac{\sin \beta_2}{L_2} + \frac{M_1}{L_1 L_2}\cos\beta_2\tan\alpha \right), \label{eq:model5} \end{aligned}$$ where $C_1(\beta_2,\kappa)$ is defined as $$\begin{aligned} C_1(\beta_2,\kappa) = \cos{\beta_2} + M_1\sin\beta_2\kappa. \label{j1:eq:C1}\end{aligned}$$ By performing the input substitution $\kappa = \frac{\tan \alpha}{L_1}$, the model in  can be written on the form $\dot{ x} = vf(x,\kappa)$. Define $$\begin{aligned} \label{j1:relation_v_v3} g_v(\beta_2,\beta_3,\kappa) = \cos\beta_3 C_1(\beta_2,\kappa),\end{aligned}$$ which describes the relationship $v_3 = vg_v(\beta_2,\beta_3,\kappa)$ between the longitudinal velocity of the axle of the semitrailer, $v_3$, and the longitudinal velocity of the rear axle of the car-like tractor, $v$. When $g_v(\beta_2,\beta_3,\kappa)=0$, the system in  is uncontrollable, which in practice implies that the position of the axle of the dolly or the semitrailer remains stationary even though the tractor moves.
To avoid these vehicle configurations, it is assumed that $g_v(\beta_2,\beta_3,\kappa)>0$, which implies that the joint angles have to satisfy [$|\beta_3| < \pi/2$]{} and [$|\beta_2| < \pi/2$]{}, respectively, and that $C_1(\beta_2,\tan\alpha/L_1)>0$. These imposed restrictions are closely related to the segment-platooning assumption defined in [@michalek2014highly] and do not limit the practical usage of the model, since structural damage could occur to the semitrailer or the tractor if these limits were exceeded. The model in  is derived based on no-slip assumptions and the vehicle is assumed to operate on a flat surface. Since the intended operational speed is quite low for our use case, these assumptions are expected to hold. The direction of motion is essential for the stability of the system , since the joint angle dynamics are structurally unstable in backward motion ($v < 0$), where the vehicle risks folding and entering what is called a jack-knife state [@altafini2002hybrid]. In forward motion ($v > 0$), these modes are stable. ![Definition of the geometric lengths, states and control signals that are of relevance for modeling the general 2-trailer with a car-like tractor.[]{data-label="j1:fig:schematic_model_description"}](truck_def_angles_final.pdf){width="0.7\linewidth"} Since the longitudinal velocity $v$ enters linearly into the model in , time-scaling [@sampei1986time] can be applied to eliminate the dependence on the longitudinal speed $|v|$. Define $s(t)$ as the distance traveled by the rear axle of the tractor, *i.e.*, $s(t)=\int_0^t|v(\tau)|\mathrm{d}\tau$. By substituting time with $s(t)$, the differential equation in  can be written as $$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d} s} x( s) = \operatorname*{sign}{(v(s))} f(x(s), \kappa(s)).
\label{j1:eq:time_scaling}\end{aligned}$$ Since only the sign of $v$ enters the state equation, the traveled path is independent of the tractor’s speed $|v|$ and the motion planning problem can be formulated as a path planning problem [@lavalle2006planning], where the speed is omitted. Therefore, when path planning is considered, the longitudinal velocity $v$ is, without loss of generality, assumed to take the value $v = 1$ for forward motion and $v = -1$ for backward motion. In practice, the vehicle has limitations on the maximum steering angle $|\alpha|\leq\alpha_{\text{max}}<\pi/2$, the maximum steering angle rate $|\omega|\leq\omega_{\text{max}}$ and the maximum steering angle acceleration $|u_\omega|\leq u_{\omega,\text{max}}$. These constraints have to be considered in the path planning layer in order to generate feasible paths that the physical vehicle can execute. Problem formulations {#j1:sec:problemformulation} -------------------- In this section, the path planning and path-following control problems are defined. To make sure the planned path avoids uncontrollable regions and the nominal steering angle does not violate any of its physical constraints, an augmented state vector $z = \begin{bmatrix} x^T & \alpha & \omega\end{bmatrix}^T\in\mathbb R^{7}$ is used during path planning.
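As an illustration, the distance-parametrized model $\frac{\mathrm{d}x}{\mathrm{d}s} = \operatorname{sign}(v)f(x,\kappa)$ can be transcribed directly from the model equations above; the geometric lengths below are placeholder values, not those of the actual test vehicle:

```python
import math

def g2t_model(x, kappa, v=1.0, L1=4.6, L2=3.0, L3=7.0, M1=1.6):
    """Right-hand side sign(v) * f(x, kappa) of the distance-parametrized
    G2T model, with x = (x3, y3, theta3, beta3, beta2) and
    kappa = tan(alpha)/L1.  The lengths are illustrative placeholders."""
    _, _, theta3, beta3, beta2 = x
    sv = 1.0 if v >= 0 else -1.0
    C1 = math.cos(beta2) + M1 * math.sin(beta2) * kappa
    return (
        sv * math.cos(beta3) * C1 * math.cos(theta3),
        sv * math.cos(beta3) * C1 * math.sin(theta3),
        sv * math.sin(beta3) / L3 * C1,
        sv * ((math.sin(beta2) - M1 * math.cos(beta2) * kappa) / L2
              - math.sin(beta3) / L3 * C1),
        sv * (kappa - math.sin(beta2) / L2 + M1 / L2 * math.cos(beta2) * kappa),
    )

def euler_step(x, kappa, v, ds=0.01):
    """One forward-Euler step of length ds along the path parameter s."""
    dx = g2t_model(x, kappa, v)
    return tuple(xi + ds * di for xi, di in zip(x, dx))
```

With all angles and the curvature set to zero, the model reduces to pure translation along the semitrailer's heading, which gives a quick consistency check of the transcription.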
The augmented model of the G2T with a car-like tractor  can be expressed in the following form $$\begin{aligned} \label{j1:driftless_system} \frac{\text{d}z}{\text{d}s} = f_z(z(s),u_p(s)) = \begin{bmatrix} v(s)f(x(s),\tan\alpha(s)/L_1) \\ \omega(s) \\ u_\omega(s) \end{bmatrix},\end{aligned}$$ where its state-space ${\mathbb Z}\subset\mathbb R^7$ is defined as follows $$\begin{aligned} \mathbb Z = \left\{ z\in\mathbb R^7 \mid |\beta_3| < \pi/2, \hspace{2pt} \hspace{2pt} |\beta_2| < \pi/2,\hspace{2pt} |\alpha|\leq\alpha_{\text{max}} ,\hspace{2pt} |\omega|\leq\omega_{\text{max}} ,\hspace{2pt} C_1(\beta_2,\tan\alpha/L_1)>0 \right\},\end{aligned}$$ where $C_1(\beta_2,\tan\alpha/L_1)$ is defined in . During path planning, the control signals are $u_p = \begin{bmatrix} v & u_\omega \end{bmatrix}^T \in {\mathbb U}_p$, where ${\mathbb U}_p=\{-1,1\}\times [-u_{\omega,\text{max}},u_{\omega,\text{max}}]$. Here, $u_\omega$ denotes the steering angle acceleration and the longitudinal velocity $v$ is constrained to $\pm1$ and determines the direction of motion. It is assumed that the perception layer provides the path planner with a representation of the surrounding obstacles $\mathbb Z_{\text{obs}}$. In the formulation of the path planning problem, it is assumed that $\mathbb Z_{\text{obs}}$ can be described analytically ($e.g.$, circles, ellipsoids, polytopes or other bounding regions [@lavalle2006planning]). Therefore, the free space, where the vehicle is not in collision with any obstacles, can be defined as $\mathbb Z_{\text{free}} = \mathbb Z \setminus \mathbb Z_{\text{obs}}$.
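The constraints defining $\mathbb Z$ translate directly into a feasibility check. In the sketch below, $\alpha_{\text{max}}$, $\omega_{\text{max}}$ and the geometry are placeholder values, and the obstacle test is left abstract as a user-supplied predicate:

```python
import math

ALPHA_MAX = math.radians(42)   # placeholder steering limit
OMEGA_MAX = 0.6                # placeholder steering-rate limit
M1, L1 = 1.6, 4.6              # placeholder geometry

def in_state_space(z):
    """Check z = (x3, y3, theta3, beta3, beta2, alpha, omega) against Z."""
    _, _, _, beta3, beta2, alpha, omega = z
    C1 = math.cos(beta2) + M1 * math.sin(beta2) * math.tan(alpha) / L1
    return (abs(beta3) < math.pi / 2 and abs(beta2) < math.pi / 2
            and abs(alpha) <= ALPHA_MAX and abs(omega) <= OMEGA_MAX
            and C1 > 0)

def in_free_space(z, in_obstacle):
    """z is collision-free if it lies in Z and not in Z_obs."""
    return in_state_space(z) and not in_obstacle(z)
```

A jack-knifed configuration (a joint angle beyond $\pi/2$) is rejected by the first two conditions, which is exactly why the planner never needs to reason about such states online.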
Given an initial state $z_I = \begin{bmatrix} x_I^T & \alpha_I & 0 \end{bmatrix}^T \in\mathbb Z_{\text{free}}$ and a desired goal state $z_G= \begin{bmatrix} x_G^T & \alpha_G & 0 \end{bmatrix}^T \in\mathbb Z_{\text{free}}$, a feasible solution to the path planning problem is a distance-parametrized control signal $u_{p}(s)\in {\mathbb U}_p$, $s\in[0,s_G]$ which results in a nominal path $z(s)$, $s\in[0,s_G]$ that is feasible, collision-free and moves the vehicle from its initial state $z_I$ to the desired goal state $z_G$. Among all feasible solutions to this problem, the optimal solution is the one that minimizes a specified cost functional $J$. The optimal path planning problem is defined as follows. \[j1:pathplanningproblem\] Given the 5-tuple ($z_I,z_G, \mathbb Z_{\text{free}},{\mathbb U}_p, J$), find the path length $s_G\in\mathbb R_+$ and a distance-parametrized control signal $u_{p}(s)= \begin{bmatrix} v(s) & u_{\omega}(s) \end{bmatrix}^T$, $s\in[0,s_G]$ that solves the following OCP: \[j1:eq:MotionPlanningOCP\] $$\begin{aligned} \operatorname*{minimize}_{u_{p}(\cdot), \hspace{0.5ex}s_{G} }\hspace{3.7ex} & J = \int_{0}^{s_{G}}L(x(s),\alpha(s), \omega(s), u_\omega(s))\,\mathrm{d}s \label{j1:eq:MotionPlanningOCP_obj} \\ \operatorname*{subject\:to}\hspace{3ex} & \frac{\mathrm{d}z}{\mathrm{d}s} = f_z(z(s),u_p(s)), \label{j1:eq:MotionPlanningOCP_syseq} \\ & z(0) = z_I, \quad z(s_{G}) = z_G, \label{j1:eq:MotionPlanningOCP_initfinal} \\ & z(s) \in \mathbb Z_{\text{free}}, \quad u_{p}(s) \in {\mathbb U}_p, \label{j1:eq:MotionPlanningOCP_constraints} \end{aligned}$$ where $L:\mathbb R^5\times\mathbb R\times\mathbb R\times\mathbb R\rightarrow \mathbb R_+$ is the cost function. The optimal path planning problem in  is a nonlinear OCP which is often, depending on the shape of $\mathbb Z_{\text{free}}$, highly non-convex. 
Thus, the OCP in  is in general hard to solve by directly invoking a numerical optimal control solver [@bergman2018combining; @zhang2018optimization] and sampling-based path planning algorithms are commonly employed to obtain an approximate solution [@lavalle2006planning; @reviewFrazzoli2016]. In this work, a lattice-based path planner [@pivtoraiko2009differentially; @CirilloIROS2014] is used and the framework is presented in Section \[j1:sec:MotionPlanner\]. For the path-following control design, a nominal path that the vehicle is expected to follow is defined as $(x_r(s),u_r(s)), s\in[0,s_{G}]$, where $x_r(s)$ denotes the nominal vehicle states and $u_r(s)=\begin{bmatrix} v_r(s) & \kappa_r(s) \end{bmatrix}^T$ the nominal velocity and curvature control signals. The objective of the path-following controller is to locally stabilize the vehicle around this path in the presence of disturbances and model errors. When path-following control is considered, it is not crucial that the vehicle is located at a specific nominal state in time, but rather that the nominal path is executed with a small and bounded path-following error . The path-following control problem is formally defined as follows. \[j1:pathfollowingproblem\] Given a controlled G2T with a car-like tractor  and a feasible nominal path $(x_r(s),u_r(s))$, $s\in[0,s_{G}]$. Find a control-law $\kappa(t)=g(s(t),x(t))$ with $v(t)=v_r(s(t))$, such that the solution to the closed-loop system\ $\dot x(t)=v_r(s(t))f(x(t),g(s(t), x(t)))$ locally around the nominal path satisfies the following: For all $t \in\{t\in\mathbb{R}_+ \mid 0 \leq s(t) \leq s_G \}$, there exist positive constants $r$, $\rho$ and $\epsilon$ such that 1. $||\tilde x(t)||\leq \rho ||\tilde x(t_0)||e^{-\epsilon (t-t_0)}, \quad \forall ||\tilde x(t_0)||<r$, 2. $\dot{s}(t)>0$. 
If the nominal path were infinitely long ($s_G\rightarrow \infty$), Definition \[j1:pathfollowingproblem\] coincides with the definition of local exponential stability of the path-following error model around the origin [@khalil]. In this work, the path-following controller is designed by first deriving a path-following error model. This derivation as well as the design of the path-following controller are presented in Section \[j1:sec:Controller\]. System properties ----------------- Some relevant and important properties of  that will be exploited for path planning are presented below. ![Illustration of a circular equilibrium configuration for the G2T with a car-like tractor. Given a constant steering angle $\alpha_e$, there exists a unique pair of joint angles, $\beta_{2,e}$ and $\beta_{3,e}$, where $\dot\beta_2 = \dot\beta_3 =0$.[]{data-label="j1:fig:eq_conf"}](truck_lin_circle.pdf){width="0.6\linewidth"} ### Circular equilibrium configurations Given a constant steering angle $\alpha_e$, there exists a circular equilibrium configuration where $\dot \beta_2$ and $\dot\beta_3$ are equal to zero, as illustrated in Figure \[j1:fig:eq\_conf\]. In stationarity, the vehicle will travel along circles with radii determined by $\alpha_e$ [@altafini2002hybrid]. By utilizing trigonometry, the equilibrium joint angles, $\beta_{2e}$ and $\beta_{3e}$, are related to $\alpha_e$ through the following equations \[j1:eq:equ\] $$\begin{aligned} \beta_{3e} &= \operatorname*{sign}{(\alpha_e)} \arctan \left( \frac{L_3}{R_3} \right) \label{j1:eq:equ1},\\ \beta_{2e} &= \operatorname*{sign}{(\alpha_e)} \left( \arctan \left( \frac{M_1}{R_1} \right) + \arctan \left( \frac{L_2}{R_2} \right)\right), \label{j1:eq:equ2} \end{aligned}$$ where $R_1 = L_1/ |\tan \alpha_e|$, $R_2 = (R^2_1 + M_1^2 - L_2^2)^{1/2}$ and $R_3 = (R_2^2 - L_3^2)^{1/2}$. 
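The equilibrium relations above translate directly into code. The sketch below uses made-up vehicle dimensions $L_1$, $L_2$, $L_3$, $M_1$ (the test vehicle's actual values are not given in this excerpt):

```python
import math

def equilibrium_joint_angles(alpha_e, L1=4.6, L2=3.8, L3=8.0, M1=1.6):
    """Joint angles (beta2e, beta3e) at the circular equilibrium for a
    constant steering angle alpha_e; vehicle lengths are placeholder values."""
    assert alpha_e != 0.0, "straight driving: beta2e = beta3e = 0"
    sgn = math.copysign(1.0, alpha_e)
    R1 = L1 / abs(math.tan(alpha_e))
    R2 = math.sqrt(R1**2 + M1**2 - L2**2)
    R3 = math.sqrt(R2**2 - L3**2)
    beta3e = sgn * math.atan(L3 / R3)
    beta2e = sgn * (math.atan(M1 / R1) + math.atan(L2 / R2))
    return beta2e, beta3e

b2, b3 = equilibrium_joint_angles(0.1)
assert b2 > 0 and b3 > 0                  # left turn gives positive joint angles
bm2, bm3 = equilibrium_joint_angles(-0.1)
assert abs(bm2 + b2) < 1e-12 and abs(bm3 + b3) < 1e-12   # odd symmetry in alpha_e
```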
### Symmetry A feasible path $(z(s),u_p(s))$, $s\in[0,s_G]$ to  that moves the system from an initial state $z(0)$ to a final state $z(s_G)$, is possible to reverse in distance and revisit the exact same points in $x$ and $\alpha$ by a simple transformation of the control signal. The result is formalized in Lemma \[j1:L1\]. \[j1:L1\] Denote $z(s),$ $s\in[0,s_G]$, as the solution to  that satisfies $|\alpha(\cdot)|\leq\alpha_{\text{max}}<\pi/2$, when the control signal $u_p(s)\in\mathbb U_p$, $s\in[0,s_G]$ is applied from the initial state $z(0)$ which ends at the final state $z(s_G)$. Moreover, denote $\bar z(\bar s)$, $\bar s\in[0,s_G]$ as the distance-reversed solution to  when the distance-reversed control signal $$\begin{aligned} \label{j1:eq:reversed_controls} \bar u_p(\bar s) = \begin{bmatrix} -v(s_G-\bar s) & u_\omega(s_G-\bar s) \end{bmatrix}^T,\quad \bar s \in[0,s_G] \end{aligned}$$ is applied from the initial state $\bar z(0) = \begin{bmatrix}x(s_G)^T & \alpha(s_G) & -\omega(s_G)\end{bmatrix}^T$. Then, $z(s),$ $s\in[0,s_G]$ and $\bar z(\bar s)$, $\bar s\in[0,s_G]$ are unique and they are related according to $$\begin{aligned} \label{j1:eq:reversed_states} \bar{z}(\bar s) = \begin{bmatrix} x(s_G-\bar s)^T & \alpha(s_G-\bar s) & -\omega(s_G-\bar s)\end{bmatrix}^T,\quad \bar s\in[0,s_G]. \end{aligned}$$ In particular, the final state is $\bar z(s_G)=\begin{bmatrix}x(0)^T & \alpha(0) & -\omega(0)\end{bmatrix}^T$. See Appendix A. Note that the actual state $x(\cdot)$ and steering angle $\alpha(\cdot)$ paths of the system  are fully distance-reversed and it is only the path of the steering angle velocity $\omega(\cdot)$ that changes sign. Moreover, if $\omega(0)$ and $\omega(s_G)$ are equal to zero, the initial and final state constraints coincide. The practical interpretation of the result in Lemma \[j1:L1\] is that any path taken by the G2T with a car-like tractor  with $|\alpha(\cdot)|\leq \alpha_{\text{max}}$ is feasible to follow in the reversed direction. 
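For a distance-sampled path, the transformation in Lemma \[j1:L1\] amounts to reversing the sample order and flipping the signs of $v$ and $\omega$. A minimal sketch, with a simplified tuple representation of states and controls (not the paper's data structures):

```python
def reverse_primitive(zs, ups):
    """Distance-reverse a sampled path per the symmetry lemma.
    zs:  list of states   (x (tuple), alpha, omega)
    ups: list of controls (v, u_omega)
    Returns the reversed state and control samples."""
    zbar = [(x, alpha, -omega) for (x, alpha, omega) in reversed(zs)]
    ubar = [(-v, u_om) for (v, u_om) in reversed(ups)]
    return zbar, ubar

# a forward segment sampled at three points
zs = [((0, 0, 0, 0, 0), 0.00, 0.0),
      ((1, 0, 0, 0, 0), 0.05, 0.1),
      ((2, 0, 0, 0, 0), 0.10, 0.0)]
ups = [(1, 0.2), (1, -0.2), (1, 0.0)]
zbar, ubar = reverse_primitive(zs, ups)
assert zbar[0] == ((2, 0, 0, 0, 0), 0.10, 0.0)   # starts at the old final state
assert zbar[1][2] == -0.1                        # omega changes sign
assert ubar == [(-1, 0.0), (-1, -0.2), (-1, 0.2)]
```

Since $\omega(0)=\omega(s_G)=0$ at lattice vertices, the reversed segment connects the same discrete endpoints in the opposite direction.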
Now, define the reverse optimal path planning problem to  as \[j1:eq:revMotionPlanningOCP\] $$\begin{aligned} \operatorname*{minimize}_{\bar u_{p}(\cdot), \hspace{0.5ex}\bar s_{G} }\hspace{3.7ex} & \bar J = \int_{0}^{\bar s_{G}}L(\bar x(\bar s),\bar \alpha(\bar s), \bar \omega(\bar s), \bar u_\omega(\bar s))\,\text d\bar s \label{j1:eq:revMotionPlanningOCP_obj}\\ \operatorname*{subject\:to}\hspace{3ex} & \frac{\text d\bar z}{\text d\bar s} = f_z(\bar z(\bar s),\bar u_p(\bar s)), \label{j1:eq:revMotionPlanningOCP_syseq} \\ & \bar z(0) = z_G, \quad \bar z(\bar s_{G}) = z_I, \label{j1:eq:revMotionPlanningOCP_initfinal} \\ & \bar z(\bar s) \in \mathbb Z_{\text{free}}, \quad \bar u_{p}(\bar s) \in {\mathbb U}_p. \label{j1:eq:revMotionPlanningOCP_constraints} \end{aligned}$$ Note that the only difference between the OCPs defined in  and , respectively, is that the initial and goal state constraints are switched. In other words,  defines a path planning problem from $z_I$ to $z_G$ and  defines a path planning problem from $z_G$ to $z_I$. It is possible to show that the optimal solutions to these OCPs are also related through the result established in Lemma \[j1:L1\]. \[j1:A-optimal-symmetry\] For all $z\in\mathbb Z_{\text{free}}$ and $u_p\in{\mathbb U}_p$, the cost function $L$ in  satisfies $L(x,\alpha, \omega,u_\omega)=L(x,\alpha, -\omega,u_\omega)$. \[j1:A-optimal-symmetry2\] $z = \begin{bmatrix} x^T & \alpha & \omega\end{bmatrix}^T\in\mathbb Z_{\text{free}} \Leftrightarrow \bar z = \begin{bmatrix} x^T & \alpha & -\omega\end{bmatrix}^T\in\mathbb Z_{\text{free}}$. 
\[j1:T-optimal-symmetry\] Under Assumptions \[j1:A-optimal-symmetry\]–\[j1:A-optimal-symmetry2\], if $(z^*(s), u_p^*(s))$, $s\in [0, s_G^*]$ is an optimal solution to the optimal path planning problem  with optimal objective functional value $J^*$, then the distance-reversed path $(\bar z^*(\bar s),\bar u^*_p(\bar s))$, $\bar s\in [0, \bar s_G^*]$ given by – with $\bar s_G^*=s_G^*$, is an optimal solution to the reverse optimal path planning problem  with optimal objective functional value $\bar J^* = J^*$. See Appendix A. In other words, if an optimal solution to the optimal path planning problem in  or the reversed optimal path planning problem in  is known, an optimal solution to the other one can immediately be derived using the invertible transformation defined in – and $\bar s_G=s_G$. Lattice-based path planner {#j1:sec:MotionPlanner} ========================== As previously mentioned, the path planning problem defined in  is hard to solve by directly invoking a numerical optimal control solver. Instead, it can be combined with classical search algorithms and a discretization of the state-space to build efficient algorithms to solve the path planning problem. By discretizing the state-space $\mathbb Z_d$ of the vehicle in a regular fashion and constraining the motion of the vehicle to a lattice graph $\pazocal{G} = \langle \pazocal{V},\pazocal{E}\rangle$, which is a directed graph embedded in an Euclidean space that forms a regular and repeated pattern, classical graph-search techniques can be used to traverse the graph and compute a path to the goal [@pivtoraiko2009differentially; @CirilloIROS2014]. 
Each vertex $\nu[k] \in \pazocal V$ represents a discrete augmented vehicle state $z[k]\in\mathbb Z_d$ and each edge $e_i \in \pazocal{E}$ represents a motion primitive $m_i$, which encodes a feasible path $(z^i(s), u_p^i(s))$, $s\in [0, s_f^i]$ that moves the vehicle from one discrete state $z[k] \in \mathbb Z_d$ to a neighboring state $z[k+1] \in \mathbb Z_d$, while respecting the system dynamics and its physically imposed constraints. For the remainder of this text, state and vertex will be used interchangeably. Each motion primitive $m_i$ is computed offline and stored in a library containing a set $\pazocal{P}$ of precomputed feasible motion segments that can be used to connect two vertices in the graph. In this work, an OCP solver is used to generate the motion primitives and the complex non-holonomic constraints inherent to the vehicle are in this way handled offline, and what remains during online planning is a search over the set of precomputed motions. Performing a search over a set of precomputed motion primitives is a well-known technique referred to as lattice-based path planning [@pivtoraiko2009differentially; @CirilloIROS2014]. Let $z[k+1]=f_p(z[k],m_i)$ represent the state transition when $m_i$ is applied from $z[k]$, and let $J_p(m_i)$ denote the cost associated with this transition. The complete set of motion primitives $\pazocal{P}$ is computed offline by solving a finite set of OCPs to connect a set of initial states with a set of neighboring states in an obstacle-free environment. The set $\pazocal{P}$ is constructed from the position of the semitrailer at the origin and since the G2T with a car-like tractor  is position-invariant, a motion primitive $m_i\in\pazocal P$ can be translated and reused from all other positions on the grid. The cardinality of the complete set of motion primitives is $|\pazocal{P}|=M$, where $M$ is a positive integer-valued scalar. 
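A minimal sketch of the state transition $z[k+1] = f_p(z[k], m_i)$, exploiting position invariance by translating the primitive's stored end state. The dictionary fields are a hypothetical primitive representation, not the paper's actual data structure:

```python
def apply_primitive(z, m):
    """State transition z[k+1] = f_p(z[k], m): since the model is
    position-invariant, translate the primitive's stored end state by the
    current position. Primitives are assumed stored per discrete start
    orientation and steering angle, so only the position is translated."""
    x3, y3 = z[0] + m["dx"], z[1] + m["dy"]
    return (x3, y3, m["theta_f"], m["alpha_f"])

# hypothetical straight primitive of length 2 (and transition cost J_p)
m_straight = {"dx": 2.0, "dy": 0.0, "theta_f": 0.0, "alpha_f": 0.0, "cost": 2.0}
z1 = apply_primitive((1.0, -1.0, 0.0, 0.0), m_straight)
J_p = m_straight["cost"]
assert z1 == (3.0, -1.0, 0.0, 0.0) and J_p == 2.0
```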
In general, not all motion primitives are applicable from each state $z[k]$ and the set of motion primitives that can be used from a specific state $z[k]$ is denoted $\pazocal P(z[k])\subseteq \pazocal{P}$. The cardinality of $\pazocal P(z[k])$ defines the number of edges that can be used from a given state $z[k]$ and the average $|\pazocal P(z[k])|$ defines the branching factor of the search problem. Therefore, a trade-off between planning time and maneuver resolution has to be made when designing the motion primitive set. Having a large library of diverse motions gives the lattice planner more maneuverability; however, the planning time will increase exponentially with the size of $|\pazocal P(z[k])|$, while a small library gives a faster planning time at the expense of maneuverability. As the branching factor increases, a well-informed heuristic function becomes more and more important in order to maintain real-time performance during online planning [@knepper2006high; @CirilloIROS2014]. A heuristic function estimates the cost-to-go from a state $z[k]\in\mathbb Z_d$ to the goal state $z_G$, and is used as guidance for the online graph search to expand the most promising vertices [@lavalle2006planning; @CirilloIROS2014; @knepper2006high]. The nominal path taken by the vehicle when motion primitive $m_i\in\pazocal{P}$ is applied from $z[k]$, is declared collision-free if it does not collide with any obstacles, $c(m_i,z[k])\in\mathbb Z_{\text{free}}$; otherwise it is declared to be in collision. Define $u_q[k]$ as a discrete, integer-valued control signal that is controlled by the lattice planner and specifies which motion primitive is applied at stage $k$. 
By specifying the set of allowed states $\mathbb Z_d$ and precomputing the set of motion primitives $\pazocal P$, the continuous-time optimal path planning problem  is approximated by the following discrete-time OCP: $$\begin{aligned} \operatorname*{minimize}_{\{u_q[k]\}^{N-1}_{k=0}, \hspace{0.5ex} N}\hspace{3.7ex} & J_{\text{D}} = \sum_{k=0}^{N-1}J_p(m_{u_q[k]}) \label{j1:eq:OCP_discrete} \\ \operatorname*{subject\:to}\hspace{3ex} & z[0] = z_I, \quad z[N] = z_G, \nonumber\\ & z[k+1] = f_{p}(z[k],m_{u_q[k]}), \nonumber \\ & m_{u_q[k]} \in \pazocal P(z[k]), \nonumber \\ & c(m_{u_q[k]},z[k]) \in \mathbb Z_{\text{free}}. \nonumber\end{aligned}$$ The decision variables to this problem are the integer-valued control signal sequence $\{u_q[k]\}^{N-1}_{k=0}$ and its length $N$. A feasible solution is an ordered sequence of collision-free motion primitives $\{m_{u_q[k]}\}^{N-1}_{k=0}$, *i.e.*, a nominal path $(z(s), u_p(s))$, $s\in [0, s_G]$, that connects the initial state $z(0)=z_I$ and the goal state $z(s_G)= z_G$. Given the set of all feasible solutions to , the optimal solution is the one that minimizes the cost function $J_{\text{D}}$. During online planning, the discrete-time OCP in  is solved using the anytime repairing A$^*$ (ARA$^*$) search algorithm [@arastar]. ARA$^*$ is based on standard A$^*$ but initially performs a greedy search with the heuristic function inflated by a factor $\gamma\geq1$. This provides a guarantee that the found solution has a cost $J_D$ that satisfies $J_D \leq \gamma J_D^*$, where $J_D^*$ denotes the optimal cost to . When a solution with guaranteed bound of $\gamma$-suboptimality has been found, $\gamma$ is gradually decreased until an optimal solution with $\gamma=1$ is found or a maximum allowed planning time is reached. With this search algorithm, both real-time performance and suboptimality bounds for the produced solution can be guaranteed. 
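The core of ARA$^*$ is a weighted A$^*$ search with a shrinking inflation factor $\gamma$. The sketch below shows only that core (a single inflated search on a toy grid graph, without ARA$^*$'s reuse of earlier search effort between $\gamma$ decrements):

```python
import heapq

def weighted_astar(start, goal, succ, h, gamma=1.0):
    """A* with the heuristic inflated by gamma >= 1; for an admissible h,
    the returned cost J satisfies J <= gamma * J*."""
    openq = [(gamma * h(start), 0.0, start)]
    best = {start: 0.0}
    while openq:
        f, g, v = heapq.heappop(openq)
        if v == goal:
            return g
        if g > best.get(v, float("inf")):   # stale queue entry
            continue
        for w, cost in succ(v):
            g2 = g + cost
            if g2 < best.get(w, float("inf")):
                best[w] = g2
                heapq.heappush(openq, (g2 + gamma * h(w), g2, w))
    return float("inf")

# toy 4-connected 10x10 grid with unit edge costs (stand-in for the lattice)
def succ(v):
    x, y = v
    return [((x + dx, y + dy), 1.0)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

h = lambda v: abs(v[0] - 9) + abs(v[1] - 9)     # admissible Manhattan heuristic
J_star = weighted_astar((0, 0), (9, 9), succ, h, gamma=1.0)
J_sub = weighted_astar((0, 0), (9, 9), succ, h, gamma=2.0)
assert J_star == 18.0 and J_sub <= 2.0 * J_star
```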
In , it is assumed that $z_I\in \mathbb Z_d$ and $z_G\in \mathbb Z_d$ to make the problem well defined. If $z_I\notin \mathbb Z_d$ or $z_G\notin \mathbb Z_d$, they have to be projected to their closest neighboring state in $\mathbb Z_d$ using some distance metric. Thus, the discretization of the vehicle’s state-space restricts the set of possible initial states the lattice planner can plan from and desired goal states that can be reached exactly. Even though not considered in this work, these restrictions could be alleviated by the use of numerical optimal control [@ipopt] as a post-processing step [@lavalle2006planning; @oliveira2018combining; @andreasson2015fastsmoothing]. The main steps of the path planning framework used in this work are summarized in Workflow \[j1:alg1\] and each step is now explained more thoroughly. **Step 1 – State lattice construction:** 1. ***State-space discretization:*** Specify the resolution of the discretized state-space $\mathbb Z_d$. 2. ***Motion primitive selection:*** Specify the connectivity in the state lattice by selecting pairs of discrete states $\{z_s^i,z_f^i\}$, $i=1,\hdots,M$, to connect. 3. ***Motion primitive generation:*** Design the cost functional $J$ and compute the set of motion primitives $\pazocal P$ that moves the vehicle between $\{z_s^i,z_f^i\}$, $i=1,\hdots,M$. **Step 2 – Efficiency improvements:** 1. ***Motion primitive reduction:*** Systematically remove redundant motion primitives from $\pazocal P$ to reduce the branching factor of the search problem and therefore enhance the online planning time. 2. ***Heuristic function:*** Precompute a heuristic look-up table (HLUT) by calculating the optimal cost-to-go in an obstacle-free environment. **Step 3 – Online path planning:** 1. ***Initialization:*** Project the vehicle’s initial state $z_I$ and desired goal state $z_G$ to $\mathbb Z_d$. 2. ***Graph search:*** Solve the discrete-time OCP in  using ARA$^*$. 3. 
***Return:*** Send the computed solution to the path-following controller or report failure. State lattice construction {#j1:subsec:lattice_creation} -------------------------- The offline construction of the state lattice can be divided into three steps, as illustrated in Figure \[j1:fig:state\_lattice\_construction\]. First, the state-space of the vehicle is discretized with a certain resolution. Second, the connectivity in the state lattice is decided by specifying a finite number of pairs of discrete vehicle states to connect. Third, the motion primitives connecting each of these pairs of vehicle states are generated by the use of numerical optimal control [@ipopt]. Together, these three steps define the resolution and the size of the lattice graph $\pazocal G$ and need to be chosen carefully to maintain a reasonable search time during online planning, while at the same time allowing the vehicle to be flexible enough to maneuver in confined spaces. To maintain a reasonable search space, the augmented state-space of the vehicle $z[k] = \begin{bmatrix} x[k]^T & \alpha[k] &\omega[k]\end{bmatrix}^T$ is discretized into circular equilibrium configurations  at each state in the state lattice. This implies that the joint angles, $\beta_{2}[k]$ and $\beta_{3}[k]$, are implicitly discretized since they are uniquely determined by the equilibrium steering angle $\alpha[k]$ through the relationships in . However, in between two discrete states in the state lattice, the system is not restricted to circular equilibrium configurations. The steering angle rate $\omega[k]$ is constrained to zero at each vertex in the state lattice to make sure that the steering angle is continuously differentiable, even when multiple motion primitives are combined during online planning. 
The position of the semitrailer $(x_{3}[k],y_{3}[k])$ is discretized to a uniform grid with resolution and the orientation of the semitrailer $\theta_{3}[k]$ is discretized irregularly[^2] into different orientations [@pivtoraiko2009differentially]. This discretization of $\theta_{3}[k]$ is used to make it possible to construct short straight paths, compatible with the chosen discretization of the position from every orientation $\theta_{3}[k]\in\Theta$. Finally, the equilibrium steering angle $\alpha_{e}[k]$ is discretized into $|\Phi|=3$ different angles, where $\Phi = \{-0.1, 0, 0.1\}$. With the proposed state-space discretization, the actual dimension of the discretized state-space $\mathbb Z_d$ is four. Of course, the proposed discretization imposes restrictions on the path planner, but is motivated by the need for fast and deterministic online planning. Motion primitive generation {#j1:subsec:MPrimitiveGen} --------------------------- The motion primitive set $\pazocal{P}$ is precomputed offline by solving a finite set of OCPs that connect a set of initial states $z_s^i \in \mathbb Z_d$ to a set of neighboring states $z_f^i\in \mathbb Z_d$ in a bounded neighborhood in an obstacle-free environment. Unlike our previous work in [@LjungqvistIV2017], the objective functional used during motion primitive generation coincides with the online planning stage-cost $J(m_i)$. This enables the resulting motion plan to be as close as possible to the optimal one, and desirable behaviors can be favored in a systematic way. 
To promote and generate less complex paths that are easier for a path-following controller to execute, the cost function $L$ in  is chosen as $$\begin{aligned} L( x, \alpha, \omega, u_\omega) = 1 + \left\lVert\begin{bmatrix} \beta_3 & \beta_2 \end{bmatrix}^T\right\rVert_\mathbf{Q_1}^2 + \left\lVert\begin{bmatrix} \alpha & \omega & u_\omega \end{bmatrix}^T\right\rVert_\mathbf{Q_2}^2, \label{j1:obj_rev}\end{aligned}$$ where the matrices $\mathbf Q_1 \succeq 0$ and $\mathbf Q_2 \succeq 0$ are design parameters that are used to trade off simplicity of executing the maneuver against the path distance $s_f$. By tuning the weight matrix $\mathbf Q_1$, maneuvers in backward motion with large joint angles, $\beta_2$ and $\beta_3$, that have a higher risk of entering a jack-knife state, can be penalized and therefore avoided during online planning if less complex motion primitives exist. In forward motion, the modes corresponding to the two joint angles $\beta_2$ and $\beta_3$ are stable and are therefore not penalized. To guarantee that the motion primitives in $\pazocal{P}$ move the vehicle between two discrete states in the state lattice, they are constructed by selecting initial states $z^i_{s}\in \mathbb Z_d$ and final states $z^i_{f}\in \mathbb Z_d$ that lie on the grid. 
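With diagonal weight matrices, the stage cost above reduces to a few squared terms plus a constant path-length term. A sketch with placeholder weights (per the discussion above, $\mathbf Q_1$ would be set to zero for forward-motion primitives):

```python
def stage_cost(beta3, beta2, alpha, omega, u_omega,
               Q1=(1.0, 1.0), Q2=(1.0, 1.0, 1.0)):
    """Stage cost L with diagonal Q1, Q2 (weights here are placeholder
    values). Integrating L over the path gives the primitive cost J(m_i)."""
    q_joint = Q1[0] * beta3**2 + Q1[1] * beta2**2
    q_ctrl = Q2[0] * alpha**2 + Q2[1] * omega**2 + Q2[2] * u_omega**2
    return 1.0 + q_joint + q_ctrl

# with all arguments zero the integrand reduces to 1, so J measures path length
assert stage_cost(0, 0, 0, 0, 0) == 1.0
# larger joint angles (jack-knife risk) are penalized more heavily
assert stage_cost(0.3, 0.2, 0, 0, 0) > stage_cost(0.1, 0.1, 0, 0, 0)
```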
A motion primitive in forward motion from $z^i_{s}=\begin{bmatrix} x_s^i & \alpha_s^i & 0\end{bmatrix}^T$ to $z^i_{f}=\begin{bmatrix} x_f^i & \alpha_f^i & 0\end{bmatrix}^T$ is computed by solving the following OCP: $$\begin{aligned} \operatorname*{minimize}_{u^i_\omega(\cdot), \hspace{0.5ex} s^i_{f} }\hspace{3.7ex} & J(m_i) = \int_{0}^{s^i_{f}}L(x^i(s),\alpha^i(s), \omega^i(s), u^i_\omega(s))\,\text ds \label{j1:OCP_mp_gen}\\ \operatorname*{subject\:to}\hspace{3ex} & \frac{\text dx^i}{\text ds} = f( x^i(s),\tan \alpha^i(s)/L_1), \nonumber \\ &\frac{\text d\alpha^i}{\text ds} =\omega^i(s), \quad \frac{\text d\omega^i}{\text ds}= u^i_{\omega}(s), \nonumber \\ & z^i(0) = z_s^i, \quad z^i(s_f) = z_f^i, \nonumber \\ & z^i(s) \in \mathbb Z, \quad |u^i_\omega(s)|\leq u_{\omega,\text{max}}. \nonumber \end{aligned}$$ Note the similarity of the OCP in  to the optimal path planning problem . Here, the obstacle-imposed constraints are neglected and the vehicle is constrained to only move forwards at constant speed $v=1$. The established results in Lemma \[j1:L1\] and Theorem \[j1:T-optimal-symmetry\] are exploited to generate the motion primitives for backward motion. Here, each OCP is solved from the final state $z_f^i$ to the initial state $z_s^i$ in forward motion and the symmetry result in Lemma \[j1:L1\] is applied to recover the backward motion segment. This technique is used to avoid the structurally unstable joint-angle dynamics in backward motion that can cause numerical problems for the OCP solver. Furthermore, Theorem \[j1:T-optimal-symmetry\] guarantees that the optimal solution $(z^i(s), u^i_p(s))$, $s\in [0, s^i_f]$ and the optimal objective functional value $J(m_i)$ remain unaffected. In this work, the OCP in  is solved by deploying the state-of-the-art numerical optimal control solver CasADi [@casadi], combined with the primal-dual interior-point solver IPOPT [@ipopt]. 
Each generated motion primitive is represented as a distance-sampled path in all vehicle states and control signals. Finally, since the system is orientation-invariant, rotational symmetries of the system are exploited[^3] to reduce the number of OCPs that need to be solved during the motion primitive generation [@pivtoraiko2009differentially; @CirilloIROS2014]. Even though the motion primitive generation is performed offline, it is not feasible to exhaustively generate motion primitives to all grid points due to computation time and the high risk of creating redundant and undesirable segments. Instead, for each initial state $z_s^i\in\mathbb Z_d$ with position of the semitrailer at the origin, a careful selection of final states $z_f^i\in\mathbb Z_d$ is performed based on system knowledge and by visual inspection. The OCP solver then only generates motion primitives from this specified set of OCPs. For our full-scale test vehicle, the set of motion primitives from all initial states with $\theta_{3,s}=0$, is illustrated in Figure \[j1:fig:primitives\]. The following can be noted regarding the manual specification of the motion primitive set: - A motion primitive $m_i\in \pazocal P$ is either a straight motion, a heading change maneuver or a parallel maneuver. - The motion primitives in forward motion are more aggressive compared to the ones in backward motion, *i.e.*, a maneuver in forward motion has a shorter path distance compared to a similar maneuver in backward motion. - The final position ($x^i_{3,f},y^i_{3,f}$) of the axle of the semitrailer is manually reconfigured if the ratio between the stage cost $J(m_i)$ and the path distance $s^i_f$ is too high (*e.g.*, if $J(m_i)/s^i_f\geq1.5$ for our application). - When starting in a nonzero equilibrium configuration, the final position ($x^i_{3,f},y^i_{3,f}$) is mainly biased to the first and second quadrants for $\alpha^i_{s}=0.1$ and to the third and fourth quadrants for $\alpha^i_{s}=-0.1$. 
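A sketch of the rotational-symmetry reuse mentioned above: a primitive generated for one discrete start orientation can be rotated to another, so only one OCP per equivalence class needs to be solved. Here only the planar semitrailer pose is rotated; a full implementation would transform the remaining states analogously:

```python
import math

def rotate_primitive(samples, dtheta):
    """Reuse a motion primitive from another discrete start orientation by
    rotating its sampled semitrailer poses (x3, y3, theta3) by dtheta about
    the origin (the primitive set is constructed with the semitrailer at
    the origin, so a pure rotation suffices)."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    return [(c * x - s * y, s * x + c * y, th + dtheta)
            for (x, y, th) in samples]

# a short straight primitive along the x-axis, rotated by 90 degrees
prim = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
rot = rotate_primitive(prim, math.pi / 2)
# the endpoint (2, 0) maps to (0, 2) up to floating-point error
assert abs(rot[2][0]) < 1e-12 and abs(rot[2][1] - 2.0) < 1e-12
```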
Efficiency improvements and online path planning {#j1:subsec:Mreduction} ------------------------------------------------ To improve the online planning time, the set of motion primitives $\pazocal P$ is reduced using the reduction technique presented in [@CirilloIROS2014]. A motion primitive $m_i\in \pazocal P$ with stage cost $J(m_i)$ is removed if its state transition in free-space can be obtained by a combination of the other motion primitives in $\pazocal P$ with a combined total stage cost $J_{\text{comb}}$ that satisfies $J_{\text{comb}}\leq \eta J(m_i)$, where $\eta\geq 1$ is a design parameter. This procedure can be used to reduce the size of the motion primitive set by choosing $\eta>1$, or by selecting $\eta = 1$ to verify that redundant motion primitives do not exist in $\pazocal P$. As previously mentioned, a heuristic function is used to guide the online search in the state lattice. The goal of the heuristic function is to perfectly estimate the cost-to-go at each vertex in the graph. In this work, we rely on a combination of two admissible heuristic functions: Euclidean distance and a free-space HLUT [@knepper2006high]. The HLUT is generated using the techniques presented in [@knepper2006high]. It is computed offline by solving several obstacle-free path planning problems from all initial states $z_I\in\mathbb Z_d$ with position of the semitrailer at the origin, to all final states $z_G\in\mathbb Z_d$ with a specified maximum cut-off cost $J_{\text{cut}}$. As explained in [@knepper2006high], this computation step can be done efficiently by running Dijkstra’s algorithm from each initial state. During each Dijkstra search, the optimal cost-to-come of explored vertices is simply recorded and stored in the HLUT. Moreover, in analogy to the motion primitive generation, the size of the HLUT is kept small by exploiting the position and orientation invariance properties of $\pazocal P$ [@knepper2006high; @CirilloIROS2014]. 
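A compact sketch of the HLUT construction: one Dijkstra search per initial state over the obstacle-free primitive graph, storing the optimal cost-to-come of every vertex reached below the cut-off cost. The toy successor function stands in for the motion-primitive expansion:

```python
import heapq

def build_hlut(starts, succ, J_cut):
    """Free-space heuristic look-up table: for each start state, record the
    optimal cost-to-come to every state reachable below the cut-off cost."""
    hlut = {}
    for s0 in starts:
        dist = {s0: 0.0}
        pq = [(0.0, s0)]
        while pq:
            d, v = heapq.heappop(pq)
            if d > dist.get(v, float("inf")) or d > J_cut:
                continue  # stale entry or beyond the cut-off
            for w, c in succ(v):
                if d + c < dist.get(w, float("inf")) and d + c <= J_cut:
                    dist[w] = d + c
                    heapq.heappush(pq, (d + c, w))
        hlut[s0] = dist
    return hlut

# toy 1-D "lattice" with unit-cost steps as stand-ins for motion primitives
succ = lambda v: [(v - 1, 1.0), (v + 1, 1.0)]
hlut = build_hlut([0], succ, J_cut=3.0)
assert hlut[0][2] == 2.0      # optimal cost-to-come recorded
assert 4 not in hlut[0]       # beyond the cut-off cost, not stored
```

Online, the stored value is looked up (after translating/rotating the query into the table's canonical frame) as an exact free-space cost-to-go estimate.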
The final heuristic function value used during the online graph search is the maximum of these two heuristics. As shown in [@knepper2006high], a HLUT significantly reduces the online planning time, since it takes the vehicle’s nonholonomic constraints into account and enables perfect estimation of cost-to-go in free-space scenarios with no obstacles. Path-following controller {#j1:sec:Controller} ========================= The motion plan received from the lattice planner is a feasible nominal path satisfying the time-scaled model of the G2T with a car-like tractor : $$\begin{aligned} \frac{\text dx_r}{\text ds} = v_{r}(s)f( x_r(s),\kappa_r(s)), \quad s \in[0, s_G], \label{j1:eq:tray:tractor}\end{aligned}$$ where $x_r(s)$ denotes the nominal vehicle states for a specific $s$ and $u_r(s)=\begin{bmatrix} v_r(s) & \kappa_r(s) \end{bmatrix}^T$ the nominal velocity and curvature control signals. The nominal path satisfies the system dynamics, its physically imposed constraints and moves the vehicle in free-space from the vehicle’s initial state $x_r(0)=x_I$ to a desired goal state $x_r(s_G)=x_G$. Here, the nominal path is parametrized in $s$, which is the distance traveled by the rear axle of the car-like tractor. When backward motion tasks are considered and the axle of the semitrailer is to be controlled, it is more convenient to parametrize the nominal path in terms of traveled distance by the axle of the semitrailer $\tilde s$. 
Using the ratio $g_v>0$ defined in , these different path parameterizations are related as $\tilde s(s) = \int_0^{s}g_v(\beta_{2,r}(\tau),\beta_{3,r}(\tau),\kappa_r(\tau))\text d\tau$ and the nominal path  can equivalently be represented as $$\begin{aligned} \label{j1:eq:tray:semitrailer} \frac{\text dx_r}{\text d\tilde s} = \frac{v_{r}(\tilde s)}{g_v(\beta_{2,r}(\tilde s),\beta_{3,r}(\tilde s),\kappa_{r}(\tilde s))}f(x_r(\tilde s),\kappa_r(\tilde s)), \quad \tilde s \in[0,\tilde s_G],\end{aligned}$$ where $\tilde s_G$ denotes the total distance of the nominal path taken by the axle of the semitrailer. According to the problem definition in Definition 2, the objective of the path-following controller is to stabilize the G2T with a car-like tractor  around this nominal path. This is done by first describing the controlled vehicle  in terms of deviation from the nominal path generated by the system in , as depicted in Figure 7. During path execution, $\tilde s(t)$ is defined as the orthogonal projection of the center of the axle of the semitrailer $(x_{3}(t),y_{3}(t))$ onto the nominal path $(x_{3,r}(\tilde s),y_{3,r}(\tilde s))$, $\tilde s\in[0,\tilde s_G]$ at time $t$: $$\begin{aligned} \label{j1:eq:tildes_def} \tilde s(t) = \operatorname*{\arg\min}_{\tilde s\in[0,\tilde s_G]} \left|\left|\begin{bmatrix} x_{3}(t)-x_{3,r}(\tilde s) \\ y_{3}(t)-y_{3,r}(\tilde s)\end{bmatrix}\right|\right|_2.\end{aligned}$$ Using standard geometry, the curvature $\kappa_{3,r}(\tilde s)$ of the nominal path taken by the axle of the semitrailer is given by $$\begin{aligned} \kappa_{3,r}(\tilde s)=\frac{\text{d}\theta_{3,r}}{\text{d}\tilde s}=\frac{\tan\beta_{3,r}(\tilde s)}{L_3}, \quad \tilde s \in[0,\tilde s_G]. \label{j1:eq:kappa3}\end{aligned}$$ Define $\tilde z_3(t)$ as the signed lateral distance from the center of the axle of the semitrailer $(x_3(t),y_3(t))$ to its orthogonal projection onto the nominal path $(x_{3,r}(\tilde s),y_{3,r}(\tilde s))$, $\tilde s\in[0,\tilde s_G]$ at time $t$. 
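In an implementation, the orthogonal-projection $\arg\min$ defining $\tilde s(t)$ is typically evaluated over the distance-sampled nominal path; local interpolation between samples, which a practical controller would add, is omitted in this sketch:

```python
import math

def project_onto_path(p, path_xy, path_s):
    """Nearest-sample version of the projection: return the arc length
    s~ of the nominal semitrailer-path sample closest to p = (x3, y3)."""
    i = min(range(len(path_xy)),
            key=lambda k: math.hypot(p[0] - path_xy[k][0],
                                     p[1] - path_xy[k][1]))
    return path_s[i]

# straight nominal path along the x-axis, sampled every 0.5 m
path_s = [0.5 * k for k in range(21)]
path_xy = [(s, 0.0) for s in path_s]
assert project_onto_path((3.2, 0.4), path_xy, path_s) == 3.0
assert project_onto_path((0.1, 0.0), path_xy, path_s) == 0.0
```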
Introduce the controlled curvature deviation as $\tilde \kappa(t)=\kappa(t)-\kappa_{r}(\tilde s(t))$, define the orientation error of the semitrailer as $\tilde\theta_3(t)=\theta_3(t)-\theta_{3,r}(\tilde s(t))$ and define the joint angular errors as $\tilde\beta_3(t)=\beta_3(t)-\beta_{3,r}(\tilde s(t))$ and $\tilde\beta_2(t)=\beta_2(t)-\beta_{2,r}(\tilde s(t))$, respectively. Define $\Pi(a,b) = \{t\in\mathbb R_+ \mid a \leq \tilde s(t) \leq b \}$ as the time-interval when the covered distance along the nominal path $\tilde s(t)$  is between $a\in\mathbb R_+$ and $b\in\mathbb R_+$, where $0\leq a\leq b\leq \tilde s_G$. Then, using the Frenet-Serret formula, the progression along the nominal path $\tilde s(t)$ and the signed lateral distance $\tilde z_3(t)$ to the nominal path can be modeled as: \[j1:eq:model\_s\_dot\_sz\] $$\begin{aligned} \dot {\tilde s} &= v_3 \frac{v_r\cos \tilde \theta_3}{1-\kappa_{3,r} \tilde z_3}, \quad t\in\Pi(0,\tilde s_G), \label{j1:eq:model_s1} \\ \dot{\tilde z}_3 &= v_3\sin \tilde \theta_3 \label{j1:eq:model_s2}, \hspace{28pt} t\in\Pi(0,\tilde s_G), \end{aligned}$$ where $v_3 = vg_v(\tilde \beta_{2}+\beta_{2,r},\tilde\beta_3 + \beta_{3,r}, \tilde \kappa+ \kappa_{r})$ and the dependencies on $\tilde s$ and $t$ are omitted for brevity. This transformation is valid in a tube around the nominal path in for which $\kappa_{3,r}\tilde z_3<1$. The width of this tube depends on the semitrailer’s nominal curvature $\kappa_{3,r}$, and when it tends to zero (a straight nominal path), $\tilde z_3$ can vary arbitrarily. Essentially, to avoid the singularities in the transformation, we must have that $|\tilde z_3| < |\kappa^{-1}_{3,r}|$ when $\tilde z_3$ and $\kappa_{3,r}$ have the same sign. Note that $v_r\in\{-1,1\}$ is included in  to make $\tilde s(t)$ a monotonically increasing function in time during tracking of nominal paths in both forward and backward motion.
Here, it is assumed that the longitudinal velocity of the tractor $v(t)$ is chosen such that $\text{sign}(v(t))=v_r(\tilde s(t))$ and it is assumed that the orientation error of the semitrailer satisfies . With the above assumptions, $\dot {\tilde s}>0$ during path tracking of nominal paths in both forward and backward motion. ![An illustrative description of the Frenet frame with its moving coordinate system located at the orthogonal projection of the center of the axle of the semitrailer onto the reference path (dashed red curve) in the nominal position of the axle of the semitrailer $(x_{3,0}(\tilde s),y_{3,0}(\tilde s))$, $\tilde s \in [0,\tilde s_G]$. The black tractor-trailer system is the controlled vehicle and the gray tractor-trailer system is the nominal vehicle, or the desired vehicle configuration at this specific value of $\tilde s(t)$.](truck_frenet.pdf){width="0.9\linewidth"} \[j1:fig:frenet\_frame\] The models for the remaining path-following error states $\tilde\theta_3(t)$, $\tilde\beta_3(t)$ and $\tilde\beta_2(t)$ are derived by applying the chain rule, together with equations –,  and : \[j1:eq:model\_s\] $$\begin{aligned} \dot{\tilde\theta}_3 =& v_3 \left( \frac{\tan(\tilde{\beta}_3+\beta_{3,r})}{L_3} - \frac{\kappa_{3,r}\cos \tilde \theta_3}{1-\kappa_{3,r}\tilde z_3} \right), \hspace{61pt} t\in\Pi(0,\tilde s_G), \label{j1:eq:model_s3} \\ \dot{\tilde \beta}_3 =& v_3 \left(\frac{\sin(\tilde \beta_2+\beta_{2,r})-M_1\cos(\tilde \beta_2+\beta_{2,r}) (\tilde \kappa+ \kappa_r)}{L_2\cos(\tilde \beta_3+\beta_{3,r}) C_1(\tilde \beta_2+\beta_{2,r}, \tilde \kappa+ \kappa_r)} - \frac{\tan(\tilde \beta_3+\beta_{3,r})}{L_3} \nonumber \right. \\ &\left. 
-\frac{\cos{\tilde{\theta}_3}}{1-\kappa_{3,r}\tilde z_3}\left(\frac{\sin\beta_{2,r} -M_1 \cos\beta_{2,r}\kappa_r}{L_2\cos\beta_{3,r} C_1(\beta_{2,r},\kappa_r)}-\kappa_{3,r}\right)\right), \quad t\in\Pi(0,\tilde s_G), \label{j1:eq:model_s4} \\ \dot{\tilde \beta}_2 =& v_3\left( \left( \frac{\tilde \kappa+ \kappa_r - \frac{\sin(\tilde \beta_2+\beta_{2,r})}{L_2} + \frac{M_1}{L_2}\cos(\tilde \beta_2+\beta_{2,r})(\tilde \kappa+ \kappa_r)}{\cos(\tilde \beta_3+\beta_{3,r}) C_1(\tilde \beta_2+\beta_{2,r}, \tilde \kappa+ \kappa_r)}\right) \nonumber \right. \\ &\left. -\frac{\cos{\tilde{\theta}_3}}{1-\kappa_{3,r}\tilde z_3}\left( \frac{\kappa_r - \frac{\sin \beta_{2,r}}{L_2} + \frac{M_1}{L_2}\cos \beta_{2,r}\kappa_r}{\cos \beta_{3,r} C_1(\beta_{2,r}, \kappa_r)}\right)\right), \hspace{25pt} t\in\Pi(0,\tilde s_G).\label{j1:eq:model_s5} \end{aligned}$$ A more detailed derivation of  is provided in Appendix A. Together, the differential equations in  and  describe the model of the G2T with a car-like tractor  in terms of deviation from the nominal path generated by the system in . In path-following control, the speed at which the nominal path  is executed is of no primary concern; what matters is that the path is followed with a small path-following error. This means that the progression along the path $\tilde s(t)$ is not explicitly controlled by the path-following controller. However, the dependency on $\tilde s$ in  and  makes the nonlinear system distance-varying. Define the path-following error states as $\tilde x_e = \begin{bmatrix} \tilde z_3 & \tilde\theta_3 & \tilde\beta_3 & \tilde\beta_2\end{bmatrix}^T$, whose model is given by –.
By replacing $v_3$ with $v$ using the relationship defined in , the path-following error model – and the progression along the nominal path , can compactly be expressed as (see Appendix A) \[j1:eq:error\_model\_and\_progression\] $$\begin{aligned} \dot{\tilde s} &= vf_{\tilde s}(\tilde s,\tilde x_e), \hspace{18pt} t\in\Pi(0,\tilde s_G), \label{j1:eq:progression_compact} \\ \dot{\tilde x}_e &= v \tilde f(\tilde s, \tilde x_e, \tilde \kappa),\quad t\in\Pi(0,\tilde s_G), \label{j1:eq:error_model_compact} \end{aligned}$$ where $\tilde f(\tilde s, 0, 0)=0$, $\forall t\in\Pi(0,\tilde s_G)$, *i.e.*, the origin $(\tilde x_e,\tilde \kappa)=(0,0)$ is an equilibrium point. Since $v$ enters linearly in , in analogy to , time-scaling [@sampei1986time] can be applied to eliminate the speed dependence $|v|$ from the model. Therefore, without loss of generality, it is hereafter assumed that the longitudinal velocity of the rear axle of the tractor is chosen as $v(t)=v_r(\tilde s(t))\in\{-1,1\}$, which implies that $\dot{\tilde s}(t) > 0$. Moreover, from the construction of the set of motion primitives $\pazocal P$, each motion primitive encodes a forward or backward motion segment (see Section \[j1:subsec:MPrimitiveGen\]). Local behavior around a nominal path ------------------------------------ The path-following error model in  and  can be linearized around the nominal path by equivalently linearizing  around the origin $(\tilde x_e, \tilde \kappa) = (0,0)$. The origin is by construction an equilibrium point of  and hence a first-order Taylor series expansion yields $$\begin{aligned} \dot{\tilde x}_e = vA(\tilde s(t))\tilde x_e + vB(\tilde s(t))\tilde{\kappa},\quad t\in\Pi(0,\tilde s_G).
\label{j1:eq:lin_sys} \end{aligned}$$ For the special case when the nominal path moves the system either straight forwards or backwards, the matrices $A$ and $B$ simplify to $$\begin{aligned} A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{1}{L_3} & 0 \\ 0 & 0 & -\frac{1}{L_3} & \frac{1}{L_2} \\[2pt] 0 & 0 & 0 & -\frac{1}{L_2} \\ \end{bmatrix}, \quad B= \begin{bmatrix} 0 \\ 0 \\ -\frac{M_1}{L_2} \\[3pt] \frac{L_2 + M_1}{L_2} \end{bmatrix}, \label{j1:eq:lin_AB}\end{aligned}$$ and the characteristic polynomial is $$\begin{aligned} \det{(\lambda I-vA)}=v^2\lambda^2\left(\lambda+\frac{v}{L_3}\right)\left(\lambda+\frac{v}{L_2}\right).\end{aligned}$$ Thus, around a straight nominal path, the linearized system in  is marginally stable in forward motion because of the double integrator and unstable in backward motion , since the system has two poles in the right half plane. Due to the positive off-axle hitching $M_1>0$, the linear system has a zero in some of the output channels [@altafini2002hybrid; @CascadeNtrailernonmin]. As an example, with $C=\begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}$ the transfer function from $\tilde\kappa$ to $\tilde z_3$ is $$\begin{aligned} G(s) = C\left(sI-vA\right)^{-1}vB = \frac{v^3M_1\left(\frac{v}{M_1} - s\right)}{L_2L_3s^2\left(\frac{v}{L_2} + s\right)\left(\frac{v}{L_3} + s\right)}.\end{aligned}$$ Here, it is clear that during path-following of a straight path in forward motion, the positive off-axle hitching $M_1>0$ introduces non-minimum phase properties for the system due to the existence of a zero in the right half-plane (see [@CascadeNtrailernonmin] for an extensive analysis). In backward motion, this zero is located in the left half-plane and the system is instead minimum-phase. It can be shown that the transfer functions from $\tilde\kappa$ to $\tilde \theta_3$ and $\tilde \beta_3$ have the same properties and that the transfer function from $\tilde\kappa$ to $\tilde \beta_2$ has no zero. 
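This stability discussion can be verified numerically. In the sketch below, the geometry parameters $L_2$, $L_3$ are illustrative values, not those of the experimental platform; the eigenvalues of $vA$ are $\{0, 0, -v/L_3, -v/L_2\}$, so backward motion ($v=-1$) flips two of them into the right half-plane:

```python
# Numerical check of the straight-path stability analysis; the geometry
# values L2, L3 below are illustrative only.
import numpy as np

L2, L3 = 3.0, 8.0
A = np.array([[0, 1, 0, 0],
              [0, 0, 1 / L3, 0],
              [0, 0, -1 / L3, 1 / L2],
              [0, 0, 0, -1 / L2]])

# v = +1 (forward): eigenvalues {0, 0, -1/L3, -1/L2} -> marginally stable
# (double integrator); v = -1 (backward): {0, 0, +1/L3, +1/L2} -> unstable.
eig_fwd = np.linalg.eigvals(+A)
eig_bwd = np.linalg.eigvals(-A)
```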
In the sequel, we focus on stabilizing the path-following error model  in some neighborhood around the origin $(\tilde x_e,\tilde \kappa)=(0,0)$. This is done by utilizing the framework presented in [@LjungqvistACC2018], where the closed-loop system consisting of the controlled vehicle and the path-following controller, executing a nominal path computed by a lattice planner, is first modeled as a hybrid system. The framework is tailored for the lattice-based path planner considered in this work and is motivated by the fact that it is well-known from the theory of hybrid systems that switching between stable systems in an inappropriate way can lead to instability of the switched system [@decarlo2000perspectives; @pettersson96]. Connection to hybrid systems {#j1:sec:connection} ---------------------------- The nominal path  is computed online by the lattice planner and is thus a priori unknown. However, it is composed of a finite sequence of precomputed motion primitives $\{m_{u_q[k]}\}^{N-1}_{k=0}$ of length $N$. Each motion primitive $m_i$ is chosen from the set of $M$ possible motion primitives, *i.e.*, $m_i\in\pazocal P$. Along motion primitive $m_i\in\pazocal P$, the nominal path is represented as $(x^i_r(\tilde s),u^i_r(\tilde s)),$ $\tilde s\in[0,\tilde s_f^i]$ and the path-following error model  becomes $$\begin{aligned} \label{j1:eq:error_model_mi} \dot{\tilde x}_e = v_r(\tilde s) \tilde f_i(\tilde s, \tilde x_e, \tilde \kappa),\quad t\in\Pi(0,\tilde s^i_f).\end{aligned}$$ Since the sequence of motion primitives is selected by the lattice planner, it follows that the system can be described as a hybrid system. Define $q : [0,\tilde s_G] \rightarrow \{1,\hdots,M\}$ as a piecewise constant control signal that is selected by the lattice planner.
Then, the path-following error model can be written as a distance-switched continuous-time hybrid system: $$\begin{aligned} \label{j1:eq:error_model_hybrid} \dot {\tilde x}_e = v_r(\tilde s)\tilde f_{q(\tilde s)}(\tilde s, \tilde x_e, \tilde \kappa), \quad t\in\Pi(0,\tilde s_G).\end{aligned}$$ This hybrid system is composed of $M$ different subsystems, where only one subsystem is active for each $\tilde s\in[0,\tilde s_G]$. Here, $q(\tilde s)$ is assumed to be right-continuous and from the construction of the motion primitives, it holds that there are finitely many switches in finite distance [@decarlo2000perspectives; @pettersson96]. We now turn to the problem of designing the hybrid path-following controller $\tilde\kappa = g_{q(\tilde s)}(\tilde x_e)$, such that the path-following error is upper bounded by an exponential decay during the execution of each motion primitive $m_i\in\pazocal P$, individually. Design of the hybrid path-following controller {#j1:sec:feedback_design} ---------------------------------------------- \[j1:sec:lowlevelcontrol\] The synthesis of the path-following controller is performed separately for each motion primitive $m_i\in\pazocal P$. The class of hybrid path-following controllers is limited to piecewise linear state-feedback controllers with feedforward action. Denote the path-following controller dedicated to motion primitive $m_i\in\pazocal P$ as $\kappa (t) = \kappa_r(\tilde s(t))+K_i \tilde x_e(t)$. When applying this control law to the path-following error model in , the nonlinear closed-loop system can, in a compact form, be written as $$\begin{aligned} \label{j1:eq:error_model_mi_cl} \dot{\tilde x}_e = v_r(\tilde s)\tilde f_i(\tilde s, \tilde x_e, K_i \tilde x_e) = v_r(\tilde s)\tilde f_{cl,i}(\tilde s, \tilde x_e), \quad t\in\Pi(0,\tilde s^i_f),\end{aligned}$$ where $\tilde x_e = 0$ is an equilibrium point, since $\tilde f_{cl,i}(\tilde s, 0)=\tilde f_{i}(\tilde s, 0, 0)=0$, $\forall \tilde s\in[0,s^i_f]$.
The state-feedback controller $\tilde \kappa = K_i \tilde x_e$ is intended to be designed such that the path-following error is locally bounded and decays towards zero during the execution of $m_i\in\pazocal P$. This is guaranteed by Theorem \[j1:T3\]. Assume $\tilde f_{cl,i}:[0,\tilde s_f^i] \times \tilde{\mathbb{X}}_e \rightarrow \mathbb R^4$ is continuously differentiable with respect to $\tilde x_e \in \tilde{\mathbb{X}}_e = \{ \tilde x_e \in \mathbb R^4 \mid \|\tilde x_e\|_2 < r \}$ and the Jacobian matrix $[\partial \tilde f_{cl,i} / \partial \tilde{x}_e]$ is bounded and Lipschitz on $\tilde{\mathbb{X}}_e$, uniformly in $\tilde s\in [0,\tilde s_f^i]$. \[j1:A2\] Consider the closed-loop system in . Under Assumption \[j1:A2\], let $$\begin{aligned} A_{cl,i}(\tilde s)=v_r(\tilde s)\frac{\partial \tilde f_{cl,i}}{\partial \tilde{x}_e}(\tilde s,0). \label{j1:Acli} \end{aligned}$$ If there exist a common matrix $ P_i\succ 0$ and a positive constant $\epsilon$ that satisfy $$\begin{aligned} A_{cl,i}(\tilde s)^{T} P_i +P_iA_{cl,i}(\tilde s) \preceq -2\epsilon P_i \quad \forall \tilde s \in [0,\tilde s_f^i], \label{j1:eq:lyap} \end{aligned}$$ then the following inequality holds $$\begin{aligned} \label{j1:convergece_LTV} ||\tilde x_e(t)|| \leq \rho_i||\tilde x_e(0)|| e^{-\epsilon t},\quad \forall t\in\Pi(0,\tilde s^i_f), \end{aligned}$$ where $\rho_i=\text{Cond}(P_i)$ is the condition number of $ P_i$. \[j1:T3\] See, *e.g.*, [@khalil]. Theorem \[j1:T3\] guarantees that if the feedback gain $K_i$ is designed such that there exists a quadratic Lyapunov function $V_i(\tilde x_e) = \tilde x_e^T P_i \tilde x_e$ for  around the origin satisfying $\dot V_i \leq -2\epsilon V_i$, then a small disturbance in the initial path-following error $\tilde x_e(0)$ results in a path-following error state trajectory $\tilde x_e(t)$ whose norm is upper bounded by an exponential decay.
In analogy to [@LjungqvistACC2018], the condition in  can be reformulated as a controller synthesis problem using linear matrix inequality (LMI) techniques. By using the chain rule, the matrix $A_{cl,i}(\tilde s)$ in  can be written as $$\begin{aligned} \label{j1:eq:linearizaion_Acl} A_{cl,i}(\tilde s) &= v_r(\tilde s)\frac{\partial \tilde f_i}{\partial \tilde x}(\tilde s,0,0) + v_r(\tilde s)\frac{\partial \tilde f_i}{\partial \tilde \kappa}(\tilde s,0,0)K_i \triangleq A_i(\tilde s)+B_i(\tilde s)K_i.\end{aligned}$$ Furthermore, assume the pairs $[A_i(\tilde s),B_i(\tilde s)]$ lie in the convex polytope $\mathbb S_i$, $\forall \tilde s\in[0,s^i_f]$, where $\mathbb S_i$ is represented by its $L_i$ vertices $$\begin{aligned} [A_i(\tilde s),B_i(\tilde s)] \in \mathbb S_i = \textbf{Co} \left\{[A_{i,1},B_{i,1}],\hdots,[A_{i,L_i},B_{i,L_i}] \right\}, \label{j1:def:polytope}\end{aligned}$$ where **Co** denotes the convex hull. Now, condition  in Theorem \[j1:T3\] can be reformulated as [@boyd1994linear]: $$\begin{aligned} \label{j1:matrixineq_nonconvex} (A_{i,j}+B_{i,j}K_i)^TS_i + S_i(A_{i,j}+B_{i,j}K_i) \preceq -2\epsilon S_i, \quad j=1,\hdots,L_i.\end{aligned}$$ This matrix inequality is not jointly convex in $S_i$ and $K_i$. However, if $\epsilon>0$ is fixed, using the bijective transformation $Q_i=S_i^{-1}\succ 0$ and $Y_i=K_iS_i^{-1}\in \mathbb R ^{1\times 4}$, the matrix inequality in  can be rewritten as an LMI in $Q_i$ and $Y_i$ [@wolkowicz2012handbook]: $$\begin{aligned} \label{j1:eq:matrixineq_convex} Q_iA_{i,j}^T + Y_i^T B_{i,j}^T + A_{i,j}Q_i+B_{i,j}Y_i + 2\epsilon Q_i \preceq 0, \quad j=1,\hdots,L_i.\end{aligned}$$ Hence, it is an LMI feasibility problem to find a linear state-feedback controller that satisfies condition  in Theorem \[j1:T3\]. If $Q_i$ and $Y_i$ are feasible solutions to , the quadratic Lyapunov function is $V_i(\tilde x_e) = \tilde x_e^T Q_i^{-1}\tilde x_e$ and the linear state-feedback controller is $\tilde \kappa = Y_iQ_i^{-1}\tilde x_e$.
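As a complement to the polytopic LMI synthesis, the certificate in Theorem \[j1:T3\] can be checked numerically for a single vertex. The sketch below is an assumption-laden illustration, not the synthesis used here: it takes the straight-path matrices from the linearization with illustrative geometry values, obtains one possible stabilizing gain $K_i$ via an algebraic Riccati equation (LQR), and then certifies the decay rate through a Lyapunov equation:

```python
# Single-vertex sanity check (forward motion, straight nominal path).
# Geometry values are illustrative; the LQR gain is just one possible
# choice of K_i, not the paper's LMI-based design.
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

L2, L3, M1 = 3.0, 8.0, 1.0
A = np.array([[0, 1, 0, 0],
              [0, 0, 1 / L3, 0],
              [0, 0, -1 / L3, 1 / L2],
              [0, 0, 0, -1 / L2]])
B = np.array([[0.0], [0.0], [-M1 / L2], [(L2 + M1) / L2]])

# One way to obtain a stabilizing gain (v = +1): kappa_tilde = K x_e.
P_are = solve_continuous_are(A, B, np.eye(4), np.eye(1))
K = -(B.T @ P_are)
A_cl = A + B @ K

# Certificate: solve A_cl^T P + P A_cl = -I. P > 0 iff A_cl is Hurwitz,
# and the decay condition holds with eps = 1 / (2 * lambda_max(P)).
P = solve_continuous_lyapunov(A_cl.T, -np.eye(4))
P = 0.5 * (P + P.T)  # symmetrize against round-off
eps = 1.0 / (2.0 * np.linalg.eigvalsh(P).max())
```

Since $-I \preceq -P/\lambda_{\max}(P)$, this $\epsilon$ satisfies $A_{cl}^T P + P A_{cl} \preceq -2\epsilon P$, i.e., condition  at this vertex.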
The feedback control design is performed separately for each motion primitive $m_i\in \pazocal P$, where the feedback controller $\tilde \kappa = K_i \tilde x_e$ and the corresponding Lyapunov function $V_i(\tilde x_e)=\tilde x_e^TS_i\tilde x_e$ are dedicated to motion primitive $m_i\in\pazocal P$. If a common quadratic Lyapunov function exists that satisfies  $\forall m_i \in \pazocal P$ (*i.e.*, $Q_i=Q$, but $Y_i$ can vary), then the path-following error is guaranteed to exponentially decay towards zero under an arbitrary sequence of motion primitives [@boyd1994linear; @decarlo2000perspectives]. This is, however, not possible for underactuated vehicles, where the Jacobian linearization takes on the form in . \[j1:P1\] Consider the switched linear system $$\begin{aligned} \dot x = vAx+vBu, \quad v\in\{ -1, 1\}, \label{j1.eq:switched_lin_v_sys} \end{aligned}$$ where $A\in \mathbb R^{n\times n}$ and $B\in \mathbb R^{n\times m}$. When $\text{rank}(B)<n$, there exists no hybrid linear state-feedback control law of the form $$\begin{aligned} u=\begin{cases} K_1x, \quad v = 1 \\ K_2x, \quad v = -1 \\ \end{cases}, \label{j1:hybrid_ctrl} \end{aligned}$$ where $K_1\in\mathbb R^{m \times n}$ and $K_2\in\mathbb R^{m \times n}$, such that the closed-loop system is quadratically stable with a quadratic Lyapunov function $V(x)=x^TPx$, $\dot V(x) < 0$ and $ P\succ 0$. See [@LjungqvistACC2018]. A direct consequence of Theorem \[j1:P1\] is that it is not possible to design a single state-feedback controller $\tilde \kappa = K\tilde x_e$ such that the closed-loop system  is locally quadratically stable [@hybridcontrol2001] along nominal paths that are composed of backward and forward motion segments. From Theorem \[j1:P1\], it is clear that for hybrid nonlinear systems, where the Jacobian linearization can be written as in , it is not possible to design a path-following controller  such that local quadratic stability in continuous-time can be guaranteed.
In the next section, a systematic framework is presented for analyzing the behavior of the distance-switched continuous-time hybrid system in , when the hybrid path-following controller $\tilde \kappa=K_{q(\tilde s)}\tilde x_e$ already has been designed. Convergence along a combination of motion primitives {#j1:sec:convergence} ---------------------------------------------------- Consider the continuous-time hybrid system in  with the hybrid path-following controller $\tilde\kappa= K_{q(\tilde s)}\tilde x_e$ that has been designed following the steps presented in Section \[j1:sec:lowlevelcontrol\]. Assume motion primitive $m_i\in\pazocal P$ is switched in at path distance $\tilde s_k$, $i.e.$, $q(\tilde s(t))=i$, for all $t\in\Pi(\tilde s_k, \tilde s_k + \tilde s_f^i)$. We are now interested in analyzing the evolution of the path-following error $\tilde x_e(t)$ during the execution of this motion primitive. Since the longitudinal velocity of the tractor is selected as $v(t)=v_r(\tilde s(t))$, it holds that $\dot{\tilde s}(t)>0$ and it is possible to eliminate the time-dependency in the path-following error model . By applying the chain rule, we get $\frac{\text d\tilde x_e}{\text d\tilde s}=\frac{\text d\tilde x_e}{\text dt}\frac{\text dt}{\text d\tilde s}=\frac{\text d\tilde x_e}{\text dt}\frac{1}{\dot{\tilde s}}$. Hence, using , the distance-based version of the path-following error model  can be represented as $$\begin{aligned} \frac{\text d\tilde x_e}{\text d\tilde s} = \frac{\tilde f_{cl,i}(\tilde s, \tilde x_e(\tilde s))}{f_{\tilde s}(\tilde s,\tilde x_e(\tilde s))}, \quad \tilde s \in[\tilde s_k, \tilde s_k + \tilde s_f^i], \label{j1:eq:error_states_distance}\end{aligned}$$ where $\tilde x_e(\tilde s_k)$ is given.
The evolution of the path-following error $\tilde x_e(\tilde s)$ becomes $$\begin{aligned} \tilde x_e(\tilde s_k+\tilde s^i_f) = \tilde x_e(\tilde s_k) + \bigintsss_{\tilde s_k}^{\tilde s_k+\tilde s_f^i}\frac{\tilde f_{cl,i}(\tilde s, \tilde x_e(\tilde s))}{f_{\tilde s}(\tilde s,\tilde x_e(\tilde s))}\text d\tilde s \triangleq T_i(\tilde x_e(\tilde s_k)), \label{j1:eq:error_states_lattice}\end{aligned}$$ where $\tilde x_e(\tilde s_k)$ denotes the path-following error when motion primitive $m_i\in\pazocal P$ is started and $\tilde x_e(\tilde s_k+\tilde s^i_f)$ denotes the path-following error when the execution of $m_i$ is finished. The integral in  does not have an analytical solution. However, numerical integration can be used to compute a local approximation of the evolution of $\tilde x_e(\tilde s)$ between the two switching points $\tilde s_k$ and $\tilde s_k + \tilde s_f^i$. A first-order Taylor series expansion of  around the origin $\tilde x_e(\tilde s_k) = 0$ yields $$\begin{aligned} \tilde x_e(\tilde s_k+\tilde s^i_f) = T_i(0) + \underbrace{\left.\frac{\text dT_i(\tilde x_e(\tilde s_k))}{\text d\tilde x_e(\tilde s_k)}\right|_{(0)}}_{=F_i}\tilde x_e(\tilde s_k). \label{j1:eq:lin_disc}\end{aligned}$$ The term $ T_i(0)=0$, since $\tilde f_{cl,i}(\tilde s,0) = 0$, $\forall\tilde s\in[\tilde s_k, \tilde s_k + \tilde s_f^i]$. Denote $\tilde x_e[k] = \tilde x_e (\tilde s_k)$, $\tilde x_e[k+1] = \tilde x_e(\tilde s_k+ \tilde s_f^i)$ and $u_q[k] = q(\tilde s_k) = i$. Using, *e.g.*, finite differences, the evolution of the path-following error  after motion primitive $m_i\in\pazocal P$ has been executed can be approximated as a linear discrete-time system $$\begin{aligned} \tilde x_e[k+1] = F_i \tilde x_e[k]. \label{j1:eq:lin_disc_transition}\end{aligned}$$ Repeating this procedure for all $M$ motion primitives, a set of $M$ transition matrices $\mathbb F=\{F_1,\hdots,F_M\}$ can be computed.
Then, the discrete-time system that locally around the origin describes the evolution of the path-following error  between each switching point can be described as a linear discrete-time switched system: $$\begin{aligned} \tilde x_e[k+1]=F_{u_q[k]}\tilde x_e[k], \quad u_q[k]\in\{1,\hdots,M\}, \label{j1:eq:swithing_system}\end{aligned}$$ where the motion primitive sequence $\{u_q[k]\}_{k=0}^{N-1}$ and its length $N$ are unknown at the time of the analysis. Exponential decay of the solution $\tilde x_e[k]$ to  is guaranteed by Theorem \[j1:T4\]. \[j1:T4\] Consider the linear discrete-time switched system in . Suppose there exist a matrix $ S\succ 0$ and an $\eta\geq 1$ that satisfy \[j1:eq:dlmitotal\] $$\begin{aligned} I \preceq S &\preceq \eta I \label{j1:cond_S},\\ \label{j1:dlmi} F_j^T S F_j - S &\preceq - \mu S, \quad \forall j \in \{ 1,\hdots,M\}, \end{aligned}$$ where $0<\mu<1$ is a constant. Then, under arbitrary switching for $k \geq 0$ the following inequality holds $$\begin{aligned} \label{ineq:discretetime} \|\tilde x_e[k]\| \leq \|\tilde x_e[0]\|\eta^{1/2}\lambda^{k}, \end{aligned}$$ where $\lambda = \sqrt{1 - \mu}$ and $\eta=\text{Cond}(S)$ denotes the condition number of $S$. See [@LjungqvistACC2018]. For a fixed $\mu$,  is a set of LMIs in the variables $S$ and $\eta$. With $\mu$, $\eta$ and $S$ as variables, the problem in  is a generalized eigenvalue problem and bisection can be used to solve the optimization problem while, $e.g.$, maximizing the decay rate $\mu$ and/or minimizing the condition number $\eta$ of the matrix $S$. The result in Theorem \[j1:T4\] establishes that the upper bound on the path-following error at the switching points exponentially decays towards zero. Thus, the norm of the initial path-following error $\Vert \tilde x_e(\tilde s_k)\Vert$, when starting the execution of a new motion primitive, will decrease as $k$ grows.
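For a linearized closed-loop segment with constant matrices, the flow map over a primitive of length $\tilde s_f$ is a matrix exponential, which can stand in for the transition matrix $F_i$. The sketch below is illustrative only: it builds one "forward" and one "backward" primitive (geometry values and LQR gains are assumptions) and checks the necessary condition $\rho(F_i)<1$ for each. The common-Lyapunov LMIs of Theorem \[j1:T4\] would additionally be needed to certify arbitrary switching and are not reproduced here.

```python
# Illustrative construction of transition matrices F_i for one forward
# and one backward primitive; geometry and gains are assumptions.
import numpy as np
from scipy.linalg import expm, solve_continuous_are

L2, L3, M1 = 3.0, 8.0, 1.0
A = np.array([[0, 1, 0, 0],
              [0, 0, 1 / L3, 0],
              [0, 0, -1 / L3, 1 / L2],
              [0, 0, 0, -1 / L2]])
B = np.array([[0.0], [0.0], [-M1 / L2], [(L2 + M1) / L2]])

def gain(Av, Bv):
    # LQR gain for the direction-scaled pair (v*A, v*B); u = K x.
    P = solve_continuous_are(Av, Bv, np.eye(4), np.eye(1))
    return -(Bv.T @ P)

s_f = 10.0  # illustrative primitive length
F = {}
for v in (1, -1):               # forward / backward primitive
    K = gain(v * A, v * B)
    A_cl = v * (A + B @ K)      # closed-loop error dynamics in this mode
    F[v] = expm(A_cl * s_f)     # transition matrix over one primitive

# Spectral radii: a necessary condition for each mode individually.
rho = {v: max(abs(np.linalg.eigvals(Fv))) for v, Fv in F.items()}
```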
Moreover, combining Theorem \[j1:T3\] and Theorem \[j1:T4\], this implies that the upper bound on the continuous-time path-following error $\Vert \tilde x_e(t)\Vert$ will exponentially decay towards zero. This result is formalized in Corollary \[j1:C1\]. \[j1:C1\] Consider the hybrid system in  with the path-following controller $\tilde \kappa = K_{q(\tilde s)}\tilde x_e$. Assume the conditions in Theorem \[j1:T3\] are satisfied for each mode $i\in\{1,\hdots,M\}$ of and assume the conditions in Theorem \[j1:T4\] are satisfied for the resulting discrete-time switched system . Then, $\forall k\in\mathbb Z_{+}$ and $t\in\Pi(\tilde s_k,\tilde s_k+s_f^i)$ with $q(\tilde s(t))=i$, the continuous-time path-following error $\tilde x_e(t)$ satisfies $$\begin{aligned} \lVert\tilde x_e(t)\rVert\leq \lVert\tilde x_e(t_0)\rVert\eta^{1/2}\rho_i^{1/2}\lambda^{k}, \end{aligned}$$ where $ P_i\succ 0$, $ S \succ 0$, $0<\lambda<1$, $\eta=\text{Cond}(S)$ and $\rho_i=\text{Cond}(P_i)$. See [@LjungqvistACC2018]. The practical interpretation of Corollary \[j1:C1\] is that the upper bound on the continuous-time path-following error decreases exponentially with the number of executed motion primitives. This section is concluded by summarizing the presented workflow: 1. For each motion primitive $m_i \in \pazocal P$, design a path-following controller $\tilde \kappa = K_i\tilde x_e$ such that Theorem \[j1:T3\] holds, *e.g.*, by finding a feasible solution to the LMIs in . 2. For each motion primitive $m_i \in \pazocal P$, compute a discrete-time linear system that locally around the origin describes the evolution of the path-following error during the execution of the motion primitive . 3.
In order to show that the origin of the continuous-time hybrid system in  with the hybrid path-following controller $\tilde \kappa = K_{q(\tilde s)}\tilde x_e$ behaves as desired, show that the derived discrete-time switched system in  satisfies Theorem \[j1:T4\]. In this application, none of the vehicle states are directly observed from the vehicle’s onboard sensors and we instead need to rely on dynamic output feedback [@rugh1996linear], *i.e.*, the hybrid state-feedback controller $\tilde\kappa = K_{q(\tilde s)}\tilde x_e$ operates in series with a nonlinear observer. The observer naturally operates in a discrete-time fashion, and it is assumed that it runs sufficiently fast and estimates the state $\hat x(t_k)$ with good accuracy. This means that it is further assumed that the separation principle of estimation and control holds. That is, the current state estimate from the observer $\hat x(t_k)$ is interpreted as the true vehicle state $x(t_k)$, which is then used to construct the path-following error $\tilde x_e(t_k)$ used by the hybrid state-feedback controller. State observer {#j1:sec:stateEstimation} ============== The state-vector $x=\begin{bmatrix} x_3 & y_3 & \theta_3 & \beta_3 & \beta_2 \end{bmatrix}^T$ for the G2T with a car-like tractor is not directly observed from the sensors on the car-like tractor and therefore needs to be inferred using the available measurements, the system dynamics  and the geometry of the vehicle. High-accuracy measurements of the position of the rear axle of the car-like tractor $(x_1,y_1)$ and its orientation $\theta_1$ are obtained from the localization system that was briefly described in Section \[j1:sec:loc\]. To obtain information about the joint angles $\beta_2$ and $\beta_3$, a LIDAR sensor is mounted in the rear of the tractor as seen in Figure \[j1:fig:ransac\_meas\].
This sensor provides a point-cloud from which the $y$-coordinate $L_y$, given in the tractor’s local coordinate system, of the midpoint of the semitrailer’s front and the relative orientation $\phi$ between the tractor and semitrailer can be extracted[^4]. To estimate $L_y$ and $\phi$, an iterative RANSAC algorithm [@fischler1981random] is first used to find the visible edges of the semitrailer’s body. Logical reasoning and the known width $b$ of the semitrailer’s front are used to classify an edge to the front, the left or the right side of the semitrailer’s body. Once the front edge and its corresponding corners are found, $L_y$ and $\phi$ can easily be calculated [@Patrik2016; @Daniel2018]. The measurements $y_{k}^{\text{loc}}=\begin{bmatrix} x_{1,k} & y_{1,k} & \theta_{1,k}\end{bmatrix}^T$ from the localization system and the constructed measurements from the iterative RANSAC algorithm are treated as synchronous observations with different sampling rates. These observations are fed to an EKF to estimate the full state vector $\hat x$ of the G2T with car-like tractor . Extended Kalman filter ---------------------- The EKF algorithm performs two steps: a time update, where the next state $\hat x_{k\mid k-1}$ is predicted using a prediction model of the vehicle, and a measurement update that corrects $\hat x_{k\mid k-1}$ to give a filtered estimate $\hat x_{k\mid k}$ using the available measurements [@gustafsson2010statistical]. ![A bird’s-eye view of the connection between the car-like tractor and the semitrailer, as well as the geometric properties of the semitrailer that are used by the nonlinear observer. The green dot represents the midpoint of the front of the semitrailer’s body, where $L_y$ is the $y$-coordinate in the tractor’s local coordinate system.
The LIDAR sensor is mounted at the blue dot and the dashed blue lines illustrate the LIDAR’s field of view.[]{data-label="j1:fig:ransac_meas"}](truck_ranstac_equations_only_ly_phi.pdf){width="0.7\linewidth"} To construct the prediction model, the continuous-time model of the G2T with a car-like tractor  is discretized using Euler forward with a time discretisation of $T_s$ seconds. The control signals to the prediction model are the longitudinal velocity $v$ of the car-like tractor and its curvature $\kappa$. Given the control signals $u_k = \begin{bmatrix} v_k & \kappa_k\end{bmatrix}^T$, the vehicle states $x_k$ and a process noise model $w_k$ with covariance $\Sigma^w$, the prediction model for the G2T with a car-like tractor can be written as $$\begin{aligned} \label{j1:eq:discrete_time_model} x_{k+1} = \hat f(x_k, u_k,w_k), \quad w_k \thicksim \pazocal{N}(0,\Sigma^w).\end{aligned}$$ Since the observations $y_{k}^{\text{loc}}$ and $y_{k}^{\text{ran}}$ are updated at different sampling rates, independent measurement equations for each observation are derived. Assuming measurements with normally distributed zero mean noise, the measurement equation for the observation from the iterative RANSAC algorithm can be written as $$\begin{aligned} \label{j1:eq:meas_eq_ransac} y_{k}^{\text{ran}} = h^{\text{ran}}(x_{k}) + e_{k}^{\text{ran}}, \quad e_{k}^{\text{ran}} \thicksim \pazocal{N}(0, \Sigma^{e}_\text{ran}),\end{aligned}$$ where $e_{k}^{\text{ran}}$ is the measurement noise with covariance matrix $\Sigma^{e}_\text{ran}$ and $h^{\text{ran}}(x_k)$ defines the relationship between the states and the measurements. From Figure \[j1:fig:ransac\_meas\] and basic trigonometry the two components of $h^{\text{ran}}(x_k)$ can be derived as \[j1:eq:meas\_eq\_hx\_ransac\] $$\begin{aligned} L_{y,k}=h^{\text{ran}}_1(x_k) &= L_2\sin{\beta_{2,k}} - L_a \sin{(\beta_{2,k} +\beta_{3,k})}, \label{j1:eq:meas_eq_hx_ransac_ly}\\ \phi_k=h^{\text{ran}}_2(x_k) &=\beta_{2,k} + \beta_{3,k}. 
\label{j1:eq:meas_eq_hx_ransac_phi} \end{aligned}$$ The second measurement equation, corresponding to the observation $y_{k}^{\text{loc}}=\begin{bmatrix} x_{1,k} & y_{1,k} & \theta_{1,k}\end{bmatrix}^T$ from the localization system is given by $$\begin{aligned} \label{j1:eq:meas_eq_localization} y_{k}^{\text{loc}} = h^{\text{loc}}(x_k) + e_k^{\text{loc}}, \quad e_k^{\text{loc}} \thicksim \pazocal{N}(0, \Sigma^{e}_\text{loc}),\end{aligned}$$ where the components of $h^{\text{loc}}(x_k)$ can be derived from Figure \[j1:fig:schematic\_model\_description\] as \[j1:eq:meas\_eq\_hx\_localization\] $$\begin{aligned} x_{1,k}=h^{\text{loc}}_1(x_k) &= x_{3,k} + L_3\cos{\theta_{3,k}} + L_2\cos{(\theta_{3,k}+\beta_{3,k})} + M_1 \cos{(\theta_{3,k}+\beta_{3,k}+\beta_{2,k})}, \\ y_{1,k}=h^{\text{loc}}_2(x_k) &= y_{3,k} + L_3\sin{\theta_{3,k}} + L_2\sin{(\theta_{3,k}+\beta_{3,k})} + M_1 \sin{(\theta_{3,k}+\beta_{3,k}+\beta_{2,k})},\\ \theta_{1,k}=h^{\text{loc}}_3(x_k) &=\theta_{3,k}+\beta_{3,k}+\beta_{2,k}, \end{aligned}$$ and $e_{k}^{\text{loc}}$ is the measurement noise with covariance matrix $\Sigma^{e}_\text{loc}$. The standard EKF framework is now applied using the prediction model in  and the measurement equations in  and [@gustafsson2010statistical]. The process noise $w_k$ is assumed to enter additively with $u_k$ into the prediction model  and the time update of the EKF is performed as follows \[j1:eq:time\_update\_ekf\] $$\begin{aligned} \hat x_{k+1\mid k} &= \hat f(\hat x_{k\mid k},u_k,0), \\ \Sigma^x_{k+1\mid k} &= F_k\Sigma^x_{k\mid k}F_k^T + G_{w,k}\Sigma^wG_{w,k}^T, \end{aligned}$$ where $F_k = \hat f'_x(\hat x_{k\mid k},u_k,0)$ and $G_{w,k} = \hat f'_u(\hat x_{k\mid k},u_k,0)$ are the Jacobian linearizations of the prediction model around the current state estimate $\hat x_{k \mid k}$ with respect to $x$ and $u$, respectively.
Since the observations $y_{k}^{\text{loc}}$ and $y_{k}^{\text{ran}}$ are updated at different sampling rates, the measurement update of the state estimate $\hat x_{k\mid k}$ and the covariance matrix $\Sigma^x_{k\mid k}$ is performed sequentially for $y_{k}^{\text{loc}}$ and $y_{k}^{\text{ran}}$. Let $H_k$ be defined as the block matrix $$\begin{aligned} H_k = \begin{bmatrix} H_{1,k} \\[1ex] H_{2,k}\end{bmatrix} = \begin{bmatrix} \left(\dfrac{\partial h^{\text{ran}}(x_{k\mid k-1})}{\partial x}\right)^T & \left(\dfrac{\partial h^{\text{loc}}(x_{k\mid k-1})}{\partial x}\right)^T \end{bmatrix}^T.\end{aligned}$$ Each time an observation from the localization system $y_{k}^{\text{loc}}$ is available, the following measurement update is performed \[j1:eq:meas\_update\_ekf\] $$\begin{aligned} K_k &= \Sigma^x_{k\mid k-1}H_{2,k}^T\left(\Sigma^e_{\text{loc}} + H_{2,k} \Sigma^x_{k\mid k-1} H_{2,k}^T\right)^{-1}, \\ \hat x_{k\mid k} &= \hat x_{k\mid k-1} + K_k\left(y_{k}^{\text{loc}} - h^{\text{loc}}(\hat x_{k\mid k-1}) \right), \\ \Sigma^x_{k\mid k} &= \Sigma^x_{k\mid k-1} - K_k H_{2,k} \Sigma^x_{k\mid k-1}, \end{aligned}$$ where $K_k$ is the Kalman gain [@gustafsson2010statistical]. Similarly, when the observation $y_{k}^{\text{ran}}$ is updated, the same measurement update  is performed with $\Sigma^e_{\text{loc}}$, $H_{2,k}$, $y_{k}^{\text{loc}}$ and $h^{\text{loc}}$ replaced with $\Sigma^e_{\text{ran}}$, $H_{1,k}$, $y_{k}^{\text{ran}}$ and $h^{\text{ran}}$, respectively. To decrease the convergence time of the estimation error, the EKF is initialized as follows. Define the combined measurement equation of $y_{k}^{\text{ran}}$ and $y_{k}^{\text{loc}}$ as $y_k=h(x_k)$. 
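The sequential measurement update above amounts to calling one generic correction routine with the $(h, H, \Sigma^e)$ triple of whichever observation has arrived. A minimal sketch (`ekf_measurement_update` is an illustrative helper, demonstrated on a toy two-state system rather than the G2T model):

```python
import numpy as np

def ekf_measurement_update(x_hat, P, y, h, H, R):
    """One EKF correction step. Applied sequentially with the (h, H, R)
    triple of whichever observation (y_loc at 100 Hz or y_ran at 20 Hz)
    is currently available."""
    S = R + H @ P @ H.T                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_hat + K @ (y - h(x_hat))
    P_new = P - K @ H @ P
    return x_new, P_new

# Toy demo: two scalar sensors observing different components of x = [a, b],
# updated one after the other as in the sequential scheme above.
H_a, H_b = np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])
x0, P0 = np.zeros(2), np.eye(2)
x1, P1 = ekf_measurement_update(x0, P0, np.array([1.0]),
                                lambda x: H_a @ x, H_a, np.eye(1))
x2, P2 = ekf_measurement_update(x1, P1, np.array([2.0]),
                                lambda x: H_b @ x, H_b, np.eye(1))
```

Each update shrinks the covariance only in the directions the corresponding sensor observes, which is why the two updates can be applied in either order at their own rates.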
Assuming noise-free observations and that , this system of equations has a unique solution given by \[j1:eq:invers\_h\] $$\begin{aligned} \beta_{2,k} &= \arcsin\left(\frac{L_{y,k}+L_a\sin{\phi_k}}{L_2}\right)= h_{\beta_{2,k}}^{-1}( y_k), \label{j1:eq:invers_h_beta2}\\ \beta_{3,k} &= \phi_k - h_{\beta_{2,k}}^{-1}(y_k)= h_{\beta_{3,k}}^{-1}( y_k), \label{j1:eq:invers_h_beta3}\\ \theta_{3,k} &= \theta_{1,k}-\phi_k= h_{\theta_{3,k}}^{-1}( y_k), \\ x_{3,k} &= x_{1,k} - L_3\cos{( h_{\theta_{3,k}}^{-1}( y_k))} - L_2\cos{( h_{\theta_{3,k}}^{-1}( y_k)+ h_{\beta_{3,k}}^{-1}( y_k))} \nonumber \\ &-M_1 \cos{( h_{\theta_{3,k}}^{-1}( y_k)+ h_{\beta_{3,k}}^{-1}( y_k)+ h_{\beta_{2,k}}^{-1}( y_k))}, \\ y_{3,k} &= y_{1,k} - L_3\sin{( h_{\theta_{3,k}}^{-1}( y_k))} - L_2\sin{(h_{\theta_{3,k}}^{-1}( y_k)+h_{\beta_{3,k}}^{-1}( y_k))} \nonumber \\ &-M_1 \sin{( h_{\theta_{3,k}}^{-1}( y_k)+ h_{\beta_{3,k}}^{-1}( y_k)+ h_{\beta_{2,k}}^{-1}( y_k))}. \end{aligned}$$ This relationship is used to initialize the EKF with the initial state estimate $\hat x_{1\mid 0}= h^{-1}(y_0)$, the first time both measurements are obtained. The state covariance matrix is at the same time initialized to $\Sigma^x_{1\mid 0} = \Sigma^x_0$, where $\Sigma^x_0\succeq 0$ is a design parameter. Since no ground truth is available for all vehicle states, the filter cannot be evaluated in isolation; instead, it is treated as part of the full system and evaluated through the overall system performance. Implementation details: Application to full-scale tractor-trailer system {#j1:sec:implementation} ======================================================================== The path planning and path-following control framework has been deployed on a modified version of a Scania G580 6x4 tractor that is shown in Figure \[j1:fig:truck\_scania\]. 
The car-like tractor carries the sensor platform described in Section \[j1:sec:systemArchitecture\], including a real-time kinematic GPS (RTK-GPS), IMUs and a rear view LIDAR sensor with a 120-degree field of view in the horizontal scan field. The tractor is also equipped with a servo motor for automated control of the steering column and additional computation power compared to the commercially available version. The triple axle semitrailer and the double axle dolly are both commercially available and are not equipped with any sensors that are used by the system. The vehicle lengths and the physical parameters for the car-like tractor are summarized in Table \[j1:tab:vehicle\_parameters\], where we have assumed that the rotational centers are located at the longitudinal center of each axle pair and triple, respectively. The total distance from the front axle of the car-like tractor to the center of the axle of the semitrailer is approximately . In the remainder of this section, implementation details for each module within the path planning and path-following control framework are presented. Lattice planner {#j1:sec:implementation_lattice_planner} --------------- The lattice planner is implemented in C++ and the motion primitive set is calculated offline using the numerical optimal control solver CasADi [@casadi], together with the primal-dual interior-point solver IPOPT [@ipopt]. The resulting paths are represented as distance sampled points containing full state information including the control signals. 
For generation of the set of backward motion primitives $\pazocal P_{\text{rev}}$, the weight matrices $\mathbf{Q}_1\succeq 0$ and $\mathbf{Q}_2 \succeq 0$ in the cost function  are chosen as $$\begin{aligned} \mathbf{Q}_1 = \begin{bmatrix} 11 & -10 \\ -10 & 11 \end{bmatrix}, \quad \mathbf{Q}_2 = \text{diag}\left(\begin{bmatrix}1& 10& 1\end{bmatrix}\right),\end{aligned}$$ giving the integrand $||\begin{bmatrix} \beta_3 & \beta_2\end{bmatrix}^T||_\mathbf{Q_1}^2=\beta_3^2 + \beta_2^2 + 10(\beta_3-\beta_2)^2$. This means that large joint angles with opposite signs are highly penalized during backward motion, which is directly related to motion plans that have an increased risk of leading to a jack-knife state during path execution. For the set of forward motion primitives $\pazocal P_{\text{fwd}}$, the weight $\mathbf{Q}_1$ is chosen as $\mathbf{Q}_1=0_{2\times 2}$. During motion primitive generation, the physical limitation on the steering angle $\alpha_{\text{max}}$ is additionally tightened by 20 % to enable the path-following controller to reject disturbances during plan execution. The complete set of motion primitives from the initial orientation $\theta_{3,i}=0$ is presented in Figure \[j1:fig:primitives\]. The generated motion primitive set $\pazocal P$ was then reduced using the reduction technique described in Section \[j1:subsec:Mreduction\], with $\eta=1.2$, yielding a reduction factor of about 7 %. The size of the reduced motion primitive set was $|\pazocal{P}'|=3888$, with between 66 and 111 different state transitions from each discrete state $z[k]\in\mathbb Z_d$. For the reduced motion primitive set $\pazocal{P}'$, a free-space HLUT [@CirilloIROS2014; @knepper2006high] was precomputed using Dijkstra’s search with cut-off cost $J_\text{cut}=170$. 
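The stated expansion of the $\mathbf{Q}_1$-weighted integrand is easy to check numerically; the snippet below (helper name `integrand` and the sample angles are illustrative) also shows why opposite-sign joint angles are penalized so much harder:

```python
import numpy as np

Q1 = np.array([[11.0, -10.0],
               [-10.0, 11.0]])

def integrand(beta3, beta2):
    """Q1-weighted squared norm of the joint-angle vector [beta3, beta2]."""
    v = np.array([beta3, beta2])
    return v @ Q1 @ v

# The quadratic form expands to beta3^2 + beta2^2 + 10*(beta3 - beta2)^2,
# so opposite-sign joint angles (jack-knife-prone configurations) cost far
# more than same-sign ones of equal magnitude.
for b3, b2 in [(0.2, 0.2), (0.2, -0.2), (-0.5, 0.3)]:
    assert np.isclose(integrand(b3, b2), b3**2 + b2**2 + 10 * (b3 - b2)**2)
```

For instance, $(\beta_3,\beta_2)=(0.2,-0.2)$ costs $1.68$ while $(0.2,0.2)$ costs only $0.08$.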
The surrounding environment is represented by an occupancy gridmap [@occupancyGridMap] and efficient collision checking is performed using grid inflation and circle approximations for the trailer and tractor bounding boxes [@lavalle2006planning].

  Vehicle Parameters                                            Value
  ------------------------------------------------------------- -----------------
  The tractor’s wheelbase $L_1$                                  4.62 m
  Maximum steering angle $\alpha_{\text{max}}$                   $42\pi/180$ rad
  Maximum steering angle rate $\omega_{\text{max}}$              $0.6$ rad/s
  Maximum steering angle acceleration $u_{\omega,\text{max}}$    $40$ rad/s$^2$
  Length of the off-hitch $M_1$                                  1.66 m
  Length of the dolly $L_2$                                      3.87 m
  Length of the semitrailer $L_3$                                8.00 m
  Length of the overhang $L_a$                                   1.73 m
  Width of the semitrailer’s front $b$                           2.45 m

  \[j1:tab:vehicle\_parameters\]

In the experiments, the lattice planner is given a desired goal state $z_G$ that can be specified by an operator or selected by an algorithm. The equilibrium steering angle $\alpha_G$ at the goal is constrained to zero, *i.e.*, the G2T with a car-like tractor is constrained to end up in a straight vehicle configuration. When a desired goal state $z_G$ has been specified, the vehicle’s initial state $z(0)$ is first projected down to its closest neighboring state in $\mathbb Z_d$. The ARA$^*$ search algorithm is initialized with heuristic inflation factor $\gamma=2$ and $\gamma$ is then iteratively decreased by 0.1 in every subsequent iteration. If $\gamma$ reaches 1 or if a specified maximum allowed planning time is reached and a motion plan with a proven $\gamma$-suboptimality cost has been found, portions of the resulting motion plan are iteratively sent to the path-following controller for path execution. Path-following controller {#j1:implementation:details:feedback} ------------------------- The framework presented in Section \[j1:sec:Controller\] is here deployed to synthesize the hybrid path-following controller for this specific application. 
First, a feedback gain $K_i$ and a corresponding Lyapunov function $V_i(\tilde x_e) = \tilde x_e^TP_i\tilde x_e$ are computed separately for each motion primitive $m_i\in \pazocal P$. As in , the bijective transformation $Q_i = P_i^{-1}$ and $Y_i = K_iP_i^{-1}$ is performed, and the convex polytope $\mathbb S_i$ in  is estimated by evaluating the Jacobian linearization  of the path-following error model  at each sampled point of the nominal path. Each resulting pair $[A_{i,j},B_{i,j}]$ of the linearization is assumed to be a vertex of the convex polytope $\mathbb S_i$ in . In order to guarantee that the path-following error for the closed-loop system is bounded and decays toward zero, we show that the matrix inequalities defined in  have a feasible solution. As in [@LjungqvistACC2018], for each motion primitive $m_i\in\pazocal P$, the synthesis of $\tilde \kappa = K_i\tilde x_e$ is performed by solving the following convex optimization problem $$\begin{aligned} \operatorname*{minimize}_{Y_{i}, Q_{i}} \hspace{3.7ex} & \|Y_{i}-K_{\text{LQ}}^iQ_{i}\| \label{j1:eq:opt_LTV}\\ \operatorname*{subject\:to}\hspace{3ex} & \eqref{j1:eq:matrixineq_convex} \text{ and } Q_i \succeq I, \nonumber\end{aligned}$$ with decay rate $\epsilon=0.01$, where $K_{\text{LQ}}^i$ is a nominal feedback gain that depends on $m_i\in\pazocal P$. Here, two nominal feedback gains are used; $K_{\text{fwd}}$ for all forward motion primitives $m_i\in\pazocal P_{\text{fwd}}$ and $K_{\text{rev}}$ for all backward motion primitives $m_i\in\pazocal P_{\text{rev}}$. The motivation for this choice of objective function in  is that it is desired that the path-following controller $\tilde \kappa = K_i\tilde x_e$ inherits the nominal LQ-controller’s properties. It is also used to reduce the number of different feedback gains $K_i$, while not sacrificing desired convergence properties of the path-following error along the execution of each motion primitive. 
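The nominal gains $K_{\text{LQ}}^i$ referred to above come from an infinite-horizon LQ design. A minimal sketch of such a design via the continuous-time algebraic Riccati equation is given below; it is illustrated on a double integrator, not on the actual linearized path-following error model (whose $A$ and $B$ come from the referenced equations), and the helper name `lqr` is illustrative:

```python
import numpy as np

def lqr(A, B, Q, R):
    """Infinite-horizon continuous-time LQR. Solves the Riccati equation
    A'P + PA - P B R^{-1} B' P + Q = 0 via the stable invariant subspace
    of the Hamiltonian matrix, and returns K = R^{-1} B' P."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]                       # stable eigenvectors
    P = np.real(Vs[n:, :] @ np.linalg.inv(Vs[:n, :]))
    return Rinv @ B.T @ P, P

# Double-integrator illustration (NOT the G2T error model). With Q = I and
# R = 1 the closed-form solution is K = [1, sqrt(3)].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K, P = lqr(A, B, np.eye(2), np.eye(1))
```

Swapping in the error-model matrices and the weights $\tilde Q$, $\tilde R$ from the design tables yields gains of the same form as $K_{\text{fwd}}$ and $K_{\text{rev}}$ below.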
The nominal feedback gains are designed using infinite-horizon LQ-control [@anderson2007optimal], with the linearized path-following error model around a straight nominal path in backward and forward motion, respectively. In these cases, the Jacobian linearization is given by the matrices $A$ and $B$ defined in . The weight matrices $\tilde Q_{\text{fwd}}$ and $\tilde Q_{\text{rev}}$ that are used in the LQ-design are listed in Table \[j1:tab:design\_parameters\]. By choosing the penalty on the nominal curvature deviation as $\tilde R_{\text{rev}}=\tilde R_{\text{fwd}}=1$, the nominal feedback gains are $$\begin{aligned} K_{\text{rev}} &= \begin{bmatrix} -0.12 & 1.67 & -1.58 & 0.64 \end{bmatrix}, \\ K_{\text{fwd}} &= -\begin{bmatrix} 0.20 & 2.95 & 1.65 & 1.22 \end{bmatrix}, \end{aligned}$$ where positive feedback is assumed. Here, $K_{\text{fwd}}$ and $K_{\text{rev}}$ are dedicated to the set of forward $\pazocal P_{\text{fwd}}$ and backward $\pazocal P_{\text{rev}}$ motion primitives, respectively. Using these nominal feedback gains, the optimization problem in  is solved separately for each motion primitive $m_i\in\pazocal P$ using YALMIP [@lofberg2004yalmip], and each optimization generates a feedback gain $K_i=Y_iQ_i^{-1}$ and a quadratic Lyapunov function $V_i(\tilde x_e)=\tilde x_e^TQ_i^{-1}\tilde x_e$. In this specific application, $\forall m_i\in\pazocal P$, the optimal value of the objective function in  is zero, which implies that $K_i = K_{\text{LQ}}^i$. Thus, for this specific set of motion primitives $m_i\in\pazocal P$ (see Figure \[j1:fig:primitives\]), the hybrid path-following controller is given by $$\begin{aligned} \label{j1:eq:hybrid_controller} \tilde \kappa(t) = \kappa_r(\tilde s) + \begin{cases} K_{\text{fwd}}\tilde x_e(t), \quad &m_i\in\pazocal P_{\text{fwd}}, \\ K_{\text{rev}}\tilde x_e(t), \quad &m_i\in\pazocal P_{\text{rev}}. 
\end{cases}\end{aligned}$$ However, the continuous-time quadratic Lyapunov functions are not equal for all $m_i\in\pazocal P$. \[j1:remark2\] For these specific vehicle parameters and motion primitive set $\pazocal P$, it was possible to find a common quadratic Lyapunov function $V_{\text{fwd}}(\tilde x_e)$ with decay-rate $\epsilon=0.01$ and path-following controller $\tilde \kappa = K_{\text{fwd}}\tilde x_e$, for all forward motion primitives $m_i\in\pazocal P_{\text{fwd}}$. It was also possible to find a common quadratic Lyapunov function $V_{\text{rev}}(\tilde x_e)$ with decay-rate $\epsilon=0.01$ and path-following controller $\tilde \kappa = K_{\text{rev}}\tilde x_e$, for all backward motion primitives $m_i\in\pazocal P_{\text{rev}}$. However, it was not possible to find a common quadratic Lyapunov function $V(\tilde x_e)$ with a decay-rate $\epsilon>0$ and $\tilde \kappa = K_{i}\tilde x_e$ for the complete set of forward and backward motion primitives $m_i\in\pazocal P$. This follows directly from Theorem \[j1:P1\]. Practically, Remark \[j1:remark2\] implies that if the lattice planner is constrained to only compute nominal paths using either $\pazocal P_{\text{fwd}}$ or $\pazocal P_{\text{rev}}$, it is possible to guarantee that the path-following error is bounded and exponentially decays towards zero. To guarantee similar properties for the path-following error when the motion plan is composed of forward and backward motion primitives, the framework presented in Section \[j1:sec:convergence\] needs to be applied. This analysis is presented in the next section. 
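In code, the hybrid control law  reduces to selecting one of the two constant gains by the direction of the active motion primitive. A sketch using the numeric gains from the LQ design above (the helper `hybrid_curvature` is illustrative; positive-feedback convention as in the text):

```python
import numpy as np

# Feedback gains from the nominal LQ design (positive-feedback convention).
K_REV = np.array([-0.12, 1.67, -1.58, 0.64])
K_FWD = -np.array([0.20, 2.95, 1.65, 1.22])

def hybrid_curvature(kappa_r, x_e, forward):
    """Curvature command: feedforward kappa_r(s~) plus the feedback term,
    with the gain chosen by the direction of the active motion primitive."""
    K = K_FWD if forward else K_REV
    return kappa_r + K @ x_e

# With zero path-following error, only the feedforward term remains.
assert hybrid_curvature(0.1, np.zeros(4), forward=True) == 0.1
```

The feedforward term $\kappa_r(\tilde s)$ handles nominal path-following; the feedback term only acts on deviations, which matches the disturbance-rejection behavior reported in the experiments later.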
  EKF parameters                              Value
  ------------------------------------------- --------------------------------------------------------------------------------
  Process noise $\Sigma^w$                    $10^{-3}\times\text{diag}\left(\begin{bmatrix} 1 & 1 \end{bmatrix}\right)$
  Measurement noise $\Sigma^e_{\text{loc}}$   $10^{-3}\times\text{diag}\left(\begin{bmatrix}1&1&0.5\end{bmatrix}\right)$
  Measurement noise $\Sigma^e_{\text{ran}}$   $10^{-3}\times\text{diag}\left(\begin{bmatrix}0.5&0.1\end{bmatrix}\right)$
  Initial state covariance $\Sigma^x_0$       $0.5\times\text{diag}\left(\begin{bmatrix}1&1&0.1&0.1&0.1\end{bmatrix}\right)$
  EKF frequency                               100 Hz
  Controller parameters                       Value
  Nominal LQ weight $\tilde Q_\text{fwd}$     $0.05\times\text{diag}\left(\begin{bmatrix}0.8& 6& 8 & 8\end{bmatrix}\right)$
  Nominal LQ weight $\tilde Q_\text{rev}$     $0.05\times\text{diag}\left(\begin{bmatrix}0.3 & 6 & 7 & 5\end{bmatrix}\right)$
  Controller frequency                        50 Hz

  : Design parameters for the EKF and the path-following controller during the real-world experiments. \[j1:tab:design\_parameters\]

The hybrid path-following controller  was implemented in Matlab/Simulink and C-code was then auto-generated, with the path-following controller specified to operate at 50 Hz. During the real-world experiments, the tractor’s set-speed controller was used for longitudinal control with  along forward motion primitives and  along backward motion primitives. State observer {#state-observer} -------------- The design parameters for the EKF are summarized in Table \[j1:tab:design\_parameters\]. Data collected from manual tests with the vehicle was used offline to tune the covariance matrices in the EKF and to calibrate the position and orientation of the rear view LIDAR sensor. 
The pitch angle of the rear view LIDAR sensor was adjusted such that the body of the semitrailer was visible in the LIDAR’s point-cloud for all vehicle configurations that are of relevance for this application. The EKF and the iterative RANSAC algorithm [@Patrik2016; @Daniel2018] were implemented in Matlab/Simulink and C-code was then auto-generated. The EKF was specified to operate at 100 Hz and the measurements from the localization system are updated at the same sampling rate. The observation from the iterative RANSAC algorithm is received at a sampling rate of 20 Hz. The iterative RANSAC algorithm is specified to extract at most two edges of the semitrailer’s body, and 500 random selections of data pairs are performed for each edge extraction with an inlier threshold of 5 centimeters. Results {#j1:sec:Results} ======= In this section, the behavior of the closed-loop system, consisting of the controlled G2T with a car-like tractor and the path-following controller executing a nominal path computed by the lattice planner, is first analyzed. Then, the planning capabilities of the lattice planner and the ideal tracking performance of the path-following controller are evaluated in simulation experiments. Finally, the complete framework is evaluated in three different real-world experiments on the full-scale test vehicle that is depicted in Figure \[j1:fig:truck\_scania\]. Analysis of the closed-loop hybrid system ----------------------------------------- To verify that the path-following error $\tilde x_e(t)$ is bounded and decays toward zero when the nominal path is constructed by any sequence of motion primitives, backward as well as forward ones, the method presented in Section \[j1:sec:convergence\] is applied. The closed-loop system in  is implemented in MATLAB/Simulink. Central differences are used to compute the linear discrete-time system in  that describes the evolution of the path-following error  when motion primitive $m_i\in\pazocal P$ is executed. 
With a step size $\delta=0.01$, the state-transition matrix $F_i$ can be computed numerically by simulating the closed-loop system with an initial error $\pm \delta$ in each path-following error state at a time. Since there are four error states, eight simulations of the closed-loop system are performed in order to generate each transition matrix $F_i$. This numerical differentiation is performed for all $m_i\in\pazocal P$ and $M$ state-transition matrices are produced, *i.e.*, $\mathbb F = \{F_1,\hdots,F_M\}$. The matrix inequalities in  are solved to show that the norm of the path-following error $\tilde x_e[k]$ for the discrete-time switched system in  exponentially decays towards zero at the switching instants. By selecting $0 < \mu < 1$, the semidefinite optimization problem in  can be solved, where the condition number of $S$ is minimized such that the guaranteed upper bound  of the path-following error is as tight as possible. $$\begin{aligned} \operatorname*{minimize}_{\eta,S} \hspace{3.7ex} & \eta \label{j1:eq:opt_DISC}\\ \operatorname*{subject\:to}\hspace{3ex} &F_j^T S F_j - S \preceq - \mu S, \quad j=1,\hdots,M \nonumber\\ & I \preceq S \preceq \eta I \nonumber\end{aligned}$$ It turns out that it is not possible to select $0<\mu<1$ such that a feasible solution to  exists for the original motion primitive set $\pazocal P$. The reason is that $\pazocal P$ contains short motion segments of about 1 m that move the vehicle either straight forwards or straight backwards. If such paths are switched between, it is not possible to guarantee that the norm of the path-following error at the switching points will exponentially decay towards zero, which makes sense from a practical point of view. In order to resolve this, the short motion primitives were extended to about 18 m (roughly the length of the tractor-trailer system) and their corresponding discrete-time transition matrices $F_i$ were again computed. 
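The central-difference estimation of a transition matrix $F_i$ described above (eight rollouts for the four error states) can be sketched as follows; `simulate` is a placeholder for rolling out the closed-loop Simulink model over one motion primitive, and the synthetic linear map `M` below only serves as a sanity check:

```python
import numpy as np

def transition_matrix(simulate, n_states=4, delta=0.01):
    """Estimate F_i column by column with central differences: perturb each
    initial path-following error state by +/- delta, simulate the closed
    loop over one motion primitive, and difference the final errors
    (2 * n_states = 8 simulations for four error states)."""
    F = np.zeros((n_states, n_states))
    for i in range(n_states):
        e = np.zeros(n_states)
        e[i] = delta
        F[:, i] = (simulate(e) - simulate(-e)) / (2 * delta)
    return F

# Sanity check on a synthetic linear closed loop x_final = M @ x0 (the real
# rollout would come from the closed-loop vehicle model instead):
M = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.1],
              [0.0, 0.0, 0.7, 0.0],
              [0.0, 0.0, 0.1, 0.6]])
F = transition_matrix(lambda x0: M @ x0)
```

For a linear rollout the estimate is exact; for the true nonlinear closed loop it is a first-order approximation around the nominal path, which is precisely what the switched-system analysis needs.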
With this adjusted motion primitive set $\pazocal P_{\text{adj}}$ and $\mu = 0.3$, the optimization problem in  was feasible to solve using YALMIP [@lofberg2004yalmip], and the optimal solutions are $\eta = 51.58$ and the matrix $S\succ 0$: $$\begin{aligned} \label{j1:eq:lyapunov_matrix_discrete} S = \begin{bmatrix} 1.04 & 1.29 & 0.29 & 0.34 \\ 1.29 & 50.54 & -0.22 & 6.62 \\ 0.29 & -0.22 & 51.09 & 2.58 \\ 0.34 & 6.62 & 2.58 & 5.16 \end{bmatrix}.\end{aligned}$$ Extending the short motion primitives manually is equivalent to adding constraints on the switching sequence $\{u_q[k]\}_{k=0}^{N-1}$ in the lattice planner. For this case, when a short motion primitive $m_i\in \pazocal P$ is activated, $u_q[k]$ needs to remain constant for a certain number of switching instances. This constraint can easily be added within the lattice planner. Figure \[j1:fig:sim\_fwd\_backward\] illustrates the behavior of the closed-loop system when switching between a straight forward and backward motion primitive of three different path lengths $D$ m, with the initial path-following error state . As can be seen, $V_d(\tilde x_e[k])=\tilde x^T_e[k]S\tilde x_e[k]$ is a valid discrete-time Lyapunov function for , since $V_d(\tilde x_e[k])$ is monotonically decreasing towards zero. When $D=10$ m, the path-following error decays towards zero, but not monotonically. When $D=1$ m, the path-following error remains bounded, but is not decaying towards zero. For more simulations of the closed-loop hybrid system, the reader is referred to [@LjungqvistACC2018]. From our practical experience, allowing the short motion primitives has not caused any problems, since repeated switching between short straight forward and backward motion primitives is of limited practical relevance in the missions typically considered in this work. 
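The reported solution can be checked numerically: up to the two-decimal rounding of the printed entries, $S$ should be symmetric positive definite with $\lambda_{\min}$ close to $1$ and condition number close to $\eta = 51.58$ (the constraint $I \preceq S \preceq \eta I$):

```python
import numpy as np

# The matrix S from the solved semidefinite program (two-decimal rounding).
S = np.array([
    [1.04,  1.29,  0.29, 0.34],
    [1.29, 50.54, -0.22, 6.62],
    [0.29, -0.22, 51.09, 2.58],
    [0.34,  6.62,  2.58, 5.16],
])

eigs = np.linalg.eigvalsh(S)
cond = eigs.max() / eigs.min()
# eigs.min() should be close to 1 and cond close to eta = 51.58; the rounding
# of the printed entries perturbs both slightly.
```

This kind of post-hoc check is cheap insurance that the transcribed solution is still a valid (well-conditioned) Lyapunov matrix.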
Simulation experiments ---------------------- Results from a quantitative analysis of the lattice planner are first presented, where its performance has been statistically evaluated in Monte Carlo experiments in two practically relevant scenarios. Then, simulation results for the path-following controller under ideal conditions, where perfect state information is available, are given to demonstrate its performance. The simulation experiments have been performed on a standard laptop computer with an Intel Core i7-6820HQ@2.7GHz CPU. ### Simulation experiments of the lattice planner Two different path planning scenarios are used to evaluate the performance of the lattice planner. One thousand Monte Carlo simulations are performed for each scenario, where the goal state $z_G\in\mathbb Z_d$ and/or the initial state $z_I\in\mathbb Z_d$ are randomly selected from specified regions that are compliant with the specified state-space discretization $\mathbb Z_d$. For simplicity, it is assumed that the vehicle starts and ends in a straight configuration, *i.e.*, $\alpha_I=\alpha_G=0$. A goal state is thus specified by a goal position $(x_{3,G},y_{3,G})$ of the axle of the semitrailer and a goal orientation $\theta_{3,G}$ of the semitrailer. As explained in Section \[j1:sec:implementation\_lattice\_planner\], the ARA$^*$ search is initialized with heuristic inflation factor $\gamma=2$. This factor is then iteratively decreased by 0.1 every time a path to the goal for a specific $\gamma$ has been found. To evaluate the computation time and the quality of the produced solution, the lattice planner was allowed to plan until an optimal solution with $\gamma =1$ was found. In the Monte Carlo experiments, each time a solution for a specific $\gamma$ is found, the accumulated planning time and the value of the cost function $J_D$ are stored. 
During the experiments, a planning problem is marked unsolved if the planning time exceeds 60 s and a solution with $\gamma = 1$ has not yet been found. ![An overview of the parking planning problem. The goal position of the axle of the semitrailer $(x_{3,G},y_{3,G})$ is marked by the white cross inside the blue rectangle, where the white arrow specifies its goal orientation $\theta_{3,G}$. The initial position $(x_{3,I},y_{3,I})$ is uniformly sampled within the two white-dotted rectangles, and the initial orientation $\theta_{3,I}\in\Theta$ is sampled from six different initial orientations. The white path illustrates the planned path for the axle of the semitrailer for one out of 1000 Monte Carlo experiments. The area occupied by obstacles is colored in red and the black area is free-space.[]{data-label="j1:fig:parking_example"}](Parking_example_new.pdf){width="0.85\linewidth"} The first planning scenario is illustrated in Figure \[j1:fig:parking\_example\], where the objective of the path planner is to plan a parking maneuver from a randomly selected initial state to a fixed goal state $z_G\in\mathbb Z_d$. In Figure \[j1:fig:parking\_example\], the goal position of the axle of the semitrailer $(x_{3,G},y_{3,G})$ is illustrated by the white cross inside the blue rectangle, where the white arrow specifies its goal orientation $\theta_{3,G}$. The initial position of the axle of the semitrailer $(x_{3,I},y_{3,I})$ is uniformly sampled from two different 20 m $\times$ 15 m rectangles on each side of the goal location and the initial orientation of the semitrailer $\theta_{3,I}$ is randomly selected from six different initial orientations, as depicted in Figure \[j1:fig:parking\_example\]. In all experiments, the lattice planner was able to find an optimal path ($\gamma=1$) to the goal within the allowed planning time of 60 s (max: 40 s). 
A statistical evaluation of the simulation results from one thousand Monte Carlo experiments is provided in Figure \[j1:fig:eval\_planner\_tt\], where the planning time (Figure \[j1:fig:eval\_planner\_planning\_time\_tt\]) and the level of suboptimality $\Delta J_D$ (Figure \[j1:fig:eval\_planner\_obj\_tt\]) between the cost $J_D$ for a specific $\gamma$ and the optimal cost $J_D^*$ for each planning experiment are plotted. In the box plots, the red central mark of each bar is the median, the bottom and top edges of the boxes indicate the 25th and 75th percentiles, respectively, and the whiskers extend to the most extreme data points; outliers are not shown. As can be seen in Figure \[j1:fig:eval\_planner\_planning\_time\_tt\], the planning time increases drastically with decreasing $\gamma$. For most of the problems, a feasible solution to the goal with $\gamma=2$ was found within $0.7$ s, while a median planning time of  was needed to find an optimal solution with $\gamma = 1$. In Figure \[j1:fig:eval\_planner\_obj\_tt\], the quality of the produced solution in terms of level of suboptimality $\Delta J_D$ as a function of $\gamma$ is displayed. For $\gamma\geq 1$, the provided theoretical guarantee is that the cost for a feasible solution $J_D$ satisfies $J_D \leq \gamma J_D^*$, where $J_D^*$ denotes the optimal cost. For all iterations of the ARA$^*$, the median level of suboptimality is $0$ % and the extreme values for large $\gamma$ are about $5$ %. For this scenario, we conclude that the guaranteed upper bound of $\gamma$-suboptimality is a conservative bound. A loading/offloading site is used as the second planning scenario and the setup is illustrated in Figure \[j1:fig:construction\_site\_example\]. In this scenario, the lattice planner has to plan a path from a randomly selected initial state $z_I\in\mathbb Z_d$ to one of the six loading bays, or plan how to exit the site. 
In the Monte Carlo experiments, the initial position of the semitrailer $(x_{3,I},y_{3,I})$ is uniformly sampled from a square (see Figure \[j1:fig:construction\_site\_example\]), and the initial orientation of the semitrailer $\theta_{3,I}$ is randomly selected from one of its sixteen discretized orientations, *i.e.*, $\theta_{3,I}\in\Theta$. Also in this scenario, the lattice planner was always able to find an optimal path to the goal within the allowed planning time of 60 s (max: 27 s). A statistical evaluation of the simulation results from one thousand Monte Carlo experiments is presented in Figure \[j1:fig:eval\_planner\_cs\]. From the box plots in Figure \[j1:fig:eval\_planner\_cs\], it can be seen that the planning time also increases in this scenario with decreasing heuristic inflation factor $\gamma$. However, the median planning time to find an optimal solution with $\gamma=1$ was only 0.84 s and most problems were solved within $3$ s. The main reason for this improvement in terms of planning time compared to the parking scenario is that the precomputed HLUT here yields a better estimate of the true cost-to-go in this less constrained environment. However, as can be seen in Figure \[j1:fig:eval\_planner\_obj\_cs\], the extreme values for the level of suboptimality $\Delta J_D$ are about $43$ % for large $\gamma$. Compared to the parking scenario, a heuristic inflation factor of $\gamma = 1.2$ is needed in this scenario to obtain a median level of suboptimality of . One reason for this greedy behavior compared to the parking scenario is that there exist more alternative paths to the goal. This implies that the probability of finding a suboptimal path to the goal increases [@arastar]. ![An overview of the loading/offloading site planning problem. 
The goal positions of the axle of the semitrailer $(x_{3,G},y_{3,G})$ are illustrated by the white crosses inside the blue rectangles, where the white arrows specify the goal orientations $\theta_{3,G}$. The initial position $(x_{3,I},y_{3,I})$ is uniformly sampled within the two white-dotted rectangles, and the initial orientation $\theta_{3,I}$ is sampled from sixteen different initial orientations. The white path illustrates the planned path for the axle of the semitrailer for one case out of 1000 Monte Carlo experiments. The area occupied by obstacles is colored in red and the black area is free-space.[]{data-label="j1:fig:construction_site_example"}](construction_site.png){width="0.7\linewidth"} ### Path following of a figure-eight nominal path Nominal paths of the shape of a figure-eight are used to evaluate the performance of the proposed path-following controller in backward and forward motion. These nominal paths are used as a benchmark since they expose the closed-loop system to a wide range of practically relevant maneuvers, *e.g.*, entering, exiting and keeping a narrow turn. To generate the figure-eight nominal path in forward motion, a list of waypoints of the same shape was first constructed manually. The nominal path was generated by simulating the model of the G2T with a car-like tractor  in forward motion with $v(t)=1$ m/s, together with the pure pursuit controller in [@evestedtLjungqvist2016planning]. The path taken by the vehicle $(x_r(\tilde s),u_r(\tilde s))$, $\tilde s\in[0,\tilde s_G]$ was then stored and used as the nominal path in forward motion. The established symmetry result in Lemma \[j1:L1\] was then used to construct the figure-eight nominal path in backward motion. In analogy to the design of the hybrid path-following controller , the OCP in  is solved with decay-rate $\epsilon=0.01$. 
In both cases, the optimal objective function value of  is zero, which implies that the proposed hybrid path-following controller  is able to locally stabilize the path-following error model  around the origin while tracking the figure-eight nominal path in forward and backward motion, respectively. To confirm the theoretical analysis and to illustrate how the proposed path-following controller handles disturbance rejection, the closed-loop system is simulated with a perturbation in the initial path-following error states. For backward tracking, the initial error is chosen as $\tilde x_e(0)=\begin{bmatrix} 1 & 0 & 0.1 & 0.1 \end{bmatrix}^T$ and for forward tracking it is chosen as $\tilde x_e(0)=\begin{bmatrix} -3 & 0 & -\pi/6 & \pi/6 \end{bmatrix}^T$. To perform realistic simulations, the steering angle of the car-like tractor is constrained according to Table \[j1:tab:vehicle\_parameters\]. The velocity of the car-like tractor is set to $v=v_r(s)$, *i.e.*, 1 m/s for forward tracking and $v=-1$ m/s for backward tracking. The simulation results are provided in Figure \[j1:fig:eight\_rev\_sim\]–\[j1:fig:sim\_eight\_rev\_states\]. In Figure \[j1:fig:eight\_rev\_sim\], the resulting paths taken by the axle of the semitrailer $(x_3(\cdot),y_3(\cdot))$ are plotted together with the nominal path $(x_{3,r}(\tilde s),y_{3,r}(\tilde s))$, $\tilde s\in[0,\tilde s_G]$. The resulting trajectories for the path-following error states are presented in Figure \[j1:fig:sim\_eight\_rev\_z3\]–\[j1:fig:sim\_eight\_rev\_b2\]. As theoretically verified, the path-following error states are converging towards the origin. 
The controlled curvature of the car-like tractor is plotted in Figure \[j1:fig:sim\_kappa\_state\_eight\_fwd\] and Figure \[j1:fig:sim\_kappa\_state\_eight\_rev\] for the forward and backward tracking simulation, respectively. From these plots, it is clear that the feedback part $K\tilde x_e$ of the path-following controller is responsible for disturbance rejection and the feedforward part $\kappa_r(\tilde s)$ takes care of path-following. Results from real-world experiments ----------------------------------- The path planning and path-following control framework is finally evaluated in three different real-world experiments. First, the performance of the path-following controller and the nonlinear observer is evaluated by path-tracking of a precomputed figure-eight nominal path in backward motion. Then, two real-world experiments with the complete path planning and path-following control framework are presented. To validate the performance of the path-following controller and the nonlinear observer, a high-precision RTK-GPS[^5] was mounted above the midpoint of the axle of the semitrailer. The authors recommend the supplemental video material in [@videoExperiment] for a real-world demonstration of the proposed framework. ### Path following of a figure-eight nominal path The figure-eight nominal path in backward motion that was used in the simulation experiment is also used here as the nominal path to evaluate the joint performance of the path-following controller and the nonlinear observer. The real-world path-following experiment is performed on an open gravel surface at Scania’s test facility in Södertälje, Sweden. During the experiment, the longitudinal velocity of the rear axle of the car-like tractor was set to and results from one lap around the figure-eight nominal path are provided in Figure \[j1:fig:eight\_rev\]–\[j1:fig:estimation\_eight\_rev\].
Figure \[j1:fig:eight\_rev\] shows the nominal path for the position of the axle of the semitrailer $(x_{3,r}(\cdot),y_{3,r}(\cdot))$, compared to its ground truth path $(x_{3,GT}(\cdot),y_{3,GT}(\cdot))$ and its estimated path $(\hat x_3(\cdot),\hat y_3(\cdot))$ around one lap of the figure-eight nominal path. A more detailed plot is provided in Figure \[j1:fig:eight\_rev\_states\], where all four estimated error states $\tilde x_e(t)$ are plotted. From these plots, we conclude that the path-following controller is able to keep its estimated lateral path-following error $\tilde z_3(\cdot)$ within (avg. 0.21 m), while at the same time keeping the orientation and joint angle errors within acceptable error tolerances. As in the simulation experiments, it can be seen from Figure \[j1:fig:kappa\_state\_eight\_rev\] that the feedforward part $\kappa_r(s)$ of the path-following controller takes care of path-following and the feedback part $\tilde\kappa=K_{\text{rev}}\tilde x_e$ is responsible for disturbance rejection. The performance of the nonlinear observer is presented in Figure \[j1:fig:eight\_est\_poes\_error\] and Figure \[j1:fig:estimation\_eight\_rev\]. In Figure \[j1:fig:eight\_est\_poes\_error\], the Euclidean norm of the difference between the estimated position of the axle of the semitrailer $(\hat x_3(\cdot),\hat y_3(\cdot))$ and its ground truth $(x_{3,GT}(\cdot),y_{3,GT}(\cdot))$ measured by the external RTK-GPS is presented. The maximum estimation error is $0.6$ m and the average error is . This estimation error is probably caused by asymmetries in the tractor’s steering column [@truls2018] and unavoidable lateral slip-effects of the wheels of the dolly and the semitrailer which are not captured by the kinematic model of the vehicle [@winkler1998simplified]. Furthermore, unavoidable offsets in the manual placement of the GPS-antenna on the semitrailer used for validation may also add to the estimation error.
Note that the absolute position of the axle of the semitrailer $(\hat x_3(\cdot),\hat y_3(\cdot))$ is estimated from GPS measurements of the car-like tractor’s position and its orientation, propagated about 14 m through two hitch connections whose angles are estimated using only a LIDAR sensor on the car-like tractor. It can be seen from Figure \[j1:fig:eight\_rev\_z3\] that the estimated lateral path-following error for the axle of the semitrailer $\tilde z_3$ is increasing at the end of the maneuver. The reason for this is that the nonlinear observer is not able to track the absolute position of the axle of the semitrailer with high precision in this part of the maneuver. In Figure \[j1:fig:eight\_est\_beta2\] and \[j1:fig:eight\_est\_beta3\], the estimated trajectories of the joint angles, $\hat\beta_2$ and $\hat\beta_3$, are compared with the angles derived from the outputs of the RANSAC algorithm. The maximum (avg.) errors in the residuals $\hat\beta_2-\beta_2$ and $\hat\beta_3-\beta_3$ are $0.83\degree$ (avg. $0.27\degree$) and $2.18\degree$ (avg. $0.8\degree$), respectively. To illustrate the repeatability of the system, the same figure-eight nominal path was executed multiple times. This experiment was performed on another occasion under rougher ground surface conditions than in the first experiment. The resulting estimated lateral control error $\tilde z_3$ and the Euclidean norm of the position error $||e(t)||_2$ over four consecutive laps are presented in Figure \[j1:fig:eight\_many\_laps\]. As can be seen, both errors are bounded and have a periodic behavior of approximately 250 seconds, *i.e.*, one lap time around the figure-eight nominal path. ![Illustration of the planned two-point turn maneuver. The goal position of the semitrailer is illustrated by the white cross inside the blue rectangle, where the white arrow specifies its goal orientation.
The white path is the planned path for the axle of the semitrailer $(x_{3,r}(\cdot),y_{3,r}(\cdot))$.[]{data-label="j1:fig:two_point_turn_gui"}](two_point_turn.pdf){width="0.75\linewidth"} ### Two-point turn In this section, the complete path planning and path-following control framework is evaluated in a real-world experiment. The G2T with a car-like tractor is operating on dry asphalt on a relatively narrow road at Scania’s test facility. The scenario setup is shown in Figure \[j1:fig:two\_point\_turn\_gui\] and the objective is to change the orientation of the semitrailer by $180\degree$ while at the same time moving the vehicle about 40 m longitudinally. Similarly to the parking planning problem in Figure \[j1:fig:parking\_example\], the precomputed HLUT may underestimate the cost-to-go due to the confined environment. Despite this, the lattice planner found an optimal solution ($\gamma=1$) in 636 milliseconds and the ARA$^*$ search expanded only 720 vertices. As a comparison, a planning time of only 29 milliseconds was needed for this example to find a motion plan with $\gamma=1.3$, *i.e.*, a solution that is guaranteed to be less than 30 % worse than the optimal one. In Figure \[j1:fig:two\_point\_turn\_gui\], the white path illustrates the planned path for the axle of the semitrailer $(x_{3,r}(\tilde s),y_{3,r}(\tilde s))$, $\tilde s\in[0,\tilde s_G]$. As can be seen, the solution is mainly composed of a $90\degree$ turn in backward motion followed by a $90\degree$ turn in forward motion.
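The suboptimality guarantee attached to $\gamma$ stems from ARA$^*$ inflating an admissible heuristic by a factor $\gamma\geq 1$. A minimal weighted-A$^*$ sketch on a toy occupancy grid (not the vehicle's state lattice) illustrates the bound:

```python
import heapq

def weighted_astar(grid, start, goal, eps):
    """Weighted A* on a 4-connected grid with unit step cost.

    Inflating an admissible heuristic (here Manhattan distance) by
    eps >= 1 yields a solution whose cost is at most eps times the
    optimal cost -- the bound that ARA* tightens by lowering eps.
    Returns the cost of the found path, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0}
    open_heap = [(eps * h(start), start)]
    closed = set()
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            return g[cur]
        if cur in closed:
            continue
        closed.add(cur)
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    heapq.heappush(open_heap, (ng + eps * h(nxt), nxt))
    return None
```

ARA$^*$ starts with a large inflation for a fast first solution and then lowers it while reusing previous search effort, until the bound is tight enough or the planning time budget is exhausted.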
The execution of the planned two-point turn maneuver is visualized in Figure \[j1:fig:two\_point\_turn\], where the estimated path taken by the axle of the semitrailer $(\hat x_3(\cdot),\hat y_3(\cdot))$ is plotted together with its ground truth path $(x_{3,GT}(\cdot),y_{3,GT}(\cdot))$ measured by the external RTK-GPS. More detailed plots are provided in Figure \[j1:fig:two\_point\_error\_states\]. In Figure \[j1:fig:two\_point\_turn\_poes\_error\], the Euclidean norm of the difference between the estimated position for the axle of the semitrailer $(\hat x_3(\cdot),\hat y_3(\cdot))$ and its ground truth $(x_{3,GT}(\cdot),y_{3,GT}(\cdot))$ is plotted, where the vehicle is changing from backward to forward motion at $t=60$ s. In this scenario, the maximum position estimation error was and the mean absolute error was . The path-following error states are plotted in Figure \[j1:fig:two\_point\_turn\_z3\]–\[j1:fig:two\_point\_turn\_b2\]. From these plots, it can be seen that the estimated lateral control error $\tilde z_3(t)$, which is plotted in Figure \[j1:fig:two\_point\_turn\_z3\], has a maximum absolute error of 0.37 m and a mean absolute error of 0.12 m. Apart from initial transients, the joint angle errors, $\tilde\beta_3$ and $\tilde\beta_2$, attain their peak values when the vehicle is changing from backward to forward motion at . There are multiple possible sources of this phenomenon. Apart from possible estimation errors in the joint angles, one possibility is that lateral dynamical effects arise when the vehicle is exiting the tight $90\degree$ turn in backward motion. However, the path-following controller is still able to compensate for these disturbances, as can be seen in Figure \[j1:fig:two\_point\_turn\_kappa\]. ### T-turn The final real-world experiment is an open area planning problem on the same gravel surface where the execution of the figure-eight nominal path was performed.
The open area planning problem is shown in Figure \[j1:fig:T-turn\], where the G2T with a car-like tractor is intended to change the orientation of the semitrailer by $180\degree$ together with a small lateral and longitudinal movement. In this scenario, the planning time for finding an optimal solution ($\gamma=1$) was only 38 milliseconds and the ARA$^*$ search expanded only 22 vertices. The reason why so few vertex expansions were needed is that the precomputed HLUT perfectly estimates the cost-to-go in free-space scenarios like this. Figure \[j1:fig:T-turn\] shows the optimal nominal path for the axle of the semitrailer $(x_{3,r}(\cdot),y_{3,r}(\cdot))$, which essentially is composed of two $90\degree$-turns in forward motion together with a parallel maneuver in backward motion. In this example, the impact of penalizing complex backward motions is clear: the advanced maneuvers are performed while driving forward if allowed by the surrounding environment. In the same plot, the estimated path taken by the axle of the semitrailer $(\hat x_{3}(\cdot),\hat y_{3}(\cdot))$ and its ground truth path $(x_{3,GT}(\cdot),y_{3,GT}(\cdot))$ obtained from the external RTK-GPS are presented. More detailed plots are provided in Figure \[j1:fig:T\_turn\_error\_states\]. In Figure \[j1:fig:T\_turn\_poes\_error\], the norm of the position estimation error for the axle of the semitrailer is plotted, where the vehicle is changing from forward to backward motion at $t=50$ s and from backward to forward motion at $t=100$ s. In this experiment, the maximum estimation error was $0.20$ m and the mean absolute error was $0.12$ m.
The path-following error states are presented in Figure \[j1:fig:T\_turn\_z3\]–\[j1:fig:T\_turn\_b2error\]. In Figure \[j1:fig:T\_turn\_z3\], the estimated lateral control error $\tilde z_3$ is plotted, where the maximum absolute error was $0.31$ m and the mean absolute error was $0.11$ m. In this experiment, both joint angle errors, $\tilde\beta_2$ and $\tilde \beta_3$, as well as the orientation error of the semitrailer $\tilde\theta_3$, lie within $\pm 5\degree$ for the majority of the path execution. The controlled curvature $\kappa$ of the car-like tractor is plotted in Figure \[j1:fig:T\_turn\_kapp\]. Similar to the two-point turn experiment, it can be seen that for large nominal curvature values $\kappa_r$, the feedback part of the path-following controller is compensating for lateral dynamical effects that are not captured by the kinematic vehicle model. Discussion ========== The proposed path planning and path-following control framework has been successfully deployed on a full-scale test platform. Since the full system is built upon several modules, an important key to fast deployment was to separately test and evaluate each module in simulations. By performing extensive simulations under realistic conditions, the functionality of each module as well as the communication between the modules could be verified before real-world experiments were performed. As illustrated in the real-world experiments, the performance of the system in terms of path-following capability is highly dependent on accurate estimates of the vehicle’s states. The tuning and calibration of the nonlinear observer were also the most time-consuming part of the process when the step from simulations to real-world experiments was taken.
The main difficulty was to verify that the nonlinear observer was capable of tracking the true trajectory of the position and orientation of the axle of the semitrailer, as well as the two joint angles, even though their true state trajectories were partially or completely unknown. To resolve this, data was collected from manual tests with the vehicle. This data was then used offline to tune the covariance matrices in the EKF and to calibrate the position and orientation of the rear-view LIDAR sensor. For the calibration of the LIDAR sensor, an accurately calibrated yaw angle was found to be very important. In this work, the yaw angle was calibrated using data that was collected from a manual test while driving straight in forward motion. The deployment of the hybrid path-following controller and the lattice planner was a much smoother process, where only minor tuning was needed compared to simulations. For the design of the path-following controller, the penalty for the lateral path-following error $\tilde z_3$ was found to be the most important tuning parameter, having the largest effect on the region of attraction for the closed-loop system. However, since the lattice planner plans from the vehicle’s current state, the initial error in $\tilde z_3$ will be small and a rather aggressive tuning of the path-following controller was possible. Conclusions and future work {#j1:sec:conclusions} =========================== A path planning and path-following control framework for a G2T with a car-like tractor is presented. The framework targets low-speed maneuvers in unstructured environments and has been successfully deployed on a full-scale test vehicle. A lattice-based path planner is used to generate kinematically feasible and optimal nominal paths in all vehicle states and controls, where the ARA$^*$ graph-search algorithm is used during online planning.
To follow the planned path, a hybrid path-following controller is developed to stabilize the path-following error states of the vehicle. A nonlinear observer is proposed that only utilizes information from sensors that are mounted on the car-like tractor, which makes the solution compatible with basically all of today’s commercially available semitrailers that have a rectangular body. The framework is first evaluated in simulation experiments and then in three different real-world experiments, and results in terms of closed-loop performance and real-time planning capabilities are presented. In the experiments, the system shows that it is able to consistently solve challenging planning problems and that it is able to execute the resulting motion plans, despite having no sensors on the dolly or the semitrailer, with good accuracy. A drawback of the lattice-based path planning framework is the need to manually select the connectivity in the state lattice. Even though this procedure is done offline, it is both nontrivial and time-consuming. Future work includes automating this procedure to make the algorithm more user-friendly and compatible with different vehicle parameters. Moreover, the discretization of the vehicle’s state-space restricts the set of possible initial states the lattice planner can plan from and the desired goal states that can be reached exactly. As mentioned in the text, this is a general problem with sampling-based motion planning algorithms, which could for example be alleviated by the use of numerical optimal control as a post-processing step [@lavalle2006planning; @CirilloIROS2014; @oliveira2018combining; @andreasson2015fastsmoothing]. Thus, future work includes exploiting the structure of the path planning problem and developing an efficient and numerically stable online smoothing framework with, *e.g.*, numerical optimal control as a backbone.
During some parts of the figure-eight path-following experiments, the estimation error for the position of the axle of the semitrailer had a magnitude that could potentially cause problems in narrow environments. Hence, reasonable future work also includes exploring alternative onboard sensors as well as using external sensors that can be placed at strategic locations where high-accuracy path tracking is critical, $e.g.$, when backing up to a loading bay. Appendix A {#appendix-a .unnumbered} ========== Proof of Lemma \[j1:L1\] {#proof-of-lemmaj1l1 .unnumbered} ------------------------ Given a piecewise continuous $u_p(s)=\begin{bmatrix} v(s) & u_{\omega}(s) \end{bmatrix}^T\in\mathbb U_p$, $s\in[0,s_G]$, define $$\tilde f_z (s,z)\triangleq f_z(z,u_p(s))= \begin{bmatrix} v(s)f(x,\tan\alpha/L_1) \\ \omega \\ u_\omega(s) \end{bmatrix}.$$ Direct calculations verify that $f(x,\tan\alpha/L_1)$ in  is continuous and continuously differentiable with respect to $z$ for all $z\in\mathbb Z_o\triangleq \{z\in\mathbb R^7 \mid |\alpha| < \pi/2\}$. This is true since $f(x,\tan\alpha/L_1)$ is composed of sums and products of trigonometric functions which are continuous and continuously differentiable with respect to $z$ for all $z\in \mathbb Z_o$. Furthermore, $\tilde f_z (s,z)$ is piecewise continuous in $s$ since $f_z(z,u_p(s))$ is continuous in $u_p$ for all $z\in \mathbb Z_o$. Therefore, on any interval $[a,b]\subset [0,s_G]$ where $u_p(\cdot)$ is continuous, $\tilde f_z (s,z)$ and $[\partial\tilde f_z (s,z)/\partial z]$ are continuous on $[a,b]\times \mathbb Z_o$. Then, from Lemma 3.2 in [@khalil], the vector field $\tilde f_z (s,z)$ is piecewise continuous in $s$ and locally Lipschitz in $z$, for all $s\in[0,s_G]$ and all $z\in \mathbb Z_o$. Define $\mathbb Z_c = \{z\in\mathbb R^7\mid|\alpha| \leq \alpha_{\text{max}}\}$ which is a compact subset of $\mathbb Z_o$.
Then, from Theorem 3.3 in [@khalil], every solution $z(s)$, $s\in[0,s_G]$ that lies entirely in $\mathbb Z_c$ is unique for all $s\in[0,s_G]$. Now, let $z(s),$ $s\in[0,s_G]$, be the unique solution to  assumed to lie entirely in $\mathbb Z_c$, when the control signal $u_p(s)\in\mathbb U_p$, $s\in[0,s_G]$ is applied from the initial state $z(0)$ and ends at the final state $z(s_G)$. Introduce $(\bar z(\bar s),\bar u_p(\bar s)),$ $\bar s\in[0,s_G]$ with \[j1:eq:sym:proof\] $$\begin{aligned} \bar{z}(\bar s) &= \begin{bmatrix} x(s_G-\bar s)^T & \alpha(s_G-\bar s) & -\omega(s_G-\bar s) \end{bmatrix}^T, \quad \bar s\in[0, s_G],\\ \bar u_p(\bar s) &= \begin{bmatrix}-v(s_G-\bar s) & u_\omega(s_G-\bar s)\end{bmatrix}^T,\hspace{62pt} \bar s\in[0, s_G]. \end{aligned}$$ Since $$\begin{aligned} \frac{\text d}{\text{d}\bar s}\bar z(\bar s) &= \frac{\text d}{\text{d}\bar s} \begin{bmatrix} x(s_G-\bar s) \\ \alpha(s_G-\bar s) \\ -\omega(s_G-\bar s) \end{bmatrix} = \{s = s_G- \bar s\} = \frac{\text d}{\text{d}s} \left.\begin{bmatrix} x(s) \\ \alpha(s) \\ -\omega(s) \end{bmatrix}\right|_{s=s_G-\bar s}\underbrace{\frac{\text{d}s}{\text{d}\bar s}}_{=-1}= \nonumber \\ &=\left.\begin{bmatrix} -v(s)f(x(s),\tan\alpha(s)/L_1) \\ -\omega(s) \\ u_\omega(s) \end{bmatrix}\right|_{s=s_G-\bar s} = \begin{bmatrix} \bar v(\bar s)f(\bar x(\bar s),\tan\bar \alpha(\bar s)/L_1) \\ \bar \omega(\bar s) \\ \bar u_\omega(\bar s) \end{bmatrix} = \\ & = f_z(\bar z(\bar s),\bar u_p(\bar s)), \hspace{4pt}\bar s\in[0,s_G],\end{aligned}$$ the trajectory $\bar z(\bar s)$ also satisfies the system dynamics  from the initial state $\bar z(0)=\begin{bmatrix} x(s_G)^T & \alpha(s_G) & -\omega(s_G)\end{bmatrix}^T$. Finally, since the solution $\bar z(\bar s)$, $\bar s\in[0,s_G]$ also lies entirely in $\mathbb Z_c$, this solution is also unique.
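The lemma can also be illustrated numerically. The sketch below uses a simplified single-body car model with steering-rate input (state $(x,y,\theta,\alpha,\omega)$ and an illustrative wheelbase $L_1$, standing in for the full G2T dynamics): integrating forward with $v=1$ and then re-integrating from the final state with $v=-1$, the steering-rate profile played backwards and $\omega$ negated, recovers the initial state up to integration error.

```python
import math

L1 = 4.6  # tractor wheelbase (illustrative value, not the test vehicle's)

def f(z, u):
    # z = (x, y, theta, alpha, omega), u = (v, u_omega)
    x, y, th, al, om = z
    v, uw = u
    return (v * math.cos(th), v * math.sin(th), v * math.tan(al) / L1, om, uw)

def rk4_step(z, u, h):
    k1 = f(z, u)
    k2 = f(tuple(zi + 0.5 * h * ki for zi, ki in zip(z, k1)), u)
    k3 = f(tuple(zi + 0.5 * h * ki for zi, ki in zip(z, k2)), u)
    k4 = f(tuple(zi + h * ki for zi, ki in zip(z, k3)), u)
    return tuple(zi + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for zi, a, b, c, d in zip(z, k1, k2, k3, k4))

S, N = 5.0, 2000
h = S / N
u_omega = lambda s: 0.2 * math.cos(s)  # smooth steering-rate profile

# forward solution z(s) on [0, S] with v = +1 (midpoint control sampling)
z = z0 = (0.0, 0.0, 0.0, 0.1, 0.05)
for i in range(N):
    z = rk4_step(z, (1.0, u_omega((i + 0.5) * h)), h)

# reversed solution: start at z(S) with omega negated, drive with v = -1
# and the steering-rate profile played backwards (the lemma's transformation)
zb = z[:4] + (-z[4],)
for i in range(N):
    zb = rk4_step(zb, (-1.0, u_omega(S - (i + 0.5) * h)), h)

# the reversed run should end at the initial state with omega negated
target = z0[:4] + (-z0[4],)
err = max(abs(a - b) for a, b in zip(zb, target))
```

With a fine step size the mismatch `err` is at the level of the integrator's truncation error, consistent with the exact time-reversal symmetry of the continuous-time model.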
Proof of Theorem \[j1:T-optimal-symmetry\] {#proof-of-theoremj1t-optimal-symmetry .unnumbered} ------------------------------------------ Let $(z(s),u_p(s)),$ $s\in[0,s_G]$ denote a feasible solution to the optimal path planning problem  with objective functional value $J$. Now, consider the reverse optimal path planning problem \[j1:eq:revMotionPlanningOCP\_appendix\] $$\begin{aligned} \operatorname*{minimize}_{\bar u_{p}(\cdot), \hspace{0.5ex}\bar s_{G} }\hspace{3.7ex} & \bar J = \int_{0}^{\bar s_{G}}L(\bar x(\bar s),\bar \alpha(\bar s), \bar \omega(\bar s), \bar u_\omega(\bar s))\,\text{d}\bar s \label{j1:eq:revMotionPlanningOCP_obj_appendix}\\ \operatorname*{subject\:to}\hspace{3ex} & \frac{\text d\bar z}{\text d\bar s} = f_z(\bar z(\bar s),\bar u_p(\bar s)), \label{j1:eq:revMotionPlanningOCP_syseq_appendix} \\ & \bar z(0) = z_G, \quad \bar z(\bar s_{G}) = z_I, \label{j1:eq:revMotionPlanningOCP_initfinal_appendix} \\ & \bar z(\bar s) \in \mathbb Z_{\text{free}}, \quad \bar u_{p}(\bar s) \in {\mathbb U}_p. 
\label{j1:eq:revMotionPlanningOCP_constraints_appendix} \end{aligned}$$ Then, using the invertible transformations –: \[j1:eq:invertible\_transformation\] $$\begin{aligned} \bar{z}(\bar s) &= \begin{bmatrix} x(s_G-\bar s)^T & \alpha(s_G-\bar s) & -\omega(s_G-\bar s) \end{bmatrix}^T, \quad \bar s\in[0, s_G],\\ \bar u_p(\bar s) &= \begin{bmatrix}-v(s_G-\bar s) & u_\omega(s_G-\bar s)\end{bmatrix}^T,\hspace{62pt} \bar s\in[0, s_G] \end{aligned}$$ and $\bar s_G=s_G$, the reverse optimal path planning problem  becomes \[j1:eq:revMotionPlanningOCP\_1\] $$\begin{aligned} \operatorname*{minimize}_{u_{p}(\cdot), \hspace{0.5ex}s_{G} }\hspace{3.7ex} & \bar J = \int_{0}^{s_{G}}L(x(s_G-\bar s),\alpha(s_G-\bar s), -\omega(s_G-\bar s) , u_\omega(s_G-\bar s))\,\text{d}\bar s \label{j1:eq:revMotionPlanningOCP_obj_1}\\ \operatorname*{subject\:to}\hspace{3ex} & \frac{\text d}{\text{d}\bar s} \begin{bmatrix} x(s_G-\bar s) \\ \alpha(s_G-\bar s) \\ -\omega(s_G-\bar s) \end{bmatrix} = \begin{bmatrix} -v(s_G-\bar s)f(x(s_G-\bar s),\tan\alpha(s_G-\bar s)/L_1) \\ -\omega(s_G-\bar s) \\ u_\omega(s_G-\bar s) \end{bmatrix}, \label{j1:eq:revMotionPlanningOCP_syseq_1} \\ & \begin{bmatrix} x(s_G)^T & \alpha(s_G) & -\omega(s_G) \end{bmatrix}^T = z_G,\\ &\begin{bmatrix} x(0)^T & \alpha(0) & -\omega(0) \end{bmatrix}^T = z_I, \label{j1:eq:revMotionPlanningOCP_initfinal_1} \\ & \begin{bmatrix} x(s_G- \bar s)^T & \alpha(s_G- \bar s) & -\omega(s_G-\bar s) \end{bmatrix}^T \in \mathbb Z_{\text{free}}, \\ &\begin{bmatrix}-v(s_G-\bar s) & u_\omega(s_G-\bar s)\end{bmatrix}^T \in {\mathbb U}_p. \label{j1:eq:revMotionPlanningOCP_constraints_1} \end{aligned}$$ Let $s=s_G-\bar s$, $s\in[0,s_G]$. It then follows from Lemma \[j1:L1\] that  simplifies to $\frac{\mathrm{d}z}{\mathrm{d}s}=f_z(z(s),u_p(s))$. 
From Assumption \[j1:A-optimal-symmetry\] it follows that $$\begin{aligned} \bar J &=\int_{0}^{s_{G}}L(x(s_G-\bar s),\alpha(s_G-\bar s), -\omega(s_G-\bar s), u_\omega(s_G-\bar s))\,\text{d}\bar s=\{s = s_G - \bar s\} \nonumber \\ &= -\int_{s_G}^{0}L(x(s),\alpha(s), -\omega(s), u_\omega(s))\,\text{d}s \nonumber\\ &= \int_{0}^{s_G}L(x(s),\alpha(s), -\omega(s), u_\omega(s))\,\text{d}s = \{L(x,\alpha, -\omega, u_\omega)=L(x,\alpha, \omega, u_\omega)\} \nonumber \\ &= \int_{0}^{s_G}L(x(s),\alpha(s), \omega(s), u_\omega(s))\,\text{d}s = J. \label{j1:proof:obj_1}\end{aligned}$$ Hence, the problem in  can equivalently be written as \[j1:eq:revMotionPlanningOCP\_2\] $$\begin{aligned} \operatorname*{minimize}_{u_{p}(\cdot), \hspace{0.5ex}s_{G} }\hspace{3.7ex} & J = \int_{0}^{s_{G}}L(x(s),\alpha(s), \omega(s) , u_\omega(s))\,\text{d} s \label{j1:eq:revMotionPlanningOCP_obj_2}\\ \operatorname*{subject\:to}\hspace{3ex} & \frac{\text d z}{\text d s} = f_z(z(s),u_p(s)), \label{j1:eq:revMotionPlanningOCP_syseq_2} \\ & \begin{bmatrix} x(s_G)^T & \alpha(s_G) & -\omega(s_G) \end{bmatrix}^T = z_G,\\ &\begin{bmatrix} x(0)^T & \alpha(0) & -\omega(0) \end{bmatrix}^T = z_I, \label{j1:eq:revMotionPlanningOCP_initfinal_2} \\ & \begin{bmatrix} x(s)^T & \alpha(s) & -\omega(s) \end{bmatrix}^T \in \mathbb Z_{\text{free}}, \label{j1:eq:revMotionPlanningOCP_constraints_state_2} \\ &\begin{bmatrix}-v(s) & u_\omega(s)\end{bmatrix}^T \in {\mathbb U}_p. \label{j1:eq:revMotionPlanningOCP_constraints_2} \end{aligned}$$ From the symmetry of the set $\mathbb U_p=\{-1,1\}\times[-u_{\omega,\text{max}}, u_{\omega,\text{max}}]$,  is equivalent to $u_p(s)\in {\mathbb U}_p$. From Assumption \[j1:A-optimal-symmetry2\], is equivalent to $z(s)=\begin{bmatrix} x(s)^T & \alpha(s) & \omega(s)\end{bmatrix}^T\in\mathbb Z_{\text{free}}$. 
Moreover, since $z_I = \begin{bmatrix} x_I^T & \alpha_I & 0 \end{bmatrix}^T$ and $z_G= \begin{bmatrix} x_G^T & \alpha_G & 0 \end{bmatrix}^T$, the problem in  can equivalently be written as \[j1:eq:revMotionPlanningOCP\_3\] $$\begin{aligned} \operatorname*{minimize}_{u_{p}(\cdot), \hspace{0.5ex}s_{G} }\hspace{3.7ex} & J = \int_{0}^{s_{G}}L(x(s),\alpha(s), \omega(s) , u_\omega(s))\,\text{d} s \label{j1:eq:revMotionPlanningOCP_obj_3}\\ \operatorname*{subject\:to}\hspace{3ex} & \frac{\text dz}{\text ds} = f_z(z(s),u_p(s)), \label{j1:eq:revMotionPlanningOCP_syseq_3} \\ & z(0) = z_I, \quad z(s_G) = z_G, \label{j1:eq:revMotionPlanningOCP_initfinal_3}\\ & z(s) \in \mathbb Z_{\text{free}}, \\ &u_p(s) \in {\mathbb U}_p, \label{j1:eq:revMotionPlanningOCP_constraints_3} \end{aligned}$$ which is identical to the optimal path planning problem in . Hence, the OCPs in  and  are equivalent [@boyd2004convex] and the invertible transformation relating the solutions to the two equivalent problems is given by . Hence, if an optimal solution to one of the problems is known, an optimal solution to the other one can immediately be derived using . More practically, given an optimal solution in one direction, an optimal solution in the other direction can be trivially found. Derivation of the path-following error model {#derivation-of-the-path-following-error-model .unnumbered} -------------------------------------------- In this section, the details regarding the derivation of $\tilde \theta_3$, $\tilde\beta_3$ and $\tilde\beta_2$ in the path-following error model in  are given.
First, the nominal path in $\eqref{j1:eq:tray:semitrailer}$ gives rise to the equations: $$\begin{aligned} \frac{\text d\theta_{3,r} }{\text d\tilde s} &= v_r \kappa_{3,r}, \quad \tilde s \in[0,\tilde s_G], \label{j1:eq:s_a1}\\ \frac{\text d\beta_{3,r}}{\text d\tilde s} &= v_r\left(\frac{\sin\beta_{2,r} - M_1\cos\beta_{2,r}\kappa_r}{L_2 \cos \beta_{3,r} C_1(\beta_{2,r},\kappa_r)} - \kappa_{3,r}\right), \quad \tilde s \in[0,\tilde s_G], \label{j1:eq:s_a2}\\ \frac{\text d\beta_{2,r}}{\text d\tilde s} &= v_r \left(\frac{\kappa_r - \frac{\sin \beta_{2,r}}{L_2} + \frac{M_1}{L_2}\cos\beta_{2,r}\kappa_r}{\cos \beta_{3,r} C_1(\beta_{2,r},\kappa_r)}\right), \quad \tilde s \in[0,\tilde s_G]. \label{j1:eq:s_a3} \end{aligned}$$ \[j1:eq:model:S\_a\] Moreover, the models of $\theta_3$, $\beta_3$ and $\beta_2$ in  can equivalently be represented as \[j1:eq:model:S\_b\] $$\begin{aligned} \dot{\theta}_3 &= v_3 \frac{\tan \beta_3 }{L_3}, \label{j1:eq:s_a4}\\ \dot{\beta}_3 &= v_3\left(\frac{\sin\beta_2 - M_1\cos\beta_2\kappa}{L_2 \cos \beta_3 C_1(\beta_2,\kappa)} - \frac{\tan\beta_3}{L_3}\right), \label{j1:eq:s_a5}\\ \dot{\beta}_2 &= v_3 \left( \frac{\kappa - \frac{\sin \beta_2}{L_2} + \frac{M_1}{L_2}\cos \beta_2\kappa}{\cos \beta_3 C_1(\beta_2,\kappa)}\right), \label{j1:eq:s_a6} \end{aligned}$$ where $v$ has been replaced with $v_3$ using . Now, since $\tilde\theta_3(t)=\theta_3(t)-\theta_{3,r}(\tilde s(t))$, the chain rule together with the equation for $\dot{\tilde s}$ in  yields $$\begin{aligned} \dot{\tilde\theta}_3(t) &= \dot\theta_3 - \dot {\tilde s} \frac{\text d}{\text d\tilde s}\theta_{3,r}(\tilde s) \nonumber \\ &= v_3 \left( \frac{\tan(\tilde{\beta}_3+\beta_{3,r})}{L_3} - \frac{\kappa_{3,r}\cos \tilde \theta_3}{1-\kappa_{3,r}\tilde z_3} \right) = v_3f_{\tilde\theta_3}(\tilde s,\tilde x_e,\tilde \kappa), \quad t\in\Pi(0,\tilde s_G).
\label{j1:eq:model:S_c}\end{aligned}$$ In analogy, taking the time-derivative of $\tilde\beta_3(t)=\beta_3(t)-\beta_{3,r}(\tilde s(t))$ and applying the chain rule yields $$\begin{aligned} \dot{\tilde \beta}_3 &= \dot \beta_3 - \dot{\tilde s}\frac{\text d}{\text d\tilde s}\beta_{3,r}(\tilde s) \nonumber\\ &=v_3\left(\frac{\sin(\tilde \beta_2+\beta_{2,r})-M_1\cos(\tilde \beta_2+\beta_{2,r}) (\tilde \kappa+ \kappa_r)}{L_2\cos(\tilde \beta_3+\beta_{3,r}) C_1(\tilde \beta_2+\beta_{2,r}, \tilde \kappa+ \kappa_r)} - \frac{\tan(\tilde \beta_3+\beta_{3,r})}{L_3} \nonumber \right. \\ &\left. -\frac{\cos{\tilde{\theta}_3}}{1-\kappa_{3,r}\tilde z_3}\left(\frac{\sin\beta_{2,r} -M_1 \cos\beta_{2,r}\kappa_r}{L_2\cos\beta_{3,r} C_1(\beta_{2,r},\kappa_r)}-\kappa_{3,r}\right)\right)=v_3f_{\tilde\beta_3}(\tilde s,\tilde x_e,\tilde \kappa), \quad t\in\Pi(0,\tilde s_G). \label{j1:eq:model:S_d}\end{aligned}$$ Finally, taking the time-derivative of $\tilde\beta_2(t)=\beta_2(t)-\beta_{2,r}(\tilde s(t))$ and applying the chain rule yields $$\begin{aligned} \dot{\tilde \beta}_2 &=\dot \beta_2 - \dot {\tilde s}\frac{\text d}{\text d\tilde s}\beta_{2,r}(\tilde s) \nonumber \\=&v_3\left( \left( \frac{\tilde \kappa+ \kappa_r - \frac{\sin(\tilde \beta_2+\beta_{2,r})}{L_2} + \frac{M_1}{L_2}\cos(\tilde \beta_2+\beta_{2,r})(\tilde \kappa+ \kappa_r)}{\cos(\tilde \beta_3+\beta_{3,r}) C_1(\tilde \beta_2+\beta_{2,r}, \tilde \kappa+ \kappa_r)}\right) \nonumber \right. \\ &\left. -\frac{\cos{\tilde{\theta}_3}}{1-\kappa_{3,r}\tilde z_3}\left( \frac{\kappa_r - \frac{\sin \beta_{2,r}}{L_2} + \frac{M_1}{L_2}\cos \beta_{2,r}\kappa_r}{\cos \beta_{3,r} C_1(\beta_{2,r}, \kappa_r)}\right)\right)=v_3f_{\tilde\beta_2}(\tilde s,\tilde x_e,\tilde \kappa), \quad t\in\Pi(0,\tilde s_G), \label{j1:eq:model:S_e}\end{aligned}$$ which finalizes the derivation.
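As a numerical sanity check of the derived error dynamics, the sketch below evaluates the right-hand sides $f_{\tilde\theta_3}$, $f_{\tilde\beta_3}$ and $f_{\tilde\beta_2}$ at the origin $(\tilde x_e,\tilde\kappa)=(0,0)$, where they should vanish. The geometry parameters and the factor $C_1$ are placeholders ($C_1$ is model-dependent and cancels at the origin), and the nominal curvature is taken as $\kappa_{3,r}=\tan\beta_{3,r}/L_3$, consistent with the nominal and actual $\theta_3$ dynamics above:

```python
import math

# illustrative geometry (placeholders, not the test vehicle's parameters)
L2, L3, M1 = 3.0, 8.0, 1.5

def C1(beta2, kappa):
    # C1 is a model-dependent factor not reproduced in this appendix;
    # any smooth nonzero stand-in works here because it cancels at the origin.
    return 1.0 + 0.3 * math.cos(beta2) * kappa

def error_dynamics(z3t, th3t, b3t, b2t, kt, b2r, b3r, kr):
    """Right-hand sides f_theta3, f_beta3, f_beta2 of the error model;
    the first five arguments are the tilde quantities."""
    k3r = math.tan(b3r) / L3          # nominal consistency: kappa_{3,r}
    proj = math.cos(th3t) / (1.0 - k3r * z3t)
    f_th3 = math.tan(b3t + b3r) / L3 - k3r * proj
    g = lambda b2, b3, k: (math.sin(b2) - M1 * math.cos(b2) * k) / (
        L2 * math.cos(b3) * C1(b2, k))
    f_b3 = (g(b2t + b2r, b3t + b3r, kt + kr) - math.tan(b3t + b3r) / L3
            - proj * (g(b2r, b3r, kr) - k3r))
    w = lambda b2, b3, k: (k - math.sin(b2) / L2
                           + (M1 / L2) * math.cos(b2) * k) / (
        math.cos(b3) * C1(b2, k))
    f_b2 = w(b2t + b2r, b3t + b3r, kt + kr) - proj * w(b2r, b3r, kr)
    return f_th3, f_b3, f_b2
```

At the origin the projection factor equals one, so the nominal terms cancel exactly, independently of the particular form of $C_1$.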
Moreover, inserting $(\tilde x_e,\tilde\kappa) = (0,0)$ in – yields $\dot{\tilde\theta}_3=\dot{\tilde \beta}_3=\dot{\tilde \beta}_2=0$, *i.e.*, the origin is an equilibrium point. Finally, from , we have that $v_3 = vg_v(\beta_2,\beta_3,\kappa)$ and the models in – can in a compact form also be represented as \[j1:eq:model:S\_f\] $$\begin{aligned} \dot{\tilde\theta}_3 &= vg_v(\tilde\beta_2+\beta_{2,r},\tilde\beta_3+\beta_{3,r},\tilde\kappa+\kappa_r)f_{\tilde\theta_3}(\tilde s,\tilde x_e,\tilde \kappa), \quad t\in\Pi(0,\tilde s_G), \\ \dot{\tilde \beta}_3 &= vg_v(\tilde\beta_2+\beta_{2,r},\tilde\beta_3+\beta_{3,r},\tilde\kappa+\kappa_r)f_{\tilde\beta_3}(\tilde s,\tilde x_e,\tilde \kappa), \quad t\in\Pi(0,\tilde s_G), \\ \dot{\tilde \beta}_2 &= vg_v(\tilde\beta_2+\beta_{2,r},\tilde\beta_3+\beta_{3,r},\tilde\kappa+\kappa_r)f_{\tilde\beta_2}(\tilde s,\tilde x_e,\tilde \kappa), \quad t\in\Pi(0,\tilde s_G), \end{aligned}$$ where the origin is still an equilibrium point since $f_{\tilde\theta_3}(\tilde s,0,0)=f_{\tilde\beta_3}(\tilde s,0,0)=f_{\tilde\beta_2}(\tilde s,0,0)=0$, $\forall \tilde s\in[0,\tilde s_G]$. ### Acknowledgments {#acknowledgments .unnumbered} The research leading to these results has been funded by Strategic vehicle research and innovation (FFI). We gratefully acknowledge the Royal Institute of Technology for providing us with the external RTK-GPS. The authors would also like to express their gratitude to Scania CV for providing necessary hardware, as well as software and technical support. [^1]: All angles are defined positive counter clockwise. [^2]: $\Theta$ is the set of unique angles $-\pi<\theta_{3}\leq \pi$ that can be generated by $\theta_{3} = \arctan2(i,j)$ for two integers $i,j\in\{-2,-1,0,1,2\}$. [^3]: Essentially, it is only necessary to solve the OCPs from the initial orientations $\theta_{3,s}=0,\text{ }\arctan(1/2) \text{ and } \pi/4$.
The motion primitives from the remaining initial orientations $\theta_{3,s}\in\Theta$ can be generated by mirroring the solutions. [^4]: Other features could be extracted from the point cloud, but using $L_y$ and $\phi$ has been shown to yield good performance in practice. [^5]: The RTK-GPS is a Trimble SPS356 with a horizontal accuracy of about 0.1 m.
--- abstract: 'There are different requirements on the cybersecurity of industrial control systems and information technology systems. This fact exacerbates the global issue of hiring cybersecurity employees with relevant skills. In this paper, we present the KYPO4INDUSTRY training facility and a course syllabus for beginner and intermediate computer science students to learn cybersecurity in a simulated industrial environment. The training facility is built using open-source hardware and software and provides reconfigurable modules of industrial control systems. The course uses a flipped classroom format with hands-on projects: the students create educational games that replicate real cyber attacks. Throughout the semester, they learn to understand the risks and gain capabilities to respond to cyber attacks that target industrial control systems. Our experience from the design of the testbed and its usage can help any educator interested in teaching the cybersecurity of cyber-physical systems.'
author: - Pavel Čeleda - Jan Vykopal - Valdemar Švábenský - Karel Slavíček bibliography: - 'references.bib' title: | KYPO4INDUSTRY: A Testbed for Teaching Cybersecurity\ of Industrial Control Systems --- Introduction ============ Industrial control systems (ICS) provide vital services, such as electricity, water treatment, and transportation. Although these systems were formerly isolated, they became connected with information technology (IT) systems and even to the Internet. Figure \[fig:isa-95-architecture\] shows the ISA-95 enterprise reference architecture that describes the connection between the functions of ICS and IT systems [@scholten2007road]. This connection of processes in the cyberspace and the physical world has reduced costs and enabled new services. However, the ICS assets became vulnerable to new threats and an ever-evolving cyber threat landscape [@zhu2011taxonomy].
![The ISA-95 architecture: A hierarchical model of enterprise-control system integration [@scholten2007road][]{data-label="fig:isa-95-architecture"}](img/isa-95){width="0.85\linewidth"} ICSs are made to maintain the integrity and availability of production processes and to sustain the conditions of industrial environments. Their hardware and software components are often custom-built and tightly integrated. In contrast, IT systems use off-the-shelf hardware and software and have different operational characteristics and security objectives [@neitzel2014top]. Traditional cybersecurity courses fall short in training ICS security [@butts2015industrial], since they focus on exploiting and defending IT assets. To teach ICS security, a training facility (testbed) is needed to model a real-world ICS system [@holm2015ICStestbedsurvey] and to provide hands-on experience. However, building and operating a realistic cyber-physical testbed using standard industrial equipment is expensive. It incorporates equipment such as programmable logic controllers (PLC), input/output modules, sensors, actuators, and other devices. This paper addresses how to teach ICS cybersecurity to computer science students. Currently, most students have an intermediate knowledge of IT cybersecurity but are unfamiliar with ICS principles. Our work brings two main contributions. First, we share our experience with the design and acquisition of the KYPO4INDUSTRY testbed. Second, we describe a course syllabus for delivering cybersecurity training in a simulated industrial environment. The course uses a flipped classroom format [@bishop2013flipped] with hands-on projects replicating real cyber attacks. The students learn to understand the risks and gain capabilities to respond to cyber attacks that target ICS. This paper is organized into five sections. Section \[sec:related-work\] provides an overview of hands-on activities for teaching cybersecurity in IT and ICS.
Section \[sec:testbed\] describes the ICS training facility, lists the main components, and provides implementation details. Section \[sec:ics-course\] provides a detailed description of the design, content, and assessment methods of the ICS cybersecurity course. Finally, Section \[sec:conclusions\] concludes the paper and outlines future work. Related Work {#sec:related-work} ============ Cybersecurity knowledge and skills are usually taught through classroom lectures complemented with labs, exercises, and home assignments. Such a combination of theory and practice is essential in training cybersecurity experts, since the number of cyber attacks and the ingenuity of attackers are ever-growing. This section presents the current best practice for teaching cybersecurity in IT and ICS. Teaching Cybersecurity in IT {#sec:learning_by_doing} ---------------------------- The three most popular types of IT cybersecurity training are hands-on assignments, capture the flag (CTF) games, and cyber defense exercises (CDX). Hands-on assignments include working with cybersecurity tools, usually in a virtual environment. An example collection of such assignments is SecKnitKit [@siraj2015], a set of virtual machines (VMs) and corresponding learning materials. Using ready-made VMs offers a realistic and isolated environment with minimal setup, which is well-suited for cybersecurity training. Alternatively, online learning platforms, such as Root Me [@rootme], provide a set of cybersecurity challenges that the learners solve locally or online. CTF is a format of cybersecurity games and competitions in which the learners solve various cybersecurity tasks. Completing each task yields a textual string called a flag, which is worth a certain amount of points. There are two main variations of the CTF format: Jeopardy and Attack-Defense. In Jeopardy CTF, such as PicoCTF [@chapman2014], learners choose the tasks to solve from a static collection of challenges presented in a web interface.
The challenges are divided into categories such as cryptography, reverse engineering, or forensics. Learners solve the tasks locally at their machines or interact with a remote server. Jeopardy CTFs can thus accommodate hundreds of players at the same time. In Attack-Defense CTF, such as iCTF [@vigna2014], teams of learners each maintain an identical instance of a vulnerable computer network. Each team must protect its network while exploiting vulnerabilities in the networks of other teams. Successful attacks yield flags, which, along with maintaining the availability of the network services, contribute to the teams’ score. While anyone can participate in hands-on training or CTF games, CDX is a complex cybersecurity exercise for professionals, often from military or government agencies or dedicated cybersecurity teams [@eagle2013; @kypo-cdx]. Learners are divided into blue teams responsible for maintaining and defending a complex network infrastructure against attacks of an external red team. The blue teams must preserve the availability of the network services for end-users and respond to prompts from law enforcement groups and journalists. Beyond IT systems, some exercises feature simulated critical infrastructure (e.g., electricity grid or transportation). Teaching Cybersecurity in ICS ----------------------------- Teaching ICS relies on components that are likely to be encountered in operational environments. Testbeds are built to replicate the behavior of ICS and incorporate a control center, communication architecture, field devices, and physical processes [@stouffer2015nist]. Holm et al. surveyed the current ICS testbeds and reported on their objectives and implementation [@holm2015ICStestbedsurvey]. Most testbeds focus on cybersecurity – vulnerability analysis, tests of defense mechanisms, and education. Testbed fidelity is essential for training activities and the level of provided courses. 
High-fidelity testbeds are rare, and most testbeds use simulations, scaled-down models, and individual components [@butts2015industrial]. ICS courses cover beginner and intermediate levels of training. Virtualized, purely software-based testbeds are built upon virtual PLCs and devices modeled in software [@alves2018virtscada]. They can be highly flexible and imitate any real environment with an arbitrary number of various devices. Their main drawback is the lack of the look and feel of the operational environment. Users who are accustomed to using the real equipment might perceive purely software-based testbeds as a computer game and not as training for real situations. An example of such a testbed is a system for assessment of cyber threats against networked critical infrastructures [@Siaterlis:2014:CT:2602695.2602575]. Hardware-based testbeds are used, for example, in training operating personnel of chemical and nuclear plants. Apart from these, there are other specialized ones, such as PowerCyber [@Hahn2010PowerCyber], which is designed to closely resemble power grid communication utilizing actual field devices and Supervisory Control and Data Acquisition (SCADA) software. This testbed allows exploring cyber attacks and defenses while evaluating their impact on power flow. Ahmed et al. [@Ahmed:2016:SST:3018981.3018984] presented a SCADA testbed that demonstrates three industrial processes (a gas pipeline, a power transmission and distribution system, and a wastewater treatment plant) on a small scale. To do so, it employs real-world industrial equipment, such as PLCs, sensors, or aerators. These are deployed at each physical process system for local control and monitoring, and the PLCs are connected to a computer running human-machine interface (HMI) software for monitoring the status of the physical processes. The testbed is used in a university course on ICS security.
Students can observe the industrial processes, learn ladder logic programming in various programming environments, and observe network traffic of multiple communication protocols. In 2016, Antonioli et al. [@Antonioli:2017:GIS:3140241.3140253] prepared the SWaT Security Showdown, the first CTF event targeted at ICS security. The game employed Secure Water Treatment (SWaT), a software-based testbed available at the Singapore University of Technology and Design [@7469060]. Twelve selected international teams from academia and industry were invited. The game was divided into two phases: an online Jeopardy and an on-site Attack-Defense CTF. The first part served as a training session and included novel categories related to the ICS realm. The on-site CTF lasted two days. The teams visited the testbed on the first day. The next day, they had three hours to attack the SWaT testbed. The authors devised a dedicated scoring system for the assessment of attacks launched by the teams. The scoring evaluated the impact of the attacks on the physical and monitoring processes of the testbed, and the ability to conduct attacks that are not discovered by the ICS detection systems deployed in the testbed. Chothia and de Ruiter [@198081] developed a course at the University of Birmingham on penetration testing techniques for off-the-shelf consumer Internet of Things (IoT) devices. Students were tasked to analyze device functionality, write up a report, and give a presentation of their findings. KYPO4INDUSTRY: ICS Training Facility {#sec:testbed} ==================================== In this section, we describe the hardware and software components of the ICS testbed. The ICS training takes place in a specialized physical facility, which has been frequently used for university courses [@kypolab-course], international CDXs [@kypo-cdx], and extracurricular events. The room contains six large tables, each with three seats, three desktop PCs, and ICS hardware devices.
As Figure \[fig:training-facility\] shows, the devices within the testbed infrastructure are interconnected and can thus communicate with each other. The tables are portable to allow the instructor to rearrange the room for various activities, including team assignments, student presentations, and group discussions. ![Training facility setup[]{data-label="fig:training-facility"}](img/k4i-training-facility){width="0.8\linewidth"} Hardware Components ------------------- Based on the discussions with our partners and our experience, we defined these requirements on the hardware components of the KYPO4INDUSTRY testbed: - *Open hardware* – full access to hardware and software to avoid vendor lock-in and other proprietary limitations, unlimited software manipulation, and community support. - *Performance* – the PLC processor and memory (RAM, flash) must be sufficient to host an operating system with virtualization support (containers) and TCP/IP networking. - *Communication interfaces* – wired and wireless communication buses for connecting peripherals and devices in the testbed. Industry standards like Ethernet, Wi-Fi, Bluetooth, USB, RS-485, and 1-Wire must cover both IT and ICS environments. - *Inputs* – digital inputs to read binary sensors and devices such as buttons, switches, and motion sensors. Analog inputs to measure voltage from temperature, pressure, and light sensors. - *Outputs* – digital outputs to switch binary actuators (LEDs, relays, motors), seven-segment displays, and a graphical display (touchscreen) for the human-machine interface. - *Physical dimension* – a hardware setup that provides a cyber-physical experience (allows manipulation and observation of physical processes), multiple devices mounted in the same control panel, and a tabletop and mobile setup. - *Safety* – durable equipment and a tamper-resistant installation; all cabling and connectors should be concealed to prevent (un)intended tampering during hands-on training; and electrical safety – avoid grid-power parts.
Figure \[fig:testbed-hw-architecture\] shows the proposed hardware architecture. The hardware components of the control panel include PLCs, I/O modules, a touchscreen, a linear motor, and a communication gateway. ![Control panel block diagram[]{data-label="fig:testbed-hw-architecture"}](img/k4i-hw-architecture){width="0.9\linewidth"} PLC devices are a fundamental component of the control panel. When choosing a suitable PLC platform, it was essential for us that it leverages well-known hardware and has an industrial appearance. We chose the UniPi platform, which uses the popular Raspberry Pi single-board computer [@raspberry] and an industrial casing. The UniPi Neuron M103 [@unipiM103] model is used as the master PLC, and the slave PLCs use the UniPi Neuron S103 [@unipiS103]. Both versions use the Raspberry Pi 3 Model B with a four-core 1.2 GHz CPU and 1 GB RAM. The Neuron PLC is DIN rail mountable, requires a 24 V DC power supply, and has the following interfaces: - 10/100 Mbit Ethernet, Wi-Fi, Bluetooth, - four USB 2.0 ports, a Micro SD port, - RS-485, 1-Wire interface, - digital input and output pins, - one analog input and one analog output port. The control panel uses two I/O module types. The first one connects the master PLC to three large-area LEDs, two buttons, one key switch, and two motion detectors. The master PLC controls the linear motor through the RS-485 interface. The second type connects a slave PLC to three large-area LEDs, two buttons, a high-power LED (heating), a 1-Wire digital thermometer, and a light sensor (analog input). The slave PLC uses RS-485 to control a two-digit seven-segment display and a 1.54" e-paper module. A 10" LCD touchscreen is used to display technology processes. A dedicated Raspberry Pi module controls the LCD via the HDMI interface and the touch panel via USB. A mechanical demonstrator (actuator) uses a linear motor. It includes a DRV8825 stepper motor driver, an ATmega 328 MCU, two end-stop switches, and three infrared position sensors.
A network switch (MikroTik CRS125-24G) connects all PLC devices. The switch manages the flow of data between PLCs (100 Mbit Ethernet network) and incorporates routing functionality to connect the control panel to the IT network. We built ten control panels to place at the top of the table and six as movable trolleys (see Figure \[fig:ics-hw-setup\]). The tabletop setup is space-efficient, and the portable trolley provides mobility. The control panel is easy to handle; it requires only a power cord to connect to the mains electricity supply and an Ethernet cable to connect to the IT network. The power consumption of one control panel is less than the power consumption of a desktop computer ($\leq$200 W). ![Physical hardware setup of the ICS testbed[]{data-label="fig:ics-hw-setup"}](img/k4i-hw-setup){width="1.0\linewidth"} Software Components ------------------- The physical equipment provides the fidelity of the operational environment, but software is needed to replicate the functions and behavior of various ICS systems. Figure \[fig:testbed-sw-architecture\] shows the proposed software architecture based on the simplified ISA-95 model. ![Interfaces between software components[]{data-label="fig:testbed-sw-architecture"}](img/k4i-sw-architecture){width="0.9\linewidth"} Based on our experience from developing and delivering hands-on cybersecurity courses, we defined the following requirements on the software components of the KYPO4INDUSTRY testbed: - *Open-source model* – access to source code and full software control, no licensing fees and licensing obstacles, community support, and collaboration. - *Operating system* – a fully-fledged operating system with Raspberry Pi support, operating-system-level virtualization, and high-speed networking. - *Orchestration* – the ability to manage all testbed devices – configuration management and application deployment, automated preparation of the testbed environment. - *Communication protocols* – support for numerous legacy and emerging communication protocols used in ICS and IT environments.
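One of the legacy industrial protocols the testbed must support is Modbus. As an illustrative sketch of what such traffic looks like at the byte level (not the testbed's actual implementation), the following builds a standard Modbus TCP "Read Holding Registers" request frame using only the Python standard library; the parameter values are arbitrary examples.

```python
import struct


def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP request for function 0x03 (Read Holding Registers).

    The frame is the 7-byte MBAP header (transaction id, protocol id = 0,
    length of the remaining bytes, unit id) followed by the 5-byte PDU
    (function code, starting address, register count), all big-endian.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu


# Example: read 3 registers starting at 0x006B from unit 17.
frame = modbus_read_holding_registers(1, 17, 0x006B, 3)
print(frame.hex())  # -> 0001000000061103006b0003
```

On a deployed testbed, such frames would be carried over TCP port 502; a real client would of course use a maintained Modbus library rather than hand-built frames, but the byte layout above is what students see when capturing the traffic.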
The software stack of the ICS testbed includes Linux OS (Debian optimized for PLC devices), the Docker ecosystem [@docker2019], and an on-premise OpenStack [@openstack2019] cloud environment. We combine cloud deployment (virtual machines in OpenStack) with physical devices (PLCs, sensors, and actuators) to create ICS systems with varying levels of fidelity. Automated orchestration of the testbed environment is of utmost importance. The central testbed controller runs as a virtual appliance. It provides management and monitoring of the ICS testbed and contains a Docker repository for PLC devices. The PLC devices are pre-installed with Debian OS and enabled Docker support. Using Docker containers simplifies software deployment and configuration of testbed components. The openness of the used software allows us to implement virtually any new software component. We focus on two use cases: widely deployed systems and new emerging technologies. Communication protocols and application interfaces are essential to creating a complete ICS system. There are dozens of industrial protocols, and many new protocols are being proposed every year. Widely deployed protocols are Modbus and DNP3 [@Knapp2014book]. They have been used for decades for communications between ICS devices. The new emerging protocols are represented by MQTT [@mqtt2019] and REST [@Richardson2007rest]. ICS Cybersecurity Course Design {#sec:ics-course} =============================== This section presents our proposed ICS cybersecurity course that employs the ICS testbed. While the previous section described the hardware and software components of the testbed, it did not deal with content. One of our motivations for this course, apart from student learning, is that the students will create training content for the testbed. When writing this section, we followed the guidelines for planning new courses [@Walker:2016] and the Joint Task Force on Cybersecurity Education (JTF) Cybersecurity Curricula 2017 [@cybered].
Course Goals and Covered Topics ------------------------------- The overall goal of the course is to provide undergraduate students with an awareness of threats within the ICS domain via hands-on experience. As in the *authentic learning* framework [@lombardi2007authentic], the focus is on solving real-world problems and learning by doing. The students' final product of the course is a training game for exercising both attacks on and defense of a selected industrial process. Our students previously created such games in the IT domain [@kypolab-course]. The primary JTF curriculum Knowledge Area (KA) the course covers is System Security, with the Knowledge Units (KU) of Common System Architectures, System Thinking, and System Control. The secondary KAs are Component Security (KU Component Testing), Connection Security (KU Network Defense), Data Security (KU Secure Communication Protocols), and Organizational Security (KU Systems Administration). We also marginally include the KA Social Security (KUs Cybercrime and Cyber Law). Finally, the course focuses not only on technical skills but also enables students to exercise communication and presentation skills and time management. Course Format ------------- The course is aimed at computer science university students, namely undergraduates with a basic background in computer networks and security. The recommended prerequisite is completing our Cyber Attack Simulation course in the IT domain [@kypolab-course]. The initial run of the course is prepared for 6 students; however, the training facility described in Section \[sec:testbed\] can accommodate up to 20 students who can work in pairs using the 16 control panels (see Figure \[fig:ics-hw-setup\]). The course spans the whole semester (13 weeks). It is taught in a flipped classroom format [@bishop2013flipped] with 2-hour long weekly lab sessions, various homework assignments, and a hands-on semester project.
The necessary infrastructure includes, apart from the ICS testbed, a CTF game infrastructure for running the students' games (such as CTFd [@chung2017] or the KYPO cyber range platform [@kypo-icsoft17]), and Gitlab repositories for the students' projects. We appreciate the effort of the open-source community, such as learning resources, documentation, and countless projects [@ics-awe; @openplc; @node-red], which will help students to understand the used software.

| Week | Class content | Student homework task (% of the grade) | Instructor tasks |
|------|---------------|----------------------------------------|------------------|
| 1 | Motivation, real attacks, legal issues | Prepare a presentation about an ICS attack (5%) | — |
| 2 | Student presentations of chosen attacks | Read this paper and some of the references | Grade the presentations |
| 3 | Hands-on labs on ICS testbed familiarization | Write an ICS security threat landscape report (5%) | — |
| 4 | Threat discussion, demo on ICS testbed | Write a short survey of CTF games in ICS (5%) | Grade the reports |
| 5 | Merge surveys, introduce game concepts | Select threats for your game | Grade the surveys |
| 6 | Threat modeling, storyline, consultation | Write a game draft | Check the game drafts |
| 7 | Preparing ICS part, educational objectives | Add learning outcomes and prerequisites | Check the game drafts |
| 8 | Preparing ICS and IT part | Prepare an alpha version of the game | Deploy the games |
| 9 | Dry run of the games with peers | Improve the game, submit bug reports (5%) | Review bug reports |
| 10 | Bug presentations, game improvement | Improve the game | — |
| 11 | Documentation, automation, deployment | Submit the game for presentation (50%) | Deploy the games |
| 12 | Public run of the games | Write a reflection from the public run (5%) | Oversee the event |
| 13 | Final reflections | Fix any issues that emerged in the public run (15%) | Grade the games |

Course Syllabus --------------- The table above provides an overview of the course syllabus, student deliverables, and assessment
methods. The course is divided into three parts: basics of ICS, development of an ICS training game, and its presentation and submission. ### ICS Principles {#ics-principles .unnumbered} Since we expect the students to have little knowledge of ICS, the first class session will motivate the topic by presenting examples of past cyber attacks such as Stuxnet [@langner2011]. The goal is to demonstrate the real-world impact of ICS incidents. We will follow by explaining the related terms, such as critical information infrastructures, and the corresponding legal regulations (such as a national Act on cybersecurity). For their homework, the students will individually choose a real, publicly known attack on ICS and present it to the others in the next class (in 15 minutes, including Q&A). After the presentations, the homework assigned in week 2 will be reading this paper and the papers we reference in the related work. In week 3, the students familiarize themselves with the ICS testbed. They will complete several hands-on labs to learn the basic operational features of the HW and SW components of the testbed. At the end of the class, they will discuss in groups how to demonstrate the known attacks using the ICS testbed. As an individual homework assignment, they will search for existing ICS security threat landscape reports/lists, like the OWASP Internet of Things Project [@owasp-iot]. The following week, each student will present their results. The group will discuss the severity of each threat, and which of them can or cannot be demonstrated on the KYPO4INDUSTRY testbed, to understand the capabilities and limitations of the testbed. The individual homework for the next week will be to prepare a 1-page written survey of CTF games in the ICS domain. In the week 5 class, the students will engage in a pair activity of merging their reports to create a shared list of existing CTF games for the whole class. The motivation is to have a knowledge base of inspiration for the students' games.
The activity will follow with a short discussion centered around the question, “What features should an engaging game have?” The instructor will then briefly lecture on the principles of gamification [@annetta2010] and provide an illustrative example to help students in their later assignment. The homework for the next week will be to think about the topic of the student's game: which processes and threats the student will focus on, and how the student can use the ICS testbed for it. The instructor will highlight the specifics of ICS processes, and point out that they are threatened by different types of attacks than conventional IT systems. This homework starts the semester project phase. ### Game Development {#game-development .unnumbered} Week 6 starts with an activity in which pairs of students “peer-review” each other's discovered threats using the Security Threat Modelling Cards [@threatcards]. Students who finish will proceed to one-on-one consultations with the instructor to discuss the topic and the process of the game (the output of the previous homework). Afterward, the students start working on the game narrative (storyline) and design the game flow, including the separation of tasks into levels. For their homework, the students will finish this design and send the draft to the instructor to receive formative feedback. The instructor will review the drafts and send comments before the next class. In week 7, students will individually continue to develop their game, particularly the PLC-related part (Layer 1 of the ISA-95 architecture, see Figure \[fig:isa-95-architecture\]). The instructor will then briefly lecture on the importance of the proper setting of learning outcomes and prerequisites, including examples from existing games. The students will use these instructions in their homework and add the learning outcomes and prerequisites to the description of their game.
Week 8 is dedicated to finishing the development of the PLC-related part and to the development and configuration of the Supervisory part (Layer 2 of the ISA-95 architecture). Students have to deliver an alpha version of their game for the dry run before the next class. Week 9 starts with the dry run of students' games in pairs. Each student plays the game of another student for 45 minutes and takes notes about the learning experience. Then they switch roles. Afterward, the students are instructed on how to file a good bug report and report their feedback on the game in Gitlab. The instructor will review the submitted bug reports before the next class. The optional homework is to improve the games based on the dry run. Week 10 starts with a short presentation of demonstrative examples of filed bug reports chosen by the instructor. For the rest of the class, students improve the games based on the feedback from the dry run. In week 11, the students document their game and automate its deployment in the ICS testbed. They must submit the final version of their game three days before the next class, the course finale. ### Game Presentation and Submission {#game-presentation-and-submission .unnumbered} In week 12, the students take part in organizing a Hacking Day – a public event during which other students of the university can play the created games. This event has two goals: motivating the students to work on their projects and popularizing ICS cybersecurity. Our experience from hosting such an event in the IT domain is described in [@kypolab-course]. Finally, week 13 is dedicated to the students' reflections and the Hacking Day wrap-up in a focus group discussion. If any issues emerged in their game during the Hacking Day, they must fix them. Conclusions {#sec:conclusions} =========== We shared the design details of KYPO4INDUSTRY, a testbed for teaching ICS cybersecurity in a hands-on way. Moreover, we proposed a novel university course that employs the testbed.
The students will practically learn about threats associated with the ICS domain, develop an educational cyber game, and exercise their soft skills during multiple public presentations. The acquired skills will be essential for computer science undergraduates who will be responsible for the cybersecurity operations of an entire organization in their future careers. We expect that more organizations will employ cyber-physical systems, and so an understanding of ICS-specific features will constitute an advantage for prospective graduates. Experience and Lessons Learned ------------------------------ Although using simple microprocessor systems (e.g., development boards) in teaching is popular, these systems do not replicate complex ICSs. Cyber-physical systems are unique and change with the physical process they control. The proposed testbed provides ten tabletop control panels and six mobile installations. In total, students can work with 148 PLCs, which use the popular Raspberry Pi single-board computers. The individual components (PLCs, sensors, actuators) are available off the shelf; however, the challenge is to build a hardware setup that replicates the ICS in a laboratory environment. Addressing this challenge involves multiple engineering professions and requires external collaboration. Future Work ----------- The presented testbed is modular; therefore, it can be gradually upgraded as new advances in the field emerge. We rely on open-source components that are supported by large communities of users and developers. Still, there is room for future work on the content of training scenarios and novel instruction methods in the ICS domain. Another interesting research idea is to develop methods for creating cyber games and to compare whether they work the same in the IT and ICS domains. This research was supported by the project *CyberSecurity, CyberCrime and Critical Information Infrastructures Center of Excellence* (No. ).
--- abstract: 'Similarities between force-driven compression experiments of porous materials and earthquakes have recently been proposed. In this manuscript, we measure the acoustic emission during displacement-driven compression of a porous glass. The energy of acoustic-emission events shows that the failure process exhibits avalanche scale-invariance and therefore follows the Gutenberg-Richter law. The resulting exponents do not exhibit significant differences with respect to the force-driven case. Furthermore, the force exhibits an avalanche-type behaviour for which the force drops are power-law distributed and correlated with the acoustic-emission events.' author: - 'Víctor Navas-Portella' - Álvaro Corral - Eduard Vives bibliography: - 'Displacement.bib' title: 'Avalanches and force drops in displacement-driven compression of porous glasses' --- Introduction ============ Earthquakes constitute a complex phenomenon that has long been studied due to its impact as a natural disaster. From a fundamental point of view, statistical laws in seismology have attracted the attention not only of geoscientists but also of physicists and mathematicians due to their signs of scale-invariance. Recent works have found that some of these laws also manifest in materials which exhibit crackling noise: porous glasses [@Salje2011; @Baro2013], minerals [@Nataf2014] and wood under compression [@Makinen2015], breaking of bamboo sticks [@Tsai2016], ethanol-dampened charcoal [@Ribeiro2015], confined granular matter under continuous shear [@Lherminier2015], etc. Due to the difference between time, space and energy scales, these analogies have generated considerable interest in the condensed-matter-physics community. In general, the experimental results are based on the analysis of acoustic emission (AE) signals in the ultrasonic range, which are detected when these systems are mechanically perturbed. Baró et al. 
[@Baro2013; @Nataf2014] found statistical similarities between earthquakes and the AE during compression experiments of porous materials. In that case, the experiments were performed using the applied force as a driving parameter, which means that the force increases linearly in time (force-driven compression). Crackling noise during the failure of porous materials has also been studied through computational models that show qualitative agreement with experimental results [@Kun2013; @Kun2014]. Within the context of structural phase transitions, it has been shown that avalanche scale-invariance manifests in different ways depending on the driving mechanism [@Perez-Reche2008]. If the control variable for the driving is a generalized force, disorder plays an important role, leading to a dominant nucleation process, and the criticality is of the order-disorder type. However, if the driving mechanism consists of controlling a generalized displacement, the critical state is reached independently of the disorder and by means of a self-organized criticality mechanism. These results were experimentally confirmed [@Vives2009; @Planes2013] based on the study of amplitude and energy distributions in AE experiments of martensitic transformations. The influence of the driving mechanism has also been studied in the slip events occurring in compressed microcrystals [@Maass2015]. One question that remains open is whether or not the driving mechanism influences the distributions of AE events in the case of failure under compression. This question is important because, when comparing with earthquakes, the commonly accepted mechanism is that tectonic plates are driven at constant velocity far enough from the faults [@Larson1997]. Here we study the displacement-driven compression of porous glasses with the aim of answering this question. 
When changing the driving mechanism from force to displacement, the first main macroscopic difference is that the force fluctuates and shows drops that, as will be shown, correlate with AE events. Recently, Illa et al. have shown that the driving mechanism influences the nucleation process in martensitic transformations and that these microscopic effects can lead to macroscopic changes in stress-strain curves, in which force fluctuations appear [@Illa2015]. An exponentially truncated power-law distribution has been found for torque drops in shear experiments of granular matter [@Lherminier2015]. Serrations or force drops have also been studied in metallic single crystals [@Lebyodkin1995], metallic glasses [@Antonaglia2014; @Sun2010; @DallaTorre2010] and in high-entropy alloys [@Carroll2015]. These studies are essentially focused on the presence of criticality. Furthermore, Dalla Torre et al. studied the AE during the compression of metallic glasses and concluded that there exists a correlation between AE bursts and stress drops [@DallaTorre2010]. In this work we provide a description of the distribution of force drops in displacement-driven compression experiments of porous glasses, and a correlation between these force drops and the energy of the recorded AE events is identified. The manuscript is structured as follows: in Section \[Experimental\] the experimental methods as well as the sample details are described. Results are analysed in Section \[Results\], which is divided into three subsections: the first one (\[AEdata\]) refers to the study of AE events, the second one (\[Force avalanches\]) focuses on the study of force drops and the third one (\[scatter\]) is devoted to the study of the relation between the energy of AE events and force drops. A brief summary and the conclusions are reported in Section \[Conclusions\]. 
Experimental Methods {#Experimental} ==================== Uni-axial compression experiments of porous Vycor glass (a mesoporous silica ceramic with $40\%$ porosity) are performed in a conventional test machine ZMART.PRO (Zwick/Roell). The cylindrical samples, with diameters $\Phi$ of $1$ mm and $2$ mm and different heights $H$, are placed between two plates that approach each other at a certain constant compression rate $\dot{z}$. We refer to this framework as displacement-driven. Compression is done in the axial direction of the cylindrical samples with no lateral confinement. The force opposed by the material is measured by means of a load cell Xforce P (Zwick/Roell), with a maximum nominal force of 5 kN and output to a communication channel every $\Delta t=0.1$ s. Performing blank measurements in the same conditions as those of the experiments presented below, we have checked that force uncertainties are of the order of $10^{-2}$ N. Simultaneous recording of AE signals is performed by using piezoelectric transducers embedded in both plates. The electric signals are pre-amplified ($60$ dB), band filtered (between 20 kHz and 2 MHz) and analysed by means of a PCI-2 acquisition system from Euro Physical Acoustics (Mistras Group) working at 40 MSPS. The AE acquisition system also reads the force measured by the conventional test machine through the communication channel. Recording of the data stops when a big failure event occurs, the sample gets destroyed and the force drops to zero. We prescribe that an AE avalanche or event starts at the time $t_{i}$ when the pre-amplified signal $V(t)$ crosses a fixed threshold of $23$ dB, and finishes at time $t_{i}+\Delta_{i}$ when the signal remains below the threshold from $t_{i}+\Delta_{i}$ to at least $t_{i}+\Delta_{i}+200\mu$s. The energy $E_{i}$ of each signal is determined as the integral of $V^{2}(t)$ over the duration $\Delta_{i}$ of the event, divided by a reference resistance of $10$ k$\Omega$. 
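The event definition above (a threshold crossing followed by a dead time of $200\,\mu$s below threshold) can be sketched as follows. This is an illustrative reimplementation under assumptions, not the vendor acquisition pipeline: the sampling rate, threshold value and synthetic burst are invented for the toy check.

```python
import numpy as np

def detect_ae_events(v, fs, threshold, hdt=200e-6, r_ref=10e3):
    """Split a voltage trace into AE events: an event starts when |v|
    crosses `threshold` and ends once the signal has stayed below the
    threshold for at least `hdt` seconds.  Returns a list of
    (start_time, duration, energy) tuples, with the energy computed as
    the integral of v^2 over the event divided by a reference
    resistance r_ref (10 kOhm, as in the text)."""
    above = np.abs(v) >= threshold
    hdt_samples = int(hdt * fs)
    events = []
    i, n = 0, len(v)
    while i < n:
        if not above[i]:
            i += 1
            continue
        start = i
        last_above = i
        j = i + 1
        # extend the event while new above-threshold samples arrive
        # within the dead time
        while j < n and j - last_above <= hdt_samples:
            if above[j]:
                last_above = j
            j += 1
        end = last_above + 1
        seg = v[start:end]
        energy = float(np.sum(seg ** 2)) / fs / r_ref
        events.append((start / fs, (end - start) / fs, energy))
        i = j
    return events

# toy check: a single short burst buried in a quiet trace
fs = 1_000_000          # 1 MS/s here (the paper's system works at 40 MSPS)
t = np.arange(0, 0.01, 1 / fs)
v = 0.001 * np.random.default_rng(0).standard_normal(len(t))
v[2000:2400] += 0.5 * np.sin(2 * np.pi * 100_000 * t[2000:2400])
events = detect_ae_events(v, fs, threshold=0.05)
print(len(events))      # one detected event
```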
Different experiments have been performed at room temperature for 13 different Vycor cylinders with different diameters and heights as well as different compression rates. We have checked that different cleaning protocols before the experiment do not alter the results. All the details related to the experiments are listed in Table \[table:samples\].

  **Sample**   $\Phi$ (mm)   $H$ (mm)   $\dot{z}$ (mm/min)
  ------------ ------------- ---------- --------------------
  V105         1             0.5        $2\times 10^{-3}$
  V11          1             1          $2\times 10^{-3}$
  V115         1             1.5        $2\times 10^{-3}$
  V12          1             2          $2\times 10^{-3}$
  V125         1             2.5        $2\times 10^{-3}$
  V205         2             0.5        $1\times 10^{-2}$
  V21          2             1          $1\times 10^{-2}$
  V22          2             2          $1\times 10^{-2}$
  V23          2             3          $1\times 10^{-2}$
  V26          2             6          $1\times 10^{-2}$
  V28          2             8          $1\times 10^{-2}$
  V212         2             12         $1\times 10^{-2}$
  V24          2             4          $5\times 10^{-2}$

  : Summary of dimensions and compression rates $\dot{z}$ for the different experiments reported in this work.[]{data-label="table:samples"}

![\[fig:experimental\] Typical output for the sample V12. Panel (a) shows the energy of the AE events as well as the measured force as a function of time. Green lines represent those time intervals ($\Delta t=0.1$ s) in which the force increases, whereas blue lines represent those in which the force decreases (force drops). Panel (b) represents the activity rate of the experiment as well as the cumulative number of AE events $N(t)$ as a function of time. ](output.eps)

Figure \[fig:experimental\] shows a typical experimental output for the sample V12. Panel (a) displays the sequence of energies of the AE events and the evolution of the force as a function of time. The acoustic activity rate $r$ (s$^{-1}$) has been computed as the number of events per unit time recorded along windows of $20$ seconds. Its behaviour is shown in Figure \[fig:experimental\](b) together with the cumulative number of events as a function of time. It must be noticed that force drops occur along the whole curve and clearly show variability over 3-4 orders of magnitude. 
In general, the largest force drops coincide with AE events of very large energy. Results {#Results} ======= Acoustic Emission data {#AEdata} ---------------------- In force-driven compression experiments of porous glasses [@Salje2011; @Baro2013] it was found that the energy probability density $P(E)$ of AE events follows a power-law with exponent $\epsilon=1.39\pm 0.05$, independently of the loading rate ($0.2$ kPa/s - $12.2$ kPa/s), $$P(E)dE = \left(\epsilon-1\right)E_{min}^{\epsilon-1} E^{-\epsilon} dE, \label{eq:epower}$$ where $E_{min} \sim 1$ aJ is the lower bound required for the normalization of the probability density. Figure \[fig:energy\](a) shows an example of the histogram of the energy of AE events for the sample V12 in one of our displacement-driven experiments. As can be seen, the data seem to follow the Gutenberg-Richter law over more than 6 decades. The different curves, corresponding to consecutive time windows of approximately $2000$ seconds, reveal that the energy distribution is stationary. We use the procedure described in Ref. [@Deluca2013] in order to guarantee statistical significance in the fit of the exponent $\epsilon$ and the lower threshold $E_{min}$. Considering as a null hypothesis that the energy distribution follows a non-truncated power-law (see Eq. (\[eq:epower\])), the maximum likelihood estimation (MLE) of the exponent $\epsilon$ is computed for increasing values of the lower threshold $E_{min}$ (see the inset of Figure \[fig:energy\](a)). For each lower threshold and its corresponding exponent, a Kolmogorov-Smirnov test of the fit is performed, with a resulting $p$-value. The final values of the exponent and the threshold are chosen once the $p$-value first exceeds the significance level $p_{c}=0.05$, so that the power-law hypothesis cannot be rejected. The obtained values for every sample are shown in Figure \[fig:energy\](b) together with the standard deviation of the MLE. 
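The fitting procedure just described (MLE of the exponent for increasing lower thresholds, with a Kolmogorov-Smirnov test at each step) can be illustrated with a sketch. This is a simplified stand-in for the method of Ref. [@Deluca2013]: here the $p$-value is estimated by Monte-Carlo simulation of synthetic power-law samples, and the threshold grid, sample size and seeds are arbitrary choices for the toy check.

```python
import numpy as np

def mle_exponent(x, xmin):
    """MLE exponent for a non-truncated continuous power law
    P(E) ~ E^{-eps} on [xmin, infinity), plus the tail size."""
    tail = x[x >= xmin]
    return 1.0 + len(tail) / float(np.sum(np.log(tail / xmin))), len(tail)

def ks_distance(x, xmin, eps):
    """KS distance between the empirical tail CDF and the fitted
    power-law CDF 1 - (x/xmin)^(1-eps)."""
    tail = np.sort(x[x >= xmin])
    cdf_fit = 1.0 - (tail / xmin) ** (1.0 - eps)
    cdf_emp = np.arange(1, len(tail) + 1) / len(tail)
    return float(np.max(np.abs(cdf_emp - cdf_fit)))

def fit_first_accepted(x, xmin_grid, pc=0.05, n_mc=200, rng=None):
    """Scan increasing lower thresholds; return the first (xmin, eps)
    whose Monte-Carlo KS p-value exceeds pc, or None if all fail."""
    rng = np.random.default_rng(rng)
    for xmin in xmin_grid:
        eps, n = mle_exponent(x, xmin)
        d_obs = ks_distance(x, xmin, eps)
        hits = 0
        for _ in range(n_mc):
            # synthetic sample from the fitted power law (inverse CDF)
            synth = xmin * (1.0 - rng.random(n)) ** (-1.0 / (eps - 1.0))
            eps_s, _ = mle_exponent(synth, xmin)
            if ks_distance(synth, xmin, eps_s) >= d_obs:
                hits += 1
        if hits / n_mc > pc:
            return xmin, eps
    return None

# toy check: synthetic sample with eps = 1.34 and xmin = 1
rng = np.random.default_rng(1)
sample = (1.0 - rng.random(5000)) ** (-1.0 / 0.34)
eps_hat, _ = mle_exponent(sample, 1.0)
print(round(eps_hat, 2))    # close to the generating value 1.34
fit = fit_first_accepted(sample, [1.0, 2.0, 5.0], rng=2)
```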
The horizontal line in Figure \[fig:energy\](b) and in the inset of Figure \[fig:energy\](a) shows the average value and associated standard deviation $\epsilon=1.34\pm 0.03$. In spite of the variations around this mean value, the value of the exponent does not seem to depend strongly on either the dimensions of the sample or the compression rate. Complementary information obtained from the fitting method is presented in Table II. The average value of the exponent $\epsilon=1.34\pm 0.03$ found for the present displacement-driven experiments is compatible with the value found in force-driven measurements, $\epsilon=1.39\pm0.05$. Contrary to what happens in martensitic transformations [@Planes2013], we conclude that there is no clear evidence that the driving mechanism changes the value of the exponent in compression experiments. ![\[fig:energy\] Panel (a) shows the energy distribution for the sample V12 for different time windows as well as for the whole experiment. The numbers in parentheses account for the number of AE events in each time interval. The inset in (a) presents the MLE of the exponent $\epsilon$ as a function of the lower threshold $E_{min}$ for all the samples. Vertical lines correspond to the fitted values of $E_{min}$ and $\epsilon$. The color code for each sample can be read from the color bars in (b). In panel (b) the value of the exponent $\epsilon$ is shown for each sample. 
The dark horizontal line in the inset and in (b) is the mean value of the exponent $\epsilon=1.34$.](output22.eps)

  **Sample**   $N_{AE}$   $N_{AE}^{PL}$   $E_{min}$   $E_{Max}$             $\epsilon$
  ------------ ---------- --------------- ----------- --------------------- ------------
  V105         869        829             0.602       $1.84\times 10^{5}$   1.36
  V11          1438       1438            0.502       $2.69\times 10^{6}$   1.31
  V115         836        797             0.626       $1.10\times 10^{6}$   1.35
  V12          2314       928             7.669       $5.16\times 10^{5}$   1.45
  V125         1097       865             1.128       $4.94\times 10^{5}$   1.36
  V205         4160       2609            2.361       $9.97\times 10^{6}$   1.35
  V21          4170       1136            36.707      $1.07\times 10^{7}$   1.39
  V22          3683       746             117.583     $6.57\times 10^{6}$   1.39
  V23          1275       1196            0.645       $7.16\times 10^{6}$   1.28
  V26          2071       2065            0.516       $1.38\times 10^{7}$   1.30
  V28          974        974             0.501       $2.82\times 10^{6}$   1.29
  V212         1646       1338            1.15        $4.37\times 10^{6}$   1.31
  V24          2129       2039            0.595       $5.97\times 10^{6}$   1.29

  : Number of AE events $N_{AE}$, number of those which are power-law distributed $N_{AE}^{PL}$, value of the lower threshold $E_{min}$, maximum value $E_{Max}$, and exponent $\epsilon$. The standard deviation of the MLE is of the order of $10^{-2}$.[]{data-label="table:ae"}

Force drops {#Force avalanches} ----------- The evolution of the force as a function of time is shown in Figure \[fig:experimental\]. We define force changes as $\Delta F(t)= -\left( F(t+\Delta t)-F(t) \right)$, with $\Delta t=0.1$ s, so that force drops are positive. As can be observed in Figure \[fig:global\](a)-(c), the distribution of $\Delta F$ can exhibit several contributions. There is a clear Gaussian-like peak corresponding to negative $\Delta F$ that shifts to the left when the compression rate increases. This peak is related to the average elastic behaviour of the porous material. The remaining contributions in the negative part of the histogram correspond to the different elastic regimes of the material as it experiences successive failures. In the present work we focus only on the positive part of this distribution, which corresponds to the force drops. 
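The definition of force changes above is simple enough to state as code. A minimal sketch, in which the loading slope and the two drop sizes of the toy force series are invented for illustration:

```python
import numpy as np

def force_drops(force, dt=0.1):
    """Force changes Delta F(t) = -(F(t + dt) - F(t)) for a force
    series sampled every dt seconds; positive values are force drops
    and negative values force rises.  Returns (drops, rises)."""
    dF = -np.diff(force)
    return dF[dF > 0], -dF[dF < 0]

# toy series: steady elastic loading with two sudden failures
f = np.cumsum(np.full(100, 0.01))   # rise of 0.01 N per interval
f[40:] -= 0.5                        # 0.5 N drop at interval 40
f[70:] -= 0.05                       # 0.05 N drop at interval 70
drops, rises = force_drops(f)
print(len(drops), len(rises))        # 2 drops, 97 rises
```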
Our goal is to find whether or not the distribution of force drops is fat-tailed. In Figure \[fig:dis\](a)-(c) the distribution of force drops ($\Delta F > 0$) corresponding to Figure \[fig:global\] is shown in log-log scale. For completeness, complementary cumulative distribution functions, or survivor functions $S\left( \Delta F \right)$, are also shown in Figure \[fig:deltaf\](a)-(c). The probability density of force drops seems to follow a power-law $D(\Delta F) \propto \Delta F^{-\phi}$ which holds over three decades in the case of the slowest compression rate and four decades for the higher ones. This difference is essentially due to the difference in the sample surfaces. The larger the contact surface between the sample and the plate, the larger the force opposed by the material. Note that, in contrast to Fig. \[fig:global\], the distribution of $\Delta F$ is conditioned to $\Delta F$ being larger than or equal to the lower threshold $\Delta F_{min}$ obtained from the fit. In order to determine above which value $\Delta F_{min}$ the power-law hypothesis holds, the fit of the right tail of the distribution of $\Delta F$ has been performed following the same procedure as for the energy distribution. In Figure \[fig:fits\](a)-(c), MLEs of the exponent $\phi$ as a function of the lower threshold are shown for the samples compressed at the different compression rates. Three samples have been excluded due to incorrect sampling of the force measurement. Vertical lines of different colors represent the selected threshold $\Delta F_{min}$ for each sample. Note that, contrary to what happens in the MLE of the energy exponent, for the lowest values of $\Delta F_{min}$, where the power-law hypothesis is not yet valid, there is an overestimation of the exponent due to the presence of the Gaussian peak. ![\[fig:global\] Probability densities of $\Delta F$ for three samples with different compression rates. 
Sample V12 compressed at $\dot{z}=2\times10^{-3}$ mm/min is shown in (a), sample V212 compressed at $\dot{z}=1\times10^{-2}$ mm/min in (b) and sample V24 compressed at $\dot{z}=5\times10^{-2}$ mm/min is presented in (c).](globals.eps) ![\[fig:dis\] Probability densities of force drops $\Delta F$ and their corresponding fits for V12 (a), V212 (b) and V24 (c). Distributions are displayed and normalized for $\Delta F \geq \Delta F_{min}$.](distribucions.eps) ![\[fig:deltaf\] Survivor functions $S\left( \Delta F \vert \Delta F \geq \Delta F_{min} \right)$ and their corresponding fits for V12 (a), V212 (b) and V24 (c). Survivor functions are displayed and normalized for $\Delta F \geq \Delta F_{min}$.](cumulatives.eps) ![\[fig:fits\] Panels (a)-(c) show the MLE of the exponent $\phi$ as a function of the lower threshold $\Delta F_{min}$ for samples compressed at $\dot{z}=2\times 10^{-3}$ mm/min, $\dot{z}=10^{-2}$ mm/min and $\dot{z}=5\times 10^{-2}$ mm/min, respectively. Vertical lines in each panel mark the threshold $\Delta F_{min}$ which is selected by the fitting and testing procedure. Panel (d) presents the values of the exponent for each sample. The blue horizontal line at $1.85$ and the green horizontal line at $1.54$ are the mean values of the exponent for the two smallest compression rates. ](outputfits.eps) The values of the exponent $\phi$ for the different samples are shown in Figure \[fig:fits\](d), and three clear groups can be distinguished. The value of the exponent is higher for the slowest compression rate and decreases for increasing compression rates. The exponent values are robust under the change of time window $\Delta t$. Additional parameters resulting from the fits are shown in Table \[tab:def\]. 
  **Sample**   $D_{Tot}$   $D_{PL}$   $\Delta F_{min}$       $\Delta F_{Max}$   $\phi$
  ------------ ----------- ---------- ---------------------- ------------------ --------
  V115         9960        174        $1.73\times 10^{-2}$   8.31               1.79
  V12          32323       445        $2.12\times 10^{-2}$   27.61              1.95
  V125         26104       208        $1.93\times 10^{-2}$   24.31              1.80
  V205         5603        334        $3.55\times 10^{-2}$   853.36             1.53
  V21          10787       149        0.16                   977.78             1.72
  V22          6609        133        $9.05\times 10^{-2}$   801.63             1.57
  V23          9987        162        $1.61\times 10^{-2}$   593.19             1.46
  V28          8881        113        $1.72\times 10^{-2}$   340.22             1.53
  V212         9030        202        $1.45\times 10^{-2}$   247.56             1.55
  V24          3742        53         $5.80\times 10^{-2}$   797.63             1.32

  : Total number of force drops $D_{Tot}$, number of those which are power-law distributed $D_{PL}$, value of the lower threshold $\Delta F_{min}$, value of the largest force drop $\Delta F_{Max}$, and the fitted exponent $\phi$. The standard deviation of the MLE is around $0.05$.[]{data-label="tab:def"}

With the use of these techniques, there is evidence that force drops are power-law distributed, as found for metallic glasses [@Sun2010], with an exponent that is robust under changes of the time window and that decreases for increasing compression rates. Joint distribution of Energy and Force Drops {#scatter} -------------------------------------------- In this subsection we investigate the relation between force drops and the energy of AE events. As can be appreciated in Figure \[fig:experimental\](a), the largest force drops correspond to the AE events with the highest energy. Actually, Dalla Torre et al. [@DallaTorre2010] found that there exists a correlation between force drops and AE events, but no evidence of correlation between the amplitude of these signals and the magnitude of the force drops was found. Nevertheless, the energy could show a certain correlation, since its calculation involves not only the amplitude but also the duration of the AE events. 
  **Sample**   $U_{Tot}$   $D_{Tot}$   $U_{AE}$   $D_{AE}$   $N_{AE}$   $N_{AE}^{U}$   $N_{AE}^{D}$
  ------------ ----------- ----------- ---------- ---------- ---------- -------------- --------------
  V115         20119       9960        119        217        836        191            645
  V12          47663       32323       251        820        2314       345            1969
  V125         37028       26104       141        313        1097       215            882
  V205         26564       5603        1028       336        4160       2093           2067
  V21          32380       10787       1324       223        4170       2572           1598
  V22          24388       6609        930        177        3683       2066           1617
  V23          24922       9987        423        70         1275       804            471
  V28          25468       8881        359        83         974        638            336
  V212         27367       9030        453        135        1646       882            764
  V24          8114        3742        602        41         2129       1745           384

  : Numbers involved in the construction of $W^{\Delta t}$. $U_{Tot}$ and $D_{Tot}$ are the total numbers of intervals in which the force has risen or dropped, respectively. $U_{AE}$ and $D_{AE}$ are the numbers of force rises and drops with AE events. $N_{AE}$ is the total number of AE events; $N_{AE}^{U}$ and $N_{AE}^{D}$ are the numbers of AE events associated to rises and drops of the force, respectively. \[table:const\]

![\[fig:marg\] Main panel shows the distribution of $E$, the distributions of energies $E^{D}$ and $E^{U}$ that appear when a force drop or a force rise occurs, and the distributions of $W^{\Delta t}_{D}$ and $W^{\Delta t}_{U}$, which refer to the sum of AE energies for a certain force drop or force rise. The inset represents the histogram of the number of AE events encapsulated in time intervals where force drops occur. All these distributions correspond to the sample V12.](outemarg.eps) This correlation would be interesting for two reasons: on the one hand, it would set a relation between the energy of AE events, which is of microscopic nature (aJ), and the force drops, which are at the macroscopic scale (N). On the other hand, force drops appear every time there is a micro-failure in the sample, and thus they can be understood as releases of elastic energy. In the same way as Ref. [@DallaTorre2010], we find that there is a correlation in time between the occurrence of force drops and the presence of AE events. 
In order to associate a certain energy to the $i$-th force drop, we define the quantity: $$W_{D,i}^{\Delta t}=\sum_{j=1}^{N_{AE}^{i}} E_{j}, \label{eq:w}$$ where $N_{AE}^{i}$ is the number of AE events that occur within the time interval of duration $\Delta t=0.1$ s in which the $i$-th force drop appears, and $E_{j}$ is the energy of those AE events. The same construction can be done for force rises by defining $W_{U}^{\Delta t}$. This construction is divided into two steps: the first one consists of splitting the time axis into intervals of duration $\Delta t$, so that there is a correspondence between AE events and force rises or drops. The second step consists of applying Eq. (\[eq:w\]) and its counterpart for $W_{U}^{\Delta t}$ to every interval with AE events. In Figure \[fig:marg\] we present the different distributions involved in this construction for the sample V12. There are two random variables corresponding to the first step of the transformation: $E^{D}$ corresponds to the energy when a force drop appears, whereas $E^{U}$ corresponds to the energy when a force rise appears. The second step of the transformation is reflected in the quantities $W_{D}^{\Delta t}$ and $W_{U}^{\Delta t}$, which correspond to the sum of energies in every force drop and in every force rise, respectively. The plot in Figure \[fig:marg\] reinforces the importance of the relation between force drops and AE events, since the distributions of $E^{U}$ and $W_{U}^{\Delta t}$ are restricted to low values of the energy, whereas the range of the distributions of $E^{D}$ and $W_{D}^{\Delta t}$ is very similar to the original one. The inset shows the histogram of the number $N_{AE}$ of AE events encapsulated in time intervals of $\Delta t$ in which there are force drops for the sample V12. The maximum of this histogram is at $N_{AE}=2$, and it decreases up to the maximum encapsulation of $N_{AE}=24$. The numbers involved in these constructions are shown for all the samples in Table IV. 
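The two-step construction of Eq. (\[eq:w\]) can be sketched as follows. The toy force series and event list are invented for illustration, and the handling of events beyond the last interval is a choice not specified in the text.

```python
import numpy as np

def interval_energies(event_times, event_energies, force, dt=0.1):
    """Assign each AE event to the dt-interval containing it and sum
    the event energies per interval.  Intervals are classified by the
    sign of the force change; returns the summed energies W for drop
    intervals and for rise intervals that contain AE events."""
    dF = -np.diff(force)                       # > 0 means a force drop
    n_int = len(dF)
    idx = np.minimum((np.asarray(event_times) // dt).astype(int),
                     n_int - 1)                # clamp trailing events
    w = np.bincount(idx, weights=event_energies, minlength=n_int)
    has_ae = np.bincount(idx, minlength=n_int) > 0
    w_drop = w[(dF > 0) & has_ae]
    w_rise = w[(dF <= 0) & has_ae]
    return w_drop, w_rise

# toy data: two events inside a drop interval, one inside a rise interval
force = np.array([1.0, 1.1, 0.7, 0.8, 0.9])    # drop in interval 1
times = [0.05, 0.12, 0.15]
energies = [2.0, 5.0, 3.0]
w_drop, w_rise = interval_energies(times, energies, force, dt=0.1)
print(list(w_drop), list(w_rise))              # [8.0] [2.0]
```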
The fact that there are force rises associated with acoustic emission activity can be explained by the presence of force drops that have not been identified in a $\Delta t$ interval in which the force has globally increased. This interpretation agrees with the fact that the energy associated with force rises covers a small range corresponding to low energy values of the total energy distribution. It is important to remark that, although the fraction of AE events associated with force drops decreases as the compression rate increases, the average number of events encapsulated in a force drop ($N_{AE}^{D}/D_{AE}$) is always larger than the average number of AE events encapsulated in intervals where the force is increasing ($N_{AE}^{U}/U_{AE}$). Hence, increments of AE activity are essentially associated with drops in the force. The total duration of the experiment is given by $T=\left( U_{Tot}+D_{Tot}\right)\Delta t$. Note that, despite the large difference between the total number of force drops ($D_{Tot}$) and the number of force drops with AE activity ($D_{AE}$), this second number is of the same order of magnitude as the number of power-law-distributed data in Table \[tab:def\], though always larger. In Figure \[fig:scatter\] we present scatter plots for the different compression rates. It must be noticed that the largest AE events are manifested in those force drops which are power-law distributed. The associated energy of the remaining force drops is relatively low compared with those with large values of $\Delta F$. The force drops that have no associated AE activity are related to experimental fluctuations of the measurement. ![\[fig:scatter\] Scatter plots of the energy released in each force drop for all the samples compressed at $\dot{z}= 2\times 10^{-3}$ mm/min in (a), at $\dot{z}=10^{-2}$ mm/min in (b) and at $\dot{z}=5\times 10^{-2}$ mm/min in (c). 
Panel (d) shows the Pearson correlation for the logarithm of the variables for all the samples.](outputscatter.eps) Under these circumstances, we study the energy associated with force drops and investigate whether there exists any correlation between the two quantities. It must be mentioned that, as seen in the previous section, the range of interest of force drops is restricted to those values which exceed $10^{-2}$ N. In Figure \[fig:scatter\](d) the Pearson correlation of the logarithm of the variables in the range of interest is shown for each sample. These correlations are much higher than those obtained after reshuffling the data, so they are statistically significant. The correlation is positive, and it establishes a relation between AE events, which are of microscopic nature, and a quantity of macroscopic character, the force drops. Conclusions {#Conclusions} =========== In this manuscript we have reported the results of displacement-driven compression experiments on several Vycor cylinders with different dimensions and different compression rates. The Gutenberg-Richter law is found for the energy distribution, just as it was previously found for force-driven compression experiments. Regarding the values of the exponents, we conclude that they do not seem to be affected by the driving mechanism in compression experiments. Such independence of the driving mechanism has also been found in the measurement of slip events in microcrystals [@Maass2015]. When the driving variable is the displacement, the release of elastic energy is not only expressed by means of AE but is also manifested as drops in the force, which are power-law distributed with a compression-rate-dependent exponent. These drops can also be observed in computer simulations near the big failure event [@Kun2013; @Kun2014]. 
Nevertheless, the disorder in the simulations would need some tuning in order to replicate a level of heterogeneity similar to that of our experiments. Furthermore, we have established a correlation between force drops and the associated energy of AE events. We thank Jordi Baró and Ferenc Kun for fruitful discussions. The research leading to these results has received funding from “La Caixa” Foundation. Financial support was received from FIS2012-31324, FIS2015-71851-P, MAT2013-40590-P, Proyecto Redes de Excelencia 2015 MAT2015-69-777-REDT from Ministerio de Economía y Competitividad (Spain) and 2014SGR-1307 from AGAUR.
--- abstract: 'We study Pisot numbers $\beta \in (1, 2)$ which are univoque, i.e., such that there exists only one representation of $1$ as $1 = \sum_{n \geq 1} s_n\beta^{-n}$, with $s_n \in \{0, 1\}$. We prove in particular that there exists a smallest univoque Pisot number, which has degree $14$. Furthermore we give the smallest limit point of the set of univoque Pisot numbers.' author: - | Jean-Paul Allouche[^1]\ CNRS, LRI, Bâtiment 490\ Université Paris-Sud\ 91405 Orsay Cedex, France\ [allouche@lri.fr]{} - | Christiane Frougny\ LIAFA, CNRS UMR 7089\ 2 place Jussieu\ 75251 Paris Cedex 05, France\ and Université Paris 8\ [Christiane.Frougny@liafa.jussieu.fr]{} - | Kevin G. Hare[^2]\ Department of Pure Mathematics\ University of Waterloo\ Waterloo, Ontario, Canada, N2L 3G1\ [kghare@math.uwaterloo.ca]{} title: On univoque Pisot numbers --- [*MSC*]{}: Primary 11R06, Secondary 11A67 [*Keywords*]{}: Univoque, Pisot Number, Beta-Expansion Introduction ============ Representations of real numbers in non-integer bases were introduced by Rényi [@Re] and first studied by Rényi and by Parry [@Pa; @Re]. Among the questions that were addressed is the uniqueness of representations. Given a sequence $(s_n)_{n\geq 1}$, Erdős, Joó and Komornik, [@EJK], gave a purely combinatorial characterization for when there exists $\beta \in (1,2)$ such that $1 = \sum_{n \geq 1} s_n\beta^{-n}$ is the unique representation of 1. This set of binary sequences is essentially the same as a set studied by Cosnard and the first author [@All; @AC1; @AC3] in the context of iterations of unimodal continuous maps of the unit interval. Following [@KL; @KLP], a number $\beta>1$ is said to be [*univoque*]{} if there exists a unique sequence of integers $(s_n)_{n \ge 1}$, with $0 \le s_n <\beta$, such that $1=\sum_{n \ge 1}s_n \beta^{-n}$. (Note that we consider only the representation of $1$. The uniqueness of the representation of real numbers in general was studied in particular in [@GS].) 
Using the characterization of [@EJK], Komornik and Loreti constructed in [@KL] the smallest real number in $(1, 2)$ for which $1$ has a unique representation. Its representation happens to be the famous Thue-Morse sequence (see for example [@AS]). Are there univoque Pisot numbers? It is worth noting that if the base $\beta$ is the “simplest” non-integer Pisot number, i.e., the golden ratio, then the number $1$ has infinitely many representations. In this paper we study the univoque Pisot numbers belonging to $(1, 2)$. We prove in particular (Theorem \[thm:finite2\]) that there exists a smallest univoque Pisot number, and we give explicitly the least three univoque Pisot numbers in $(1,2)$: they are the roots in $(1, 2)$ of the polynomials $$\begin{array}{lll} & {x}^{14}-2{x}^{13}+{x}^{11}-{x}^{10}-{x}^{7}+{x}^{6} -{x}^{4}+{x}^{3}-x+1 & (\mathrm{root} \approx 1.8800), \\ & {x}^{12}-2{x}^{11}+{x}^{10}-2{x}^{9}+{x}^{8}-{x}^{3}+{x}^{2}-x+1, & (\mathrm{root} \approx 1.8868), \\ & x^4 - x^3 - 2x^2 + 1, & (\mathrm{root} \approx 1.9052). \end{array}$$ The last number is the smallest limit point of the set of univoque Pisot numbers (Theorem \[thm:chi\]). We also prove that $2$ is a limit point of univoque Pisot numbers. Definitions and reminders ========================= Infinite words -------------- Let ${\mathbb N}_+$ denote the set of positive integers. Let $A$ be a finite alphabet. We define $A^{{\mathbb N}_+}$ to be the set of infinite sequences (or infinite words) on $A$: $$A^{{\mathbb N}_+} := \{s=(s_n)_{n \ge 1} \ | \ \forall n \ge 1, \ s_n \in A\}.$$ This set is equipped with the distance $\rho$ defined by: if $s=(s_n)_{n \ge 1}$ and $v=(v_n)_{n \ge 1}$ belong to $A^{{\mathbb N}_+}$, then $\rho(s,v) := 2^{-r}$ if $s \neq v$ and $r := \min\{n \mid s_n \neq v_n\}$, and $\rho(s,v)=0$ if $s=v$. The topology on the set $A^{{\mathbb N}_+}$ is then the product topology, and it makes $A^{{\mathbb N}_+}$ a compact metric space. 
A sequence $(s_n)_{n \geq 1}$ in $A^{{\mathbb N}_+}$ is said to be [*periodic*]{} if there exists an integer $T \geq 1$, called a [*period*]{} of the sequence, such that $s_{n+T} = s_n$ for all $n \geq 1$. A sequence $(s_n)_{n \geq 1}$ in $A^{{\mathbb N}_+}$ is said to be [*eventually periodic*]{} if there exists an integer $n_0 \geq 0$ such that the sequence $(s_{n+n_0})_{n \geq 1}$ is periodic. If $w$ is a (finite) word, we denote by $w^{\infty}$ the infinite word obtained by concatenating infinitely many copies of $w$ (this is in particular a periodic sequence, and the length of $w$, usually denoted by $|w|$, is a period). Base $\beta$ representations ---------------------------- Let $\beta$ be a real number $>1$. A [*$\beta$-representation*]{} of the real number $x \in [0,1]$ is an infinite sequence of integers $(x_n)_{n \ge 1}$ such that $x = \sum_{n \ge 1}x_n \beta^{-n}$. If a representation ends in infinitely many zeros, say, is of the form $w0^\infty$, then the ending zeros are omitted and the representation is said to be [*finite*]{}. The reader is referred to [@Lo Chapter 7] for more on these topics. ### Greedy representations {#sec:greedy} A special representation of a number $x$, called the [*greedy $\beta$-expansion*]{}, is the infinite sequence $(x_n)_{n \ge 1}$ obtained by using the greedy algorithm of Rényi [@Re]. Denote by $\lfloor y \rfloor$ and $\{y\}$ the integer part and the fractional part of the real number $y$. Set $r_0 := x$ and, for $n \ge 1$, let $x_n := \lfloor \beta r_{n-1} \rfloor$, $r_n := \{\beta r_{n-1}\}$. Then $x= \sum_{n \ge 1} x_n \beta^{-n}$. Intuitively, the digit $x_n$ is chosen so that it is the maximal choice allowed at each step. The digits $x_n$ obtained by the greedy algorithm belong to the alphabet $A_\beta = \{0,1,\ldots,\lfloor \beta \rfloor\}$ if $\beta$ is not an integer, which will always be the case in this work. 
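The greedy algorithm translates directly into a few lines of code. A sketch (ours, not from the paper): with the illustrative rational base $\beta = 3/2$, `Fraction` arithmetic keeps every remainder $r_n$ exact; for an irrational Pisot base one would need algebraic or carefully rounded arithmetic.

```python
from fractions import Fraction
from math import floor

def greedy_expansion(beta, x, n_digits):
    """First n_digits of the greedy (Renyi) beta-expansion of x in [0, 1]:
    x_n = floor(beta * r_{n-1}),  r_n = {beta * r_{n-1}}."""
    digits, r = [], x
    for _ in range(n_digits):
        y = beta * r
        d = floor(y)
        digits.append(d)
        r = y - d  # fractional part of beta * r_{n-1}
    return digits, r

# Illustration with the (non-Pisot) rational base 3/2, where Fraction keeps
# the computation exact: the greedy expansion of 1 begins 101000001...
digits, r = greedy_expansion(Fraction(3, 2), Fraction(1), 9)
```

By construction the partial sums satisfy $x = \sum_{n \le N} x_n \beta^{-n} + r_N \beta^{-N}$ exactly, which is easy to confirm in exact arithmetic.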
It is clear from the definition that amongst the $\beta$-representations of a number, the greedy $\beta$-expansion is the largest in the lexicographic order (denoted by $\leq_{lex}$ and $<_{lex}$). The greedy $\beta$-expansion of $x$ will be denoted by $d_\beta(x) := (x_n)_{n \ge 1}$. The greedy $\beta$-expansion of $1$ plays an important role. Set ${d_\beta(1)}=(e_n)_{n \ge 1}$ and define $${d^*_{\beta}(1)}:= \left\{ \begin{array}{ll} {d_\beta(1)}\ &\mbox{\rm if ${d_\beta(1)}$ is infinite} \\ (e_1 \cdots e_{m-1} (e_m - 1))^\infty \ &\mbox{\rm if ${d_\beta(1)}=e_1 \cdots e_{m-1}e_m$ is finite}. \end{array} \right.$$ Of course if ${d_\beta(1)}$ is finite, the sequence ${d^*_{\beta}(1)}$ is also a $\beta$-representation of $1$. Denote by $\sigma$ the shift on $A_\beta^{{\mathbb N}_+}$: for any sequence $s = (s_n)_{n \geq 1}$ in $A_\beta^{{\mathbb N}_+}$, the sequence $v = \sigma(s)$ is defined by $v = (v_n)_{n \geq 1} := (s_{n+1})_{n \geq 1}$. We recall some useful results. \[Pa\] [[@Pa]]{} Let $s = (s_n)_{n \geq 1}$ be a sequence in $A_{\beta}^{{\mathbb N}_+}$. Then - the sequence $s$ is the greedy $\beta$-expansion of some $x \in [0,1)$ if and only if $$\forall k \geq 0, \ \ \sigma^k(s) <_{lex} {d^*_{\beta}(1)}$$ - the sequence $s$ is the greedy $\beta$-expansion of $1$ for some $\beta >1$ if and only if $$\forall k \geq 1, \ \ \sigma^k(s) <_{lex} s.$$ ### Lazy representations {#sec:lazy} Another distinguished $\beta$-representation of the real number $x$ is the so-called [*lazy*]{} expansion, which is the smallest in the lexicographic order among the $\beta$-representations of $x$ on the alphabet $A_\beta$. Denote by $\ell_\beta(x)=(x_n)_{n \geq 1} $ the lazy $\beta$-expansion of $x$. To compute it, intuitively we have to choose $x_n$ to be as small as possible at each step. The algorithm to obtain the lazy expansion is the following. Let $B := \sum_{n \geq 1} \frac{\lfloor \beta \rfloor}{\beta^n}=\frac{\lfloor \beta \rfloor}{\beta -1}$. 
Set $r_0 := x$ and, for $n \ge 1$, let $x_n := \max(0,\lceil \beta r_{n-1}-B \rceil)$, $r_n := \beta r_{n-1} - x_n$. Then $x= \sum_{n \ge 1} x_n \beta^{-n}$, where the sequence $(x_n)_{n \ge 1}$ is the lazy $\beta$-expansion. Let $s=(s_n)_{n \ge 1}$ be in $A_\beta^{{\mathbb N}_+}$. Denote by $\overline{s_n} := \lfloor \beta \rfloor-s_n$ the “complement” of $s_n$, and by extension $\bar{s} := (\overline{s_n})_{n \ge 1}$. Then the following characterization of lazy expansions holds true. \[EJK1\] [[@EJK; @DK]]{} Let $s = (s_n)_{n \geq 1}$ be a sequence in $A_{\beta}^{{\mathbb N}_+}$. Then - the sequence $s$ is the lazy $\beta$-expansion of some $x \in [0,1)$ if and only if $$\forall k \ge 0, \ \ \sigma^k(\bar{s}) <_{lex} {d^*_{\beta}(1)}$$ - the sequence $s$ is the lazy $\beta$-expansion of $1$ for some $\beta >1$ if and only if $$\forall k \ge 1, \ \ \sigma^k(\bar{s}) <_{lex} s.$$ \[GR\] Take $\psi_1=\frac{1+\sqrt{5}}{2}$, the golden ratio. The greedy $\beta$-expansion of $1$ is $d_{\psi_1}(1)=11$, $d_{\psi_1}^*(1)=(10)^\infty$, and the lazy expansion of $1$ is $\ell_{\psi_1}(1)=01^\infty$. Univoque real numbers --------------------- Following [@KL; @KLP], a number $\beta > 1$ is said to be [*univoque*]{} if there exists a unique sequence of integers $(s_n)_{n \ge 1}$, with $0 \le s_n <\beta$, such that $1=\sum_{n \ge 1}s_n \beta^{-n}$. In this case the sequence $(s_n)_{n \ge 1}$ coincides both with the greedy and with the lazy $\beta$-expansion of $1$. Remark that the number 2 is univoque, but we will be concerned with non-integer real numbers in this paper. Note that some authors call “univoque” the real numbers $x$ having a unique $\beta$-representation (see [@DK2]). Binary sequences $(s_n)_{n \geq 1}$ such that the convergent sum $\sum_{n \geq 1}{s_n}\beta^{-n}$ uniquely determines the sequence $(s_n)_{n \geq 1}$ are also called “univoque” (see [@DK1]). Nevertheless, for simplicity we keep our notion of “univoque”. \[selfbrack\] We define two sets of binary sequences as follows.
- A sequence $s = (s_n)_{n \geq 1}$ in $\{0,1\}^{{\mathbb N}_+}$ is called [*self-bracketed*]{} if for every $k \ge 1$ $$\bar{s} \le_{lex} \sigma^k(s) \le_{lex} s.$$ The set of self-bracketed sequences in $\{0,1\}^{{\mathbb N}_+}$ is denoted by $\Gamma$. - If all the inequalities above are strict, the sequence $s$ is said to be [*strictly self-bracketed*]{}. If one of the inequalities is an equality, then $s$ is said to be [*periodic self-bracketed*]{}. The subset of $\Gamma$ consisting of the strictly self-bracketed sequences is denoted by $\Gamma_{strict}$. \[periodic\] The reader will have noticed that the expression “periodic self-bracketed” comes from the fact that $\sigma^k(s) = s$ or $\sigma^k(s) = \bar{s}$ for some $k \geq 1$ implies that the sequence $s$ is periodic. With this terminology we can rephrase the following result from [@EJK]. \[EJK\] [[@EJK]]{} A sequence in $\{0,1\}^{{\mathbb N}_+}$ is the unique $\beta$-expansion of $1$ for a univoque number $\beta$ in $(1,2)$ if and only if it is strictly self-bracketed. \[cor:01\] Let $s = (s_n)_{n\geq 1}$ be a sequence in $\{0,1\}^{{\mathbb N}_{+}}$. Suppose that the largest string of consecutive $1$’s in $s$ has length $k$, and the largest string of consecutive $0$’s has length $n$ (here $k$ and $n$ may be $\infty$). If $n > k$, then $s$ is not self-bracketed. There exists a smallest univoque real number in $(1, 2)$ [@KL]. Recall first that the Thue-Morse sequence is the fixed point beginning with 0 of the morphism $0 \to 01$, $1 \to 10$ (see for example [@AS]), hence the sequence $$0 \ 1 \ 1 \ 0 \ 1 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 1 \ 0 \ 1 \ 1 \ 0 \ \ldots$$ \[KL\] [[@KL]]{} There exists a smallest univoque real number $\kappa \in (1,2)$. One has $\kappa \approx 1.787231$, and $d_\kappa(1)=(t_n)_{n \ge 1}$, where $(t_n)_{n \ge 1}=11010011\ldots$ is obtained by shifting the Thue-Morse sequence. The number $\kappa$ is not rational; actually more can be proved.
\[AC\] [[@AC2]]{} The Komornik-Loreti constant $\kappa$ is transcendental. Notation {#notation .unnumbered} -------- In the remainder of this paper, we will denote by ${\mathcal U}$ the set of real numbers in $(1, 2)$ which are univoque. We will denote by $\widetilde{\mathcal U}$ the set of real numbers $\beta \in (1, 2)$ such that ${d_\beta(1)}$ is finite and ${d^*_{\beta}(1)}$ is a periodic self-bracketed sequence. Formally, we have $${\mathcal U} = \{\beta \in (1,2): d_\beta(1) \in \Gamma_{strict}\}$$ and $$\widetilde{\mathcal U} = \{\beta \in (1,2): d_\beta(1)\ \mathrm{is\ finite\ and}\ d_{\beta}^*(1)\ \mathrm{is\ periodic\ self-bracketed}\}$$ Pisot numbers ------------- A [*Pisot number*]{} is an algebraic integer $>1$ such that all its algebraic conjugates (other than itself) have modulus $<1$. As usual the set of Pisot numbers is denoted by $S$ and its derived set (set of limit points) by $S'$. It is known that $S$ is closed [@S], and has a smallest element, which is the root $>1$ of the polynomial $x^3-x-1$ (approx. 1.3247). A [*Salem number*]{} is an algebraic integer $>1$ such that all its algebraic conjugates have modulus $\le 1$, with at least one conjugate on the unit circle. We recall some results on Pisot and Salem numbers (the reader is referred to [@Ber] for more on these topics). One important result is that if $\beta$ is a Pisot number then ${d_\beta(1)}$ is eventually periodic (finite or infinite) [@Be1]. Note that ${d_\beta(1)}$ is never periodic, but that when ${d_\beta(1)}$ is finite, ${d^*_{\beta}(1)}$ is periodic. A number $\beta$ such that ${d_\beta(1)}$ is eventually periodic is called a [*Parry number*]{} (they are called [*beta-numbers*]{} by Parry [@Pa]). When ${d_\beta(1)}$ is finite, $\beta$ is called a [*simple*]{} Parry number. One deeper result is the following one. \[AB\] [[@Be1; @schmidt]]{} Let $\beta$ be a Pisot number. 
A number $x$ of $[0,1]$ has a (finite or infinite) eventually periodic greedy $\beta$-expansion if and only if it belongs to ${\mathbb Q}(\beta)$. For lazy expansions we have a similar result. \[lep\] Let $\beta$ be a Pisot number. A number $x$ of $[0,1]$ has an eventually periodic lazy $\beta$-expansion if and only if it belongs to ${\mathbb Q}(\beta)$. Let $\ell_\beta(x)=(x_n)_{n\ge 1}$. By Theorem \[EJK1\] the sequence $(\overline{x_n})_{n\ge 1}$ is the greedy $\beta$-expansion of the number $\frac{\lfloor \beta \rfloor}{\beta -1}-x$, and the result follows from Theorem \[AB\]. Amara has determined all the limit points of $S$ smaller than 2 in [@Am]. \[Am\] [[@Am]]{} The limit points of $S$ in $(1,2)$ are the following: $$\varphi_1=\psi_1<\varphi_2<\psi_2<\varphi_3<\chi <\psi_3<\varphi_4< \cdots <\psi_r<\varphi_{r+1}< \cdots <2$$ where $$\begin{cases} \text{the minimal polynomial of\ } \varphi_r \text{\ is\ } x^{r+1}-2x^r+x-1, \\ \text{the minimal polynomial of\ } \psi_r \text{\ is\ } x^{r+1}-x^r-\cdots-x-1, \\ \text{the minimal polynomial of\ } \chi \text{\ is\ } x^4-x^3-2x^2+1. \\ \end{cases}$$ The first few limit points are: - $\varphi_1 = \psi_1 \approx 1.618033989$, the root in $(1,2)$ of $x^2-x-1$ - $\varphi_2 \approx 1.754877666$, the root in $(1,2)$ of $x^3-2 x^2+x-1$ - $\psi_2 \approx 1.839286755$, the root in $(1,2)$ of $x^3-x^2-x-1$ - $\varphi_3 \approx 1.866760399$, the root in $(1,2)$ of $x^4-2 x^3+x-1$ - $\chi \approx 1.905166168$, the root in $(1,2)$ of $x^4-x^3-2 x^2+1$ - $\psi_3 \approx 1.927561975$, the root in $(1,2)$ of $x^4-x^3-x^2-x-1$ The greedy and lazy $\beta$-expansions of these points are given in Table \[tab:S limit\] below. For any interval $[a,b]$, with $b<2$, an algorithm of Boyd [@Boyd78; @Boyd83; @Boyd84] finds all Pisot numbers in the interval. 
If $[a,b]$ contains a limit point $\theta$, then there exists an $\varepsilon >0$ such that all Pisot numbers in $[\theta-\varepsilon, \theta+\varepsilon]$ are [*regular*]{} Pisot numbers of a known form. Boyd’s algorithm detects these regular Pisot numbers, and truncates the search accordingly. (For a non-effective study of Pisot numbers in subintervals of $(1,2)$, see also [@Ta1; @Ta2].) Recall that Boyd has shown that for any Salem number of degree 4 the greedy expansion of 1 is eventually periodic, [@Boyd89], and has given some evidence in favor of the conjecture that it is still the case for degree 6, [@Boyd96a]. Preliminary combinatorial results ================================= We start by defining a function $\Phi$ on the infinite words of the form $b=(z0)^\infty$. \[Phi\] Let $b=(z0)^\infty$ be a periodic binary word whose period pattern ends in a $0$. Suppose furthermore that the minimal period of $b$ is equal to $1+|z|$. Then we define $\Phi(b)$ by $$\Phi(b):=(z1\overline{z}0)^\infty.$$ We now recall a result from [@All]. \[all\] - If a sequence $b$ belonging to $\Gamma$ begins with $u \bar{u}$ where $u$ is a finite nonempty word, then $b = (u \bar{u})^\infty$. - If $b=(z0)^\infty$, where the minimal period of $b$ is equal to $1+|z|$, is an element of $\Gamma$, then $\Phi(b)$ belongs to $\Gamma$, and there is no element of $\Gamma$ lexicographically between $b$ and $\Phi(b)$. \[closed\] The inequalities defining the set $\Gamma$ show that $\Gamma$ is a (topologically) closed set. \[all2\] Let $b=(z0)^\infty$ (where the minimal period of $b$ is equal to $1+|z|$). The sequence $(\Phi^{(n)}(b))_{n \geq 0}$ is a sequence of elements of $\Gamma$ that converges to a limit $\Phi^{(\infty)}(b)$ in $\Gamma$. The only elements of $\Gamma$ lexicographically between $b$ and $\Phi^{(\infty)}(b)$ are the $\Phi^{(k)}(b)$, $k \geq 0$. 
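The map $\Phi$ and Lemma \[all2\] can be made concrete. A sketch (ours; period words are encoded as bit strings ending in $0$): iterating $\Phi$ from $(10)^\infty$ doubles the period word at each step, and each iterate agrees with the shifted Thue-Morse sequence of Theorem \[KL\] except in its final symbol, illustrating the convergence of the sequence $(\Phi^{(n)}(b))_{n \geq 0}$.

```python
def phi(period):
    """One step of the map Phi on period words: if b = (z0)^infinity, then
    Phi(b) = (z 1 zbar 0)^infinity.  The argument is the period word z0
    (assumed to end in '0'); the result is the period word z 1 zbar 0."""
    z = period[:-1]
    zbar = ''.join('1' if c == '0' else '0' for c in z)
    return z + '1' + zbar + '0'

def shifted_tm(n):
    """First n terms of the shifted Thue-Morse sequence t_1 t_2 ...,
    where t_j is the parity of the number of 1s in the binary expansion of j."""
    return [bin(j).count('1') % 2 for j in range(1, n + 1)]
```

Starting from `"10"` (the period word of $d^*_{\psi_1}(1)$), `phi` produces `"1100"` (the period word of $d^*_{\varphi_2}(1)$), then `"11010010"`, and so on; every iterate matches the shifted Thue-Morse sequence in all but its last bit.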
By abuse of notation, if $\theta$ is the number such that $d^*_\theta(1)=b$, we denote by $\Phi(\theta)$ the real number $> 1$ such that $d^*_{\Phi(\theta)}(1)=\Phi(b)$. Take $b=d^{*}_{\psi_r}(1)=(1^{r}0)^\infty$. Then $\Phi(b)=(1^{r}10^r0)^\infty = d^{*}_{\varphi_{r+1}}(1)$, thus $\varphi_{r+1}=\Phi(\psi_r)$. Let $\pi_{r}$ be the real number defined by $d^{*}_{\pi_{r}}(1)=\Phi^{(\infty)}((1^{r}0)^\infty)$, that is, $\pi_{r} = \Phi^{\infty}(\psi_r)$. Then $d^{*}_{\pi_{r}}(1)$ is strictly self-bracketed (see [@All]), hence the following result holds true. \[pi\] The number $\pi_{r}$ is univoque. Furthermore between $\psi_r$ and $\pi_{r}=\Phi^{(\infty)}(\psi_r)$ the only real numbers belonging to ${\mathcal U}$ or $\widetilde{\mathcal U}$ are the numbers $\varphi_{r+1}$, $\Phi(\varphi_{r+1})$, $\Phi^{(2)}(\varphi_{r+1})$, etc. They all belong to $\widetilde{\mathcal U}$. We will now prove a combinatorial property of the sequences $d_{\beta}(1)$. Before stating and proving this property we first make a straightforward remark. \[real\] Let $u$ and $v$ be two binary words having the same length. Let $a$ and $b$ be either two binary words having the same length or two infinite binary sequences. Suppose that $a$ begins with $u$ and $b$ begins with $v$. Then $$\begin{array}{lll} a \leq_{lex} b & \Longrightarrow & u \leq_{lex} v \\ u <_{lex} v & \Longrightarrow & a <_{lex} b. \end{array}$$ \[pral\] Let $a=(w0)^{\infty}$ be an infinite periodic binary sequence with minimal period $1+|w|$, such that $w$ (and hence $a$) begins in $1$. Let $b=w10^{\infty}$. Then the following two properties are equivalent: \(i) $\forall k \geq 1$,  $\sigma^k(a)\leq_{lex} a$, \(ii) $\forall k \geq 1$,  $\sigma^k(b)<_{lex} b$. We first prove (i) $\Longrightarrow$ (ii). Since we clearly have $\sigma^k(b)<_{lex} b$ for each $k \geq |w|$, we can suppose that $k < |w|$. Write $w = u v$ where $|u| = k$, hence $u$ and $v$ are both nonempty. 
This gives $a = (uv0)^{\infty}$ and $b = uv10^{\infty}$, and we want to prove that $v10^{\infty} <_{lex} uv10^{\infty}$. Let us write $|v| = d|u| + e$, where $d \geq 0$ and $e \in [0, |u|)$. We can write $v = v_1 v_2 \ldots v_d z$, with $|v_1| = |v_2|= \ldots = |v_d| = |u|$, and $|z| = e < |u|$. Note that, if $d=0$, then $v=z$. Let us also write $u=st$ and, for each $j \in [1,d]$, $v_j = s_jt_j$, where $|s| = |s_1| = |s_2|= \ldots = |s_d| = |z|$ and $|t| = |t_1| = |t_2| = \ldots = |t_d|$. We thus have $$a = (s t s_1 t_1 s_2 t_2 \ldots s_d t_d z 0)^{\infty}$$ and we want to prove that $$s_1 t_1 s_2 t_2 \ldots s_d t_d z 1 0^{\infty} <_{lex} s t s_1 t_1 s_2 t_2 \ldots s_d t_d z 1 0^{\infty}.$$ Applying, for each $j \in [1,d]$, the hypothesis $\sigma^k(a)\leq_{lex} a$ with $k = |s t s_1 t_1 s_2 t_2 \ldots s_{j-1} t_{j-1}|$ (in particular if $j=1$ then $k = |st|$), we see that $s_j t_j \leq_{lex} s t$. Define $${\mathcal E} := \{j, \ s_j t_j <_{lex} s t\}.$$ - If ${\mathcal E} \neq \emptyset$, let $j_0 = \min {\mathcal E}$. Then $$s t = s_1 t_1 = s_ 2 t_2 = \ldots = s_{j_0-1} t_{j_0-1}$$ i.e., $$s = s_1 = s_2 = \ldots = s_{j_0-1} \ \mbox{\rm and } \ t = t_1 = t_2 = \ldots = t_{j_0-1}$$ (this condition is empty if $j_0=1$) and $$s_{j_0} t_{j_0} <_{lex} s t.$$ In this case we have $b = (st)^{j_0}s_{j_0}t_{j_0} \ldots s_d t_d z10^{\infty}$ and we want to prove that $$(st)^{j_0-1}s_{j_0}t_{j_0} \ldots s_d t_d z10^{\infty} <_{lex} (st)^{j_0}s_{j_0}t_{j_0} \ldots s_d t_d z10^{\infty}$$ which is an immediate consequence of the inequality $s_{j_0}t_{j_0} <_{lex} st$. - If ${\mathcal E} = \emptyset$, then either $d=0$, or $s_1 t_1 = s_2 t_2 = \ldots = s_d t_d = s t$. Either way, we get $$s_1 = s_2 = \ldots = s_d = s \ \mbox{\rm and }\ t_1 = t_2 = \ldots = t_d = t.$$ In this case we have $a = ((st)^{d+1}z0)^{\infty}$ and we want to prove that $(st)^d z10^{\infty} <_{lex} (st)^{d+1} z10^{\infty}$, i.e., that $z10^{\infty} <_{lex} st z10^{\infty}$. 
Applying the hypothesis $\sigma^k(a)\leq_{lex} a$ with $k = |(st)^{d+1}|$, we see that $z \leq_{lex} s$. - If $z <_{lex} s$, the inequality $z10^{\infty} <_{lex} st z10^{\infty}$ is clear. - If $z = s$, we want to prove that $10^{\infty} <_{lex} t z10^{\infty}$, i.e., that $t$ begins in $1$ (note that, if $t$ is empty, then the inequality is clear since $z=s$ begins in $1$ as does $a$). If we had $t=0r$, with $r$ possibly empty, we would have $a = ((z0r)^{d+1}z0)^{\infty}$. Applying the hypothesis $\sigma^k(a)\leq_{lex} a$ with $k = |(z0r)^{d+1}|$ and $k = |(z0r)^dz0|$ we get respectively $z0z0r \leq_{lex} z0rz0$ (i.e., $z0r \leq_{lex} rz0$) and $rz0 \leq_{lex} z0r$. Hence we have $rz0 = z0r$. Writing this last equality as $r (z0) = (z0) r$, the Lyndon-Schützenberger theorem (see [@LS62]) implies that $r = \emptyset$ or there exist a nonempty word $x$ and two integers $p, q \geq 1$, such that $z0 = x^p$ and $r = x^q$. This gives $a = (x^{p(d+2)})^{\infty}$ or $a = (x^{(p+q)(d+1)+p})^{\infty}$. In both cases $a = x^{\infty}$ and $|x| < |(z0r)^{d+1}z0|$, which contradicts the minimality of the period of $a$. We now prove (ii) $\Longrightarrow$ (i). Because of the periodicity of the sequence $a$ and the fact that it begins in $1$, we can suppose that $k \leq |w|$. Hence we write $w = uv$ with $u$ and $v$ nonempty and $|u| = k$, and we want to prove that $v0(uv0)^{\infty} \leq_{lex} (uv0)^{\infty}$. Since $u$ begins in $1$ as $a$ does, it suffices to prove that $v01^{\infty} \leq_{lex} (uv0)^{\infty}$. Applying the hypothesis $\sigma^k(b)<_{lex} b$ with $k = |u|$, we have $v10^{\infty} <_{lex} uv10^{\infty}$. Hence $v10^{|u|} \leq_{lex} uv1$. This inequality must be strict since its left-hand side ends in a $0$ and its right-hand side ends with a $1$: thus $v10^{|u|} <_{lex} uv1$. Hence $v10^{|u|} \leq_{lex} uv0$. We can then write $v0u <_{lex} v10^{|u|} \leq_{lex} uv0$, hence $v0u <_{lex} uv0$. This implies in turn $v0(uv0)^{\infty} \leq_{lex} (uv0)^{\infty}$.
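Proposition \[pral\] can also be sanity-checked by brute force. A sketch (ours; the finite window lengths suffice because $a$ is periodic and $b$ ends in $0^\infty$): encode each lexicographic condition as finitely many string comparisons and compare the two conditions over all admissible short words $w$.

```python
def cond_a(w):
    """(i): sigma^k(a) <=_lex a for all k >= 1, where a = (w0)^infinity.
    Both sequences are (|w|+1)-periodic, so windows of length |w|+1 decide."""
    p = w + '0'
    n = len(p)
    a = p * 2
    return all(a[k:k + n] <= a[:n] for k in range(1, n))

def cond_b(w):
    """(ii): sigma^k(b) <_lex b for all k >= 1, where b = w 1 0^infinity.
    For k > |w|+1 the shift is 0^infinity < b, so finitely many k suffice;
    a window of length |w|+2 contains the whole nonzero part of each shift."""
    n = len(w) + 2
    b = w + '1' + '0' * n
    return all(b[k:k + n] < b[:n] for k in range(1, len(w) + 2))

def minimal_period_ok(w):
    """True if the minimal period of (w0)^infinity is exactly |w| + 1."""
    p = w + '0'
    n = len(p)
    return not any(n % d == 0 and p == p[:d] * (n // d) for d in range(1, n))
```

Running both predicates over all words $w$ of length at most $7$ beginning with $1$ and satisfying the minimal-period hypothesis, the two conditions always agree, as the proposition asserts; e.g. both hold for $w = 11$ (giving $\psi_2$) and both fail for $w = 1011$.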
\[dstar\] The sequence $a=(w0)^\infty$ is equal to $d_\theta^*(1)$ for some $\theta>1$ if and only if $b=w10^\infty$ is equal to $d_\theta(1)$. We end this section with a result on limits of sequences of elements in $\Gamma$. \[lsup\] A sequence in $\Gamma$ of the form $(w0)^\infty$ cannot be a limit from above of a non-eventually constant sequence of elements of $\Gamma$. Suppose we have a sequence $(z^{(m)})_{m \ge 0}$ with $z^{(m)} = (z^{(m)}_n)_{n \geq 1}$ belonging to $\Gamma$, and converging towards $(w0)^\infty$, with $z^{(m)} \ge_{lex} (w0)^\infty$. From Lemma \[all\] there is no element of $\Gamma$ lexicographically between $(w0)^\infty$ and $(w1\bar{w}0)^\infty$, hence $(z^{(m)})_{m \ge 0}$ is ultimately equal to $(w0)^\infty$. First results ============= In this section we consider only numbers $\beta$ belonging to $(1, 2)$. Preliminary results ------------------- Our goal here is to present some simple preliminary data. In particular, in Table \[tab:S limit\], we give the expansions for Pisot numbers in $S'\cap(1,2)$, in Table \[tab:PisotUnivoque\] we give Pisot numbers of small degree in the interval $(1,2)$, and in Table \[tab:SalemUnivoque\] we examine Salem numbers of small degree in the interval $(1,2)$. Some observations that are worth making, based on these tables, include: - The golden ratio $\varphi_1=\psi_1$ is the smallest element of $\widetilde{\mathcal U}$. (This comes straight from Definition \[selfbrack\].) - There is no univoque Pisot number of degree $2$ or $3$. - The number $\chi$ is the unique Pisot number of degree $4$ which is univoque. - For Pisot numbers $\psi_r$, the lazy expansion coincides with $d_{\psi_r}^*(1)$. - There exists a unique Salem number of degree $4$ which is univoque. - Salem numbers greater than the Komornik-Loreti constant $\kappa$ appear to be univoque (for degrees 4 and 6).
-------------------------------- ------------- ----------------- -------------------- ------------------------- Minimal Pisot Greedy Lazy Comment Polynomial Number expansion expansion $x^{r+1} - 2 x^{r} + x - 1$ $\varphi_r$ $1^r0^{r-1}1$ $1^{r-1}01^\infty$ periodic self-bracketed $x^{r+1} - x^{r} - \cdots - 1$ $\psi_r$ $1^{r+1}$ $(1^r0)^\infty$ periodic self-bracketed $x^4-x^3-2 x^2 + 1$ $\chi$ $11(10)^\infty$ $11(10)^\infty$ univoque -------------------------------- ------------- ----------------- -------------------- ------------------------- : Greedy and lazy $\beta$-expansions of real numbers in $S'\cap(1,2)$.[]{data-label="tab:S limit"} We also observe the following straightforward lemma. \[unit\] A Parry number which is univoque must be a unit (i.e., an algebraic integer whose minimal polynomial has its constant term equal to $\pm 1$). For each Pisot number of degree at most 4 and each Salem number of degree at most 6, we simply compute the greedy and lazy expansion, and then compare them to see when they are equal. To find the list of Pisot numbers, we use the algorithm of Boyd [@Boyd78]. Although there is no nice algorithm to find Salem numbers in $(1,2)$ of fixed degree, for low degree we can use brute force. Namely, if $P(x) = x^n + a_1 x^{n-1} + \cdots + a_1 x + 1$ is a Salem polynomial with root in $(1,2)$ and $Q(x) = x^n + b_1 x^{n-1} + \cdots + b_1 x + 1 = (x+2)(x+1/2)(x+1)^{n-2}$, then we have $|a_i| \leq b_i$. See [@Borwein02] for more on bounds of coefficients.
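The comparison just described is immediate to run. A floating-point sketch (ours, not the authors' implementation; a rigorous certification would use exact algebraic arithmetic): compute $\chi$ by bisection on its minimal polynomial $x^4-x^3-2x^2+1$ and confirm that the greedy and lazy expansions of $1$ agree, i.e., that $\chi$ is univoque with $d_\chi(1) = 11(10)^\infty$.

```python
from math import floor, ceil

def greedy_digits(beta, n):
    """First n digits of the greedy beta-expansion of 1 (Renyi's algorithm)."""
    r, out = 1.0, []
    for _ in range(n):
        y = beta * r
        d = floor(y)
        out.append(d)
        r = y - d
    return out

def lazy_digits(beta, n):
    """First n digits of the lazy beta-expansion of 1, with
    B = floor(beta)/(beta - 1) and x_n = max(0, ceil(beta*r - B))."""
    B = floor(beta) / (beta - 1.0)
    r, out = 1.0, []
    for _ in range(n):
        y = beta * r
        d = max(0, ceil(y - B))
        out.append(d)
        r = y - d
    return out

def root_in_1_2(p, lo=1.0, hi=2.0, iters=80):
    """Bisection for a root of p, assuming p(lo) < 0 < p(hi)."""
    assert p(lo) < 0 < p(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if p(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

chi = root_in_1_2(lambda x: x**4 - x**3 - 2*x**2 + 1)
```

The same two routines reproduce the other rows of the tables as well, subject to the usual floating-point caveat when a quantity $\beta r_{n-1}$ falls exactly on a digit boundary (as happens, e.g., for the greedy expansion at $\psi_2$).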
-------------------- -------------- ----------------- ----------------- ------------------------- Minimal polynomial Pisot number Greedy Lazy Comment expansion expansion $x^2-x-1 $ 1.618033989 $11 $ $01^\infty$ periodic self-bracketed $x^3-x-1 $ 1.324717957 $10001 $ $00001^\infty$ $x^3-x^2-1 $ 1.465571232 $101 $ $001^\infty$ $x^3-2 x^2+x-1 $ 1.754877666 $1101 $ $101^\infty$ periodic self-bracketed $x^3-x^2-x-1 $ 1.839286755 $111 $ $(110)^\infty$ periodic self-bracketed $x^4-x^3-1 $ 1.380277569 $1001 $ $0001^\infty$ $x^4-2 x^3+x-1 $ 1.866760399 $111001 $ $1101^\infty$ periodic self-bracketed $x^4-x^3-2 x^2+1$ 1.905166168 $11(10)^\infty$ $11(10)^\infty$ univoque $x^4-x^3-x^2-x-1$ 1.927561975 $1111 $ $(1110)^\infty$ periodic self-bracketed -------------------- -------------- ----------------- ----------------- ------------------------- : Greedy and lazy expansions of degree 2, 3 and 4 Pisot numbers.[]{data-label="tab:PisotUnivoque"} [|lll[p]{}[1 in]{}l|]{} Minimal polynomial & Salem & Greedy & Lazy & Comment\ & number & expansion & expansion &\ $x^4-x^3-x^2-x+1$ & 1.722083806 & $1(100)^\infty$ & $101(110)^\infty$ &\ $x^4-2 x^3+x^2-2 x+1$ & 1.883203506 & $1(1100)^\infty$ & $1(1100)^\infty$ & univoque\ & & & &\ $x^6-x^4-x^3-x^2+1$ & 1.401268368 & $1(0010000)^\infty$ & $0010111(1111110)^\infty$ &\ $x^6-x^5-x^3-x+1$ & 1.506135680 & $1(01000)^\infty$ & $01011(11110)^\infty$ &\ $x^6-x^5-x^4+x^3-x^2-x+1$ & 1.556030191 & $1(01001001000)^\infty$ & $ 0 1^3 (01)^2 ( 1^7 0 1^6 w 1^3 w 1^6 0 )^\infty$ &\ $x^6-x^4-2 x^3-x^2+1$ & 1.582347184 & $1(0101000)^\infty$ & $011(110)^\infty$ &\ $x^6-2 x^5+2 x^4-3 x^3$ & 1.635573130 & $1(1000000100)^\infty$ & $1010101(1101111110)^\infty$ &\ $\ \ \ \ +2 x^2-2 x+1$ & & & &\ $x^6-x^5-x^4-x^2-x+1$ & 1.781643599 & $1(10100)^\infty$ & $11001(11110)^\infty$ &\ $x^6-2 x^5+x^3-2 x+1$ & 1.831075825 & $1(10110100)^\infty$ & $1(10110100)^\infty$ & univoque\ $x^6-x^5-x^4-x^3-x^2-x+1$& 1.946856268 & $1(11100)^\infty$ & $1(11100)^\infty$ & univoque\ $x^6-2 
x^5-x^4+3 x^3-x^2$ & 1.963553039 & $1(111011100)^\infty$ & $1(111011100)^\infty$ & univoque\ $\ \ \ \ -2 x+1$ & & & &\ $x^6-2 x^5+x^4-2 x^3+x^2$ & 1.974818708 & $1(111100)^\infty$ & $1(111100)^\infty$ & univoque\ $\ \ \ \ -2 x+1$ & & & &\ $x^6-2 x^4-3 x^3-2 x^2+1$ & 1.987793167 & $1(1111100)^{\infty}$ & $1(1111100)^{\infty}$ & univoque\ Limit points of univoque numbers -------------------------------- In this section we concern ourselves with the structure of ${\mathcal U} \cap S$ and $\widetilde{\mathcal U} \cap S$, as well as intersections with the derived set $S'$. We begin with the following result. \[lim\] The limit of a sequence of real numbers belonging to ${\mathcal U}$ belongs to ${\mathcal U}$ or $\widetilde{\mathcal U}$. Let $(\theta_j)_{j \ge 1}$ be a sequence of numbers belonging to ${\mathcal U}$ such that $\lim_{j \to \infty} \theta_j = \theta$. Let $a^{(j)}=(a^{(j)}_n)_{n \ge 1} := d_{\theta_j}(1)$. Up to replacing the sequence $(\theta_j)_{j \ge 1}$ by a subsequence, we may assume that the sequence of sequences $(a^{(j)}_n)_{n \ge 1}$ converges to a limit $a = (a_n)_{n \ge 1}$ when $j$ goes to infinity. Then (dominated convergence): $$1=\sum_{n \ge 1} \frac{a_n}{\theta^n}.$$ For every $j \ge 1$ the number $\theta_j$ belongs to ${\mathcal U}$. Hence the sequence $a^{(j)}$ belongs to $\Gamma_{strict}$ hence to $\Gamma$. Thus the limit $a=\lim_{j \to\infty} a^{(j)}$ belongs to $\Gamma$ (see Remark \[closed\]). If $a$ belongs to $\Gamma_{strict}$, then it is the $\theta$-expansion of $1$, and $\theta$ belongs to ${\mathcal U}$. If $a$ is periodic self-bracketed, it is of the form $a=(w0)^\infty$, where we may assume that the minimal period of $a$ is $1+|w|$. From Corollary \[dstar\], $a=d_\theta^*(1)$, $b:=w10^\infty=d_\theta(1)$, and $\theta$ belongs to the set $\widetilde{\mathcal U}$. \[co\] The numbers $\varphi_{r}$ cannot be limit points of numbers in ${\mathcal U}$. This is a consequence of the first part of Lemma \[all\]. 
We now give two remarkable sequences of real numbers that converge to the Komornik-Loreti constant $\kappa$. Part (ii) of Proposition \[tr\] below was obtained independently by the second author and in [@KLP]. \[tr\] - Let $t=(t_n)_{n \ge 1}=11010011\ldots$ be the shifted Thue-Morse sequence, and let $\tau_{2^k}$ be the real number $>1$ such that $d_{\tau_{2^k}}(1)=t_1 \cdots t_{2^k}$. Then, the sequence of real numbers $(\tau_{2^k})_{k \ge 1}$ converges from below to the Komornik-Loreti constant $\kappa$. These numbers belong to $\widetilde{\mathcal U}$. The first three are Pisot numbers. - There exists a sequence of univoque Parry numbers that converges to $\kappa$ from above. To prove (i), note that $\tau_2$ is the golden ratio, $\tau_4=\Phi(\tau_2)=\varphi_2$, $\tau_8=\Phi^2(\tau_2)$, etc., and $\kappa=\Phi^{(\infty)}(\tau_2)$. In order to prove (ii) we define $\delta_{2^k}$ as the number such that $$d_{\delta_{2^k}}(1)= t_1 \cdots t_{2^k-1}(1\overline{t_1} \cdots \overline{t_{2^k-1}})^\infty.$$ Clearly the sequence $d_{\delta_{2^k}}(1)$ converges to $t$ when $k$ goes to infinity and thus the sequence $(\delta_{2^k})_{k \ge 1}$ converges to $\kappa$. \[ass1-ass2\] - Let $Q_{2^k}$ be the polynomial “associated” with $\tau_{2^k}$: writing $1=\sum_{1 \leq j \leq 2^k} \dfrac{t_j}{\tau_{2^k}^j}$ immediately gives a polynomial $Q_{2^k}(x)$ of degree $2^k$ such that $Q_{2^k}(\tau_{2^k}) = 0$. Then, for $k \ge 2$, the polynomial $Q_{2^k}(x)$ is divisible by the product $(x+1)(x^2+1) \cdots (x^{2^{k-2}}+1)$. - Let $R_{2^k}(x)$ be the polynomial of degree $2^{k+1}-1$ associated (as above) with $\delta_{2^k}$. Then it can be shown that, for $k \ge 2$, the polynomial $R_{2^k}(x)$ is divisible by the same product $ (x+1)(x^2+1) \cdots (x^{2^{k-2}}+1)$. Main results ============ Recall that Amara gave in [@Am] a complete description of the limit points of the Pisot numbers in the interval $(1,2)$ (see Theorem \[Am\]). 
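As an aside, the divisibility claims of Remark \[ass1-ass2\](i) above can be verified mechanically for small $k$. A sketch in exact integer arithmetic (the coefficient-list encoding, highest degree first, is ours): build $Q_{2^k}(x) = x^{2^k} - \sum_{j=1}^{2^k} t_j x^{2^k-j}$ from the shifted Thue-Morse digits and divide by $(x+1)(x^2+1)\cdots(x^{2^{k-2}}+1)$.

```python
def tm(j):
    """Shifted Thue-Morse digit t_j: parity of the number of 1s in binary(j)."""
    return bin(j).count('1') % 2

def q_poly(k):
    """Coefficients (degree 2^k down to 0) of Q_{2^k}."""
    N = 2 ** k
    return [1] + [-tm(j) for j in range(1, N + 1)]

def polymul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def divisor(k):
    """(x+1)(x^2+1)...(x^{2^(k-2)}+1), coefficients high-to-low."""
    d = [1]
    for m in range(k - 1):  # factors x^(2^m) + 1 for m = 0, ..., k-2
        d = polymul(d, [1] + [0] * (2 ** m - 1) + [1])
    return d

def monic_divides(d, p):
    """True if the monic integer polynomial d divides p (coeffs high-to-low)."""
    p = p[:]
    while len(p) >= len(d):
        c = p.pop(0)            # next quotient coefficient (d is monic)
        for i in range(1, len(d)):
            p[i - 1] -= c * d[i]
    return all(c == 0 for c in p)
```

For $k = 2$ this recovers $Q_4(x) = x^4 - x^3 - x^2 - 1$ and the quotient $Q_4(x)/(x+1) = x^3 - 2x^2 + x - 1$, the minimal polynomial of $\tau_4 = \varphi_2$.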
Talmoudi [@Ta2] gave a description for sequences of Pisot numbers approaching each of the values $\varphi_r, \psi_r$ or $\chi$. The Pisot numbers in these sequences are called [*regular Pisot numbers*]{}. Further, Talmoudi showed that, for all $\varepsilon > 0$, there are only a finite number of Pisot numbers in $(1, 2-\varepsilon)$ that are not in one of these sequences. These are called the [*irregular Pisot numbers*]{}, and they will be examined later in Section \[sec:finite\]. Since $\chi$ is a univoque Pisot number (Tables \[tab:S limit\] and \[tab:PisotUnivoque\]), it is natural to ask if there are any other univoque Pisot numbers smaller than $\chi$. It is also natural to ask if there is a smallest univoque Pisot number. This leads us to our first result: \[uni\] There exists a smallest Pisot number in the set ${\mathcal U}$. Define $\theta$ by $\theta := \inf(S \cap {\mathcal U})$. We already know that $\theta$ belongs to $S$, since $S$ is closed. On the other hand, from Proposition \[lim\], either $\theta$ belongs to ${\mathcal U}$ or to $\widetilde{\mathcal U}$. It suffices to show that $\theta$ cannot belong to $\widetilde{\mathcal U}$. If that were the case, then on the one hand $\theta$ would be a limit point of elements of $(S \cap {\mathcal U})$. On the other hand we could write $d^*_{\theta}(1) = (w0)^\infty$, with the minimal period of the sequence $d^*_{\theta}(1)$ being $1+|w|$ (note that $\theta < \chi$ since $\chi$ belongs to $(S \cap {\mathcal U})$ and $\theta \neq \chi$). But this contradicts Lemma \[lsup\]. Now, to find the univoque Pisot numbers less than $\chi$, we need to examine the irregular Pisot numbers less than $\chi$ (Section \[sec:finite\]). We also need to examine the infinite sequences of Pisot numbers tending to those $\varphi_r$ and $\psi_r$ less than $\chi$. Lastly, we need to examine the sequences of Pisot numbers tending to $\chi$ from below.
By noticing that $\varphi_1 = \psi_1$ and $\varphi_2$ are both strictly less than $\kappa$, the Komornik-Loreti constant, we can disregard these limit points. Further, we may disregard $\varphi_3$ as a limit point by Corollary \[co\]. In particular: \[thm:gamma\] There are no univoque numbers between $\psi_2$ and $1.8705$. (Note that $1.8705 > \varphi_3$.) We easily see from Proposition \[pi\] that $$\Phi^{(2)}(\psi_2) = \Phi(\varphi_3) \approx 1.870556617$$ which gives the result. So we see that it suffices to examine the sequence of Pisot numbers tending towards $\psi_2$ from below, and those tending to $\chi$ from below. Approaching $\psi_2$ from below {#sec:psi} ------------------------------- We know that the $\psi_r$ are limit points of the set of Pisot numbers. Moreover, we know exactly what the sequences tending to $\psi_r$ look like. Let ${P_{\psi_r}}(x) = x^{r+1} - x^{r} - \cdots - x - 1$ be the Pisot polynomial associated with $\psi_r$. Let ${A_{\psi_r}}(x) = x^{r+1} -1$ and ${B_{\psi_r}}(x) = \frac{x^r-1}{x-1}$ be two polynomials associated with ${P_{\psi_r}}(x)$.[^3] Then for sufficiently large $n$, the polynomials ${P_{\psi_r}}(x) x^n \pm {A_{\psi_r}}(x)$ and ${P_{\psi_r}}(x) x^n \pm {B_{\psi_r}}(x)$ admit a unique root between $1$ and $2$, which is a Pisot number. These sequences of Pisot numbers are the regular Pisot numbers associated with $\psi_r$. See for example [@Am; @Boyd96]. Moreover, we have that the roots of ${P_{\psi_r}}(x) x^n - {A_{\psi_r}}(x)$ and ${P_{\psi_r}}(x) x^n - {B_{\psi_r}}(x)$ approach $\psi_r$ from above, and those of ${P_{\psi_r}}(x) x^n + {A_{\psi_r}}(x)$ and ${P_{\psi_r}}(x) x^n + {B_{\psi_r}}(x)$ approach $\psi_r$ from below. This follows as ${P_{\psi_r}}(1) = -r < 0$ and ${P_{\psi_r}}(2) = 1$, with ${P_{\psi_r}}(x)$ strictly increasing on $[1,2]$, along with the fact that on $(1, 2]$ we have ${A_{\psi_r}}(x), {B_{\psi_r}}(x) > 0$.
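For $r = 2$ these sign considerations are easy to check numerically. A floating-point sketch (ours): bisection on $P_{\psi_2}(x)x^n + A_{\psi_2}(x)$, with $P_{\psi_2}(x) = x^3-x^2-x-1$ and $A_{\psi_2}(x) = x^3-1$, produces a sequence of roots increasing to $\psi_2 \approx 1.839287$ from below.

```python
def bisect_root(f, lo=1.0, hi=2.0, iters=80):
    """Bisection for a root of f in (lo, hi), assuming f(lo) < 0 < f(hi)."""
    assert f(lo) < 0 < f(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

P = lambda x: x**3 - x**2 - x - 1        # minimal polynomial of psi_2
A = lambda x: x**3 - 1                   # A_{psi_2}

psi2 = bisect_root(P)

def regular_below(n):
    """Root in (1,2) of P(x) x^n + A(x); for sufficiently large n this is
    a regular Pisot number approaching psi_2 from below."""
    return bisect_root(lambda x: P(x) * x**n + A(x))
```

The sign assumptions hold here because $P_{\psi_2}(1)x^n + A_{\psi_2}(1) = -2 < 0$ and $P_{\psi_2}(2)2^n + A_{\psi_2}(2) = 2^n + 7 > 0$, exactly as in the argument above.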
Although we need only examine the sequences of Pisot numbers approaching $\psi_2$ from below, we give the results for all sequences approaching $\psi_2$ for completeness. \[psi\_2\] The greedy and lazy expansions of Pisot numbers approaching $\psi_2$ are summarized in Table \[tab:GL psi\_2\]. It is interesting to observe that, in the case $P_{\psi_2}(x) x^n - B_{\psi_2}(x)$ (last section of Table \[tab:GL psi\_2\]), for $n = 2, 3$ and $4$, the lazy expansion $\ell_\beta(1)$ is equal to $d_\beta^*(1)$. [|lllr|]{} Case & Greedy expansion & Lazy expansion & Comment\ \ &&&\ $n = 1 $& $101 $ & $00(1)^{\infty} $ &\ $n = 2 $& $10101 $ & $0(11101)^{\infty} $ &\ $n = 3 $& $110001 $ & $1010(1)^{\infty} $ &\ $n = 4 $& $1100110001 $ & $10(1111110011)^{\infty} $ &\ $n = 3 k + 1 $& $(110)^{k} 011(000)^{k} 1 $ & $(110)^{k} 0(101)^{\infty} $ &\ $n = 3 k + 2 $& $1(101)^{k} 010(000)^{k} 1 $ & $1(101)^{k} 0(011)^{\infty} $ &\ $n = 3 k + 3 $& $(110)^{k+1} 00(000)^{k} 1 $ & $1(101)^{k} 0((110)^{k+1} 01(101)^{k} 1)^{\infty} $ &\ \ &&&\ $n = 1 $&\ $n = 2 $&\ $n = 3 $& $111(110)^{\infty} $ & $111(110)^{\infty} $ & univoque\ $n = 4 $& $111(0110)^{\infty} $ & $111(0110)^{\infty} $ & univoque\ $n = 3 k + 1 $& $111(0(000)^{k-1} 110)^{\infty} $ & $11(011)^{k-1} 1((011)^{k} 0)^{\infty} $ &\ $n = 3 k + 2 $& $111(00(000)^{k-1} 110)^{\infty} $ & $11(011)^{k-1} 1001(101)^{k-1} 0111(11(011)^{k-1} 110)^{\infty} $ &\ $n = 3 k + 3 $& $111((000)^{k} 110)^{\infty} $ & $11(011)^{k} 1(110)^{\infty} $ &\ \ &&&\ $n = 1 $& $10001 $ & $0000(1)^{\infty} $ &\ $n = 2 $& $11 $ & $0(1)^{\infty} $ & periodic self-bracketed\ $n = 3 $& $11001010011 $ & $(1011110)^{\infty} $ &\ $n = 4 $& $11010011001011 $ & $110100(10111111)^{\infty} $ &\ $n = 3 k + 1 $& $1(101)^{k} 00(110)^{k} 0(101)^{k} 1 $ & $(1(101)^{k} 00(110)^{k} 0(101)^{k} 0)^{\infty} $ & periodic self-bracketed\ $n = 3 k + 2 $& $1(101)^{k} 1 $ & $(1(101)^{k} 0)^{\infty} $ & periodic self-bracketed\ $n = 3 k + 3 $& $(110)^{k+1} 0(101)^{k+1} 001(101)^{k} 1 $ 
& $(110(110)^{k} 0(101)^{k+1} 001(101)^{k} 0)^{\infty} $ & periodic self-bracketed\ \ &&&\ $n = 1 $&\ $n = 2 $& $11111 $ & $(11110)^{\infty} $ & periodic self-bracketed\ $n = 3 $& $111011 $ & $(111010)^{\infty} $ & periodic self-bracketed\ $n = 4 $& $1110011 $ & $(1110010)^{\infty} $ & periodic self-bracketed\ $n = 3 k + 1 $& $11100(000)^{k-1} 11 $ & $ (11(011)^{k-1} 1001(101)^{k-1} 0)^{\infty} $ &\ $n = 3 k + 2 $& $111(000)^{k} 11 $ & $(11(011)^{k} 11(101)^{k} 0)^{\infty} $ &\ $n = 3 k + 3 $& $1110(000)^{k} 11 $ & $(11(011)^{k} (101)^{k+1} 0)^{\infty} $ &\ Table \[tab:GL psi\_2\], as well as Table \[tab:GL chi\] later on, are the results of a computation. The results themselves are easy to verify, so the main interest is the process that the computer went through, to discover these results. This is the subject of Section \[sec:computer\]. We also list which of these numbers correspond to periodic self-bracketed sequences for completeness. This Lemma gives an easy corollary \[cor:psi\] There exists a neighborhood $[\psi_2-\varepsilon, \psi_2+\varepsilon]$ that contains no univoque numbers. In fact we will see in Section \[sec:finite\] that this is actually quite a large neighborhood. This is probably also true for other $\psi_r$, where the neighborhood would not be nearly as large. The limit point $\chi$ ---------------------- We know that $\chi$ is a limit point of the set of Pisot numbers. Moreover, we know exactly what the sequences tending to $\chi$ look like. Let ${P_{\chi}}(x) = x^4-x^3-2 x^2+1$ be the Pisot polynomial associated with $\chi$. Let ${A_{\chi}}(x) = x^3+x^2-x-1$ and ${B_{\chi}}(x) = x^4-x^2+1$ be two polynomials associated with ${P_{\chi}}(x)$. Then for sufficiently large $n$, the polynomials ${P_{\chi}}(x) x^n \pm {A_{\chi}}(x)$ and ${P_{\chi}}(x) x^n \pm {B_{\chi}}(x)$ admit a unique root between $1$ and $2$, which is a Pisot number. See for example [@Am; @Boyd96]. 
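As with $\psi_r$, the one-sided convergence to $\chi$ can be observed numerically. This sketch (bisection helper and sample $n$-values are our choices) uses the polynomials $P_{\chi}(x) = x^4-x^3-2x^2+1$ and $A_{\chi}(x) = x^3+x^2-x-1$ defined above, and checks that the roots of $P_{\chi}(x)x^n + A_{\chi}(x)$ in $(1, \chi)$ creep up towards $\chi$:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Bisection for a root of f on [lo, hi], where f(lo) and f(hi) differ in sign."""
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# P_chi(x) = x^4 - x^3 - 2x^2 + 1 and A_chi(x) = x^3 + x^2 - x - 1
P = lambda x: x**4 - x**3 - 2*x**2 + 1
A = lambda x: x**3 + x**2 - x - 1

chi = bisect_root(P, 1.0, 2.0)  # chi = 1.905166...
# Roots of P(x) x^n + A(x) for n = 5, 8, 11, approaching chi from below.
roots = [bisect_root(lambda x, n=n: P(x) * x**n + A(x), 1.0, chi)
         for n in (5, 8, 11)]
```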
Moreover, we have that the roots of ${P_{\chi}}(x) x^n - {A_{\chi}}(x)$ and ${P_{\chi}}(x) x^n - {B_{\chi}}(x)$ approach $\chi$ from above, and those of ${P_{\chi}}(x) x^n + {A_{\chi}}(x)$ and ${P_{\chi}}(x) x^n + {B_{\chi}}(x)$ approach $\chi$ from below. This follows as ${P_{\chi}}(1) = -1$ and ${P_{\chi}}(2) = 1$, with ${P_{\chi}}(x)$ strictly increasing on $[1,2]$, along with the fact that on $(1, 2]$ we have ${A_{\chi}}(x), {B_{\chi}}(x) > 0$. Although we need only examine the sequences of Pisot numbers approaching $\chi$ from below, we give the results for all sequences approaching $\chi$ for completeness. \[chi\] The greedy and lazy expansions of Pisot numbers approaching $\chi$ are summarized in Table \[tab:GL chi\]. [|ll[p]{}[2.6in]{}r|]{} Case & Greedy expansion & Lazy expansion & Comment\ \ &&&\ $n = 1$ & $1001001 $ & $00(1111011)^{\infty} $ &\ $n = 2$ & $11 $ & $0(1)^{\infty} $ & periodic self-bracketed\ $n = 4$ & $110110101001001011 $ & $110110100(1)^{\infty} $ & periodic self-bracketed\ $n = 2 k + 1$ & $11(10)^{k-1} 01000(10)^{k-1} 0(00)^{k} 11 $ & $11(10)^{k-1} 00(11)^{k} 00(1)^{\infty} $ &\ $n = 2 k + 2$ & $11(10)^{k-1} 0111000(10)^{k-2} 000010(00)^{k-1} 11 $ & $11(10)^{k-1} 01101(11)^{k-1} 00(1)^{\infty} $ &\ \ &&&\ $n = 1$ &\ $n = 2$ &\ $n = 3$ &\ $n = 5$ & $1111(0001100)^{\infty} $ & $1111000101111(0111100)^{\infty} $ &\ $n = 2 k + 1$ & $111(01)^{k-2} 1(00011(10)^{k-2} 00)^{\infty} $ & $111(01)^{k-2} 100011(10)^{k-3} 0111011(1(01)^{k-3} 11111000)^{\infty} $ &\ $n = 2 k + 2$ & $111(01)^{k-1} 1011((10)^{k-1} 0111(01)^{k-1} 1000)^{\infty} $ & $111(01)^{k-1} 1011((10)^{k-1} 0111(01)^{k-1} 1000)^{\infty} $ & univoque\ \ &&&\ $n = 1$ & $10001 $ & $0000(1)^{\infty} $ &\ $n = 2$ & $101000101 $ & $0(1101)^{\infty} $ &\ $n = 3$ & $11001 $ & $10(11011)^{\infty} $ &\ $n = 4$ & $110101(01100110000100)^{\infty} $ & $110(1010110010110111)^{\infty} $ &\ $n = 5$ & $1110001 $ & $110(1110111)^{\infty} $ &\ $n = 2 k + 1$ & $11(10)^{k-1} 001 $ & $1110((10)^{k-3} 
01111)^{\infty} $ &\ $n = 2 k + 2$ & $11(10)^{k-1} 0101(1(10)^{k-2} (011)^2(10)^{k-2} 010^4100)^{\infty} $ & $1110((10)^{k-2} 01011(10)^{k-2} (011)^2(10)^{k-2} 001^3(10)^{k-2} 01^4)^{\infty} $ &\ \ &&&\ $n = 1$ &\ $n = 2$ &\ $n = 3$ &\ $n = 4$ & $111111000001 $ & $111110(1)^{\infty} $ & periodic self-bracketed\ $n = 5$ & $1111001111000001 $ & $111100(11101)^{\infty} $ &\ $n = 2 k + 1$ & $111(01)^{k-2} 101000(10)^{k-3} 011(1(00)^{k-1} 10)^{\infty} $ & $111(01)^{k-2} 100(1(11)^{k-1} 01)^{\infty} $ &\ $n = 2 k + 2$ & $111(01)^{k-2} 100000(10)^{k-2} 001 $ & $111(01)^{k-1} 110(1)^{\infty} $ &\ Lemma \[chi\] above, along with Proposition \[pi\] and Corollary \[cor:psi\] prove the following result: \[thm:finite\] There are only a finite number of univoque Pisot numbers less than $\chi$. In addition, Lemma \[chi\] proves the result \[thm:chi\] The univoque Pisot number $\chi$ is the smallest limit point of univoque Pisot numbers. It is a limit point from above of regular univoque Pisot numbers. Univoque Pisot numbers less than $\chi$ {#sec:finite} --------------------------------------- Our goal in this section is to describe our search for univoque Pisot numbers below the first limit point $\chi$. We know that all univoque Pisot numbers less than $\chi$ are either in the range $[\kappa, \psi_2]$, or in the range $[\pi_2, \chi]$. Here $\kappa$ is the Komornik-Loreti constant, (approximately $1.787231$), $\psi_2$ is approximately $1.839286755$, $\pi_2 > 1.8705$ and $\chi$ is approximately $1.905166168$. We will search for Pisot numbers in the range $[1.78, 1.85]$ and $[1.87, 1.91]$. To use the algorithm of Boyd [@Boyd78], we need to do an analysis of the limit points in these two ranges. In particular, we need to do an analysis of the limit points $\psi_2$ and $\chi$. We use the notation of [@Boyd78]. Let $P(z)$ be a minimal polynomial of degree $s$ of a Pisot number $\theta$, and $Q(z) = z^s P(1/z)$ be the reciprocal polynomial. 
Let $A(z)$ be a second polynomial with integer coefficients, such that $|A(z)| \leq |Q(z)|$ for all $|z| = 1$. Then $f(z) = A(z)/Q(z) = u_0 + u_1 z + u_2 z^2 + \cdots \in {\mathbb Z}[[z]]$ is a rational function associated with $\theta$. The sign of $A(z)$ is chosen in such a way that $u_0 \geq 1$. Then by Dufresnoy and Pisot [@DufresnoyPisot55] we have the following. $$\label{eq:rules} \begin{array}{rclcl} 1 &\leq &u_0 \\ u_0^2-1 &\leq &u_1 \\ w_n(u_0, \cdots, u_{n-1}) & \leq & u_n &\leq & w^*_n(u_0, \cdots, u_{n-1}) \end{array}$$ where $w_n$ and $w_n^*$ are defined below. Let $D_n(z) = -z^n + d_1 z^{n-1} + \cdots + d_n$ and $E_n(z) = -z^n D_n(1/z)$. Solve for $d_1, \cdots, d_n$ such that $$\frac{D_n(z)}{E_n(z)} = u_0 + u_1 z + \cdots + u_{n-1} z^{n-1} + w_n(u_0, \cdots, u_{n-1})z^n + \cdots$$ This will completely determine $w_n$. There are some nice recurrences for $w_n$ and $D_n$, which simplify the computation of $w_n$ [@Boyd78]. We have that $w_n^*$ is computed very similarly, instead considering $D_n^*(z) = z^n + d_1 z^{n-1} + \cdots + d_n$ and $E_n^*(z) = z^n D_n^*(1/z)$. Expansions $u_0 + u_1 z + \cdots$ satisfying Equation (\[eq:rules\]) with integer coefficients are in a one-to-one correspondence with Pisot numbers. Using this notation, Lemma 2 of [@Boyd78] becomes: Let $f = u_0 + u_1 z + u_2 z^2 + \cdots $ be associated with a limit point $\theta$ in $S'$. Suppose that $w^*_N - w_N \leq 9/4$ for some $N$. Then for any $n \geq N$, there are exactly two $g$ with expansions beginning with $u_0 + u_1 z + \cdots + u_{n-1} z^{n-1}$. Moreover, for all $n \geq N$, all $g$ beginning with $u_0 + u_1 z + \cdots + u_{n-1} z^{n-1}$ are associated with the regular Pisot numbers approaching the limit point $\theta$. So in particular, we need to find the expansion of the limit points around $\psi_2$ and $\chi$. Consider the following rational functions associated with the limit points $\psi_2$ and $\chi$. 1. 
Consider $$-\frac{x+1}{x^3+x^2+x-1} = 1+ 2 x+ 3 x^2+ 6 x^3+ 11 x^4+ 20 x^5+ 37 x^6+ \cdots$$ the first of the two rational functions associated with the limit point $\psi_2$. A quick calculation shows that $w_{24}^* - w_{24} < 9/4$. 2. Consider $$\frac{x^3-1}{x^3+x^2+x-1} = 1+x+2 x^2+3 x^3+6 x^4+11 x^5+20 x^6 +\cdots$$ the second of the two rational functions associated with the limit point $\psi_2$. A quick calculation shows that $w_{11}^* - w_{11} < 9/4$. 3. Consider $$-\frac{x^3+x^2-x-1}{x^4-2 x^2-x+1} = 1+2 x+3 x^2+6 x^3+11 x^4+21 x^5+40 x^6 +\cdots$$ the first of the two rational functions associated with the limit point $\chi$. A quick calculation shows that $w_{33}^* - w_{33} < 9/4$. 4. Consider $$\frac{x^4-x^2+1}{x^4-2 x^2-x+1} = 1+x+2 x^2+4 x^3+8 x^4+15 x^5+29 x^6+\cdots$$ the second of the two rational functions associated with the limit point $\chi$. A quick calculation shows that $w_{44}^* - w_{44} < 9/4$. Using this result, we were able to use Boyd’s algorithm for finding Pisot numbers in the two ranges $[1.78, 1.85]$ and $[1.87, 1.91]$ (which contain $[\kappa, \psi_2]$ and $[\pi_2, \chi]$), where, when we have an expansion that matches one of the four rational functions listed above, we prune that part of the search tree, as we would only find regular Pisot numbers of a known form. There were 227 Pisot numbers in the first range (minus the known regular Pisot numbers pruned by the discussion above), and 303 in the second range (similarly pruned). There were 530 such Pisot numbers in total. A corollary of this computation worth noting is: - The only Pisot numbers in $[\psi_2-10^{-8}, \psi_2+10^{-8}]$ are $\psi_2$ and the regular Pisot numbers associated with $\psi_2$. - The only Pisot numbers in $[\chi-10^{-13}, \chi+10^{-3}]$ are $\chi$ and the regular Pisot numbers associated with $\chi$. We then checked each of these 530 Pisot numbers to see if they were univoque. 
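The Taylor coefficients listed in the four items above are easy to reproduce in a few lines. This sketch (the coefficient-list representation and the helper name are our own) expands $A(z)/Q(z)$ by the standard recurrence $u_i = \big(a_i - \sum_{j\geq 1} q_j u_{i-j}\big)/q_0$ and checks the first and third rational functions:

```python
def series(A, Q, n):
    """First n Taylor coefficients of A(z)/Q(z); A and Q are ascending
    coefficient lists with Q[0] != 0 (coefficients assumed integral)."""
    u = []
    for i in range(n):
        a = A[i] if i < len(A) else 0
        s = sum(Q[j] * u[i - j] for j in range(1, min(i, len(Q) - 1) + 1))
        u.append((a - s) // Q[0])  # exact: Q[0] = +-1 in all cases used here
    return u

# -(x + 1)/(x^3 + x^2 + x - 1):  A = -1 - x,  Q = -1 + x + x^2 + x^3
u_psi2 = series([-1, -1], [-1, 1, 1, 1], 7)           # [1, 2, 3, 6, 11, 20, 37]
# -(x^3 + x^2 - x - 1)/(x^4 - 2x^2 - x + 1)
u_chi = series([1, 1, -1, -1], [1, -1, -2, 0, 1], 7)  # [1, 2, 3, 6, 11, 21, 40]
```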
We did this by computing the greedy and lazy $\beta$-expansions of each Pisot number and checking if they were equal. This calculation gave the following theorem: \[thm:finite2\] There are exactly two univoque Pisot numbers less than $\chi$. They are: - $1.880000\cdots$, the root in $(1,2)$ of the polynomial ${x}^{14}-2{x}^{13}+{x}^{11}-{x}^{10}-{x}^{7}+{x}^{6} -{x}^{4}+{x}^{3}-x+1$, with univoque expansion $111001011(1001010)^\infty$. - $1.886681\cdots$, the root in $(1,2)$ of the polynomial ${x}^{12}-2{x}^{11}+{x}^{10}-2{x}^{9}+{x}^{8}-{x}^{3}+{x}^{2}-x+1$, with univoque expansion $111001101(1100)^\infty$. Regular Pisot numbers associated with $\psi_r$ ============================================== The goal of this section is to show that $2$ is a limit point of univoque Pisot numbers. We will do this by observing that for each $r$, there are regular Pisot numbers between $\psi_r$ and $2$ that are univoque. We know that the $\psi_r$ are limit points of the set of regular Pisot numbers. Moreover, we know that $\psi_r \to 2$ as $r \to \infty$. Using the notation of Section \[sec:psi\] we define ${P_{\psi_r}}$ and ${A_{\psi_r}}$. We denote by ${\psi_{r,n}^{A,-}}$ the Pisot number associated with the polynomial ${P_{\psi_r}}(x) x^n - {A_{\psi_r}}(x)$. Let $n \geq r + 1$. Then the greedy expansion of ${\psi_{r,n}^{A,-}}$ is $$1^{r+1} (0^{n-r-1} 1^{r} 0)^{\infty}. 
\label{eq:general}$$ First we expand this expansion to see that it is equivalent to $$\begin{array}{lrcl} & 1 & = & \frac{1}{x} + \cdots + \frac{1}{x^{r+1}} + (0 + \frac{1}{x^{n+1}} + \cdots + \frac{1}{x^{n+r}} + 0) \left(\frac{1}{1-\frac{1}{x^n}}\right) \\ \implies & 1 & = & \frac{\frac{1}{x}-\frac{1}{x^{r+2}}}{1-\frac{1}{x}} + \left(\frac{\frac{1}{x^{n+1}}-\frac{1}{x^{n+r+1}}}{1-\frac{1}{x}}\right) \left(\frac{1}{1-\frac{1}{x^n}}\right) \\ \implies & (1-\frac{1}{x})(1-\frac{1}{x^n}) & = & (\frac{1}{x}-\frac{1}{x^{r+2}}) (1-\frac{1}{x^n}) + (\frac{1}{x^{n+1}}-\frac{1}{x^{n+r+1}}) \\ \implies & x^{r+2} (x-1) (x^n-1) & = & x (x^{r+1}-1)(x^n-1) + x^{r+2} - x^2 \\ \implies & 0 & = & x^{n+r+3} - 2 x^{n+r+2} + x^{n+1} - x^{r+3} + x^{r+2} + x^2 - x \\ \implies & 0 & = & x (x-1) (x^n(x^{r+1} - x^r - \cdots - 1) - (x^{r+1}-1)) \\ \implies & 0 & = & x (x-1)({P_{\psi_r}}(x) x^n - {A_{\psi_r}}(x)) \end{array}$$ So we see that this is a valid expansion for this regular Pisot number. To see that this is indeed the greedy $\beta$-expansion, we observe that the $\beta$-expansion starts with $r+1$ consecutive $1$’s, and all strings of consecutive $1$’s after this are shorter than $r+1$. Hence it follows from Theorem \[Pa\]. By Corollary \[cor:01\] we get the immediate result: If $n \geq 2 (r+1)$ the regular Pisot number ${\psi_{r,n}^{A,-}}$ is not univoque. So the main theorem is: Assume $r+1 \leq n < 2 (r+1)$. Then ${\psi_{r,n}^{A,-}}$ is a univoque Pisot number. It suffices to see that the expansion (\[eq:general\]) is both greedy and lazy. This follows from Theorems \[Pa\] and \[EJK1\]. We have that $2$ is a limit point of $S \cap \mathcal U$. 
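Both Theorem \[thm:finite2\] and the theorem above are easy to spot-check in floating point. The sketch below (bisection brackets, digit counts, and the tiny tolerance are our choices) locates each root and compares the greedy expansion of $1$ with the claimed pattern; for the theorem above we take $r = 2$, $n = 4$, whose predicted expansion $1^3(0\,1^2\,0)^\infty = 111(0110)^\infty$ also agrees with the $n = 4$ univoque entry of Table \[tab:GL psi\_2\]:

```python
def bisect_root(f, lo, hi, tol=1e-13):
    """Bisection for a root of f on [lo, hi], assuming a sign change."""
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def greedy_digits(beta, n, eps=1e-9):
    """First n digits of the greedy beta-expansion of 1; digits lie in {0,1}
    since 1 < beta < 2.  eps guards roots with finite expansions, where the
    remainder hits exactly 0 at some step."""
    digits, r = [], 1.0
    for _ in range(n):
        x = beta * r
        d = 1 if x >= 1 - eps else 0
        digits.append(d)
        r = x - d
    return "".join(map(str, digits))

# The two univoque Pisot numbers below chi (Theorem thm:finite2).
p14 = lambda x: x**14 - 2*x**13 + x**11 - x**10 - x**7 + x**6 - x**4 + x**3 - x + 1
p12 = lambda x: x**12 - 2*x**11 + x**10 - 2*x**9 + x**8 - x**3 + x**2 - x + 1
b1 = bisect_root(p14, 1.5, 2.0)  # 1.880000...
b2 = bisect_root(p12, 1.5, 2.0)  # 1.886681...

# psi_{2,4}^{A,-}: the root of P_{psi_2}(x) x^4 - A_{psi_2}(x) in (1, 2).
b3 = bisect_root(lambda x: (x**3 - x**2 - x - 1) * x**4 - (x**3 - 1), 1.5, 2.0)
```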
We see that ${\psi_{r,n}^{A,-}}$ is always greater than $\psi_r$. Further, for $r+1 \leq n \leq 2(r+1)$ we have that ${\psi_{r,n}^{A,-}}$ is less than 2, which follows from noticing that ${P_{\psi_r}}(1) 1^n - {A_{\psi_r}}(1) = (1 - 1 - 1 - \cdots - 1) - (1^{r+1} -1) < 0$ and ${P_{\psi_r}}(2) 2^n - {A_{\psi_r}}(2) = 2^n(2^{r+1} - 2^{r} - \cdots - 1) - (2^{r+1} - 1) = 2^n - 2^{r+1} + 1 > 0$. Further, we see that $\psi_r$ tends to $2$ as $r \to \infty$. Automated conjectures and proofs {#sec:computer} ================================ The results in Tables \[tab:GL psi\_2\] and \[tab:GL chi\] were generated automatically. This section describes the algorithms that were used to do this. - [**Computing the greedy $\beta$-expansion.**]{} We will explain, given $\beta$ a root of $P_\beta(x)$, how to compute the greedy $\beta$-expansion of $1$ (assuming periodicity). - [**Computing the lazy $\beta$-expansion.**]{} We will explain, given $\beta$ a root of $P_\beta(x)$, how to compute the lazy $\beta$-expansion of $1$ (assuming periodicity). - [**Creating the conjecture.**]{} We will explain how, given the greedy or lazy $\beta$-expansions of $1$ for a sequence of regular Pisot numbers, to create a conjecture of the general pattern of the $\beta$-expansion. - [**Verifying conjecture.**]{} We will explain how a general pattern can be verified to be a valid $\beta$-expansion. - [**Check greedy/lazy/univoque/periodic self-bracketed expansion.**]{} We will explain how to check if a general pattern is a valid greedy, lazy, univoque or periodic self-bracketed $\beta$-expansion. Computing the greedy $\beta$-expansion {#sec:Greedy Comp} -------------------------------------- The greedy algorithm does the most work possible at any given step (see discussion in Section \[sec:greedy\]). The computation is done symbolically modulo the minimal polynomial of $\beta$, and floating point numbers are used only when computing $x_n$. 
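For $1 < \beta < 2$ every digit is $0$ or $1$, and a purely floating-point version of both algorithms (a simplification of the symbolic computation described in this subsection and the next; the tolerance `eps` is our own guard for borderline comparisons, and the identification of the test root with the $n=2$ row of Table \[tab:GL psi\_2\] is our inference) fits in a few lines. As a check, the root of $x^5 - x^4 - x^2 - 1 = P_{\psi_2}(x)x^2 + A_{\psi_2}(x)$ has greedy expansion $10101$ and lazy expansion $0(11101)^\infty$, as listed in the table:

```python
def bisect_root(f, lo, hi, tol=1e-13):
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def greedy_digits(beta, n, eps=1e-9):
    """Greedy: take the digit 1 as soon as possible.  eps guards finite
    expansions, where the remainder hits exactly 0 at some step."""
    digits, r = [], 1.0
    for _ in range(n):
        x = beta * r
        d = 1 if x >= 1 - eps else 0
        digits.append(d)
        r = x - d
    return "".join(map(str, digits))

def lazy_digits(beta, n, eps=1e-9):
    """Lazy: postpone the digit 1 while the all-ones tail 1/(beta-1) can
    still make up the value.  eps guards exact-boundary cases such as
    eventually all-ones tails."""
    tail = 1 / (beta - 1)
    digits, r = [], 1.0
    for _ in range(n):
        x = beta * r
        d = 0 if x <= tail + eps else 1
        digits.append(d)
        r = x - d
    return "".join(map(str, digits))

# Root in (1, 2) of x^5 - x^4 - x^2 - 1 = P_{psi_2}(x) x^2 + A_{psi_2}(x).
beta = bisect_root(lambda x: x**5 - x**4 - x**2 - 1, 1.0, 2.0)
```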
A check is done on $\beta r_{n-1} - x_n$ to ensure that the calculation is being done with sufficient digits to guarantee the accuracy of the result. A list of previously calculated $r_n$’s is kept and checked upon each calculation to determine when the $\beta$-expansion becomes eventually periodic. Computing the lazy $\beta$-expansion {#sec:Lazy Comp} ------------------------------------ Basically, the algorithm tries to do the minimal work at any given time (see discussion in Section \[sec:lazy\]). As with the greedy expansion, computations are done as a mixture of floating point and symbolic, to allow for recognition of periodicity, with the same checks being performed as before to ensure the accuracy of the result. Creating the conjecture {#sec:conj} ----------------------- In this section we will explain how, given $d_{q_1}(1)$ and $d_{q_2}(1)$ (or the related lazy $\beta$-expansions) for some “regular sequence” of Pisot numbers $q_k$, we can conjecture a “nice” expression for $d_{q_k}(1)$. This is probably best done by example. Assume that two consecutive greedy expansions give the finite expansions: $$\begin{aligned} d_{q_1}(1) & = & 001101101011 \\ d_{q_2}(1) & = & 0011110110101011.\end{aligned}$$ We start by reading characters from each string into the “string read” expression. $$\begin{array} {lll} \mathrm{String\ 1} & \mathrm{String\ 2} & \mathrm{String\ read}\\ \hline 001101101011 & 0011110110101011 & \mathrm{empty} \\ 01101101011 & 011110110101011 & 0 \\ 1101101011 & 11110110101011 & 00 \\ 101101011 & 1110110101011 & 001 \\ 01101011 & 110110101011 & 0011 \end{array}$$ At this point we see that the next characters to read from String 1 and String 2 are different. We use a result that has only been observed computationally, and has no known theoretical justification: the size of every part that depends on the value of $k$ is the same, and this size is known before the computation begins. 
So an expression $d_{q_k}(1) = v_1 (w_1)^k v_2 (w_2)^k \cdots$ would have all $|w_i|$ constant, and known in advance. In this case, we are assuming that this size is 2. So we check if the next two characters of String 2 are the same as the previous two characters of String 1. (In this case, both of these are “11”.) We then truncate the result to give something of the form $(11)^k$ which is valid for both strings. So we continue. $$\begin{array} {lll} \mathrm{String\ 1} & \mathrm{String\ 2} & \mathrm{String\ read}\\ \hline 01101011 & 110110101011 & 0011 \\ 01101011 & 0110101011 & 00(11)^k \\ 1101011 & 110101011 & 00(11)^k0 \\ \vdots & \vdots & \vdots \\ 1 & 011 & 00(11)^k0110101 \end{array}$$ Again we check if the next two characters of String 2 are equal to the previous two characters in String 1. We also notice that the two characters “01” are in fact repeated more times than this, so we get $$\begin{array} {lll} \mathrm{String\ 1} & \mathrm{String\ 2} & \mathrm{String\ read}\\ \hline 1 & 011 & 00(11)^k0110101 \\ 1 & 1 & 00(11)^k01101(01)^k \\ 1 & 1 & 00(11)^k011(01)^{k+1} \\ \mathrm{empty} & \mathrm{empty} & 00(11)^k011(01)^{k+1}1 \end{array}$$ So we would conjecture that $d_{q_k}(1) = 00(11)^k011(01)^{k+1}1$. It should be pointed out that this is in no way a proof that this is the general result. This has to be done separately in Sections \[sec:verify\] and \[sec:gl\]. Verifying conjecture {#sec:verify} -------------------- In this section we show how, given a conjectured expansion for $q_k$, we would verify that this is a valid expansion for all $q_k$. It should be noticed that this does not prove what type of $\beta$-expansion it is (i.e., greedy, lazy, ...). This will be done in Section \[sec:gl\]. We will demonstrate this method by considering an example. 
Consider the greedy expansion $d_{\beta_k}(1) = 1(101)^k1 = 11(011)^k$ of $1$ for the Pisot root associated with $$P_k^*(x) = P_{\psi_2}(x) x^{3 k +2} + B_{\psi_2}(x) = (x^3-x^2-x-1) x^{3 k + 2} + (x+1).$$ For convenience we write $\beta$ for this root (where $\beta$ will depend on $k$). We see then that this expansion implies $$\frac{1}{\beta} + \frac{1}{\beta^2} + \frac{1}{\beta^4} + \frac{1}{\beta^5} + \frac{1}{\beta^7} + \frac{1}{\beta^8} + \cdots + \frac{1}{\beta^{3 k + 1}} + \frac{1}{\beta^{3 k+2}} =1.$$ This simplifies to $${\beta}^{-1}+{\beta}^{-2}+ \left( {\beta}^{-4}+{\beta}^{-5} \right) \sum_{j=0}^{k-1} \left( {\beta}^{3 j} \right)^{-1} = 1.$$ By subtracting $1$ from both sides, and clearing the denominator, this is equivalent to $D_k(\beta) = 0$ where $$D_k(x) := - x^{3 k+5} +x^{3 k+4} +x^{3 k + 3} + x^{3 k + 2} - x - 1.$$ But we notice that $$D_k(x) = -(P_{\psi_2}(x) x^{3 k +2} + B_{\psi_2}(x)),$$ hence $D_k(x) = - P_k^*(x)$. All of these processes can be automated. The hardest part is finding a co-factor $C_k(x)$ such that $D_k(x) = C_k(x) P^*_k(x)$. (We are not always so lucky that $C_k(x) = -1$ as was the case in this example.) Here we noticed computationally that $C_k(x)$ is always of the form: $$C_k(x) = a_n x^{b_n k + c_n} + a_{n-1} x^{b_{n-1} k + c_{n-1}} + \cdots + a_2 x^{b_2 k + c_2} + a_{1} x^{b_{1} k + c_{1}}.$$ For our purposes it was unnecessary to prove that this is always the case, as we could easily verify it for all cases checked, and we were using this as a tool to verify the conjectured general form. 
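Since both sides are polynomials with integer coefficients, the identity $D_k(x) = -P_k^*(x)$ can be spot-checked in exact integer arithmetic (the sample values of $k$ and $x$ below are arbitrary; agreement at sample points is of course only a sanity check, not a proof):

```python
def D(k, x):
    # D_k(x) = -x^{3k+5} + x^{3k+4} + x^{3k+3} + x^{3k+2} - x - 1
    return -x**(3*k + 5) + x**(3*k + 4) + x**(3*k + 3) + x**(3*k + 2) - x - 1

def P_star(k, x):
    # P_k^*(x) = P_{psi_2}(x) x^{3k+2} + B_{psi_2}(x), with B_{psi_2}(x) = x + 1
    return (x**3 - x**2 - x - 1) * x**(3*k + 2) + (x + 1)

# exact check at integer points for several k
checks = [D(k, x) == -P_star(k, x) for k in range(1, 8) for x in (2, 3, 5, 7)]
```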
Check greedy/lazy/univoque/periodic self-bracketed $\beta$-expansion {#sec:gl} -------------------------------------------------------------------- In this section we discuss how one would check if an expression (conjectured using the techniques of Section \[sec:conj\] and verified as a $\beta$-expansion in Section \[sec:verify\]) is in fact a greedy, lazy or periodic self-bracketed $\beta$-expansion. Consider a general expression of the form $$E(k) := v_1 (w_1)^{k} v_2 (w_2)^k \cdots (w_{n-1})^k v_n (u_1 (w_{n})^k \cdots (w_{n+m})^k u_m)^\infty$$ where the $w_i$ all have the same length (this is in fact the case for all problems that we studied). Then the main thing to notice is that there exists a $K$ such that if the $\beta$-expansion $E(K)$ has a desired property (either being or not being greedy, lazy, etc.), then $E(k)$ has the same property for all $k \geq K$. Moreover, $K$ is explicitly computable, being a function of the lengths of the $v_i$, $w_i$ and $u_i$. This means that what initially looks like an infinite number of calculations is in fact a finite number of calculations. The way to see this is that for sufficiently large $k$, most of the comparisons will be done between the $w_i$’s, and then an increase in $k$ will not change this, but just add another redundant check to something already known. Comments, Open Questions and Further Work ========================================= There are some interesting observations that can be made from the data and results so far. This investigation has opened up a number of questions. - First, given a sequence of greedy or lazy $\beta$-expansions of a nice sequence of Pisot numbers $q_k$ that looks like: $$E(k) := v_1 (w_1)^{k} v_2 (w_2)^k \cdots (w_{n-1})^k v_n (u_1 (w_{n})^k \cdots (w_{n+m})^k u_m)^\infty$$ is it always true that $|w_1| = |w_2| = \cdots = |w_{n+m}|$? 
- Is the co-factor from Section \[sec:verify\] always of the form: $$C_k(x) = a_n x^{b_n k + c_n} + a_{n-1} x^{b_{n-1} k + c_{n-1}} + \cdots + a_2 x^{b_2 k + c_2} + a_{1} x^{b_{1} k + c_{1}}?$$ - It appears in Table \[tab:SalemUnivoque\] that all Salem numbers of degree $4$ and $6$ greater than $\approx 1.83$ are univoque. Is this just an artifact of small degrees, or is something more general going on? - In general, are the greedy/lazy $\beta$-expansions even periodic for Salem numbers? (This is not known to be true, see [@Boyd96a] for more details.) - It is known that Pisot numbers can be written as limits of Salem numbers: if $P(x)$ is the minimal polynomial of a Pisot number, then $P(x) x^n \pm P^*(x)$ has a Salem number as a root, which tends to the root of the Pisot number. Some preliminary and somewhat haphazard investigation suggests that we might be able to find a “regular” looking expression for the greedy (resp. lazy) $\beta$-expansion of these Salem numbers, which tends towards the greedy (resp. lazy) $\beta$-expansion of the Pisot number. If true, this could have implications towards questions concerning the $\beta$-expansions of Salem numbers being eventually periodic. Acknowledgments {#acknowledgments .unnumbered} =============== The authors wish to thank David Boyd for stimulating discussions, and the referee for a careful reading of the manuscript. Note added on June 12, 2006 {#note-added-on-june-12-2006 .unnumbered} =========================== Just before submitting this paper we came across a paper where the topological structure of the set ${{\mathcal{U}}}$ and of its (topological) closure is studied. We cite it here for completeness: [V. Komornik, P. Loreti, On the structure of univoque sets, [*J. Number Theory*]{}, to appear.]{} One can also read consequences of the results of that paper in [M. 
de Vries, Random $\beta$-expansions, unique expansions and Lochs’ Theorem, PhD Thesis, Vrije Universiteit Amsterdam, 2005.]{} (available at [http://www.cs.vu.nl/$\sim$mdvries/proefschrift.pdf]{}). [99]{} J.-P. Allouche, [*Théorie des Nombres et Automates*]{}, Thèse d’État, Bordeaux, 1983. J.-P. Allouche, M. Cosnard, Itérations de fonctions unimodales et suites engendrées par automates, [*C. R. Acad. Sci. Paris, Sér. 1* ]{} [**296**]{} (1983) 159–162. J.-P. Allouche, M. Cosnard, The Komornik-Loreti constant is transcendental, [*Amer. Math. Monthly*]{} [**107**]{} (2000) 448–449. J.-P. Allouche, M. Cosnard, Non-integer bases, iteration of continuous real maps, and an arithmetic self-similar set, [*Acta Math. Hung.*]{} [**91**]{} (2001) 325–332. J.-P. Allouche, J. Shallit, The ubiquitous Prouhet-Thue-Morse sequence, in C. Ding, T. Helleseth and H. Niederreiter (Eds.) [*Sequences and their applications, Proceedings of SETA’98*]{}, Springer, 1999, pp. 1–16. M. Amara, Ensembles fermés de nombres algébriques, [*Ann. Sci. École Norm. Sup.*]{} [**83**]{} (1966) 215–270. M.-J. Bertin, A. Descomps-Guilloux, M. Grandet-Hugot, M. Pathiaux-Delefosse, J.-P. Schreiber, [*Pisot and Salem numbers*]{}, Birkhäuser, 1992. A. Bertrand, Développements en base de Pisot et répartition modulo $1$, [*C. R. Acad. Sci. Paris, Sér. A-B*]{} [**285**]{} (1977) 419–421. P. Borwein, [*Computational excursions in analysis and number theory*]{}, CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, [**10**]{}, Springer-Verlag, New York, 2002. D. W. Boyd, Pisot and Salem numbers in intervals of the real line, [*Math. Comp.*]{} [**32**]{} (1978) 1244–1260. D. W. Boyd, Pisot numbers in the neighbourhood of a limit point, I, [*J. Number Theory*]{} [**21**]{} (1985) 17–43. D. W. Boyd, Pisot numbers in the neighborhood of a limit point, II, [*Math. Comp.*]{} [**43**]{} (1984) 593–602. D. W. Boyd, Salem numbers of degree four have periodic expansions, in J.-H. De Coninck, C. 
Levesque (Eds.), [*Théorie des Nombres, Québec, 1987*]{}, Walter De Gruyter, 1989, pp. 57–64. D. W. Boyd, On beta expansions for Pisot numbers, [*Math. Comp.*]{} [**65**]{} (1996) 841–860. D. W. Boyd, On the beta expansion for [S]{}alem numbers of degree [$6$]{}, [*Math. Comp.*]{} [**65**]{} (1996) 861–875, S29–S31. K. Dajani and C. Kraaikamp, From greedy to lazy expansions and their driving dynamics, [*Expo. Math.*]{} [**20**]{} (2002) 315–327. Z. Daróczy, I. Kátai, Univoque sequences, [*Publ. Math. Debrecen*]{} [**42**]{} (1993) 397–407. Z. Daróczy, I. Kátai, On the structure of univoque numbers, [*Publ. Math. Debrecen*]{} [**46**]{} (1995) 385–408. J. Dufresnoy, Ch. Pisot, Étude de certaines fonctions méromorphes bornées sur le cercle unité. Application à un ensemble fermé d’entiers algébriques, [*Ann. Sci. École Norm. Sup.*]{} [**72**]{} (1955) 69–92. P. Erdős, I. Joó, V. Komornik, Characterization of the unique expansions $1=\sum_{i=1}^{\infty} q^{-n_i}$, and related problems, [*Bull. Soc. Math. France*]{} [**118**]{} (1990) 377–390. P. Glendinning and N. Sidorov, Unique representations of real numbers in non-integer bases, [*Math. Res. Letters*]{} [**8**]{} (2001) 447–472. V. Komornik, P. Loreti, Unique developments in non-integer bases, [*Amer. Math. Monthly*]{} [**105**]{} (1998) 636–639. V. Komornik, P. Loreti, A. Pethő, The smallest univoque number is not isolated, [*Publ. Math. Debrecen*]{} [**62**]{} (2003) 429–435. M. Lothaire, [*Algebraic combinatorics on words*]{}, Cambridge University Press, 2002. R. C. Lyndon and M. P. Schützenberger, The equation $A^M=b^Nc^P$ in a free group, [*Michigan Math. J.*]{} [**9**]{} (1962) 289–298. W. Parry, On the $\beta$-expansions of real numbers, [*Acta Math. Acad. Sci. Hungar.*]{} [**11**]{} (1960) 401–416. A. Rényi, Representations for real numbers and their ergodic properties, [*Acta Math. Acad. Sci. Hungar.*]{} [**8**]{} (1957) 477–493. R. Salem, Power series with integral coefficients, [*Duke Math. 
J.*]{} [**12**]{} (1945) 153–172. K. Schmidt, On periodic expansions of Pisot and Salem numbers, [*Bull. London Math. Soc.*]{} [**12**]{} (1980) 269–278. F. L. Talmoudi, Sur les nombres de $S \cap [1,2]$, [*C. R. Acad. Sci. Paris, Sér. Math.*]{} [**285**]{} (1977) 969–971. F. L. Talmoudi, Sur les nombres de $S \cap [1,2[$, [*C. R. Acad. Sci. Paris, Sér. Math.*]{} [**287**]{} (1978) 739–741. [^1]: Research partially supported by MENESR, ACI NIM 154 Numération. [^2]: Research supported, in part, by NSERC of Canada. [^3]: Note that the definition of ${B_{\psi_r}}(x)$ is different from the definition in [@Boyd96], and corrects a misprint in that paper.
--- abstract: 'The temporal instability of stably stratified flow was investigated by analyzing the Taylor-Goldstein equation theoretically. According to this analysis, the stable stratification $N^2\geq0$ has a destabilization mechanism, and the flow instability is due to the competition of the kinetic energy with the potential energy, which is dominated by the total Froude number $Fr_t^2$. Globally, $Fr_t^2 \leq 1$ implies that the total kinetic energy is smaller than the total potential energy. So the potential energy might transfer to the kinetic energy after being disturbed, and the flow becomes unstable. On the other hand, when the potential energy is smaller than the kinetic energy ($Fr_t^2>1$), the flow is stable because no potential energy can transfer to the kinetic energy. The flow is more stable with the velocity profile $U''/U''''''>0$ than with $U''/U''''''<0$. Besides, the unstable perturbation must be of long-wave scale. Locally, the flow is unstable when the gradient Richardson number $Ri>1/4$. These results extend Rayleigh’s, Fj[ø]{}rtoft’s, Sun’s and Arnol’d’s criteria for the inviscid homogeneous fluid, but they contradict the well-known Miles-Howard theorem. It is argued here that the transform $F=\phi/(U-c)^n$ is not suitable for the temporal stability problem, and that it leads to contradictions with the results derived from the Taylor-Goldstein equation. However, such a transform might be useful for the study of the Orr-Sommerfeld equation in viscous flows.' date: - - '20 April 2010, and in revised form ' title: General temporal instability criteria for stably stratified inviscid flow --- Introduction ============ The instability of stably stratified shear flow is one of the main problems in fluid dynamics, astrophysical fluid dynamics, oceanography, meteorology, etc. 
Although both pure shear instability without stratification and static stratification instability without shear have been well studied, the instability of stably stratified shear flow is still a mystery. On the one hand, after a long line of investigations [@Rayleigh1880; @Fjortoft1950; @Arnold1965a; @SunL2007ejp; @SunL2008cpl], shear instability is known to be an instability of a vorticity maximum. It is recognized that resonant waves with a speed matching that of the concentrated vortex interact with the flow to produce shear instability [@SunL2008cpl]; other velocity profiles are stable in homogeneous fluid without stratification. On the other hand, [@Rayleigh1883] proved that buoyancy is a stabilizing effect in the static case. Thus, it was naturally believed that stable stratification favors stability [see, e.g. @Taylor1931; @Chandrasekhar1961], which finally resulted in the well-known Miles-Howard theorem [@Miles1961; @Miles1963; @Howard1961]. According to this theorem, the flow is stable to perturbations when the Richardson number $Ri$ (the ratio of stratification to shear) exceeds a critical value $Ri_c=1/4$ everywhere. In three-dimensional stratified flow, the corresponding criterion is $Ri_c=1$ [@Abarbanelt1984]. However, the stabilizing effect of buoyancy is an illusion. In a less known paper, [@Howard1973] showed with several special examples that stratification effects can be destabilizing due to the vorticity generated by non-homogeneity, and that the instability depends on the details of the velocity and density profiles; one such instability is the Holmboe instability [@Holmboe1962; @Ortiz2002; @Alexakis2009]. [@Howard1973] then stated three main points from these examples, without further proof. (a) Stratification may shift the band of unstable wave numbers so that some which are stable in the homogeneous case become unstable.
(b) Conditions ensuring stability in homogeneous flow (such as the absence of a vorticity maximum) do not necessarily carry over to the stratified case, so that ’static stability’ can destabilize. (c) New physical mechanisms brought in by the stratification may lead to instability in the form of a pair of growing and propagating waves where in the homogeneous case one had a stationary wave. Recalling these points by [@Howard1973], and the large gap between Rayleigh’s criterion and the Miles-Howard criterion, [@YihCSBook1980] even wrote “Miles’ criterion for stability is not the nature generalization of Rayleigh’s well-known sufficient condition for the stability of a homogeneous fluid in shear flow". The mystery of the instability thus remains. Following the framework of [@SunL2007ejp; @SunL2008cpl], this study is an attempt to clear up the confusion in the theories. We find that the flow instability is due to the competition between the kinetic energy and the potential energy, which is dominated by the total Froude number $Fr_t^2$, and that an unexpected assumption in the Miles-Howard theorem leads to the contradiction with other theories. Introduction ============ The Miles-Howard criterion is one of the most important theorems for stably stratified shear flow. According to this theorem, the flow is stable to perturbations when the Richardson number $Ri$ exceeds a critical value $Ri_c=1/4$ [@Miles1961; @Miles1963; @Howard1961]. This criterion is widely used in fluid dynamics, astrophysical fluid dynamics, oceanography, meteorology, etc. Specifically, it is the most important criterion for turbulence genesis in ocean modelling [@Peltier2003]. Using Arnold’s method [@Arnold1965a], the corresponding criterion for three-dimensional stratified flow is $Ri_c=1$ [@Abarbanelt1984]. Consequently, it is widely believed that stable stratification favors stability, and that all perturbations should decay when $Ri>Ri_c$ [see, e.g.
@Taylor1931; @Chandrasekhar1961; @Miles1961; @Miles1963; @Howard1961; @Turner1979]. However, experiments indicate otherwise: the flow might be unstable at very large $Ri$ [@Zilitinkevich2008; @Canuto2008; @Alexakis2009]. Thus, to dispel the contradiction between the experiments and the Miles-Howard criterion, different explanations have been postulated in the literature: “this interval, $0.25<Ri<1$, separates two different turbulent regimes: strong mixing and weak mixing rather than the turbulent and the laminar regimes, as the classical concept states" [@Zilitinkevich2008], or, “the Richardson number criteria is not, in general, a necessary and sufficient condition" [@Friedlander2001]. In fact, the Miles-Howard criterion also contradicts other criteria for neutrally stratified (homogeneous) fluid, e.g., the Rayleigh-Kuo criterion [e.g. @Rayleigh1880; @CriminaleBook2003], Fjortoft’s criterion [@Fjortoft1950], Arnold’s criteria [@Arnold1965a] and Sun’s criterion [@SunL2007ejp]. This contradiction was also noted a long time ago by [@YihCSBook1980], who remarked that “Miles’ criterion for stability is not the nature generalization of Rayleigh’s well-known sufficient condition for the stability of a homogeneous fluid in shear flow", and who also made an attempt at such a generalization [@YihCSBook1980]. Following the framework of [@SunL2007ejp; @SunL2008cpl], this study is an attempt to clear up the confusion in the theories and to build a bridge between the laboratory experiments and the theories. General Instability Theorem for Stratified Flow ============================================ Taylor-Goldstein Equation ------------------------- The Taylor-Goldstein equation for stratified inviscid flow is employed [@Howard1961; @YihCSBook1980; @Baines1994; @CriminaleBook2003]; it is the vorticity equation of the disturbance [@Drazin2004].
Consider a flow with velocity profile $U(y)$ and density field $\rho(y)$, and the corresponding stability parameter $N$ (the Brunt-Vaisala frequency), $$N^2=-g\rho'/\rho, \label{Eq:stable_stratifiedflow_Brunt-Vaisala}$$ where $g$ is the acceleration of gravity, a single prime $'$ denotes $d/dy$, and $N^2>0$ denotes a stable stratification. The vorticity is conserved along pathlines. The streamfunction perturbation $\phi$ satisfies $$\phi''+\left[\frac{N^2}{(U-c)^2}-\frac{U''}{U-c}-k^2 \right]\phi=0, \label{Eq:stable_stratifiedflow_TaylorGoldsteinEq}$$ where $k$ is the real wavenumber, $c=c_r+ic_i$ is the complex phase speed, and a double prime $''$ denotes $d^2/dy^2$. When $k$ is real, the problem is called the temporal stability problem. The real part of the complex phase speed, $c_r$, is the wave phase speed, and $\omega_i=k c_i$ is the growth rate of the wave. This equation is subject to the homogeneous boundary conditions $$\phi=0 \quad \text{at} \quad y=a,b. \label{Eq:stable_parallelflow_RayleighBc}$$ The criterion for stability is $\omega_i=0$ ($c_i=0$), since the complex conjugate quantities $\phi^*$ and $c^*$ are also physical solutions of Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\]) and Eq.(\[Eq:stable\_parallelflow\_RayleighBc\]). Multiplying Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\]) by the complex conjugate $\phi^{*}$ and integrating over the domain $a\leq y \leq b$, we get the following equations: $$\displaystyle\int_{a}^{b} \left[|\phi'|^2+k^2|\phi|^2+\frac{U''(U-c_r)}{|U-c|^2} |\phi|^2\right] dy =\int_{a}^{b} \frac{(U-c_r)^2-c_i^2}{|U-c|^4}N^2|\phi|^2\, dy, \label{Eq:stable_stratifiedflow_TaylorGoldsteinEq_Int_Rea}$$ and $$\displaystyle c_i\int_{a}^{b} \left[\frac{U''}{|U-c|^2}-\frac{2(U-c_r)N^2}{|U-c|^4}\right]|\phi|^2\,dy=0.
\label{Eq:stable_stratifiedflow_TaylorGoldsteinEq_Int_Img} $$ In the case $N^2=0$, [@Rayleigh1880] used Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Int\_Img\]) to prove that a necessary condition for inviscid instability is $U''(y_s)=0$, where $y_s$ is the inflection point and $U_s=U(y_s)$ is the velocity at $y_s$. Using Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Int\_Img\]), [@Synge1933] also pointed out that a necessary condition for instability is that $U''-\frac{2(U-c_r)N^2}{|U-c|^2}$ should change sign. But such a condition is of little practical use, as it involves the two unknown parameters $c_r$ and $c_i$. As a first step in our investigation, we need to estimate the ratio of $\int_{a}^{b} |\phi'|^2 dy$ to $\int_{a}^{b} |\phi|^2 dy$. This is known as Poincaré’s problem: $$\int_{a}^{b}|\phi'|^2 dy=\mu\int_{a}^{b}|\phi|^2 dy, \label{Eq:stable_parallelflow_Poincare}$$ where the eigenvalue $\mu$ is positive definite for any $\phi \neq 0$. The smallest eigenvalue, namely $\mu_1$, can be estimated as $\mu_1>(\frac{\pi}{b-a})^2$ [@Mumu1994; @SunL2007ejp]. General Instability Theorem --------------------------- In departure from previous investigations, we shall investigate the stability of the flow by using Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Int\_Rea\]) and Eq.(\[Eq:stable\_parallelflow\_Poincare\]). Since $\mu$ is estimated using the boundary conditions, the resulting criterion is global. We will also adopt a different methodology: if the velocity profile is unstable ($c_i\neq0$), then the equations under the hypothesis $c_i=0$ should lead to contradictions in some cases. Following this, a sufficient condition for instability can be obtained.
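As a quick numerical illustration of the Poincaré estimate quoted above, the Rayleigh quotient of the lowest Dirichlet mode $\sin(\pi(y-a)/(b-a))$ evaluates to $(\pi/(b-a))^2$, which sets the scale of $\mu_1$. The following pure-Python sketch (function names, domain and grid size are our own illustrative choices, not from the paper) checks this by finite differences:

```python
import math

def rayleigh_quotient(phi, a, b, n=4000):
    """Finite-difference approximation of int |phi'|^2 dy / int |phi|^2 dy."""
    h = (b - a) / n
    vals = [phi(a + i * h) for i in range(n + 1)]
    num = sum(((vals[i + 1] - vals[i]) / h) ** 2 for i in range(n)) * h
    den = sum(v * v for v in vals) * h
    return num / den

a, b = 0.0, 2.0
# The lowest Dirichlet mode sin(pi (y-a)/(b-a)) attains the smallest eigenvalue
mu1 = rayleigh_quotient(lambda y: math.sin(math.pi * (y - a) / (b - a)), a, b)
```

For the domain length $b-a=2$ this gives $\mu_1\approx(\pi/2)^2\approx2.47$, consistent with the estimate used in the text.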
Firstly, substituting Eq.(\[Eq:stable\_parallelflow\_Poincare\]) into Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Int\_Rea\]), we have $$\displaystyle c_i^2\int_{a}^{b} \frac{ g(y)}{|U-c|^2}|\phi|^2\, dy =-\int_{a}^{b}\frac{ h(y)}{|U-c|^2}|\phi|^2\, dy,\label{Eq:stable_stratifiedflow_Sun_Int_Rea}$$ where $$\begin{aligned} g(y)&=\mu+k^2+\frac{2N^2}{|U-c|^2},\\ h(y)&=(\mu+k^2)(U-c_r)^2+U''(U-c_r)-N^2. \end{aligned} \label{Eq:stable_stratifiedflow_hygy}$$ Note that $g(y)>0$ for $N^2\geq0$. Then $c_i^2>0$ if $h(y)\leq0$ throughout the domain $a\leq y \leq b$ for a proper choice of $c_r$ and $k$. Obviously, $h(y)$ is a monotone function of $k$: the smaller $k$ is, the smaller $h(y)$ is, so $h(y)$ attains its smallest value at $k=0$, $$h(y)=\displaystyle N^2\left[ \frac{(U-c_r)^2}{N^2/\mu}+\frac{U''(U-c_r)}{N^2}-1\right]. \label{Eq:stable_stratifiedflow_hyFroude}$$ We define the total, shear and Rossby Froude numbers $Fr_t$, $Fr_s$ and $Fr_r$ as $$Fr_t^2=Fr_s^2+Fr_r^2, \quad Fr_s^2=\displaystyle \frac{(U-c_r)^2}{N^2/\mu}, \quad Fr_r^2=\frac{U''(U-c_r)}{N^2}, \label{Eq:stable_stratifiedflow_Froude}$$ where the shear Froude number $Fr_s$ is a dimensionless ratio of kinetic energy to potential energy. As $U''$ plays the same role as the $\beta$ effect in the Rossby wave [@SunL2006c; @SunL2007ejp], the Rossby Froude number $Fr_r$ is a dimensionless ratio of Rossby-wave kinetic energy to potential energy. Then $h(y)\leq 0$ is equivalent to $Fr_t^2\leq 1$, and a general theorem for instability follows from the above notation. Theorem 1: If the velocity $U$ and the stable stratification $N^2$ satisfy $h(y)\leq0$, or equivalently $Fr_t^2\leq 1$, throughout the domain for a certain $c_r$, the flow is unstable with $c_i>0$. Physically, $Fr_t^2 \leq 1$ implies that the total kinetic energy is smaller than the total potential energy. So the potential energy might transfer to the kinetic energy after the flow is disturbed, and the flow becomes unstable.
On the other hand, when the potential energy is smaller than the kinetic energy ($Fr_t^2 > 1$), the flow is stable because no potential energy can transfer to the kinetic energy. Mathematically, we still need to derive some formulas that are useful in applications, since $c_r$ remains unknown in the above equations. To this aim, we rewrite Eq.(\[Eq:stable\_stratifiedflow\_hyFroude\]) as $$h(y)=\displaystyle \mu\left(U+\frac{U''}{2\mu}-c_r\right)^2-\left(N^2+\frac{U''^2}{4\mu}\right). \label{Eq:stable_stratifiedflow_hy}$$ Assume that the minimum and maximum values of $U+\frac{U''}{2\mu}$ within $a\leq y\leq b$ are $m_i$ and $m_a$, respectively. It follows from Eq.(\[Eq:stable\_stratifiedflow\_hy\]) that $m_i\leq c_r\leq m_a$ for the smallest value of $h(y)$. Thus a general theorem for instability can be obtained from the above notation. Theorem 2: If the velocity $U$ and the stable stratification $N^2$ satisfy $h(y)\leq0$ throughout the domain for a certain $m_i\leq c_r\leq m_a$, there must be a $c_i>0$ and the flow is unstable. It follows from Eq.(\[Eq:stable\_stratifiedflow\_hy\]) that $h(y)\leq0$ requires $\mu(U+\frac{U''}{2\mu}-c_r)^2$ to be less than $N^2+\frac{U''^2}{4\mu}$. The bigger $N^2$ is, the smaller $h(y)$ is, so the stable stratification has a destabilization mechanism in shear flow. This conclusion is new, as former theoretical studies always took static stable stratification as a stabilizing effect in shear flows. According to Eq.(\[Eq:stable\_stratifiedflow\_hy\]), the bigger $m_a-m_i$ is, the more stable the flow is. It is obvious that $m_a-m_i$ is bigger for $U'/U'''>0$ than for $U'/U'''<0$, so the flow is more stable with a velocity profile satisfying $U'/U'''>0$. Although Theorem 1 gives a sufficient condition for instability, its complicated expression makes it difficult to apply. In the following section we derive simpler and more useful criteria.
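The two rewritings of $h(y)$ used above — the Froude-number form of Eq.(\[Eq:stable\_stratifiedflow\_hyFroude\]) at $k=0$ and the completed square of Eq.(\[Eq:stable\_stratifiedflow\_hy\]) — can be checked term by term. A minimal numerical sketch, using an illustrative profile $U=\tanh y$ with constant $N^2$ (our choice, not from the paper):

```python
import math

# Illustrative profile (our choice, not from the paper): U = tanh(y), N^2 = 0.3
def U(y):   return math.tanh(y)
def Upp(y): return -2.0 * math.tanh(y) / math.cosh(y) ** 2   # U''(y)

mu, N2, cr = 2.0, 0.3, 0.1
results = []
for y in [-1.0, -0.3, 0.2, 0.8]:
    u, upp = U(y), Upp(y)
    h_k0 = mu * (u - cr) ** 2 + upp * (u - cr) - N2               # h(y) at k = 0
    Frs2 = (u - cr) ** 2 / (N2 / mu)                              # Fr_s^2
    Frr2 = upp * (u - cr) / N2                                    # Fr_r^2
    h_froude = N2 * (Frs2 + Frr2 - 1.0)                           # Froude form
    h_square = mu * (u + upp / (2 * mu) - cr) ** 2 - (N2 + upp ** 2 / (4 * mu))
    results.append((h_k0, h_froude, h_square))
```

All three expressions agree pointwise, which is exactly the algebra behind Theorems 1 and 2.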
Criteria For Flow Instability ============================= ![The value of h(y) under the condition $U''_s=0$: (a) for $U''/(U-U_s)>0$, (b) for $U''/(U-U_s)<0$. []{data-label="Fig:hy"}](Figb.eps "fig:"){width="6cm"} ![The value of h(y) under the condition $U''_s=0$: (a) for $U''/(U-U_s)>0$, (b) for $U''/(U-U_s)<0$. []{data-label="Fig:hy"}](Figc.eps "fig:"){width="6cm"} Inviscid Flow ------------- The simplest case is the inviscid shear flow with $N^2=0$. The sufficient condition for instability is $h(y)\leq0$. To find such a condition, we rewrite $h(y)$ in Eq.(\[Eq:stable\_stratifiedflow\_hy\]) as $$h(y)=\mu(U_1-c_r)(U_2-c_r), \label{Eq:Rayleigh-hy}$$ where $U_1=U$ and $U_2=U+U''/\mu$. There are then three cases. In two of them, $U_1$ intersects $U_2$ at $U''_s=0$ (Fig.\[Fig:hy\]). The first case is $U''/(U-U_s)>0$; then $h(y)>0$ always holds at $c_r=U_s$, as shown in Fig.\[Fig:hy\]a. The second case is $U''/(U-U_s)<0$; then $h(y)<0$ can hold in the whole domain, as shown in Fig.\[Fig:hy\]b, and the flow might be unstable. The sufficient condition for instability can be found from Eq.(\[Eq:Rayleigh-hy\]), as shown in Fig.\[Fig:hy\]b. Setting $c_r=U_s$, Eq.(\[Eq:Rayleigh-hy\]) becomes $$h(y)=(U-U_s)^2\left[\mu+\frac{U''}{U-U_s}\right].$$ If $\frac{U''}{U-U_s}<-\mu$ is satisfied everywhere, $h(y)<0$ holds within the domain. Corollary 1.1: If the velocity profile satisfies $\frac{U''}{U-U_s}<-\mu$ within the domain, the flow is unstable. Note that [@SunL2007ejp] obtained a sufficient condition for stability, namely $\frac{U''}{U-U_s}>-\mu$ within the domain, so the above condition for instability is nearly marginal [@SunL2008cpl]. The last case is $U''\neq 0$ throughout the domain; then $h(y)>0$ somewhere within the domain, as shown in Fig.\[Fig:TG-hy\]a.
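Corollary 1.1 can be illustrated with the classical sinusoidal profile $U=\sin(\alpha y)$, for which $U''/(U-U_s)=-\alpha^2$ is constant; taking $\mu$ at the scale of its lower bound $(\pi/(b-a))^2$, the corollary flags the profile as unstable whenever $\alpha^2$ exceeds that bound. A hypothetical worked example (profile and parameter values are our own choices, not from the paper):

```python
import math

a, b = 0.0, math.pi
mu1 = (math.pi / (b - a)) ** 2        # lower bound for mu; equals 1 here
alpha = 1.5                           # alpha^2 = 2.25 > mu1

def U(y):   return math.sin(alpha * y)
def Upp(y): return -alpha ** 2 * math.sin(alpha * y)

Us = 0.0                              # U at the inflection points, where U'' = 0
# U''/(U - U_s) is the constant -alpha^2 everywhere
ratios = [Upp(y) / (U(y) - Us) for y in [0.3, 1.0, 1.8, 2.9]]
unstable = all(r < -mu1 for r in ratios)   # Corollary 1.1 condition
```

Here `unstable` is true, in line with the classical result that a sine profile with sufficiently many inflection layers across the channel is unstable.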
Stably Stratified Flow ---------------------- ![The value of $h(y)$ for $U''\neq0$ in case 3 and case 4.[]{data-label="Fig:TG-hy"}](Figa.eps "fig:"){width="6cm"} ![The value of $h(y)$ for $U''\neq0$ in case 3 and case 4.[]{data-label="Fig:TG-hy"}](Figd.eps "fig:"){width="6cm"} If the static stratification is stable ($N^2>0$), then $g(y)$ is positive. The flow is unstable if $h(y)$ is negative within $a\leq y \leq b$ at $k=0$. We rewrite $h(y)$ as $$\begin{array}{rl} h(y)=&\mu (U_1-c_r)(U_2-c_r) \\ =&\displaystyle \mu [U+\frac{1}{2\mu}(U''-\sqrt{U''^2+4\mu N^2}\,)-c_r]\\ &\displaystyle \times [U+\frac{1}{2\mu}(U''+\sqrt{U''^2+4\mu N^2}\,)-c_r]. \end{array}\label{Eq:TaylorGoldsteinEq-hy}$$ The value of $h(y)$ can be classified into four cases. The first and the second ones ($U''_s=0$ and $N^2_s=0$ at $y=y_s$) are similar to those discussed above and shown in Fig.\[Fig:hy\]a and Fig.\[Fig:hy\]b. For these cases, we have a sufficient condition for instability, $$\frac{U''(U-U_s)- N^2}{(U-U_s)^2}<-\mu,\label{Eq:TaylorGoldsteinEq-SIC1}$$ which can be derived directly from Eq.(\[Eq:stable\_stratifiedflow\_Sun\_Int\_Rea\]), in the same way as Corollary 1.1; this first sufficient condition for instability is due to shear instability. Corollary 1.2: If the velocity profile satisfies $\frac{U''(U-U_s)-N^2}{(U-U_s)^2}<-\mu$ within the domain, the flow is unstable. The third case ($U''\neq 0$) is similar to the case in Fig.\[Fig:TG-hy\]a, and the flow is stable. The last one is the unstable flow shown in Fig.\[Fig:TG-hy\]b, where $U''\neq 0$ and $h(y)<0$ throughout. In this last case, the maximum of $U_1$ must be smaller than the minimum of $U_2$, so that a proper $c_r$ between $U_1$ and $U_2$ can be used for the unstable waves. Although the exact criterion cannot be obtained, since the required maximum and minimum cannot be given explicitly, the approach is straightforward.
Nevertheless, we can also obtain an approximate criterion for the fourth case. It follows from Eq.(\[Eq:stable\_stratifiedflow\_hy\]) that $h(y)\leq0$ if the minimax of $\mu(U+\frac{U''}{2\mu}-c_r)^2$ is less than the minimum of $N^2+\frac{U''^2}{4\mu}$. As the minimax value of $\mu(U+\frac{U''}{2\mu}-c_r)^2$ is $\frac{1}{4}\mu(m_a-m_i)^2$, attained at $c_r=(m_a+m_i)/2$, we obtain a new criterion according to Eq.(\[Eq:stable\_stratifiedflow\_Froude\]), $$Fr_t^2(c_r) = \frac{1}{4\mu N^2}\left[\mu^2 (m_a-m_i)^2-U''^2\right]. \label{Eq:stable_stratifiedflow-Fs2}$$ Thus a sufficient (but not necessary) condition for $h(y)<0$ is that the following holds for $a\leq y\leq b$: $$Fr_t^2\leq 1. \label{Eq:stable_stratifiedflow-Criterion}$$ From the above corollaries, the flow might be unstable if the static stable stratification is strong enough. Stable stratification can thus destabilize the flow, which is a new instability mechanism. This corollary contradicts previous results [@Abarbanelt1984], but it agrees well with recent theory [@Friedlander2001], experiments [@Zilitinkevich2008] and simulations [@Alexakis2009]. Again, we point out that the flow is unstable due to the transfer of potential energy to kinetic energy under the condition $Fr_t^2 \leq 1$. This conclusion is new because it is quite different from previous theorems, in which the static stable stratification plays the role of a stabilizing factor for shear flows. Discussion ========== Necessary Instability Criterion ------------------------------- In the above investigation, it was found that stable stratification is a destabilization mechanism for the flow. Such a finding is not surprising if one notes the terms in Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\]): mathematically, the sum of terms in the square brackets should be positive for a wave-like solution, and both $\frac{U''}{U-c}<0$ and $N^2>0$ favor this condition. This is why the unstable solutions always occur for $\frac{U''}{U-c}<0$ in shear flow.
Moreover, $N^2>0$ might then lead to $c_i^2>0$. Physically, the perturbation waves are truncated in neutrally stratified flow, while stable stratification allows a wide range of waves in the perturbation; such waves might interact with each other, as illustrated in [@SunL2008cpl]. As Theorem 1 gives the only sufficient condition, it is hypothesized that this criterion is not only a sufficient but also a necessary condition for instability in stably stratified flow. This hypothesis might be criticized on the grounds that the flow could be unstable ($c_i^2>0$) when $h(y)$ changes sign within the interval (Fig.\[Fig:TG-hy\]a), since a properly chosen $\phi$ would make the right-hand side of Eq.(\[Eq:stable\_stratifiedflow\_Sun\_Int\_Rea\]) negative. However, this criticism is not valid for the case in Fig.\[Fig:TG-hy\]a: it follows from the well-known criteria (e.g. Rayleigh’s inflexion-point theorem) that the properly chosen $\phi$ always makes the right-hand side of Eq.(\[Eq:stable\_stratifiedflow\_Sun\_Int\_Rea\]) vanish. It seems that the flow tends to be stable, or that the perturbations have a prior policy of letting $c_i=0$; the flow becomes unstable only when every choice of $\phi$ makes the right-hand side of Eq.(\[Eq:stable\_stratifiedflow\_Sun\_Int\_Rea\]) negative. In this situation, we hypothesize that Theorem 1 fully solves the stability problem. Long-wave Instability --------------------- In inviscid shear flows, it has been recognized that very short-wave perturbations are dynamically stable under neutral stratification, and that the dynamic instability is due to the larger wavelengths [@SunL2006b]. It should be noted that Rayleigh’s case reduces to the Kelvin-Helmholtz vortex-sheet model in the long-wave limit $k\ll 1$ [@Huerre1998; @CriminaleBook2003]. We have shown that this can be extended to shear flows, and that the growth rate $\omega_i$ is proportional to $\sqrt{\mu_1}$ [@SunL2006b; @SunL2008cpl].
Such a conclusion can readily be generalized to stratified shear flows, as can be seen from Eq.(\[Eq:stable\_stratifiedflow\_hygy\]). If $k$ is larger than a critical value $k_c$, the sufficient condition in Theorem 1 cannot be satisfied and the flow is stable; for short waves ($k\gg1$), $h(y)$ is always larger than for long waves ($k\ll 1$). The long-wave instability in stratified shear flow was also noted by [@Miles1961; @Miles1963] and [@Howard1961], who showed a likelihood of $c_i\rightarrow 0$ as $k\rightarrow\infty$. The long-wave instability theory can explain the results of numerical simulations [@Alexakis2009], in which the unstable perturbations are long-wave. Local Criterion --------------- In the above investigations, a parameter $\mu$ was used, which represents the ratio of two integrals over the bounded domain; the criteria are therefore global. On the other hand, we can also investigate the local balance without boundary conditions. For example, consider the flow within a layer $-\delta \leq y \leq \delta$. The velocity is $U(y)=U_0+U'y$, and the kinetic energy is $(U-c_r)^2$. The stratification is $N^2$, and the potential energy is $N^2 d^2$, where $d=2\delta$ is the thickness of the layer. The Froude number is $Fr_t^2=(U'^2\delta^2)/(N^2d^2)$ for $c_r=U_0$. The instability criterion in Eq.(\[Eq:stable\_stratifiedflow-Criterion\]) becomes $$Ri=\frac{N^2}{U'^2}>\frac{1}{4}. \label{Eq:stable_stratifiedflow-Criterion-Ri}$$ If the local gradient Richardson number exceeds $1/4$, the local disturbance is unstable. However, the flow might still be stable if the global total Froude number satisfies $Fr_t^2>1$. This criterion is opposite to the Miles-Howard theorem; in the following we show, from its derivation, why the Miles-Howard theorem is not correct.
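The local criterion can be checked directly for the linear-shear layer above: with the gradient Richardson number in its conventional form $Ri=N^2/U'^2$, the condition $Fr_t^2\leq1$ is equivalent to $Ri\geq1/4$. A minimal sketch (parameter values are illustrative choices, not from the paper):

```python
# Linear-shear layer from the text: U = U0 + U'y on |y| <= delta, d = 2*delta.
Uprime, delta = 0.8, 0.5
d = 2 * delta
pairs = []
for N2 in [0.10, 0.16, 0.20, 0.50]:
    Fr2 = (Uprime * delta) ** 2 / (N2 * d ** 2)   # Fr_t^2 = U'^2 / (4 N^2)
    Ri = N2 / Uprime ** 2                          # gradient Richardson number
    pairs.append((Fr2, Ri))
```

For every sampled $N^2$ the two conditions flip sign together, which is the algebraic content of Eq.(\[Eq:stable\_stratifiedflow-Criterion-Ri\]).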
Relations to Other Theories --------------------------- For inviscid shear flow, the linear criteria, e.g., the Rayleigh-Kuo criterion [@CriminaleBook2003], Fjørtoft’s criterion [@Fjortoft1950] and Sun’s criterion [@SunL2007ejp], are equivalent to Arnol’d’s nonlinear stability criteria [@Arnold1965a]: Arnol’d’s first stability theorem corresponds to Fjørtoft’s criterion [@Drazin2004; @CriminaleBook2003], and Arnol’d’s second nonlinear theorem corresponds to Sun’s criterion [@SunL2007ejp; @SunL2008cpl]. The present theory, especially Corollary 1.1, is a natural generalization of these inviscid theories. For stratified flow, [@Miles1961; @Miles1963] and [@Howard1961] applied a transform $F=\phi/(U-c)^n$ to Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\]), which allows different kinds of perturbations: $n=1/2$ gives Miles’s theory and $n=1$ gives Howard’s semicircle theorem. Taking $n=1$ and $N^2=0$ [@Howard1961], Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Int\_Rea\]) becomes $$\displaystyle\int_{a}^{b} (|F'|^2+k^2|F|^2)[(U-c_r)^2-c_i^2]\, dy=0. \label{Eq:stable_stratifiedflow_TaylorGoldsteinEq_Miles_Rea}$$ It follows from Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Miles\_Rea\]) that all inviscid flows (no matter what the velocity profile $U(y)$ is) must be temporally unstable if $k$ is real. This contradicts the criteria (both linear and nonlinear) for inviscid shear flow. So the wavenumber $k$ in Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Miles\_Rea\]) should be complex, $k=k_r+ik_i$. Besides, from Eq.(\[Eq:stable\_stratifiedflow\_hy\]), Eq.(\[Eq:TaylorGoldsteinEq-hy\]) and Fig.\[Fig:TG-hy\]b, the unstable $c_r$ might lie either within or beyond the range of $U$. This also contradicts Howard’s semicircle theorem for stratified flow. It implies that the transform $F$ is not suitable for the temporal stability problem.
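The argument that Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Miles\_Rea\]) cannot hold with $c_i=0$ and real $k$ rests on the positivity of its integrand in that case: with $c_i=0$ the integrand is $(|F'|^2+k^2|F|^2)(U-c_r)^2\geq0$, so the integral cannot vanish for a nontrivial $F$. A small sketch with arbitrary trial functions (the choices of $F$, $U$ and parameters are ours, purely for illustration):

```python
import math

# Arbitrary trial perturbation F and profile U (illustrative choices).
a, b, n = 0.0, 1.0, 2000
k, cr = 2.0, 0.3
def F(y): return math.sin(math.pi * (y - a) / (b - a))
def U(y): return math.tanh(3.0 * (y - 0.5))

h = (b - a) / n
integral = 0.0
for i in range(n):
    y = a + (i + 0.5) * h                        # midpoint rule
    Fp = (F(y + 0.5 * h) - F(y - 0.5 * h)) / h   # central difference for F'
    integral += (Fp ** 2 + k ** 2 * F(y) ** 2) * (U(y) - cr) ** 2 * h
```

The computed integral is strictly positive, so the $c_i=0$ form of the equation cannot be satisfied; this is the step the text uses to question the applicability of the transform to the temporal problem.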
Taking $n=1/2$, Howard extracted a new equation from the Taylor-Goldstein equation, $$\displaystyle [(U-c)F']'-\left[k^2(U-c)+\frac{U''}{2}+\left(\frac{1}{4}U'^2-N^2\right)/(U-c)\right]F=0. \label{Eq:stable_stratifiedflow_TaylorGoldsteinEq_Howard}$$ Multiplying this equation by the complex conjugate of $F$ and integrating over the flow regime, the imaginary part of the resulting expression is $$\displaystyle -c_i \int_a^b \left[|F'|^2+k^2|F|^2+\left(\frac{1}{4}U'^2-N^2\right)\frac{|F|^2}{|U-c|^2}\right] dy=0. \label{Eq:stable_stratifiedflow_TaylorGoldsteinEq_Howard_Img}$$ The Miles-Howard theorem concludes that if $c_i \neq 0$, then $Ri<\frac{1}{4}$ somewhere for instability. However, the transform $F=\phi/\sqrt{U-c}$ requires a complex function $F$ even when both $\phi$ and $c$ are real, since $\sqrt{U-c}$ may be complex wherever $U-c_r<0$. Consequently, the wavenumber $k$ in Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Howard\]) is a complex number, no longer the real number assumed in the Taylor-Goldstein equation. A complex wavenumber leads to a spatial stability problem rather than the temporal stability problem investigated in this study. The assumption of $c_i=0$ with $k_i \neq0$ implies that the flow is unstable with $\omega_i \neq0$; Howard ignored this in his derivations. That is why the Miles-Howard theorem leads to contradictions with the present study. Although the transform $F=\phi/(U-c)^n$ leads to contradictions with the Rayleigh criterion and the present results, it might be useful for viscous flows. In such flows, the spatial rather than the temporal stability problem is dominant, and $k=k_r+ik_i$ is a complex wavenumber. It is well known that plane Couette flow is viscously unstable for Reynolds number $Re> Re_c$ in experiments but viscously stable according to the Orr-Sommerfeld equation [@CriminaleBook2003]. If the transform in Eq.(\[Eq:stable\_stratifiedflow\_TaylorGoldsteinEq\_Miles\_Rea\]) is applied, all inviscid flows must be unstable.
Thus plane Couette flow might be stable only for $Re< Re_c$, due to the stabilization by viscosity. It is argued that the Taylor-Goldstein equation represents temporal instability while the transform represents spatial instability [@Huerre1998; @CriminaleBook2003], since the perturbation is observed moving along with the flow at the speed $(U-c)$ in [@Miles1961; @Howard1961]. The transform $F=\phi/(U-c)^n$ also turns the real wavenumber $k$ into a complex number, so that $c_i=0$ still implies $\omega_i \neq 0$. The assumption of a real $k$ after the transform therefore leads to contradictions with the results derived from the Taylor-Goldstein equation, which is why previous investigators could hardly generalize their results from homogeneous to stratified fluids. Conclusion ========== In summary, stable stratification is a destabilization mechanism, and the flow instability is due to the competition between the kinetic energy and the potential energy. Globally, the flow is always unstable when the total Froude number satisfies $Fr_t^2\leq 1$, in which case the larger potential energy might transfer to the kinetic energy after the flow is disturbed. Locally, the flow is unstable when the gradient Richardson number $Ri>1/4$. The approach is straightforward and can be used for similar analyses. In inviscid stratified flow, the unstable perturbations must be of long-wave scale. These results extend Rayleigh’s, Fj[ø]{}rtoft’s, Sun’s and Arnol’d’s criteria for inviscid homogeneous fluid, but contradict the well-known Miles-Howard theorem. It is argued here that the transform $F=\phi/(U-c)^n$ is not suitable for the temporal stability problem, and that it leads to contradictions with the results derived from the Taylor-Goldstein equation. The author thanks Dr. Yue P-T at Virginia Tech, Prof. Yin X-Y at USTC, Prof. Wang W. at OUC and Prof. Huang R-X at WHOI for their encouragement. This work is supported by the National Basic Research Program of China (No.
2012CB417402), and the Knowledge Innovation Program of the Chinese Academy of Sciences (No. KZCX2-YW-QN514). 1984 Richardson number criterion for the nonlinear stability of three-dimensional stratified flow. [*Phys. Rev. Lett.*]{} [**52**]{}, 2352–2355. 2009 Stratified shear flow instabilities at large Richardson numbers. [*Phys. Fluids*]{} [**21**]{}, 054108. 1965 Conditions for nonlinear stability of the stationary plane curvilinear flows of an ideal fluid. [*Doklady Mat. Nauk.*]{} [**162**]{}, 975–978 (Engl. transl.: Sov. Math. 6, 773–777). 1994 On the mechanism of shear flow instabilities. [*J. Fluid Mech.*]{} [**276**]{}, 327–342. 1961 [*Hydrodynamic and Hydromagnetic Stability*]{}. New York, U.S.A.: Dover Publications, Inc. 2003 [*Theory and computation of hydrodynamic stability*]{}. Cambridge, U.K.: Cambridge University Press. 2004 [*[Hydrodynamic Stability]{}*]{}. Cambridge University Press. 1950 Application of integral theorems in deriving criteria of stability of laminar flow and for the baroclinic circular vortex. [*Geofysiske Publikasjoner*]{} [**17**]{}, 1–52. 2001 On nonlinear instability and stability for stratified shear flow. [*J. Math. Fluid Mech.*]{} [**3**]{}, 82–97. 1962 On the behaviour of symmetric waves in stratified shear flows. [*Geofys. Publ.*]{} [**24**]{}, 67–113. 1961 Note on a paper of [John W Miles]{}. [*J. Fluid Mech.*]{} [**10**]{}, 509–512. 1973 Stability of stratified shear flows. [*Boundary-Layer Meteorol.*]{} [**4**]{}, 511–523. 1998 Hydrodynamic instabilities in open flow. In [*[Hydrodynamics and nonlinear instabilities]{}*]{} (ed. C. Godrèche & P. Manneville). Cambridge: Cambridge University Press. 1961 [On the stability of heterogeneous shear flows]{}. [*J. Fluid Mech.*]{} [**10**]{}, 496–508. 1963 [On the stability of heterogeneous shear flows. Part 2]{}. [*J. Fluid Mech.*]{} [**16**]{}, 209–227. 1994 Nonlinear stability of multilayer quasi-geostrophic flow. [*J.
Fluid Mech.*]{} [**264**]{}, 165–184. 2002 Spatial Holmboe instability. [*Phys. Fluids*]{} [**14**]{}, 2585–2597. 1880 On the stability or instability of certain fluid motions. [*Proc. London Math. Soc.*]{} [**11**]{}, 57–70. 1883 Investigation of the character of equilibrium of an incompressible heavy fluid of variable density. [*Proc. London Math. Soc.*]{} [**14**]{}, 170–177. 2006 [General stability criteria for inviscid rotating flow]{}. [*arXiv:physics/0603177v1*]{}. 2006 [Long-wave instability in shear flow]{}. [*arXiv:physics/0601112v2*]{}. 2007 [General stability criterion for inviscid parallel flow]{}. [*Eur. J. Phys.*]{} [**28**]{} (5), 889–895. 2008 [Essence of inviscid shear instability: a point view of vortex dynamics]{}. [*Chin. Phys. Lett.*]{} [**25**]{} (4), 1343–1346. 1933 The stability of heterogeneous liquid. [*Trans. Roy. Soc. Can.*]{} [**27**]{}, 1–18. 1931 Effect of variation in density on the stability of superposed streams of fluid. [*Proc. Roy. Soc.*]{} [**A132**]{}, 499–523. 1980 [*Stratified Flows*]{}. New York: Academic Press. 2008 Turbulence energetics in stably stratified geophysical flows: Strong and weak mixing regimes. [*Quart. J. Roy. Meteor. Soc.*]{} [**134**]{}, 793–799.
--- abstract: 'A complex radio event was observed on January 17, 2005 with the radio-spectrograph ARTEMIS-IV, operating at Thermopylae, Greece; it was associated with an X3.8 SXR flare and two fast Halo CMEs in close succession. We present ARTEMIS–IV dynamic spectra of this event; the high time resolution (1/100 sec) of the data in the 450–270 MHz range makes possible the detection and analysis of the fine structure which this major radio event exhibits. The fine structure was found to match, almost completely, the comprehensive Ondrejov Catalogue, which refers to the spectral range 0.8–2 GHz yet appears to describe similar fine structures in the metric range.' address: - 'Department of Physics, University of Athens, GR-15784 Athens, Greece' - 'Department of Physics, University of Ioannina, 45110 Ioannina, Greece' author: - 'C. Bouratzis' - 'P. Preka-Papadema' - 'X. Moussas' - 'C. Alissandrakis' - 'A. Hillaris' title: Metric Radio Bursts and Fine Structures Observed on 17 January 2005 --- Sun: Solar flares, Sun: Radio emission, Sun: Fine Structure INTRODUCTION {#Introduction} ============ Radio emission at metric and longer waves traces disturbances, mainly electron beams and shock waves, formed in the process of energy release and magnetic restructuring of the corona and propagating from the low corona to interplanetary space. The fine structures, on the other hand, including drifting pulsation structures, may be used as powerful diagnostics of the loop evolution in solar flares. The period 14–20 January 2005 was one of intense activity originating in active regions 720 and 718; while in the visible hemisphere of the Sun, these regions produced 5 X–class and 17 M–class flares; an overview is presented in [@Bouratzis06]. January 17 was characterized by an X3.8 SXR flare from 06:59 UT to 10:07 UT (maximum at 09:52 UT) and two fast Halo CMEs within a forty-minute interval.
The corresponding radio event included an extended broadband continuum with rich fine structure; this fine structure is examined in this report. Observations {#obs} ============ Instrumentation --------------- The Artemis IV[^1] solar radio-spectrograph operating at Thermopylae since 1996 [@Caroubalos01; @Kontogeorgos06] consists of a 7-m parabolic antenna covering the metric range, to which a dipole antenna was added recently in order to cover the decametric range. Two receivers operate in parallel, a sweep frequency analyzer (ASG) covering the 650-20 MHz range in 630 data channels with a cadence of 10 samples/sec and a high sensitivity multi-channel acousto-optical analyzer (SAO), which covers the 270-450 MHz range in 128 channels with a high time resolution of 100 samples/sec. Events observed with the instrument have been described, e.g. by @Caroubalos04 [@Caroubalos01B], @Kontogeorgos [@Kontogeorgos08], @Bouratzis06, @Petoussis06, cf. also @Caroubalos06 for a brief review. The broad band, medium time resolution recordings of the ASG are used for the detection and analysis of radio emission from the base of the corona to two $R_{SUN}$, while the narrow band, high time resolution SAO recordings are mostly used in the analysis of the fine temporal and spectral structures. The event of January 17, 2005–Overview {#Overview} -------------------------------------- ![ARTEMIS IV Dynamic Spectrum (08:40-11:30 UT). 
UPPER PANEL: ASG Spectrum, MIDDLE PANEL: ASG Differential Spectrum, LOWER PANEL: GOES SXR flux (arbitrary units); the two CME lift–offs are marked on the time axis.[]{data-label="05117_01"}](sxr.eps) ![ARTEMIS IV SAO Differential Spectrum: Zebra pattern (09:19:40-09:19:52 UT) and a spike cluster (09:20:08-09:20:15 UT) on a background of fiber bursts & pulsations (09:19:30-09:20:30 UT); the pulsations with the fibers cover the observation period but appear more pronounced in the 09:35-09:55 UT interval.[]{data-label="05117_01FS"}](05117_FS.eps) ![ARTEMIS IV Differential Spectra of Fine Structures embedded in the Type IV Continuum. UPPER PANEL: Spikes, LOWER PANEL: Narrow Band Type III(U) Bursts.[]{data-label="FS1"}](05117_170.eps) ![ARTEMIS IV Differential Spectra of Fine Structures embedded in the Type IV Continuum. FIRST TWO PANELS: Narrow Band Slowly Drifting Bursts, LOWER PANEL: Narrow Band Type III(U).[]{data-label="FS1B"}](05117_172.eps) ![ARTEMIS IV Differential Spectra of Fine Structures embedded in the Type IV Continuum; narrowband bursts within groups of fiber bursts & pulsating structures: UPPER PANEL: Narrowband Type III(U), Narrow Band Slowly Drifting Bursts & spikes; LOWER PANEL: Narrow Band Slowly Drifting Burst & spikes.[]{data-label="FS1C"}](05117_174.eps) ![ARTEMIS IV Spectra of Fast Drift Bursts (FDB), 09:14:55-09:15:00 UT, preceded by narrowband type III bursts. UPPER PANEL: Intensity Spectrum, LOWER PANEL: Differential Spectrum.[]{data-label="FDB"}](05117_03.eps "fig:") ![ARTEMIS IV Spectra of Fast Drift Bursts (FDB), 09:14:55-09:15:00 UT, preceded by narrowband type III bursts. UPPER PANEL: Intensity Spectrum, LOWER PANEL: Differential Spectrum.[]{data-label="FDB"}](05117_04.eps "fig:") ![ARTEMIS IV Spectra of Isolated Pulsating Structures (IPS), 09:22:55-09:23:05 UT.
UPPER PANEL: Intensity Spectrum, LOWER PANEL: Differential Spectrum.[]{data-label="IPS"}](05117_IPS.eps) Two groups of type III bursts and a very extensive type IV continuum with rich fine structure characterize the ARTEMIS–IV dynamic spectrum. The high frequency type IV emission starts at 08:53 UT, covers the entire 650-20 MHz ARTEMIS–IV spectral range (Figure \[05117\_01\], upper & middle panels) and continues well after 15:00 UT; it was associated with an SXR flare and two fast Halo CMEs (CME$_1$ & CME$_2$ henceforward) in close succession. The GOES records[^2] report an X3.8 SXR flare from 06:59 UT to 10:07 UT, with maximum at 09:52 UT; this is well associated with the brightening of sheared S-shaped loops in the EIT images. The SXR light curve (Figure \[05117\_01\], lower panel) exhibits an initially slow rising phase which changes into a much faster rise a little before the peak flux is reached; it thus appears on the time–SXR flux diagram as a two-stage process. The CME data from the on-line LASCO lists[^3] [@Yashiro] indicate that each of the stages coincides with the estimated lift-off time of CME$_1$ & CME$_2$ respectively; it is also well associated with the high frequency onset of the two type III groups mentioned at the beginning of this subsection. The *halo* CME$_1$ was first recorded by LASCO at 09:30:05 UT. Backward extrapolation indicates that it was launched around 09:00:47 UT. CME$_2$ was first recorded by LASCO at 09:54 UT; it was launched around 09:38:25 UT and was found to overtake CME$_1$ at about 12:45 UT at a height of approximately 37 solar radii. Fine Structure {#FS} -------------- The high sensitivity and time resolution of the SAO facilitated an examination of the fine structure embedded in the Type–IV continua within the studied period. In our analysis, the continuum background is removed by high–pass filtering of the dynamic spectra (differential spectra in this case).
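The background-removal step just described can be sketched numerically. The following is a minimal illustration, not the actual ARTEMIS-IV pipeline: a running mean along the time axis of each frequency channel estimates the slowly varying continuum, and subtracting it leaves the fine structure. The window length and the synthetic data are arbitrary choices.

```python
import numpy as np

def differential_spectrum(dyn_spec, window=101):
    """High-pass filter a dynamic spectrum (shape: n_freq x n_time).

    A running mean of length `window` along the time axis estimates
    the slowly varying continuum per channel; subtracting it leaves
    the short-lived fine structure.
    """
    kernel = np.ones(window) / window
    background = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, dyn_spec)
    return dyn_spec - background

# Synthetic example: a slowly drifting continuum plus a short burst.
t = np.arange(1000)
continuum = 5.0 + 0.001 * t                        # slow drift
burst = np.where((t > 500) & (t < 510), 3.0, 0.0)  # fine structure
spec = np.tile(continuum + burst, (4, 1))          # 4 identical channels
filtered = differential_spectrum(spec)
# The burst survives the filtering; the drifting continuum is removed.
```

The short burst is barely diluted by the long averaging window, while the linear continuum drift is subtracted almost exactly away from the edges.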
As fine structure is characterized by a large variety in period, bandwidth, amplitude, and temporal and spatial signatures, a morphological taxonomy scheme based on Ondrejov Radiospectrograph recordings in the 0.8–2.0 GHz range was introduced (@Jiricka01 [@Jiricka02] also @Meszarosova05B); the established classification criteria are used throughout this report. We present certain examples of fine structures recorded by the ARTEMIS–IV/SAO in the 450-270 MHz frequency range; this range corresponds to ambient plasma densities which are typical of the base of the corona ($\approx 10^9$ $cm^{-3}$; cf. for example @Mann99). The fine structures of our data set are divided according to the above-mentioned Ondrejov classification scheme and described in the following paragraphs (cf. also figures \[05117\_01FS\], \[FS1\], \[FS1B\] & \[FS1C\]). ### Broadband pulsations & Fibers The broadband pulsations appear for the duration of the type IV continuum; they are, for the same period, associated with fibers; these structures intensify within the rise phase of the SXR, which, in turn, coincides with the extrapolated lift-off of CME$_1$ & CME$_2$. A closer examination follows: - Radio pulsations are series of pulses with bandwidth $> 200 MHz$ and total duration $> 10 s$; on the ARTEMIS-IV recordings they persisted for the duration of the type IV continuum. Some, with a slow global frequency drift, were of the *Drifting Pulsation Structures* (DPS) subcategory. In our recordings the pulsations' bandwidth exceeded the SAO frequency range; however, from the ASG dynamic spectrum we observe a drift of the pulsating continuum towards the lower frequencies, following the rise of CME$_2$.
Three physical mechanisms have been proposed for the source of this type of structure (cf. @Nindos07 for a review): - [Modulation of radio emission by MHD oscillations in a coronal loop]{} - [Non-linear oscillating systems (wave-wave or wave-particle interactions) where the pulsating structure corresponds to their limit cycle]{} - [Quasi-periodic injection of electron populations from acceleration episodes within large scale current sheets.]{} Combined radiospectrograph, radio and SXR imaging and HXR observations [@Kliem00; @Khan02; @Karlicky02] favor the last mechanism; furthermore, they identify the sources of Drifting Pulsation Structures with plasmon ejections. - [Isolated Broadband Pulses: Pulsating Structures but with duration $\approx 10 s$.]{} - [Fast Drifting Bursts: Short-lasting and fast drifting bursts with frequency drift $>100MHz/s$; similar to the isolated broadband pulses, except for the frequency drift.]{} - [Fibers or Intermediate Drift Bursts: Fine-structure bursts with frequency drift $\approx 100 MHz/s$; they often exhibit nearly regular repetition. On our recordings they coincide with broadband pulsations and they also cover the duration of the type IV continuum. They are usually interpreted as the radio signature of the coalescence of whistler waves with Langmuir waves in magnetic loops; the exciter is thought to be an unstable distribution of nonthermal electrons (cf. @Nindos07 and references therein).]{}
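The correspondence quoted earlier between the 450–270 MHz SAO band and coronal densities of order $10^9\ \mathrm{cm^{-3}}$ can be checked with the standard relation between electron density and fundamental plasma frequency, $f_p\,[\mathrm{Hz}] \approx 8980\sqrt{n_e\,[\mathrm{cm^{-3}}]}$, assuming fundamental plasma emission. A short sketch:

```python
import math

def plasma_frequency_mhz(n_e_cm3):
    """Fundamental plasma frequency in MHz for density n_e in cm^-3,
    using f_p [Hz] ~ 8980 * sqrt(n_e [cm^-3])."""
    return 8980.0 * math.sqrt(n_e_cm3) / 1e6

def density_from_frequency(f_mhz):
    """Invert the relation: ambient density in cm^-3 for plasma
    emission observed at f_mhz (fundamental assumed)."""
    return (f_mhz * 1e6 / 8980.0) ** 2

# Density range probed by the 270-450 MHz SAO band:
n_low = density_from_frequency(270.0)   # roughly 9e8 cm^-3
n_high = density_from_frequency(450.0)  # roughly 2.5e9 cm^-3
```

Both values bracket $10^9\ \mathrm{cm^{-3}}$, consistent with the "base of the corona" densities cited in the text.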
Zebra patterns have been explained as the result of electrostatic upper-hybrid waves under conditions of the double plasma resonance, where the local upper hybrid frequency equals a multiple of the local gyrofrequency ($\omega _{UH} = \sqrt {\omega _e^2 + \omega _{Be}^2 } = s \cdot \omega _{Be}$) (cf. for example @Chernov06 [@Nindos07] and references therein). The upper hybrid waves are excited by electron beams with a loss-cone distribution [@Kuznetsov07]. ### Narrowband Structures The narrowband activity (figures \[FS1\], \[FS1B\] & \[FS1C\]), including Spikes, Narrow Band Type III & III(U) bursts as well as Slowly Drifting Structures, is rather intermittent. A large group of spikes appears at about 09:20 UT; this coincides, in time, with the rise of the first stage of the SXR and the start of the first type III group. Three types of Narrowband Structures were recorded: - [Narrow Band Spikes are very short ($\approx 0.1 s$) narrowband ($\approx 50 MHz$) bursts which usually appear in dense clusters. An example of such a cluster appears in figure \[05117\_01FS\]. The models proposed for the spike interpretation are based either on the loss-cone instability of trapped electrons producing electron cyclotron maser emission or on upper-hybrid and Bernstein modes. An open question remains whether or not spikes are signatures of particle acceleration episodes at a highly fragmented energy release flare site. ]{} - [Narrow Band Type III Bursts: Short ($\approx 1 s$) narrowband ($<200 MHz$) fast drifting ($>100 MHz/s$) bursts. A number of bursts of this type, on the SAO high-resolution dynamic spectra, exhibit a frequency-drift turnover: as they drift towards lower frequencies, after reaching a minimum frequency (*turnover frequency*) they reverse direction towards higher frequencies, appearing as inverted U on the dynamic spectra. These we have marked as narrowband type III(U) on figures \[FS1\], \[FS1B\] & \[FS1C\].
Similar spectra (III(U), III(N)) were obtained in the microwave range by @Fu04.]{} - [Narrow Band Slowly Drifting Bursts: They are similar to Narrow Band Type III Bursts but with a drift rate $< 100 MHz/s$.]{} Summary and Discussion ====================== The ARTEMIS-IV radio-spectrograph, operating in the range of 650-20 MHz, observed a number of complex events during the super-active period 14–20 January 2005; the event on January 17 was characterized by an extended, broadband type IV continuum with rich fine structure. We have examined the morphological characteristics of fine structure elements embedded in the continuum; it almost matches the comprehensive Ondrejov Catalogue [@Jiricka01; @Jiricka02]. The latter, although it refers to the spectral range 0.8–2 GHz, appears to describe fine structure similar to that of the metric range. The high resolution (100 samples/sec) SAO recordings facilitated the spectral study of the fine structures and permitted the recognition and classification of the type III(U) & III(J) subcategory of the Narrow Band Type III Bursts in the metric frequency range; similar structures have been reported in the microwaves [@Fu04]. The pulsating structures and fibers, although they cover the full observation interval, appear enhanced during the SXR rise phase and the two CME lift-offs, where the major magnetic restructuring takes place. The narrowband structures, on the other hand, are evenly distributed over the above-mentioned interval; this indicates that small electron populations are accelerated even after the flare impulsive phase.
Two types of fine structures from the Ondrejov Catalogue were not detected in our recordings: - [Continua: As the long duration pulsations accompanied by fibers were prevalent in the SAO spectra, any possible appearance of Continua was probably suppressed within the pulsating background.]{} - [Lace Pattern: It is a new type of fine structure first reported by @Jiricka01; it is characterized by rapid frequency variations, both positive and negative. It is a very rare structure, with only nine reported in the Ondrejov catalog out of a total of 989 structures.]{} , K., [Preka-Papadema]{}, P., [Hillaris]{}, A., [Moussas]{}, X., [Caroubalos]{}, C., [Petoussis]{}, V., [Tsitsipis]{}, P., [Kontogeorgos]{}, A., [Radio Bursts in the Active Period January 2005]{}. In: [Solomos]{}, N. (Ed.), Recent Advances in Astronomy and Astrophysics. Vol. 848 of American Institute of Physics Conference Series. pp. 213–217, 2006. , C., [Alissandrakis]{}, C. E., [Hillaris]{}, A., [Nindos]{}, A., [Tsitsipis]{}, P., [Moussas]{}, X., [Bougeret]{}, J.-L., [Bouratzis]{}, K., [Dumas]{}, G., [Kanellakis]{}, G., [Kontogeorgos]{}, A., [Maroulis]{}, D., [Patavalis]{}, N., [Perche]{}, C., [Polygiannakis]{}, J., [Preka-Papadema]{}, P., [ARTEMIS IV Radio Observations of the 14 July 2000 Large Solar Event]{}. 204, 165–177, 2001. , C., [Maroulis]{}, D., [Patavalis]{}, N., [Bougeret]{}, J.-L., [Dumas]{}, G., [Perche]{}, C., [Alissandrakis]{}, C., [Hillaris]{}, A., [Moussas]{}, X., [Preka-Papadema]{}, P., [Kontogeorgos]{}, A., [Tsitsipis]{}, P., [Kanelakis]{}, G., [The New Multichannel Radiospectrograph ARTEMIS-IV/HECATE, of the University of Athens]{}. Experimental Astronomy 11, 23–32, 2001. , C., [Hillaris]{}, A., [Bouratzis]{}, C., [Alissandrakis]{}, C.
E., [Preka-Papadema]{}, P., [Polygiannakis]{}, J., [Tsitsipis]{}, P., [Kontogeorgos]{}, A., [Moussas]{}, X., [Bougeret]{}, J.-L., [Dumas]{}, G., [Perche]{}, C., [Solar type II and type IV radio bursts observed during 1998-2000 with the ARTEMIS-IV radiospectrograph]{}. 413, 1125–1133, 2004. , C., [Alissandrakis]{}, C. E., [Hillaris]{}, A., [Preka-Papadema]{}, P., [Polygiannakis]{}, J., [Moussas]{}, X., [Tsitsipis]{}, P., [Kontogeorgos]{}, A., [Petoussis]{}, V., [Bouratzis]{}, C., [Bougeret]{}, J.-L., [Dumas]{}, G., [Nindos]{}, A., [Ten Years of the Solar Radiospectrograph ARTEMIS-IV]{}. In: [Solomos]{}, N. (Ed.), Recent Advances in Astronomy and Astrophysics. Vol. 848 of American Institute of Physics Conference Series. pp. 864–873, 2006. , G. P., [Sych]{}, R. A., [Yan]{}, Y., [Fu]{}, Q., [Tan]{}, C., [Huang]{}, G., [Wang]{}, D.-Y., [Wu]{}, H., [Multi-Site Spectrographic and Heliographic Observations of Radio Fine Structure on April 10, 2001]{}. 237, 397–418, 2006. , Q.-J., [Yan]{}, Y.-H., [Liu]{}, Y.-Y., [Wang]{}, M., [Wang]{}, S.-J., [A New Catalogue of Fine Structures Superimposed on Solar Microwave Bursts]{}. Chinese Journal of Astronomy and Astrophysics 4, 176–188, 2004. , K., [Karlick[ý]{}]{}, M., [M[é]{}sz[á]{}rosov[á]{}]{}, H., [Sn[í]{}[ž]{}ek]{}, V., [Global statistics of 0.8-2.0 GHz radio bursts and fine structures observed during 1992-2000 by the Ond[ř]{}ejov radiospectrograph]{}. 375, 243–250, 2001. , K., [Karlick[ý]{}]{}, M., [M[é]{}sz[á]{}rosov[á]{}]{}, H., [Occurrences of different types of 0.8-2.0 GHz solar radio bursts and fine structures during the solar cycle]{}. In: [Sawaya-Lacoste]{}, H. (Ed.), Solspa 2001, Proceedings of the Second Solar Cycle and Space Weather Euroconference. Vol. 477 of ESA Special Publication. pp. 351–354, 2002. , M., [F[á]{}rn[í]{}k]{}, F., [M[é]{}sz[á]{}rosov[á]{}]{}, H., [High-frequency slowly drifting structures in solar flares]{}. 395, 677–683, 2002. , J. I., [Vilmer]{}, N., [Saint-Hilaire]{}, P., [Benz]{}, A. 
O., [The solar coronal origin of a slowly drifting decimetric-metric pulsation structure]{}. 388, 363–372, 2002. , B., [Karlick[ý]{}]{}, M., [Benz]{}, A. O., [Solar flare radio pulsations as a signature of dynamic magnetic reconnection]{}. 360, 715–728, 2000. , A., [Tsitsipis]{}, P., [Caroubalos]{}, C., [Moussas]{}, X., [Preka-Papadema]{}, P., [Hilaris]{}, A., [Petoussis]{}, V., [Bouratzis]{}, C., [Bougeret]{}, J.-L., [Alissandrakis]{}, C. E., [Dumas]{}, G., [The improved ARTEMIS IV multichannel solar radio spectrograph of the University of Athens]{}. Experimental Astronomy 21, 41–55, 2006. , A., [Tsitsipis]{}, P., [Moussas]{}, X., [Preka-Papadema]{}, G., [Hillaris]{}, A., [Caroubalos]{}, C., [Alissandrakis]{}, C., [Bougeret]{}, J.-L., [Dumas]{}, G., [Observing the Sun at 20 650 MHz at Thermopylae with Artemis]{}. 122, 169–179, 2006. , A., [Tsitsipis]{}, P., [Caroubalos]{}, C., [Moussas]{}, X., [Preka-Papadema]{}, P., [Hilaris]{}, A., [Petoussis]{}, V., [Bougeret]{}, J.-L., [Alissandrakis]{}, C. E., [Dumas]{}, G., [Measuring solar radio bursts in 20–650 MHz ]{}. Measurement 41, 251–258, 2008. , A. A., [Tsap]{}, Y. T., [Double plasma resonance and fine spectral structure of solar radio bursts]{}. Advances in Space Research 39, 1432–1438, 2007. , G., [Jansen]{}, F., [MacDowall]{}, R. J., [Kaiser]{}, M. L., [Stone]{}, R. G., [A heliospheric density model and type III radio bursts]{}. 348, 614–620, 1999. , H., [Ryb[á]{}k]{}, J., [Zlobec]{}, P., [Magdaleni[ć]{}]{}, J., [Karlick[ý]{}]{}, M., [Ji[ř]{}i[č]{}ka]{}, K., [Statistical Analysis of Pulsations and Pulsations with Fibers in the Range 800-2000 MHZ]{}. In: The Dynamic Sun: Challenges for Theory and Observations. Vol. 600 of ESA Special Publication. pp. 133–136, 2005. , A., [Aurass]{}, H., [Pulsating Solar Radio Emission]{}. In: [Klein]{}, K.-L., [MacKinnon]{}, A. L. (Eds.), Lecture Notes in Physics, Berlin Springer Verlag. Vol. 725 of Lecture Notes in Physics, Berlin Springer Verlag. pp. 251–277, 2007. 
, V., [Tsitsipis]{}, P., [Kontogeorgos]{}, A., [Moussas]{}, X., [Preka-Papadema]{}, P., [Hillaris]{}, A., [Caroubalos]{}, C., [Alissandrakis]{}, C. E., [Bougeret]{}, J.-L., [Dumas]{}, G., [Type II and IV radio bursts in the active period October-November 2003]{}. In: [Solomos]{}, N. (Ed.), Recent Advances in Astronomy and Astrophysics. Vol. 848 of American Institute of Physics Conference Series. pp. 199–206, 2006. , S., [Gopalswamy]{}, N., [Michalek]{}, G., [St. Cyr]{}, O. C., [Plunkett]{}, S. P., [Rich]{}, N. B., [Howard]{}, R. A., [A catalog of white light coronal mass ejections observed by the SOHO spacecraft]{}. 109 (18), 7105, 2004. [^1]: Appareil de Routine pour le Traitement et l’ Enregistrement Magnetique de l’ Information Spectral [^2]: http//www.sel.noaa.gov/ftpmenu/indices [^3]: http://cdaw.gsfc.nasa.gov/CME list
--- abstract: 'We use Renewal Theory for the estimation and interpretation of the flare rate from the *Geostationary Operational Environmental Satellite* (GOES) soft [$X$-ray ]{}flare catalogue. It is found that, in addition to the flare rate variability with the solar cycles, a much faster variation occurs. The fast variation, on time scales of days and hours down to the minute scale, appears to be comparable with the time intervals between two successive flares (waiting times). The detected fast non-stationarity of the flaring rate is discussed in the framework of the previously published stochastic models of the waiting time dynamics.' author: - 'A. $^{a}$' - 'M. $^{b,c}$' title: Solar Flare Occurrence Rate and Waiting Time Statistics --- INTRODUCTION {#intro} ============ The phenomena related to energy transformation and release on the Sun are arguably of the highest importance for modern solar astrophysics. Solar flaring, as a steady process of energy release, plays a central role and has been drawing the attention of the scientific community for decades. In this work, we focus on the statistical properties of the so-called solar flare *waiting times* (wt), *i.e.* the intervals between two flares close in time. The data are provided by the *Geostationary Operational Environmental Satellites* (GOES) in the soft [$X$-ray ]{}band. This catalogue is chosen as the longest available record of uninterrupted observations. The [H$\alpha$ ]{}flare records from the NGDC-NOAA catalogue$\footnote{http://www.ngdc.noaa.gov/stp/solar/solarflares.html}$ are shorter, and are used to emphasise the invariance of the reported results. Flare waiting time statistics have been extensively debated in the literature. fitted the waiting time probability density function (pdf)[^1] with a power-law in the range between $6$ and $67$ hours. This estimate was implemented using a single record over $20$ years.
considered the variation of the mean flaring rate with the solar cycle, and proposed the model of the time-dependent Poisson process (see also ). In turn, this model gives a power-law-like [pdf ]{}only in the limit [@Wheat-final]. However, argued for a local departure of the [wt ]{}series from the Poisson process, *i.e.* “memory” had been detected in the data. We apply a statistical description alternative to the methods used in the articles cited above; it is equally unambiguous in describing the stochastic processes. The flare rate is estimated explicitly from the waiting time pdf. Such an approach has the significant advantage of linking the waiting time of the flare to its rate, which is the instantaneous probability of a flare per unit of time. The paper is organised as follows. In Section \[math\] the rather simple mathematics used in the paper is summarised; it is included for the paper’s self-consistency, to avoid referring the reader to specific literature. The main results are presented in Section \[data\_analysis\], and interpreted in Section \[discussion\]. The conclusion and final remarks are presented in Section \[conclusion\]. THEORETICAL BACKGROUND {#math} ====================== We adopted the terminology of Renewal Theory (*e.g.* ) to set a mathematical framework and for an intuitive and easy further interpretation. We focus on two objects of study: random events and the time interval between near-in-time events, *i.e.*, *waiting times*. By analogy with Renewal Theory, the *failure* of some abstract device is taken as the elementary random *event*, which we identify with the *flare*. Then the *flare waiting time* is associated with a random non-negative variable $X$, called the *failure time*[^2], being the interval between adjacent[^3] *failures*, that is, between two flares.
The random variable $X$ is characterised by a *probability density function* $f(x)$ $$\label{eq:pdf}f(x)= \lim_{\Delta x\rightarrow0} \frac{Prob (x\leq X \leq x+\Delta x)}{\Delta x},$$ with $\int_{0}^{\infty} f(x) dx = 1.$ The probability that a flare *has* occurred (a device has failed) by time $x$ is given by the *cumulative distribution function* $F(x)$: $$\label{eq:cum}F(x)=Prob(X \le x)=\int_{0}^{x} f(u) du.$$ The probability that a flare *has not* occurred (a device has not failed) up to time $x$ is given by the *survivor function* $\mathcal{F}(x)$: $$\mathcal{F}(x) = Prob(X > x) = 1-F(x) = \int_{x}^{\infty} f(u)\, du.$$ \[eq:sur\] The probability of immediate failure of a device (occurrence of a flare event) known to be of age $x$ (no flare during $x$) is given by the *age-specific failure rate* $h(x)$[^4]. Consider a device known not to have failed at time $x$ and let $h(x)$ be the limit of the ratio to $\Delta x$ of the probability of failure in the time interval $(x, x +\Delta x]$: $$\label{eq:h}h(x)= \lim_{\Delta x\rightarrow 0} \frac{Prob(x<X \leq x+\Delta x|x<X)}{\Delta x}.$$ The latter permits further transformation according to the definition of the conditional probability for two events $a, b,$ $$\label{eq:cond}Prob(a|b)=\frac{Prob(a \mbox{ and } b)}{Prob(b)}.$$ The event $$"x<X \leq x+\Delta x \mbox{ \textit{and} } x<X"$$ is essentially the same as the event $$"x<X \leq x+\Delta x"\:;$$ then [Equation (\[eq:h\])]{} reads $$h(x)=\lim_{\Delta x\rightarrow0} \frac{Prob(x<X \leq x+\Delta x)}{\Delta x}\,\frac{1}{Prob(x<X)}=\frac{f(x)}{\mathcal{F}(x)},$$ \[eq:h1\] and the following form is used for the computations in the paper: $$\label{main_eq}h(x)=\frac{f(x)}{1-F(x)}=\frac{f(x)}{1-\int_{0}^{x} f(u) du}.$$ The stochastic *Poisson process* has the exponential waiting time [pdf ]{}$$\label{exp}f(x)=\lambda e^{-\lambda x},$$ where $\lambda=\langle x\rangle^{-1}$ is the *constant* failure rate
$$\label{hexp}h_p(x)= \frac{\lambda e^{-\lambda x}}{\int_{x}^{\infty} \lambda e^{-\lambda u}du}= \lambda.$$ Conversely, if the stochastic process has a constant $h$, it is a Poisson process. The constancy of $h_p(x)$ reveals the “no-memory” property of the Poisson stochastic process. DATA ANALYSIS {#data_analysis} ============= The GOES flare catalogues are analysed with particular interest in the interval between consecutive flares. The GOES observations provide the longest record, and the complementary H$\alpha$ data allow one to consider both [$X$-ray ]{}and [H$\alpha$ ]{}flares. A catalogued event is recorded by specifying its starting time, peak time and ending time. Thus, the waiting time is not unambiguously defined. We consider two definitions: the *Peak-to-Peak* (PtP) waiting time, the interval between the times of maximum flux rate of two near-in-time flares, and the *End-to-Start* (EtS) waiting time, the interval between the end time of the predecessor and the starting time of the current event. The latter definition is intuitive, but might be of questionable applicability from the point of view of the statistical methods that will be used (see Section \[discussion\]). However, it is used in the article to demonstrate the influence of the definition of the [wt ]{}on its statistical properties. Initially, the catalogue should be preprocessed to avoid errors due to data gaps. Events neighbouring every catalogue item marked with data-gap flags (notably `D`, `E`) were ignored; thus, after filtering, we have a set of $52181$ [$X$-ray ]{}events (including GOES $B$-class events) dating from September $1975$ until December $2008$. The [H$\alpha$ ]{}event record is shorter, *viz.* $11124$ events, and it is used only for numerical comparison in the following. The calculated [*PtP *]{}waiting times from the GOES catalogue are shown in [Figure \[fig.1\]]{}.
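The two waiting-time definitions just given can be made concrete with a short sketch; the three-flare record below is hypothetical and only illustrates the bookkeeping:

```python
def waiting_times(events, kind="PtP"):
    """Compute waiting times from a sorted flare list.

    events : list of (start, peak, end) times in hours.
    kind   : "PtP" (peak-to-peak) or "EtS" (end of the predecessor
             to the start of the current event).
    """
    wts = []
    for prev, cur in zip(events, events[1:]):
        if kind == "PtP":
            wts.append(cur[1] - prev[1])   # peak minus previous peak
        else:
            wts.append(cur[0] - prev[2])   # start minus previous end
    return wts

# Hypothetical three-flare record: (start, peak, end) in hours.
flares = [(0.0, 0.5, 1.0), (4.0, 4.2, 5.0), (9.0, 9.8, 10.5)]
ptp = waiting_times(flares, "PtP")  # [3.7, 5.6] up to float rounding
ets = waiting_times(flares, "EtS")  # [3.0, 4.0]
```

Note that EtS subtracts the flare duration from the baseline, so it is systematically shorter than PtP for the same pair of events.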
The correlations between the monthly averaged sunspot number and the monthly averaged [wt ]{}in both Soft X-rays and H$\alpha$ are evident and intuitive. During the solar minimum years the flare rate is lower, leading to a prevalence of longer [wt]{}; conversely, during the solar maximum years the rate is higher and the waiting times are thus shorter on average. This substantial variation leads to a separate consideration of the solar cycle phases according to the corresponding traces in the calculated [wt ]{}series. To separate the phases, we consider the time derivative of the monthly average sunspot number $\dot{N}= d N(t)/dt$. The qualitative agreement (shown by grey lines in [Figure \[fig.1\]]{}) in the fluctuations of [wt ]{}and $N(t)$ is attained empirically, by considering intervals with limited fluctuations of $\dot{N}$ around the corresponding zeros: less than $25\%$ for the minima phases and less than $50\%$ for the maxima. Figures \[fig.2\]-\[fig.4\] show the [pdf ]{}of waiting times for the joint solar cycle phases, the solar minimum and the maximum, respectively, by means of the two [wt ]{}definitions mentioned above. The solid line in each panel stands for the reference exponential distribution $\lambda e^{-\lambda x}$, with the average $\lambda^{-1}$ estimated from the series whose [pdf ]{}is shown by the scatter plot. To some extent, similarity with the pure exponential is attained in a single case only (the bottom panel in [Figure \[fig.4\]]{}). The density shapes are similar within a given [wt ]{}definition. Remarkably, the variation in the [wt ]{}definition affects the most probable events: a bell-like [pdf ]{}versus a plateau-like one over almost $1.5$ decades. The plots in [Figure \[fig.2\]]{} should be interpreted as power-laws according to . However, the power-law fit has to be considered with special care, as the findings illustrated in the following challenge the straightforward applicability of the power-law fit.
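The phase separation described above can be sketched as follows. This is a simplified illustration, not the authors' exact procedure: the synthetic cycle, the derivative-based threshold and the labelling rule are illustrative assumptions.

```python
import numpy as np

def phase_flags(monthly_n, frac=0.25):
    """Crude solar-cycle phase flags from monthly sunspot numbers.

    Months where the derivative of N(t) stays close to zero (within
    `frac` of its RMS) are candidate extrema; they are labelled 'min'
    or 'max' by comparing N with its median.  The 25%/50% thresholds
    quoted in the text are empirical choices of the same kind.
    """
    n = np.asarray(monthly_n, dtype=float)
    dn = np.gradient(n)                              # dN/dt, monthly spacing
    quiet = np.abs(dn) < frac * np.sqrt(np.mean(dn ** 2))
    med = np.median(n)
    labels = np.full(n.shape, "rise/decay", dtype=object)
    labels[quiet & (n < med)] = "min"
    labels[quiet & (n >= med)] = "max"
    return labels

# Synthetic 11-year-like cycle sampled monthly (two full cycles).
t = np.arange(264)
n = 100.0 * (1.0 - np.cos(2.0 * np.pi * t / 132.0)) / 2.0
labels = phase_flags(n)
```

On the synthetic series the flat tops and bottoms of the cycle are picked out as 'max' and 'min', while the steep flanks remain unlabelled, which is the qualitative behaviour needed to split the catalogue by phase.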
Flaring rate estimation ----------------------- had ignored flares whose peak [$X$-ray ]{}flux is less than $10^{-6}$ Wm$^{-2}$, due to the substantial variation of the soft [$X$-ray ]{}background with the solar cycles. Thus GOES $B$-class events ($B$-flares for short) had not been considered. Following , additional sets of data without $B$-flares were generated. In the following, the effect of this removal is considered systematically. We estimate the flaring rate $h$ explicitly from the [pdf ]{}according to [Equation (\[main\_eq\])]{}. The result for the joint solar cycle phases is reported in [Figure \[fig.5\]]{}; in [Figure \[fig.55\]]{} the joint phases with $B$-flares excluded are shown for comparison, and the phase-wise estimated rates are shown in [Figure \[fig.6\]]{}. The effect of the [wt ]{}definition on the curvature of the estimated function $h$ is shown for the entire dataset. In [Figure \[fig.5\]]{}, the solid lines represent a smoothing $10$-point adjacent running average, to emphasise the character of the functions. [Figure \[fig.55\]]{} is complementary to the previous one, and demonstrates the alteration of the smoothed rates when the $B$-flares are removed. The smoothed rates are shown by the scatter plot. In [Figure \[fig.6\]]{}, the estimated rates are shown for the different solar cycle phases separately. The estimates are smoothed by a $15$-point adjacent running average. The phases are coded by the colour of the scatter plot; the [wt ]{}definition by the symbol shapes; the cases without $B$-flares are shown by lines.
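A minimal numerical version of this estimate, following [Equation (\[main\_eq\])]{}, might look as follows. The histogram-based estimator and its parameters are illustrative choices, and the sanity check uses synthetic exponential waiting times, for which the estimated rate must come out flat:

```python
import numpy as np

def hazard_rate(wt, bins=50):
    """Estimate the failure (flaring) rate h(x) = f(x) / (1 - F(x))
    from a sample of waiting times, via a histogram pdf estimate.

    Returns bin centres and the estimated h(x); bins where the
    survivor function is ~0 are dropped to avoid the blow-up of
    the denominator in the far tail.
    """
    wt = np.asarray(wt, dtype=float)
    f, edges = np.histogram(wt, bins=bins, density=True)
    widths = np.diff(edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    # Survivor function 1 - F evaluated (approximately) at bin centres:
    # subtract the cumulative mass up to the right edge, add back half a bin.
    surv = 1.0 - np.cumsum(f * widths) + 0.5 * f * widths
    ok = surv > 1e-3
    return centres[ok], f[ok] / surv[ok]

# Sanity check: exponential waiting times (mean 4 h) must give a
# roughly constant h(x) ~ lambda = 0.25 per hour.
rng = np.random.default_rng(1)
x, h = hazard_rate(rng.exponential(scale=4.0, size=200_000))
```

The same estimator applied to heavy-tailed waiting times produces a decaying $h(x)$, which is the diagnostic exploited in the text.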
The estimated flare rate allows one to reconsider the applicability of the power-law fit for the waiting time [pdf ]{}by trivial algebra: the simplest functional form for the power-law fit is given by $$\label{pw}g(x)=Ax^{-\alpha},$$ where $A$ and $\alpha$ are the estimated parameters; due to [Equation (\[main\_eq\])]{}, the failure rate corresponding to the power-law [pdf ]{}$g(x)$ reads $$\label{ppdf}h_g(x)= \frac{Ax^{-\alpha}}{A\int_{x}^{\infty} u^{-\alpha}\,du}=\frac{(\alpha-1)\,x^{-\alpha}}{x^{-\alpha+1}}=(\alpha-1)\,x^{-1}\propto x^{-1},$$ with $\alpha>1$. To verify this relation for the estimated rates, we fit the smoothed[^5] rate $h(x)$ by a function of the form of [Equation (\[pw\])]{}; the estimated exponent $\gamma$ is compared with $-1$. Tables \[Table-ptp\] and \[Table-ets\] report the average [wt ]{}and the exponent $\gamma$ for all datasets analysed in this work.

  Catalogue        Phase     $<x>^{a}$ $[h]$   $<x>^{b}$ $[h]$   $\gamma^{a}$       $\gamma^{b}$
  ---------------- --------- ----------------- ----------------- ------------------ ---------------------
  [$X$-ray ]{}     *joint*   $4.48$            $5.89$            $-0.86\pm0.006$    $-0.91\pm0.005^{c}$
                   *max*     $3.14$            $3.27$            $-0.65\pm0.008$    $-0.73\pm0.007^{c}$
                   *min*     $9.58$            $28.99$           $-0.61\pm0.01$     $-0.67\pm0.01$
  [H$\alpha$ ]{}   *joint*   $6.00$            $9.87$            $-0.54\pm0.006$    $-0.53\pm0.008$
                   *max*     $4.58$            $7.01$            $-0.46\pm0.01$     $-0.43\pm0.01$
                   *min*     $11.15$           $50.14$           $-^{d}$            $-^{d}$

  : Numerical characteristics of the datasets generated according to the [*PtP *]{}flare waiting time definition.[]{data-label="Table-ptp"} 1. events including GOES $B$-class flares. 2. events excluding GOES $B$-class flares. 3. the power-law region is remarkably present. 4. insufficient record length to produce a reliable result.
  Catalogue        Phase     $<x>^{a}$ $[h]$   $<x>^{b}$ $[h]$   $\gamma^{a}$      $\gamma^{b}$
  ---------------- --------- ----------------- ----------------- ----------------- -----------------------
  [$X$-ray ]{}     *joint*   $4.15$            $5.54$            $-0.82\pm0.005$   $-0.99\pm0.006^{c,d}$
                   *max*     $2.79$            $2.92$            $-0.65\pm0.01$    $-0.78\pm0.009^{c}$
                   *min*     $9.33$            $28.72$           $-0.66\pm0.01$    $-0.56\pm0.007$
  [H$\alpha$ ]{}   *joint*   $5.68$            $9.54$            $-0.50\pm0.005$   $-0.52\pm0.006$
                   *max*     $4.20$            $6.68$            $-0.46\pm0.01$    $-0.45\pm0.01$
                   *min*     $10.94$           $49.75$           $-^{e}$           $-^{e}$

  : Numerical characteristics of the datasets generated according to the [*EtS *]{}flare waiting time definition. Note: the power-law regions in these data appear to be more pronounced and longer.[]{data-label="Table-ets"} 1. events including GOES $B$-class flares. 2. events excluding GOES $B$-class flares. 3. the power-law region is remarkably present. 4. variations of the fit parameters may lead to the $-1$ power-law index. 5. insufficient record length to produce a reliable result. DISCUSSION ========== In this section, the opposing results and interpretations existing in the literature are compared and discussed in the context of the reported findings. We highlight the discrepancies and agreements with those previously published by the cited authors. Uncertainties ------------- Evidently, our results are strongly affected by the completeness and accuracy of the GOES catalogue. In fact, the event detection is done in an automated way, and any change in the detection algorithm would alter the statistics. Furthermore, flare event *obscuration* (for details see ) provides quite strong arguments against the reliability of any such study: the omission of a substantial fraction of events (up to $75\pm23\%$ for events above GOES $C1$-class) might be so crucial that the [pdf ]{}would dramatically change its behaviour. However, our results are comparable with those based on the same datasets. In the case of the solar minima, a longer record is required most of all.
The relatively high probability of longer [wt ]{}([Figure \[fig.3\]]{}) conceals the probability variation along the domain, since the [pdf ]{}is compressed along the ordinate and its tail is too widely spread. A longer series should clarify the [pdf ]{}shape. [Equation (\[main\_eq\])]{} diverges as the cumulative distribution approaches unity at the very tail, $$\label{lim}\lim_{x\rightarrow\infty}F(x)=1\:,$$ and, consequently, the wide tails of the [pdf ]{}are amplified by the non-linear transformation given by [Equation (\[main\_eq\])]{}. Thus, the resulting tails of the estimated rates appear to be even more scattered ([Figure \[fig.5\]]{}). This motivated us to apply a smoothing to the functions obtained, which, in turn, eliminates details and provides a rather qualitative result.

Waiting time definition
-----------------------

A random event[^6] should be defined as a point-like instance in time, *i.e.* the event has no duration. From this point of view, the [*PtP *]{}definition is mathematically more adequate than the [*EtS *]{}waiting time, which is defined by subtracting the flare duration from the base time line. In practice, the flux maximum is easier to detect precisely than the starting/ending times, whose flux values are closer to the background noise level. and used the [wt ]{}defined as the difference between the times of peak flux rates of two near-in-time flares. Later in the [wt ]{}was defined as the difference between the start times. However, the latter definition coincides qualitatively with what we call [*PtP *]{}; we ignore possible numerical discrepancies. The major differences in the estimated rates caused by the [wt ]{}definition appear in the range of short waiting times (Figures $5-7$). In spite of the cut of the widely spread tails, a convergence of the rates at longer waiting times can be identified, regardless of the [wt ]{}definition used.
This is seen where the tails of the flare rates seem to coincide. Such a convergent behaviour is independent of the solar cycle phase, as well as of the $B$-flares exclusion ([Figure \[fig.6\]]{}). However, the $B$-flares exclusion leads to more ambiguous plots, since fewer events are considered in the dataset. The joint consideration of the phases demonstrates this behaviour too ([Figure \[fig.5\]]{}). If we assume that energetically large flares are separated by longer waiting times on average, we can conclude that the relatively long duration of energetic flares appears to be statistically indistinguishable from the long waiting times defined by the point-like events. Thus, the [*EtS *]{}definition tends to mimic the [*PtP *]{}one in the right-hand half of the $x$ domain, where the waiting times are longer.

Waiting time pdf versus flaring rate
------------------------------------

In this work, the [pdf ]{}are presented mostly for illustrative purposes, *i.e.* to demonstrate the degree of divergence between the exponentials, the variation due to the solar cycle phase and the effect of the [wt ]{}definition. We do not consider the [pdf ]{}fit, but study the underlying models by means of the estimated flaring rate. The failure rate formalism described in [Section \[math\]]{} leads to a broader view of the solar flaring and of the physics underlying the waiting time statistics. The function $h(x)$ permits a qualitative description of the dynamics of the stochastic process characterised by the pdf $f(x)$. In fact, the rate $h(x)$ and the pdf $f(x)$ are related to different stochastic processes: by definition, the rate $h(x)$ is the (almost) instantaneous probability of an event[^7] given that no event occurred during the waiting time $x$. In turn, the [pdf ]{}$f(x)$ is the rate of change of the probability of the waiting time $x$.
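The distinction between $h$ and $f$, and the tail amplification mentioned earlier, can be made concrete on a synthetic record. Assuming the definition $h(x)=f(x)/(1-F(x))$ of [Section \[math\]]{}, a memoryless (exponential) sample yields a flat estimated rate in the well-sampled region, while the same estimator grows noisy as $F\rightarrow1$. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
wt = np.sort(rng.exponential(scale=1.0, size=200_000))  # synthetic waiting times, unit rate

# Empirical pdf via a histogram, empirical survival via the ecdf, then h = f/(1-F).
f, edges = np.histogram(wt, bins=np.linspace(0.0, 8.0, 81), density=True)
centers = 0.5*(edges[:-1] + edges[1:])
survival = 1.0 - np.searchsorted(wt, centers)/wt.size
h = f/survival

# Flat (close to 1) at small x; increasingly scattered in the tail, where 1 - F -> 0.
print(np.round(h[:4], 2), np.round(h[-4:], 2))
```

For a genuinely memoryless process the estimate fluctuates around the true constant rate; any systematic trend of $h$ with $x$, as found for the GOES data, therefore signals non-Poisson behaviour rather than estimation artefacts.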
The function $h(x)$, by definition, explicitly combines the (almost) instantaneous probability of the flare with the waiting time that has elapsed before its occurrence, *i.e.* it is a conditional probability. In other words, the rate $h(x)$ gives the probability of the event delimiting the length of $x$; namely, it is related to the process *inducing* the one represented by the series of the waiting times in the catalogue. In the present work, we use the estimated rates $h$ to analyse the flaring dynamics that can be derived from the GOES catalogue, and then we reconsider the cited stochastic models of the waiting time [pdf ]{}fit. In particular, in the piece-wise constant Poisson model the flare rate is a free parameter.

Non-constancy over the waiting time domain
------------------------------------------

The most important feature of the estimated $h$ is its variation over the $x$ domain. The function $h$ is an explicit non-linear function of $x$: $$\label{var}h=h(x)$$ with a bell-like shape in a semi-logarithmic plot. This has far-reaching consequences for the flaring dynamics. First, the flaring rate is by no means stationary: it depends non-linearly on the waiting time on short scales of days, hours and minutes (the fast non-stationarity, for short). Second, the flare rate has a characteristic behaviour: less energetic flares ($B$-flares at large, but not only these) exhibit an *increasing* rate. Next, the rate reaches *maximum* values, and then it rather slowly *declines* (similarly to a power-law, which in some cases manifests itself very notably). Thus, we can point out *characteristic waiting times*, which indicate the time intervals of the most probable flare occurrence. We do not quote numerically accurate values for these times, because of the significant uncertainties. However, one can empirically define a range just by examining [Figure \[fig.55\]]{} and [Figure \[fig.6\]]{}: for joint phases, say 35–45 $min$.
([*EtS *]{}); 50–60 $min.$ ([*PtP *]{}); for the solar minimum, say 20–55 $min.$ ([*EtS *]{}); 50–75 $min.$ ([*PtP *]{}); and for the solar maximum 30–45 $min.$ ([*EtS *]{}); 60–100 $min.$ ([*PtP *]{}). The character of the fast non-stationarity is invariable within the solar cycle phases ([Figure \[fig.6\]]{}), and remains recognisable for smaller data records, when the $B$-flares are excluded (with a word of caution on the reliability of a result based on a smaller number of samples).

Rate change with the solar cycle phase
--------------------------------------

The slow non-stationarity of the flare rate is set by the solar cycles, i.e. it is of “large scale”. The phases have different probabilities of flaring per unit time, as is natural to assume on the basis of the comparative overview in [Figure \[fig.1\]]{}. The most frequent flares, and thus the very short waiting times (corresponding to the smaller $x$), are common features of both phases. They can hide statistical differences between the solar cycle phases: this fact is supported by the match in the increasing regions of the rates corresponding to the disjoint phases. In [Figure \[fig.6\]]{}, the [*PtP *]{}rates are equivalent in the range from $4$ to $20~min.$, regardless of the solar cycle phase considered. The same statement, though somewhat weaker, holds for the [*EtS *]{}rates. Qualitatively, the rate shape is invariable during the solar cycle; however, the variation in the mean [wt ]{}may reach one order of magnitude. This is the case when the $B$-flares are excluded (see Tables \[Table-ptp\] and \[Table-ets\]). Summarising, the flare rate takes the form $$\label{hxt}{h=h(x,t)}$$ with explicit dependence on the waiting time and on the generic time, *i.e.* the time frame when the corresponding observations were made.

GOES $B$-class events
---------------------

We considered separately the case of the excluded $B$-flares. For the joint phases almost $32\%$ of the events are $B$-flares, and $3.4\%$ for the solar maxima.
Qualitatively, the rates retain their shape over the [wt ]{}domain. Nevertheless, the rates without $B$-flares are systematically lower in the joint phases ([Figure \[fig.55\]]{}), with almost negligible variations during the solar maxima (dashed and dash-dotted lines in [Figure \[fig.6\]]{}). Substantial variation occurs during the cycle minima phases, when $80\%$ of the events are $B$-flares. In this case, the rate matching in the shorter, increasing range is very weak for the [*PtP *]{}rate, and does not take place for the [*EtS *]{}rates (solid grey lines in [Figure \[fig.6\]]{}). In fact, the elimination of the frequent events from the dataset is a very significant operation. In particular, the power-law exponents are sensitive to the relative strength of the frequent events. This is caused by the intrinsic divergence of the power-law statistics at the origin, i.e. at the most probable (most frequent) events. In some cases, excluding $B$-flares decreases the reliability of the results because of the small size of the resulting dataset. Nevertheless, it underlines the order-of-magnitude change of the average waiting time, which is the reason for the separate consideration of the solar cycle phases (see Tables \[Table-ptp\] and \[Table-ets\] for numerical estimates).

Power-law fit
-------------

Considering [Equation (\[ppdf\])]{}, we compare the flare rate power-law fit with the value of $-1$. One should notice that the tails of $h(x)$ are quite uncertain, so the indices listed in Tables \[Table-ptp\] and \[Table-ets\] should not be considered steady and exact: a change of the region boundaries chosen for the fit can substantially modify the numerical value of the exponent, and, in a certain sense, the choice may be considered subjective. But we rely on the fact that the modulus of the estimated indices is systematically *less* than $1$. Thus, arguments for fitting the GOES data by [Equation (\[pw\])]{} can, very likely, be rejected.
On the other hand, the jointly considered solar cycle phases in the [$X$-ray ]{}band with flares above $B$-class have revealed a value comparable with $-1$. Noting that this takes place when the average waiting times differ by one order of magnitude, we point out the accordance with the power-law limit [pdf ]{}of the non-stationary exponential random variable, reported by . This argument is missing in , where the joint phases have been fitted confidently by the power-law.

Excess of short events
----------------------

The excess of short waiting times was pointed out by and . It appears to be an intrinsic property of the flaring process, and it is detectable regardless of the instrument and/or the band that is used. The rising character of the rate $h$ at small $x$ appears to be an indicator of the importance of the energetically small events. Short waiting times mostly separate less energetic events. Empirically, this point is supported by the influence of the *B*-flares’ presence on the flare rate. In turn, the increasing probability at *short* [wt ]{}suggests that the true flare occurrence per short time unit is very large, but it is hidden by the limited sensitivity of the instruments and by the obscuration. Certainly, somewhere on these time scales the very sympathetic flaring [@Moon; @Biesecker] would take place, and one can realise that a dependence of the form $h=h(x)$ explicitly introduces a “memory” into the record[^8]. Thus, it is hard to ignore the arguments by concerning the memory effect in the waiting time [pdf ]{}. That is also in accordance with the divergence (*viz.* the overabundance of short flares) of the flare duration [pdf ]{}at the origin, which is commonly accepted to be a power-law. The GOES soft [$X$-ray ]{}flare catalogue gives an overall flaring picture, considering the Sun as a whole.
The beauty and power of the non-stationary Poisson model is in its potential ability to represent the global solar flare dynamics as consisting of rather simple (“memoryless”) mathematical objects, namely exponentials. Presumably, the different active regions (or even smaller regions) may contribute to the global flaring with notably different rates and thus appear in the limit as a power-law-like distribution. However, the individual active regions have very poor statistics (about 100 events at most), and a series of assumptions should be made *a priori* (). In addition, reported a piece-wise constant Poisson fit for an active region, which gives a motivation to consider even smaller flaring areas as an “elementary piece” of the mechanism just speculated on above.

CONCLUSIONS {#conclusion}
===========

The fast variation of the flare rate on time scales from minutes to hours is the highlight among the findings of this work. The large-scale variation with the solar cycle exhibits a complexity that can be coped with by a phase-wise splitting of the data. However, the fast non-stationarity during the waiting time appears to be an intrinsic feature of the flaring dynamics, which requires a further elaboration of the present solar flaring models. It is worth mentioning that the reported results are of a somewhat intermediate character with respect to the works by , and by . The “memory” revealed in the data rejects models with Poisson statistics. From another perspective, a simple power-law fit very likely fails for the records relevant to a specific solar cycle phase. The piecewise-constant Poisson model has inspired us to consider the solar cycle phases separately. We support the argument of this model that the solar cycle phases have quite inhomogeneous statistical properties and are not to be considered jointly. In other words, the separate consideration of the solar cycle phases should be a cornerstone of a realistic modelling of the solar flaring activity.
On the other hand, our findings exclude the presence of a timescale of true rate constancy (at least within the GOES catalogue precision). Even if it exists, it would be very small and may be hard to detect. The application of Renewal Theory emphasises the importance of the short (energetically small) events in the pdf, whose dynamics had not been pointed out in the literature: partially due to the removal of the $B$-class flares from the datasets, and partially due to the systematic fitting of solely the tails of the [pdf ]{}’s. This brings us to problems similar to those that had arisen in the context of coronal heating by nanoflares, where, perhaps, the most significant effects are at the limit of the instrumental noise.

We thank the GOES teams at NOAA and SIDC for data management and availability and Christoph Keller for useful comments and discussions. M.M. acknowledges the support of the Italian Space Agency (ASI) and COST Action ES0803. Mrs. S. Fabrizio (INAF-OATS) is gratefully acknowledged for careful proofreading.

Biesecker, D. A., Thompson, B. J.: 2000, Journal of Atmospheric & Solar-Terrestrial Physics, **62**, 1449.

Boffetta, G., Carbone, V., Giuliani, P., Veltri, P., Vulpiani, A.: 1999, [*Phys. Rev. Lett.*]{} **83**, 4662.

Cox, D. R.: 1962, *Renewal Theory*, Spottiswoode Ballantyne $\&$ Co., Ltd., 142.

Lepreti, F., Carbone, P., Veltri, P.: 2001, [[*Astrophys. J. Lett.*]{}]{}, **555**, L133.

Moon, Y.-J., Choe, G. S., Park, Y. D., Wang, H., Gallagher, P. T., Chae, J., Yun, H. S., Goode, P. R.: 2002, [[*Astrophys. J.*]{}]{}, **574**, 434.

Moon, Y.-J., Choe, G. S., Yun, H. S., Park, Y. D.: 2001, [[*J. Geophys. Res.*]{}]{}, **106**, 29951.

Pearce, G., Rowe, A. K., Yeung, J.: 1993, [[*Astrophys. Space Sci.*]{}]{}, **208**, 99.

Wheatland, M. S.: 2000, [[*Astrophys. J.*]{}]{}, **536**, L109.

Wheatland, M. S.: 2001, [[*Solar Phys.*]{}]{}, **203**, 87. doi: 10.1023/A:1012749706764

Wheatland, M. S., Litvinenko, Y. E.: 2002, [[*Solar Phys.*]{}]{}, **211**, 255.

Wheatland, M.
S., Sturrock, P. A., McTiernan, J. M.: 1998, [[*Astrophys. J.*]{}]{}, **509**, 448.

[^1]: We use the mathematically strict definition of the *density* as the differential quantity with respect to the *distribution*; the latter term is often used, misleadingly, for the former.

[^2]: This time interval is also called the *age* of a device, *i.e.,* its lifetime without any failure.

[^3]: An abstract picture is considered here: the replacement (repair) of a broken device is thought to be instantaneous, so that the time required for the repair is zero. Continuous operation is meant, provided, for instance, by the availability of multiple hardware.

[^4]: This function is also known as the *hazard function*, *hazard rate* or *failure rate*.

[^5]: The smoothing is the same as for the graphs in [Figure \[fig.6\]]{}.

[^6]: Here by “event” we mean the event as defined in Probability Theory.

[^7]: per unit time ($minute$).

[^8]: Recall that memoryless stochastic processes are those with $h=const$, *i.e.* the probability of an event depends neither on the waiting time elapsed before it occurs nor on the preceding event.
---
abstract: 'It was proposed by Cvetic et al [@1] that the product of all horizon areas for general rotating multi-charge black holes has a universal expression independent of the mass. When we consider the product of all horizon entropies, however, the mass will be present in some cases, while another new universal property [@2] is preserved, which is more general and says that the sum of all horizon entropies depends only on the coupling constants of the theory and the topology of the black hole. This property has been studied only in limited dimensions, and its generalization to arbitrary dimensions is not straightforward. In this Letter, we prove a useful formula, which makes it possible to investigate this conjectured universality in arbitrary dimensions for the maximally symmetric black holes in general Lovelock gravity and $f(R)$ gravity. We also propose an approach to compute the entropy sum of general Kerr-(anti-)de-Sitter black holes in arbitrary dimensions. In all these cases, we prove that the entropy sum only depends on the coupling constants and the topology of the black hole.'
---

[The Universal Property of the Entropy Sum of Black Holes in All Dimensions]{}

Yi-Qiang Du [^1] Yu Tian [^2]\
[School of Physics, University of Chinese Academy of Sciences, Beijing 100049, China]{}

Studying black hole entropy has been an attractive subject since the establishment of black hole thermodynamics, but it is still a challenge to explain the black hole entropy at the microscopic level. Recently, the microscopic entropy of extreme rotating solutions has drawn some attention, as well as the detailed microscopic origin of the entropy of non-extremal rotating charged black holes. There has been some promising progress and results [@5; @6].
Further studies of the properties of black hole entropy may give us a deeper understanding of black holes, and the product of all horizon entropies [@1] is an important aspect among them, motivated by the following consideration. When the black hole has only an outer horizon and an inner horizon, the inner event horizon plays an important role in studying the black hole physics [@chen1; @chen2]. For general $4D$ and $5D$ multi-charged rotating black holes, the entropies of the outer and inner horizons are $$\mathcal{S}_{\pm}=2\pi (\sqrt{N_L}\pm\sqrt{N_R}),$$ respectively, with $N_L$, $N_R$ interpreted as the levels of the left-moving and right-moving excitations of a two-dimensional CFT [@a1; @a2; @a3]. So the entropy product $$\mathcal{S}_+\mathcal{S}_-=4\pi^2(N_L-N_R)$$ should be quantized and must be mass-independent, being expressed solely in terms of quantized angular momenta and other charges. When there are more than two horizons, however, the actual physics of the entropy product or the area product of all the horizons is still not obvious. Actually, the authors of Ref. [@1] have studied the product of all (more than two) horizon areas/entropies for a general rotating multi-charged black hole, both in asymptotically flat and asymptotically anti-de Sitter spacetimes in four and higher dimensions, showing that the area product of the black hole does not depend on its mass $M$, but depends only on its charges $Q_i$ and angular momenta $J_i$. Recently, a new work [@4] also studies the entropy product and another entropy relation in the Einstein-Maxwell theory and $f(R)$(-Maxwell) gravity. As is well known, in the Einstein gravity (including the theories studied in Ref. [@1]), the entropy and the horizon area of the black hole are simply related by $\mathcal{S}=\frac{A}{4}$, so the area product is proportional to the entropy product.
However, in (for example) the Gauss-Bonnet gravity, where the horizon area and entropy do not satisfy the relation $\mathcal{S}=\frac{A}{4}$ and the entropy seems to have more physical meaning than the horizon area, the mass will be present in the entropy product (see the next section). In fact, Ref. [@Giribet] has studied the entropy product by introducing a number of possible higher curvature corrections to the gravitational action, showing that the universality of this property fails in general. Recently, it was found by Meng et al [@2] that the sum of all horizon entropies, including those of “virtual” horizons, has the universal property that it depends on the coupling constants of the theory and the topology of the black hole, but does not depend on the mass or the conserved charges such as the angular momenta $J_i$ and charges $Q_i$. This conjectured property has only been discussed in limited dimensions. It is believed that the property of the entropy sum is more general than that of the entropy product. In this Letter, we prove a useful formula that makes it possible for us to investigate the universal property in all dimensions. Based on this formula, we discuss the entropy sum of general maximally symmetric black holes in the Lovelock gravity and $f(R)$ gravity. We also propose a method to calculate the entropy sum of Kerr-(anti-)de-Sitter (Kerr-(A)dS) black holes in the Einstein gravity. In all these cases, we prove that the entropy sum depends only on the coupling constants of the theory and the topology of the black holes. Note that here we just focus on the universal properties, and the actual physics behind them still needs to be further investigated. This Letter is organized as follows. In the next section, we discuss the Gauss-Bonnet case, and then we state the formula and give a brief proof.
In Sections 4 and 5, we use the formula to calculate the entropy sum of (A)dS black holes in the Einstein-Maxwell theory and the Lovelock gravity in all dimensions. In Section 6, we turn to rotating black holes and calculate the entropy sum of the Kerr-(A)dS metrics in arbitrary dimensions. In Section 7, we discuss the $f(R)$ gravity, where the universal property also holds. At last, we give the conclusion and a brief discussion.

(A)dS black holes in the Gauss-Bonnet gravity
=============================================

The action of the Einstein-Gauss-Bonnet-Maxwell theory in $d$ dimensions is $$I=\frac{1}{16\pi G}\int d^dx\sqrt{-g}[R-2\Lambda+\alpha(R_{\mu\nu\kappa\lambda}R^{\mu\nu\kappa\lambda}-4R_{\mu\nu}R^{\mu\nu}+R^2)-F_{\mu\nu}F^{\mu\nu}].$$ Here $G$ is the Newton constant in $d$ dimensions, $\alpha$ is the Gauss-Bonnet coupling constant, and $\Lambda=\pm\frac{(d-1)(d-2)}{2l^2}$ is the cosmological constant. Varying this action with respect to the metric tensor gives the equations of motion, which admit the $d$-dimensional static charged Gauss-Bonnet-(A)dS black hole solution [@162; @163; @7; @8; @9] $$\label{e14} ds^2=-V(r)dt^2+\frac{dr^2}{V(r)}+r^2d\Omega_{d-2}^2,$$ where $d\Omega_{d-2}^2$ represents the line element of a $(d-2)$-dimensional maximally symmetric Einstein space with constant curvature $(d-2)(d-3)k$, and $k=-1, 0$ and $1$, corresponding to the hyperbolic, planar and spherical topology of the black hole horizon, respectively. The function $V(r)$ in the metric is given by $$\begin{aligned} \label{e13} V(r)=k+\frac{r^2}{2\tilde\alpha}(1-\sqrt{1+\frac{64\pi\tilde\alpha M}{(d-2)r^{d-1}}-\frac{2\tilde\alpha Q^2}{(d-2)(d-3)r^{2d-4}}+\frac{8\tilde\alpha\Lambda}{(d-1)(d-2)}}),\end{aligned}$$ where $\tilde\alpha=(d-3)(d-4)\alpha$, and $M$ and $Q$ are the black hole mass and charge, respectively. The horizons of the black hole are located at the roots of $V(r)=0$.
The entropy is $$\mathcal{S}=\frac{\Omega_{d-2}r^{d-2}}{4}(1+\frac{2(d-2)k\tilde\alpha}{(d-4)r^2}),$$ where $\Omega_{d-2}=2\pi^{(d-1)/2}/{\Gamma(\frac{d-1}{2})}$. The area of the horizon is $$A=\frac{\Omega_{d-2}r^{d-2}}{4}.$$ When we consider the five-dimensional charged black hole, according to the function , the equation that determines the horizons is $$\label{e15} 2\Lambda r^6-12kr^4+(64\pi M-12k^2\tilde\alpha)r^2-Q^2=0.$$ Then, we can calculate the product of the areas by using Vieta’s theorem: $$\displaystyle\prod_{i=1}^6A_i=(\frac{\Omega_3}{4})^6\displaystyle\prod_{i=1}^6r_i^3=(\frac{\Omega_3}{4})^6(\frac{-Q^2}{2\Lambda})^3.$$ The result does not include the mass $M$, preserving the property revealed in Ref. [@1]. As we have mentioned in the Introduction, the entropy seems to have more physical meaning than the horizon area in the case where the two are not proportional to each other. In five dimensions, the entropy product has been calculated for $\Lambda=0$ [@Giribet]. Here we give the explicit result with a non-vanishing cosmological constant $\Lambda$. The product of the entropies is $$\displaystyle\prod_{i=1}^6\mathcal{S}_i=(\frac{\Omega_3}{4})^6\displaystyle\prod_{i=1}^6(r_i^3+k\tilde\alpha r_i)=-(\frac{\Omega_3}{4})^6\frac{Q^2}{4\Lambda^2}[Q^2+(64\pi M-12k^2\tilde\alpha)k\tilde\alpha+12k^3\tilde\alpha^2+2\Lambda k^3\tilde\alpha^3]$$ and the result depends on the mass. However, the sum of all entropies, including the non-physical ones, proposed in [@2], seems to behave better: it depends only on the coupling constants of the theory and the topology of the black holes. We find that the Gauss-Bonnet case, which is included in the Lovelock gravity, obeys this property in all dimensions, and we will give the proof later.

A useful formula
================

In this section, we prove a formula which is useful in the following sections.
Consider a polynomial $$a_mr^m+a_{m-1}r^{m-1}+\dots+a_0r^0=0.$$ We denote the roots by $r_i$, $i=1, 2,\dots, m$, and write $s_n=\displaystyle\sum_{i=1}^mr_i^n$; then we have $$\begin{aligned} \label{e5} s_n=\frac{-1}{a_m}\displaystyle\sum_{i=0}^{m-1}s_{n-m+i}a_i,\end{aligned}$$ with $s_{n-m+i}=0$ for $n-m+i<0$ and $s_{n-m+i}=n$ for $n-m+i=0$. The proof is briefly described as follows: $$\begin{aligned} \begin{split} \frac{-1}{a_m}(a_{m-1}s_{n-1}+a_{m-2}s_{n-2})=(r_1^n+\cdots+r_m^n)-\displaystyle\sum_{i=1}^m [r_i^{n-2}(\displaystyle\sum_{0<j_1<j_2<m+1,j_1,j_2\neq i}r_{j_1}r_{j_2})], \end{split}\end{aligned}$$ $$\begin{aligned} \frac{-1}{a_m}(a_{m-1}s_{n-1}+a_{m-2}s_{n-2}+a_{m-3}s_{n-3})=(r_1^n+\cdots+r_m^n)+\displaystyle\sum_{i=1}^m [r_i^{n-3}(\displaystyle\sum_{0<j_1<j_2<j_3<m+1,j_1,j_2,j_3\neq i}r_{j_1}r_{j_2}r_{j_3})].\end{aligned}$$ Continuing the process, if $m\ge n$, $$\begin{aligned} \begin{split} \frac{-1}{a_m}(a_{m-1}s_{n-1}+a_{m-2}s_{n-2}+\cdots+a_{m-n+1}s_{1}) =(r_1^n+\cdots+r_m^n)+(-1)^nn\displaystyle\sum_{0<j_1<\cdots<j_{n}<m+1}r_{j_1}\cdots r_{j_{n}}, \end{split}\end{aligned}$$ so if we set $s_0=n$, then $$\begin{aligned} \begin{split} \frac{-1}{a_m}&(a_{m-1}s_{n-1}+a_{m-2}s_{n-2}+\cdots+a_{m-n+1}s_{1}+a_{m-n}s_{0}) =r_1^n+\cdots+r_m^n. \end{split}\end{aligned}$$ If $m<n$, we continue the process until $a_{m-l}=a_0$, with $1\leq l\leq m$; one then finds that $$\begin{aligned} \frac{-1}{a_m}\displaystyle\sum_{i=0}^{m-1}s_{n-m+i}a_i=r_1^n+\cdots+r_m^n.\end{aligned}$$

(A)dS black holes in the Einstein-Maxwell theory
================================================

The Einstein-Maxwell action in $d$ dimensions is $$I=\frac{1}{16\pi G}\int d^dx\sqrt{-g}[R-F_{\mu\nu}F^{\mu\nu}-2\Lambda].$$ In the maximally symmetric case, solving the equation of motion from the above action gives the RN-(A)dS solution, which is of the form .
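Before proceeding, both the formula and the five-dimensional Vieta computation of the previous sections are easy to verify numerically. A sketch (the quartic and all parameter values are arbitrary illustrations):

```python
import numpy as np

def power_sums(coeffs, n_max):
    # coeffs = [a_m, ..., a_0]; returns [s_1, ..., s_{n_max}] via Eq. (e5),
    # with s_{n-m+i} = 0 for a negative index and the s_0 slot replaced by n.
    m = len(coeffs) - 1
    a = coeffs[::-1]                     # a[i] = a_i
    s = {}
    for n in range(1, n_max + 1):
        total = 0.0
        for i in range(m):
            j = n - m + i
            if j >= 0:
                total += (n if j == 0 else s[j])*a[i]
        s[n] = -total/a[m]
    return [s[n] for n in range(1, n_max + 1)]

coeffs = [1.0, -3.0, 2.0, 5.0, -1.0]     # an arbitrary quartic
direct = [np.sum(np.roots(coeffs)**n).real for n in range(1, 7)]
agree = np.allclose(power_sums(coeffs, 6), direct)

# Vieta check of the 5D Gauss-Bonnet area product: prod r_i^3 = (-Q^2/(2 Lambda))^3,
# independent of M (illustrative values; at stands for alpha-tilde).
Lam, k, Q, at = -1.0, 1.0, 1.5, 0.3
prods = [np.prod(np.roots([2*Lam, 0, -12*k, 0, 64*np.pi*M - 12*k**2*at, 0, -Q**2])**3)
         for M in (2.0, 7.0)]
print(agree, np.allclose(prods, (-Q**2/(2*Lam))**3))
```

The recursion reproduces the power sums obtained directly from the numerical roots, and the area product is indeed the same for the two different masses.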
The horizons are located at the roots of the function $V(r)$ [@120; @121; @122; @12] $$\begin{aligned} \label{e8} V(r)=k-\frac{2M}{r^{d-3}}+\frac{Q^2}{r^{2(d-3)}}-\frac{2\Lambda}{(d-1)(d-2)}r^2.\end{aligned}$$ The entropy of a horizon is given by $$\begin{aligned} \mathcal{S}_i=\frac{A_i}{4}=\frac{\pi^{(d-1)/2}}{2\Gamma(\frac{d-1}{2})}r_i^{d-2}.\end{aligned}$$ In odd dimensions, just as [@2] has shown, the radial metric function is a function of $r^2$ and the entropy $\mathcal{S}_i$ is a function of $r_i$ with odd power. The roots thus come in pairs $r_i$ and $-r_i$, so the entropy sum vanishes, i.e. $\sum_i\mathcal{S}_i=0$. In even dimensions, according to equations and , we have $$\begin{split} s_{d-2}=\displaystyle\sum_{i=1}^{2(d-2)}r_i^{d-2}&=\frac{-a_{2d-6}}{a_{2d-4}}s_{d-4}=\cdots=(\frac{-a_{2d-6}}{a_{2d-4}})^{\frac{d-4}{2}}s_2\nonumber\\ &=2(\frac{-a_{2d-6}}{a_{2d-4}})^{\frac{d-2}{2}}=2(\frac{(d-1)(d-2)k}{2\Lambda})^{(d-2)/2}. \end{split}$$ Then we get $$\sum_i\mathcal{S}_i=\sum_i\frac{A_i}{4}=\frac{\pi^{(d-1)/2}}{\Gamma(\frac{d-1}{2})}(\frac{(d-1)(d-2)k}{2\Lambda})^{(d-2)/2},$$ which depends only on the cosmological constant $\Lambda$ and the horizon topology $k$. To summarize briefly, considering all the horizons, including the un-physical “virtual” ones, we find the general expression of the entropy sum, which depends only on the cosmological constant and the topology of the horizon.

Black holes in the Lovelock gravity
===================================

In this section, we discuss the case of the Lovelock gravity. The action of general Lovelock gravity can be written as [@150; @15] $$I=\int d^dx(\frac{\sqrt{-g}}{16\pi G}\displaystyle\sum_{k=0}^m\alpha_kL_k+\mathcal{L}_{matt})$$ with $\alpha_k$ the coupling constants and $$L_k=2^{-k}\delta_{c_1d_1\cdots c_kd_k}^{a_1b_1\cdots a_kb_k}R^{c_1d_1}_{a_1b_1}\cdots R^{c_kd_k}_{a_kb_k},$$ where $\delta^{ab\cdots cd}_{ef\cdots gh}$ is the generalized delta symbol, which is totally antisymmetric in both sets of indices.
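The even-dimensional Einstein-Maxwell result above is also easy to confirm numerically; e.g. for $d=6$ (illustrative $M$, $Q$ and $\Lambda$), the sum of $r_i^{d-2}$ over all $2(d-2)$ roots of $V(r)=0$ reproduces $2\big(\frac{(d-1)(d-2)k}{2\Lambda}\big)^{(d-2)/2}$:

```python
import numpy as np

# d = 6 RN-(A)dS: multiplying V(r) = 0 by r^{2(d-3)} gives
# -c r^8 + k r^6 - 2 M r^3 + Q^2 = 0,  with c = 2 Lambda/((d-1)(d-2)).
d, k, M, Q, Lam = 6, 1.0, 1.3, 0.7, -1.0
c = 2*Lam/((d - 1)*(d - 2))
r = np.roots([-c, 0, k, 0, 0, -2*M, 0, 0, Q**2])

s = np.sum(r**(d - 2))                 # over all 2(d-2) = 8 "horizons"
target = 2*((d - 1)*(d - 2)*k/(2*Lam))**((d - 2)//2)
print(np.isclose(s.real, target), abs(s.imag) < 1e-8)
```

Changing $M$ or $Q$ leaves the sum unchanged, in line with the claimed universality.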
Keeping only $\alpha_0=-2\Lambda$ and $\alpha_1=1$ nonvanishing, we obtain the Einstein gravity, while keeping $\alpha_2$ nonvanishing as well, we get the Gauss-Bonnet gravity. Varying the above action with respect to the metric tensor and then solving the resultant equation of motion [@160; @161; @162; @163; @164; @17; @18; @19] by assuming that the metric has the form , one finds that the function $V(r)$ is determined by $$\frac{d-2}{16\pi}\Omega_{d-2}r^{d-1}\displaystyle\sum_{k=0}^N\tilde\alpha_k(\frac{1-V(r)}{r^2})^k-M+\frac{Q^2(d-2)\Omega_{d-2}}{16\pi r^{d-3}}=0,$$ where $$N=[\frac{d}{2}],\quad\tilde\alpha_0=\frac{\alpha_0}{(d-1)(d-2)},\quad\tilde\alpha_1=\alpha_1, \quad\tilde\alpha_{k>1}=\alpha_k\displaystyle\prod_{j=3}^{2k}(d-j).$$ This is a polynomial equation for $V(r)$ of arbitrary degree $N$, so generically there is no explicit form of the solutions. However, setting $V(r)=0$ in the above equation, we find that the horizons of the black holes are located at the roots of the following equation: $$\label{e9} \frac{d-2}{16\pi}\Omega_{d-2}r^{2d-4}\displaystyle\sum_{k=0}^N\tilde\alpha_k(\frac{1}{r^2})^k-Mr^{d-3}+\frac{Q^2(d-2)\Omega_{d-2}}{16\pi}=0.$$ The entropy of a horizon is given by $$\label{e10} \mathcal{S}=\frac{d-2}{4}\Omega_{d-2}r^{d-2}\displaystyle\sum_{k=1}^N\frac{\tilde\alpha_kk}{d-2k}(\frac{1}{r^2})^{k-1}.$$ In odd dimensions, $\sum_i\mathcal{S}_i=0$ for the same reason as before. In even dimensions, according to and , when we calculate $\displaystyle\sum_{j=1}^{2d-4}r_j^{d-2}$, $$s_{d-2}=\displaystyle\sum_{j=1}^{2d-4}r_j^{d-2}=\frac{-a_{2d-5}}{a_{2d-4}}s_{d-3}+\cdots+\frac{-a_{d-2}}{a_{2d-4}}s_0,$$ we only use the coefficients of the powers of $r$ not smaller than $d-2$, so the mass $M$ and the charge $Q$ will not be present, for they belong to the coefficients $a_{d-3}$ and $a_0$, respectively.
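The statement that only the coefficients of the powers of $r$ not smaller than $d-2$ enter $s_{d-2}$ is a general fact about power sums: $s_n$ depends only on $a_m,\dots,a_{m-n}$. A quick numerical illustration (a random monic degree-$10$ polynomial; perturbing the four lowest coefficients, where $M$ and $Q$ would sit, leaves $s_6$ unchanged):

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.r_[1.0, rng.normal(size=10)]    # monic degree 10: [a_10, ..., a_0]
q = p.copy()
q[7:] += rng.normal(size=4)            # perturb only a_3, a_2, a_1, a_0

s_p = np.sum(np.roots(p)**6)           # s_6 uses only a_10, ..., a_4
s_q = np.sum(np.roots(q)**6)
print(np.isclose(s_p, s_q))
```

This is why the mass and the charge drop out of the Lovelock entropy sum even though they appear in the horizon polynomial.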
When we calculate the sum of the entropies , the highest power of the roots that appears is $\displaystyle\sum_{j=1}^{2d-4}r_j^{d-2}$, and the mass $M$ and the charge $Q$ likewise disappear from the sums of the lower powers of the roots, according to . This shows that the sum of the entropies is independent of the mass and charge, and depends only on the coupling constants of the theory and the topology of the horizon.

Kerr-(anti-)de-Sitter black holes
=================================

Thus far we have only considered the maximally symmetric black holes. It is of great interest to investigate the entropy sum of rotating black holes, albeit in the Einstein gravity. In this section, we discuss the sum of the entropies for the Kerr-de Sitter metrics in all dimensions [@200; @201; @20; @21; @22]. It is necessary to deal with the cases of odd and even dimensions separately.

odd dimensions
--------------

In odd spacetime dimensions, $d=2n+1$, the equation that determines the horizons can be written as $$\label{e3} \frac{1}{r^2}(1-\Lambda r^2)\displaystyle\prod_{i=1}^n(r^2+a_i^2)-2M=0,$$ where $\Lambda$ is the cosmological constant. The area of a horizon is given by $$\label{e4} A_j=\frac{\mathcal{A}_{2n-1}}{r_j}\displaystyle\prod_{i=1}^n\frac{r^2_j+a_i^2}{1+\Lambda a_i^2},$$ where $$\mathcal{A}_m=\frac{2\pi^{(m+1)/2}}{\Gamma[(m+1)/2]}.$$ The entropy is $\mathcal{S}_i=\frac{A_i}{4}$. The sum of the areas can be divided into two parts: $$\displaystyle\sum_{j=1}^{2n+2}[A_j-\frac{\mathcal{A}_{2n-1}}{r_j}\displaystyle\prod_{i=1}^{n}\frac{a_i^2}{1+\Lambda a_i^2}] ~~\mbox{and}~~\displaystyle\sum_{j=1}^{2n+2}[\frac{\mathcal{A}_{2n-1}}{r_j}\displaystyle\prod_{i=1}^{n}\frac{a_i^2}{1+\Lambda a_i^2}].$$ The first part is an odd function of $r$. The horizon function is a function of $r^2$, so the roots $r_i$ and $-r_i$ come in pairs and the first part vanishes.
For the second part, $$\displaystyle\sum_{j=1}^{2n+2}[\frac{\mathcal{A}_{2n-1}}{r_j}\displaystyle\prod_{i=1}^{n}\frac{a_i^2}{1+\Lambda a_i^2}]=\mathcal{A}_{2n-1}\displaystyle\prod_{i=1}^{n}\frac{a_i^2}{1+\Lambda a_i^2}\frac{\displaystyle\sum_{0<i_1<i_2<\dots<i_{2n+1}<2n+3}r_{i_1}r_{i_2}\dots r_{i_{2n+1}}}{r_1r_2\dots r_{2n+2}},$$ and this also vanishes, because by Vieta’s theorem the sum $\displaystyle\sum_{0<i_1<i_2<\dots<i_{2n+1}<2n+3}r_{i_1}r_{i_2}\dots r_{i_{2n+1}}$ is, up to sign, the coefficient of $r$ in the horizon polynomial, which vanishes since only even powers of $r$ appear in . Therefore, the sum of entropies vanishes, i.e. $\sum_i\mathcal{S}_i=0$. even dimensions --------------- In even dimensions, $d=2n$, the equation that determines the horizons can be written as $$\label{e6} \frac{1}{r}(1-\Lambda r^2)\displaystyle\prod_{i=1}^{n-1}(r^2+a_i^2)-2M=0.$$ The area of the horizon is given by $$\label{e7} A_j=\mathcal{A}_{2n-2}\displaystyle\prod_{i=1}^{n-1}\frac{r^2_j+a_i^2}{1+\Lambda a_i^2}.$$ The sum of all the areas (\[e7\]) is difficult to calculate directly. However, we can calculate it by the following trick. Using (\[e6\]), the sum can be recast as $$\label{sum} \displaystyle\sum_{j=1}^{2n}A_j=\frac{\mathcal{A}_{2n-2}}{\displaystyle\prod_{i=1}^{n-1}(1+\Lambda a_i^2)}\displaystyle\sum_{j=1}^{2n}\frac{2Mr_j}{1-\Lambda r_j^2}=\frac{\mathcal{A}_{2n-2}M}{\sqrt{\Lambda}\displaystyle\prod_{i=1}^{n-1}(1+\Lambda a_i^2)}\displaystyle\sum_{j=1}^{2n}[\frac{1}{1-\sqrt{\Lambda}r_j}-\frac{1}{1+\sqrt{\Lambda}r_j}].$$ First, we focus our attention on the term $$\displaystyle\sum_{j=1}^{2n}\frac{1}{1-\sqrt{\Lambda}r_j}$$ on the right-hand side of (\[sum\]).
Let $\tilde r:=1-\sqrt{\Lambda}r$. Then, substituting $\tilde r$ for $r$, becomes $$(2\tilde r-{\tilde r}^2)\frac{1}{\Lambda^{n-1}}\displaystyle\prod_{i=1}^{n-1}({\tilde r}^2-2\tilde r+1+a_i^2\Lambda)+\frac{2M\tilde r}{\sqrt{\Lambda}}-\frac{2M}{\sqrt{\Lambda}}=0.$$ The coefficient of $\tilde r$ in the above equation is $$a_1=\frac{2}{\Lambda^{n-1}}\displaystyle\prod_{i=1}^{n-1}(1+a_i^2\Lambda)+\frac{2M}{\sqrt{\Lambda}},$$ and the constant term of the equation is $$a_0=\frac{-2M}{\sqrt{\Lambda}}.$$ So we obtain $$\displaystyle\sum_{j=1}^{2n}\frac{1}{1-\sqrt{\Lambda}r_j}=\displaystyle\sum_{j=1}^{2n}\frac{1}{\tilde r_j}=-\frac{a_1}{a_0}=\frac{\sqrt{\Lambda}}{M\Lambda^{n-1}}\displaystyle\prod_{i=1}^{n-1}(1+a_i^2\Lambda)+1.$$ Similarly, we obtain $$\displaystyle\sum_{j=1}^{2n}\frac{1}{1+\sqrt{\Lambda}r_j}=-\frac{\sqrt{\Lambda}}{M\Lambda^{n-1}}\displaystyle\prod_{i=1}^{n-1}(1+a_i^2\Lambda)+1.$$ Therefore the sum of entropies is $$\displaystyle\sum_{j=1}^{2n}\mathcal{S}_j=\frac 1 4\displaystyle\sum_{j=1}^{2n}A_j=\frac{\mathcal{A}_{2n-2}}{2\Lambda^{n-1}},$$ which depends only on $\Lambda$. The result is independent of the sign of $\Lambda$. (A)dS black holes in the $f(R)$ gravity ======================================= In this section, we consider the action of $R+f(R)$ gravity coupled to a Maxwell field in $d$-dimensional spacetime [@30; @31; @3], $$I=\int d^dx\sqrt{-g}[R+f(R)-(F_{\mu\nu}F^{\mu\nu})^p],$$ where $f(R)$ is an arbitrary function of the scalar curvature $R$. Solving the corresponding equation of motion in the maximally symmetric case again gives a solution of the form , where the function $V(r)$ is given by $$\begin{aligned} \label{e12} V(r)=k-\frac{2M}{r^{d-3}}+\frac{Q^2}{r^{d-2}}\frac{(-2Q^2)^{(d-4)/4}}{1+f^{'}(R_0)}-\frac{2\Lambda_f}{(d-1)(d-2)}r^2,\end{aligned}$$ with $f^{'}(R_0)=\frac{\partial f(R)}{\partial R}\mid_{R=R_0}$ and $R_0=\frac{2d}{d-2}\Lambda_f$, where $\Lambda_f$ is the cosmological constant. $V(r)=0$ gives the horizons of the black holes.
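As a quick consistency check of the horizon structure just described, one can verify symbolically that the power sum $s_{d-2}$ of the horizon radii contains neither $M$ nor the charge. A minimal sketch, assuming $d=4$ and collapsing the charge term of $V(r)$ into a single symbol $\tilde Q$ (a hypothetical stand-in, not from the paper):

```python
import sympy as sp

r, k, M, Qt, Lf = sp.symbols('r k M Qtilde Lambda_f')
# d = 4:  r^2 V(r) = 0, with the charge term of V collapsed into Qtilde;
# note 2*Lambda_f/((d-1)(d-2)) = Lambda_f/3 in this dimension
poly = sp.Poly(-sp.Rational(1, 3)*Lf*r**4 + k*r**2 - 2*M*r + Qt, r)
a4, a3, a2, a1, a0 = poly.all_coeffs()
e1, e2 = -a3/a4, a2/a4                 # elementary symmetric polynomials of roots
s2 = sp.simplify(e1**2 - 2*e2)         # Newton's identity: s_2 = e_1^2 - 2 e_2
print(s2)                              # 6*k/Lambda_f -- no M, no charge
```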
The entropy of a horizon is given by $$\begin{aligned} \mathcal{S}_i=\frac{A_i}{4}(1+f^{'}(R_0)),\end{aligned}$$ and the area of the horizon is given by $$A_i=\frac{2\pi^{(d-1)/2}}{\Gamma(\frac{d-1}{2})}r_i^{d-2}.$$ According to equations and , in odd dimensions, since the polynomial has no $r^{d-1}$ term we have $s_1=\displaystyle\sum_{i=1}^dr_i=0$, so we obtain $$s_{d-2}=\displaystyle\sum_{i=1}^dr_i^{d-2}=\frac{-a_{d-2}}{a_d}s_{d-4}=\cdots=(\frac{-a_{d-2}}{a_d})^{\frac{d-3}{2}}s_1=0.$$ So the sum of entropies vanishes, i.e. $\sum_i\mathcal{S}_i=0$. In even dimensions, $$\begin{aligned} s_{d-2}=\displaystyle\sum_{i=1}^dr_i^{d-2}=\frac{-a_{d-2}}{a_d}s_{d-4}=\cdots=(\frac{-a_{d-2}}{a_d})^{\frac{d-2}{2}}s_0=2(\frac{(d-1)(d-2)k}{2\Lambda_f})^{\frac{d-2}{2}},\end{aligned}$$ so the entropy sum is $$\sum_i\mathcal{S}_i=\frac{\pi^{(d-1)/2}}{\Gamma(\frac{d-1}{2})}(1+f^{'}(R_0))(\frac{(d-1)(d-2)k}{2\Lambda_f})^{(d-2)/2},$$ which does not depend on the mass $M$ or the conserved charge $Q$. Conclusion and discussion ========================= In investigating the entropy sum in all dimensions, we have found the formula very useful for the calculation. By studying maximally symmetric black holes in Lovelock gravity and $f(R)$ gravity, as well as Kerr-(anti-)de-Sitter black holes in Einstein gravity, we have shown that the entropy sum over all horizons indeed depends only on the coupling constants of the theory and the topology of the black hole, and does not depend on conserved charges such as $J_i$ and $Q_i$ or on the mass $M$; we therefore believe it is a genuinely universal property in all dimensions. In particular, we have developed a method for calculating the entropy sum in the (even-dimensional) Kerr-(anti-)de-Sitter case, which can be used to evaluate more complicated symmetric rational expressions and may be useful for further study of universal entropy relations. In this Letter, we have only discussed some special black hole solutions in several gravitational theories.
It is important to verify this universal property in more general settings, i.e. for black holes with less symmetry in more general gravitational theories with various matter contents. Rotating black holes in Gauss-Bonnet (or even Lovelock) gravity are of special interest; their exact analytical form for general parameters is not yet known. However, some approximate forms (e.g. in the slowly rotating case [@KC]) are known, which can be used to investigate the universal property of the entropy sum. The actual physics behind the universal properties that we have proved still needs more investigation. We wish to explore these aspects in future works. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Xiao-Ning Wu and Zhao-Yong Sun for useful discussions and comments. This work is supported by the Natural Science Foundation of China under Grant Nos. 11475179 and 11175245. [99]{} M. Cvetic, G. W. Gibbons, and C. N. Pope, *Universal Area Product Formulae for Rotating and Charged Black Holes in Four and Higher Dimensions*, Phys. Rev. Lett. [**106**]{}, 121301 (2011) \[[arXiv:1011.0008](http://arxiv.org/abs/1011.0008)\]. Jia Wang, Wei Xu and Xin-He Meng, *The “universal property” of Horizon Entropy Sum of Black Holes in Four Dimensional Asymptotical (anti-)de-Sitter Spacetime Background*, JHEP [**1401**]{}, 031 (2014) \[[arXiv:1310.6811](http://arxiv.org/abs/1310.6811)\]; Jia Wang, Wei Xu and Xin-He Meng, *A Note on Entropy Relations of Black Hole Horizons*, Int. J. Mod. Phys. A [**29**]{}, 1450088 (2014) \[[arXiv:1401.5180](http://arxiv.org/abs/1401.5180v2)\]; Jia Wang, Wei Xu and Xin-He Meng, *“Entropy sum” of (A)dS Black Holes in Four and Higher Dimensions*, \[[arXiv:1310.7690](http://arxiv.org/abs/1310.7690v1)\]. M. Guica, T. Hartman, W. Song and A. Strominger, *The Kerr/CFT correspondence*, Phys. Rev. D [**80**]{}, 124008 (2009) \[[arXiv:0809.4266](http://arxiv.org/abs/0809.4266)\]. A. Castro, A. Maloney and A.
Strominger, *Hidden conformal symmetry of the Kerr black hole*, Phys. Rev. D [**82**]{}, 024008 (2010) \[[arXiv:1004.0996](http://arxiv.org/abs/1004.0996)\]. B. Chen, S.-x. Liu, and J.-j. Zhang, JHEP [**1211**]{}, 017 (2012). Bin Chen, Jia-ju Zhang, *Thermodynamics in Black-hole/CFT Correspondence*, \[[arXiv:1305.3757](http://arxiv.org/abs/1305.3757v1)\]. F. Larsen, Phys. Rev. D [**56**]{}, 1005 (1997). M. Cvetic and F. Larsen, Phys. Rev. D [**56**]{}, 4994 (1997). M. Cvetic and F. Larsen, Nucl. Phys. B [**506**]{}, 107 (1997). Jia Wang, Wei Xu and Xin-He Meng, *The Entropy Relations of Black Holes with Multi-horizons in Higher Dimensions*, Phys. Rev. D [**89**]{}, 044034 (2014) \[[arXiv:1312.3057](http://arxiv.org/abs/1312.3057v1)\]. Alejandra Castro, Nima Dehmami, Gaston Giribet, David Kastor, *On the Universality of Inner Black Hole Mechanics and Higher Curvature Gravity*, \[[arXiv:1304.1696](http://arxiv.org/abs/1304.1696)\]. D. G. Boulware and S. Deser, Phys. Rev. Lett. [**55**]{}, 2656 (1985). D. L. Wiltshire, *Spherically Symmetric Solutions of Einstein-Maxwell Theory With a Gauss-Bonnet Term*, Phys. Lett. B [**169**]{}, 36 (1986). Rong-Gen Cai, *Gauss-Bonnet Black Holes in AdS Spaces*, Phys. Rev. D [**65**]{}, 084014 (2002) \[[arXiv:hep-th/0109133](http://arxiv.org/abs/hep-th/0109133)\]. M. Cvetic, S. Nojiri, S. D. Odintsov, *Black Hole Thermodynamics and Negative Entropy in deSitter and Anti-deSitter Einstein-Gauss-Bonnet gravity*, Nucl. Phys. B [**628**]{}, 295 (2002) \[[arXiv:hep-th/0112045](http://arxiv.org/abs/hep-th/0112045)\]. Andrew Chamblin, Roberto Emparan, Clifford V. Johnson, Robert C. Myers, *Charged AdS Black Holes and Catastrophic Holography*, Phys. Rev. D [**60**]{}, 064018 (1999) \[[arXiv:hep-th/9902170](http://arxiv.org/abs/hep-th/9902170)\]. L. J. Romans, *Supersymmetric, cold and lukewarm black holes in cosmological Einstein-Maxwell theory*, Nucl. Phys.
B [**383**]{}, 395-415 (1992) \[[arXiv:hep-th/9203018](http://arxiv.org/abs/hep-th/9203018)\]. L.A.J. London, Nucl. Phys. B [**434**]{}, 709-735 (1995). Dumitru Astefanesei, Robert Mann, Eugen Radu, *Reissner-Nordstrom-de Sitter black hole, planar coordinates and dS/CFT*, JHEP [**0410**]{}, 029 (2004) \[[arXiv:hep-th/0310273](http://arxiv.org/abs/hep-th/0310273)\]. C. Lanczos, Ann. Math. [**39**]{}, 842 (1938). D. Lovelock, *The Einstein tensor and its generalizations*, J. Math. Phys. [**12**]{}, 498 (1971) \[[SPIRES](http://inspirehep.net/search?p=find+j+jmapa,12,498)\]. B. Zumino, Phys. Rep. [**137**]{}, 109 (1985). B. Zwiebach, Phys. Lett. B [**156**]{}, 315 (1985). J. T. Wheeler, Nucl. Phys. B [**268**]{}, 737 (1986). J. T. Wheeler, Nucl. Phys. B [**273**]{}, 732 (1986). Bin Chen, Jia-ju Zhang, *Note on generalized gravitational entropy in Lovelock gravity*, JHEP [**07**]{}, 185 (2013) \[[arXiv:1305.6767](http://arxiv.org/abs/1305.6767)\]. Rong-Gen Cai, *A Note on Thermodynamics of Black Holes in Lovelock Gravity*, Phys. Lett. B [**582**]{}, 237-242 (2004) \[[arXiv:hep-th/0311240](http://arxiv.org/abs/hep-th/0311240)\]. Yu Tian, Xiao-Ning Wu, *Thermodynamics on the Maximally Symmetric Holographic Screen and Entropy from Conical Singularities*, JHEP [**1101**]{}, 150 (2011) \[[arXiv:1012.0411](http://arxiv.org/abs/1012.0411)\]. R.P. Kerr, *Gravitational field of a spinning mass as an example of algebraically special metrics*, Phys. Rev. Lett. [**11**]{}, 237 (1963). R.C. Myers and M.J. Perry, *Black holes in higher dimensional space-times*, Ann. Phys. [**172**]{}, 304 (1986). G. W. Gibbons, H. Lu, D. N. Page, C. N. Pope, *The General Kerr-de Sitter Metrics in All Dimensions*, J. Geom. Phys. [**53**]{}, 49 (2005) \[[arXiv:hep-th/0404008](http://arxiv.org/abs/hep-th/0404008)\]. G. W. Gibbons, H. Lu, D. N. Page, C. N. Pope, *Rotating Black Holes in Higher Dimensions with a Cosmological Constant*, Phys. Rev. Lett.
[**93**]{}, 171102 (2004) \[[arXiv:hep-th/0409155](http://arxiv.org/abs/hep-th/0409155)\]. Kirill Orekhov, *Integrable models associated with Myers-Perry-AdS-dS black hole in diverse dimensions* \[[arXiv:1312.7640](http://arxiv.org/abs/1312.7640)\]. T. Moon, Y. S. Myung and E. J. Son, *$f(R)$ black holes*, Gen. Rel. Grav. [**43**]{}, 3079 (2011) \[[arXiv:1101.1153](http://arxiv.org/abs/1101.1153)\]. S. H. Hendi, *Some exact solutions of $f(R)$ gravity with charged (a)dS black hole interpretation*, Gen. Rel. Grav. [**44**]{}, 835 (2012) \[[arXiv:1102.0089](http://arxiv.org/abs/1102.0089)\]. Ahmad Sheykhi, *Higher dimensional charged $f(R)$ black holes*, Phys. Rev. D [**86**]{}, 024013 (2012) \[[arXiv:1209.2960](http://arxiv.org/abs/1209.2960)\]. H.-C. Kim and R.-G. Cai, Phys. Rev. D [**77**]{}, 024045 (2008) \[arXiv:0711.0885\]. [^1]: duyiqiang12@mails.ucas.ac.cn [^2]: ytian@ucas.ac.cn
To my son Philippe for his unbounded energy and optimism. **On the classification of Floer-type theories.** **Nadya Shirokova.** **Abstract** In this paper we outline a program for the classification of Floer-type theories (or, equivalently, for defining invariants of finite type for families). We consider Khovanov complexes as a local system on the space of knots introduced by V. Vassiliev and construct the wall-crossing morphism. We extend this system to the singular locus by the cone of this morphism and introduce the definition of a local system of finite type. This program can be further generalized to manifolds of dimension 3 and 4 \[S2\], \[S3\]. **Contents** [**1. Introduction.**]{} [**2. Vassiliev’s and Hatcher’s theories.**]{} 2.1. The space of knots, coorientation. 2.2. Vassiliev derivative. 2.3. The topology of the chambers of the space of knots. [**3. Khovanov homology.**]{} 3.1. Jones polynomial as Euler characteristic. Skein relation. 3.2. Reidemeister and Jacobsson moves. 3.3. Wall-crossing morphisms. 3.4. The local system of Khovanov complexes on the space of knots. [**4. Main definition, invariants of finite type for families.**]{} 4.1. Some homological algebra. 4.2. Space of knots and the classifying space of the category. 4.3. Vassiliev derivative as a cone of the wall-crossing morphism. 4.4. The definition of a theory of finite type. [**5. Theories of finite type. Further directions.**]{} 5.1. Examples of combinatorially defined theories. 5.2. Generalizations to dimension 3 and 4. 5.3. Further directions. [**6. Bibliography**]{}. **1. Introduction.** Lately there has been a lot of interest in various categorifications of classical scalar invariants, i.e. homological theories whose Euler characteristics are scalar invariants.
Such examples include the original instanton Floer homology, whose Euler characteristic, as proved by C. Taubes \[T\], is Casson’s invariant. The Ozsvath-Szabo 3-manifold theory \[OS\] categorifies Turaev’s torsion, and the Euler characteristic of their knot homologies \[OS\] is the Alexander polynomial. The theory of M. Khovanov categorifies the Jones polynomial \[Kh\], and the Khovanov-Rozansky theory categorifies the $sl(n)$ invariants \[KR\]. The theory that we are constructing will bring together theories of V. Vassiliev, A. Hatcher and M. Khovanov, and while describing their results we will specify which parts of their constructions will be important to us. The resulting theory can be considered as a “categorification of Vassiliev theory” or a classification of categorifications of knot invariants. We introduce the definition of a theory of finite type $n$ and show that the Khovanov homology theory in a categorical sense decomposes into a “Taylor series” of theories of finite type. The Khovanov functor is just the first example of a theory satisfying our axioms, and we believe that all theories mentioned above will fit into our template. Our main strategy is to consider a knot homology theory as a local system, or a constructible sheaf, on the space of all objects (knots, including singular ones), extend this local system to the singular locus, and introduce the analogue of the “Vassiliev derivative” for categorifications. By studying spaces of embedded manifolds we implicitly study their diffeomorphism groups and invariants of finite type. In his seminal paper \[V\] Vassiliev introduced finite type invariants by considering the space of all immersions of $S^1$ into $R^3$ and relating the topology of the singular locus to the topology of its complement via Alexander duality.
He resolved and cooriented the discriminant of the space and introduced a spectral sequence with a filtration, which suggested a simple geometrical and combinatorial definition of an invariant of finite type; this was later interpreted by Birman and Lin as a “Vassiliev derivative” and led to the following skein relation. Let $\lambda$ be an arbitrary invariant of oriented knots in oriented space with values in some Abelian group $A$. Extend $\lambda$ to an invariant of $1$-singular knots (knots that may have a single singularity that locally looks like a double point $\doublepoint$) using the formula $$\lambda(\doublepoint)=\lambda(\overcrossing)-\lambda(\undercrossing).$$ Further extend $\lambda$ to the set of $n$-singular knots (knots with $n$ double points) by repeatedly using the skein relation. [**Definition**]{}. We say that $\lambda$ is of type $n$ if its extension to $(n+1)$-singular knots vanishes identically. We say that $\lambda$ is of finite type if it is of type $n$ for some $n$. Given the above formula, the definition of an invariant of finite type $n$ becomes similar to that of a polynomial: its $(n+1)$st Vassiliev derivative is zero. It has been shown that all known invariants are either of finite type or infinite linear combinations of those; e.g., in \[BN1\] it was shown that the $n$th coefficient of the Conway polynomial is a Vassiliev invariant of order $\leq n$. In this paper we work with Khovanov homology, which will be our main example; however, the latest progress in finding the combinatorial formula for the differential of the Ozsvath-Szabo knot complex \[MOS\] makes us hopeful that more and more examples will be coming. For the construction of the local system it is important to understand the topological type of the base. The topology of the connected components of the complement to the discriminant in the space of knots, called chambers, was studied by A. Hatcher and R. Budney \[H\], \[B\].
They introduced simple homotopical models for such spaces. Recall that a local system is well-defined on a homotopy model of the base, so Hatcher’s model is exactly what is needed to construct the local system of Khovanov complexes. Throughout the paper the following observation is the main guideline for our constructions: [*local systems on the classifying space of a category are functors from this category to the triangulated category of complexes*]{}. It would be very interesting to understand the relation between the Vassiliev space of knots and the classifying space of the category whose objects are knots and whose morphisms are knot cobordisms. Our construction provides a [*Khovanov functor*]{} from the category of knots into the triangulated category of complexes. This allows us to translate all topological properties of the space of knots and of the Khovanov local system on it into the language of homological algebra, and then use the methods of triangulated categories and homological algebra to assign algebraic objects to topological ones (singular knots and links). Recall that in his paper \[Kh\] M. Khovanov categorified the Jones polynomial, i.e. he found a homology theory the Euler characteristic of which equals the Jones polynomial. He starts with a diagram of the knot and constructs a bigraded complex associated to this diagram, using two resolutions of the knot crossing: ![image](ch4p5.eps) The Khovanov complex then becomes the sum of tensor products of the vector space $V$, where the homological degree is given by the number of 1’s in the complete resolution of the knot. The local system of Khovanov homologies on Vassiliev’s space of knots can be considered as an invariant of families of knots. The discriminant of Vassiliev’s space corresponds to knots with a transversal self-intersection, i.e. moving from one chamber to another we change an overcrossing to an undercrossing by passing through a knot with a single double point.
We study how the Khovanov complex changes under such a modification and find the corresponding morphism. After defining a wall-crossing morphism we can extend the invariant to the singular locus by the cone of this morphism, which is our “categorification of the Vassiliev derivative”. Then we introduce the definition of a local system of finite type: a local system is of finite type $n$ if for any selfintersection of the discriminant of codimension $n$, its $n$th cone is an acyclic complex. The categorification of the Vassiliev derivative allows us to define a filtration on Floer-type theories for manifolds of any dimension. In \[S4\] we prove the first finiteness result: [**Theorem \[S4\]**]{}. Restricted to the subcategory of knots with at most $n$ crossings, the Khovanov local system is of finite type $n$ for $n\geq 3$ and of type zero for $n=0,1,2.$ This definition can be generalized to categorifications of invariants of manifolds of any dimension: we construct spaces of 3- and 4-manifolds by a version of the Pontryagin-Thom construction, consider homological invariants of 3- and 4-manifolds as local systems on these spaces, and extend them to the discriminant. In subsequent papers our main example will be Heegaard Floer homology \[OS\], the Euler characteristic of which is Turaev’s torsion. We show that local systems of such homological theories on the space of 3-manifolds \[S1\] will carry information about invariants of finite type for families and about the diffeomorphism group. We also have a construction \[S2\] for the refined Seiberg-Witten invariants on the space of parallelizable 4-manifolds. [**Acknowledgements**]{}. My deepest thanks go to Yasha Eliashberg for many valuable discussions, for inspiration and for his constant encouragement and support. I want to thank Maxim Kontsevich, who suggested that I work on this project, for his attention to my work during my visit to the IHES and for many important suggestions.
I want to thank graduate students Eric Schoenfeld and Isidora Milin for reading the paper and making useful comments. This paper was written during my visits to the IAS, IHES, MPIM and Stanford, and I am grateful to these institutions for their exceptional hospitality. This work was partially supported by the NSF grant DMS9729992. **2. Vassiliev theory, invariants of finite type.** Vassiliev considered the space $E$ of all maps $f: S^1 \rightarrow R^3$. This space is a space of functions, so it is an infinite-dimensional Euclidean space. It is linear, contractible, and consists of singular ($D$) and nonsingular ($E - D$) knots. The discriminant $D$ forms a singular hypersurface in $E$ and subdivides it into chambers, corresponding to different isotopy types of knots. To move from one chamber to another one has to change one overcrossing to an undercrossing, passing through a singular knot with one double point. The discriminant of the space of knots is a real hypersurface, stratified by the number of double points, which subdivides the infinite-dimensional space into [*chambers*]{}, corresponding to different isotopy types of knots. Vassiliev resolved and cooriented the discriminant, so we can assume that all points of selfintersection are transversal, with $2^n$ chambers adjacent to a point of selfintersection of the discriminant of codimension $n$. To study the topology of the complement to the discriminant, Vassiliev wrote a spectral sequence calculating the homology of the discriminant and then related it to the homology of its complement via Alexander duality. His spectral sequence had a filtration, which suggested a simple geometrical and combinatorial definition of an invariant of finite type: an invariant is of type $n$ if for any selfintersection of the discriminant of codimension $(n+1)$ its alternating sum over the $2^{n+1}$ chambers adjacent to the point of selfintersection is zero.
For our constructions it will be very important to have a coorientation of the discriminant, which was introduced by Vassiliev. [**Definition**]{}. A hypersurface in a real manifold is said to be [*coorientable*]{} if it has a non-zero section of its normal bundle, i.e. if there exists a continuous vector field which is not tangent to the hypersurface at any point and does not vanish anywhere. So there are two sides of the hypersurface: the one the vector field points toward and the one it points away from, and there are two choices of such a vector field. The [*coorientation*]{} of a coorientable hypersurface is the choice of one of the two possibilities. For example, the Möbius band in $R^3$ is not coorientable. Vassiliev shows \[V\] that the discriminant of the space of knots has a coorientation, a consistent choice of normal directions. Recall that a nonsingular point $\psi \in D$ of the discriminant is a map $S^1 \rightarrow R^3$ gluing together two distinct points $t_1, t_2$ of $S^1$, such that the derivatives of the map $\psi$ at those points are transversal. [**Coorientation of the discriminant**]{}. Fix the orientation of $R^3$ and choose positively oriented local coordinates near the point $\psi(t_1) = \psi(t_2)$. For any point $\psi_1 \in D$ close to $\psi$ define the number $r(\psi_1)$ as the determinant $$\left(\frac{\partial \psi_1}{\partial \tau}\Big|_{t_1},\ \frac{\partial \psi_1}{\partial \tau}\Big|_{t_2},\ \psi_1(t_1) - \psi_1(t_2)\right)$$ with respect to these coordinates. This determinant depends only on the pair of points $t_1, t_2$, not on their order. A vector in the space of functions at the point $\psi \in D$ which is transversal to the discriminant is said to be positive if the derivative of the function $r$ along this vector is positive, and negative if this derivative is negative.
This rule gives the coorientation of the hypersurface $D$ at all its nonsingular points, and also of any nonsingular locally irreducible component of $D$ at the points of selfintersection of $D$. The consistent choice of the normal directions of the walls of the discriminant will give the “directions” of the cobordisms (which are embedded into $E \times I$) between knots of the space $E$. [**Note**]{}. It is interesting to compare this construction with the result of E. Ghys \[Gh\], who introduced a metric on the space of knots and 3-manifolds. [*2.3. The topology of the chambers of the space of knots.*]{} The study of the topology of the chambers of the space of knots was started by A. Hatcher \[H\], who found simple homotopy models for these spaces. The main result is based on an earlier theorem regarding the topology of the classifying space of diffeomorphisms of an irreducible 3-manifold with nonempty boundary. In the following theorem A. Hatcher and D. McCullough answered the question posed by M. Kontsevich \[K\] regarding the finiteness of the homotopy type of the classifying space of the group of diffeomorphisms \[HaM\]: [**Theorem \[HaM\].**]{} Let $M$ be an irreducible compact connected orientable 3-manifold with nonempty boundary. Then $BDiff(M, rel\,\partial)$ has the homotopy type of a finite aspherical CW-complex. The proof of this theorem uses the JSJ-decomposition of a 3-manifold. When applied to knot complements, the JSJ-decomposition defines a fundamental class of links in $S^3$, the “knot generating links” (KGL). A KGL is any $(n+1)$-component link $L=(L_0,L_1,\cdots,L_n)$ whose complement is either Seifert fibred or atoroidal, such that the $n$-component sub-link $(L_1,L_2,\cdots,L_n)$ is the unlink.
If the complement of a knot $f$ contains an incompressible torus, then $f$ can be represented as a ‘spliced knot’ $f=J \Box L$ in a unique way, where $L$ is an $(n+1)$-component KGL and $J=(J_1,\cdots,J_n)$ is an $n$-tuple of non-trivial long knots. The spliced knot $J \Box L$ is obtained from $L_0$ by a generalized satellite construction. Any knot can be represented as an iterated splice knot of atoroidal and hyperbolic KGLs. The order of splicing determines the “companionship tree” $G_f$ of $f$, which is a complete isotopy invariant of long knots. Given a knot $f \in K$, denote the path-component of $K$ containing $f$ by $K_f$. The topology of the chambers $K_f$ was further studied by R. Budney. The main result of his paper \[Bu\] is the computation of the homotopy type of $K_f$ when $f$ is a hyperbolically-spliced knot, i.e. $f=J \Box L$ where $L$ is a hyperbolic KGL. The combined results can be summarized in the following theorem. If $f=J \Box L$ where $L$ is an $(n+1)$-component hyperbolic KGL, then $$K_f \backsimeq S^1 \times \left( SO_2 \times_{A_f} \prod_{i=1}^n K_{J_i} \right),$$ where $A_f$ is the maximal subgroup of $B_L$ such that the induced action of $A_f$ on $K^n$ preserves $\prod_{i=1}^n K_{L_i}$. The restriction map $A_f \to Diff(S^3,L_0) \to Diff(L_0)$ is faithful, giving an embedding $A_f \to SO_2$, and this is the action of $A_f$ on $SO_2$. This result completes the computation of the homotopy-type of $K$, since we have the prior results: 1. If $f$ is the unknot, then $K_f$ is contractible. 2. If $f$ is a torus knot, then $K_f \simeq S^1$. 3. If $f$ is a hyperbolic knot, then $K_f \backsimeq S^1 \times S^1$. 4. If a knot $f$ is a cabling of a knot $g$, then $K_f \backsimeq S^1 \times K_g$. 5. If the knot $f$ is a connected sum of $n \geq 2$ prime knots $f_1, f_2, \cdots, f_n$, then $K_f \backsimeq \left({\mathcal C}_2(n) \times \prod_{i=1}^n K_{f_i}\right)/\Sigma_f$.
Here $\Sigma_f \subset S_n$ is a Young subgroup of $S_n$, acting on ${\mathcal C}_2(n)$ by permutation of the labellings of the cubes, and similarly by permuting the factors of the product $\prod_{i=1}^n K_{f_i}$. The definition of $\Sigma_f \subset S_n$ is that it is the subgroup of $S_n$ that preserves a partition of $\{1,2,\cdots,n\}$, the partition being given by the equivalence relation $i \sim j \Longleftrightarrow K_{f_i} =K_{f_j}$. 6. If a knot has a non-trivial companionship tree, then it is either a cable, in which case H4 applies, a connect-sum, in which case B5 applies, or is hyperbolically spliced. If a knot has a trivial companionship tree, it is either the unknot, in which case H1 applies, a torus knot, in which case H2 applies, or a hyperbolic knot, in which case H3 applies. Moreover, every time one applies one of the above theorems, one reduces the problem of computing the homotopy-type of $K_f$ to computing the homotopy-types of knot spaces for knots with shorter companionship trees, so the process terminates after finitely many iterations. For constructing a local system we need only the homotopy type of the chamber. The theorem of Hatcher and Budney provides us with a complete classification of the homotopy types of chambers corresponding to all possible knot types. **3. Khovanov’s categorification of the Jones polynomial.** [*3.1. Jones polynomial as Euler characteristic. Skein relation.*]{} In his paper \[Kh\] M. Khovanov constructs a homology theory with Euler characteristic equal to the Jones polynomial. He associates to any diagram $D$ of an oriented link with $n$ crossing points a chain complex $CKh(D)$ of abelian groups of homological length $(n+1)$, and proves that for any two diagrams of the same link the corresponding complexes are chain homotopy equivalent. Hence, the homology groups $Kh(D)$ are link invariants up to isomorphism.
His construction is as follows: given any double point of the link projection $D$, he allows two smoothings: ![image](ch4p5.eps) If the diagram has $n$ double points, there are $2^n$ possible resolutions. The result of each complete smoothing is a set of circles in the plane, labeled by $n$-tuples of 1’s and 0’s: $$CKh( \underbrace{\bigcirc,...,\bigcirc}_{n \text{ times}}) = V^{\otimes n}.$$ The cobordisms between links, i.e., surfaces embedded in ${\mathbb{R}}^3\times [0,1],$ should provide maps between the associated groups. A surface embedded in 4-space can be visualized as a sequence of plane projections of its 3-dimensional sections (see \[CS\]). Given such a presentation $J$ of a compact oriented surface $S$ properly embedded in ${\mathbb{R}}^3\times [0,1]$ with the boundary of $S$ being the union of two links $L_0\subset {\mathbb{R}}^3\times \{ 0\} $ and $L_1 \subset {\mathbb{R}}^3\times \{ 1\},$ Khovanov associates to $J$ a map of cohomology groups $$\theta_J: Kh^{i,j}(D_0)\rightarrow Kh^{i, j + \chi(S)}(D_1), \hspace{0.4in} i,j\in {\mathbb{Z}}.$$ The differential of the Khovanov complex is defined using two linear maps $m:V\otimes V\to V$ and $\Delta:V\to V\otimes V$ given by the formulas $$\big(V\otimes V\overset{m}{\rightarrow}V\big) \quad m:\begin{cases} v_+\otimes v_-\mapsto v_- & v_+\otimes v_+\mapsto v_+ \\ v_-\otimes v_+\mapsto v_- & v_-\otimes v_-\mapsto 0 \end{cases}$$ $$\big(V\overset{\Delta}{\rightarrow}V\otimes V\big) \quad \Delta:\begin{cases} v_+ \mapsto v_+\otimes v_- + v_-\otimes v_+ &\\ v_- \mapsto v_-\otimes v_- & \end{cases}$$ The differential in the Khovanov complex can be informally described as “all the ways of changing a 0-crossing to a 1-crossing”. The homological degree of the Khovanov complex is the number of 1’s in the plane diagram resolution. The sum of the “quantum” components of the same homological degree $i$ gives the $i$th component of the Khovanov complex.
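The maps $m$ and $\Delta$ above make $V$ a Frobenius algebra (the algebra $\mathbb{Z}[x]/(x^2)$ with $v_+=1$, $v_-=x$), and the Frobenius identity $\Delta\circ m=(m\otimes \mathrm{id})\circ(\mathrm{id}\otimes\Delta)$ is what makes the cube of resolutions commute. A minimal sketch checking this identity on the basis, with dictionary-valued vectors (the encoding is ours, purely illustrative):

```python
# Basis of V: '+' stands for v_+, '-' for v_-
basis = ['+', '-']

def m(a, b):
    """Multiplication V (x) V -> V; v_- (x) v_- maps to 0."""
    if a == '-' and b == '-':
        return {}
    return {('-',): 1} if '-' in (a, b) else {('+',): 1}

def delta(a):
    """Comultiplication V -> V (x) V."""
    return {('+', '-'): 1, ('-', '+'): 1} if a == '+' else {('-', '-'): 1}

def lhs(a, b):                          # Delta(m(a, b))
    out = {}
    for (x,), c in m(a, b).items():
        for w, c2 in delta(x).items():
            out[w] = out.get(w, 0) + c * c2
    return out

def rhs(a, b):                          # (m (x) id)(a (x) Delta(b))
    out = {}
    for (x, y), c in delta(b).items():
        for (z,), c2 in m(a, x).items():
            out[(z, y)] = out.get((z, y), 0) + c * c2
    return out

# The Frobenius identity holds on all basis pairs:
assert all(lhs(a, b) == rhs(a, b) for a in basis for b in basis)
print("Frobenius identity verified on the basis of V")
```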
.3cm One can see that the i-th differential $d^i$ is the sum over “quantum” components, it will map one of the quantum components in homological degree i to perhaps several quantum components of homological degree i+1. .3cm Khovanov theory can be considered as a (1+1) dimensional TQFT. The cubes, that are used in it’s definition come from the TQFT corresponding to the Frobenius algebra defined by $V, m, \Delta$. As we will see later, our constructions will give the interpretation of Khovanov local system as a topological D-brane and will suggest to study the structure of the category of topological D-branes as a [**triangulated category**]{}. .2cm We prove the following important property of the Khovanov’s complex: .2cm [**Theorem 1**]{}. Let k denote the kth crossing point of the knot projection $D$, then for any k the Khovanov’s complex $C$ decomposes into a sum of two subcomplexes $C= C^k_0 \oplus C^k_1$ with matrix differential of the form $$d_C = \left(\begin{array}{cc} d_0&d_{0,1}\\ 0&d_1 \end{array}\right)$$ .2cm [**Proof**]{}. Let $C^k_0$ denote the subcomplex of $C$, consisting of vector spaces, which correspond to the complete resolutions of $D$, having 0 on the kth place. The differential $d_0$ obtained by restricting $d$ only to the arrows between components of $C^k_0$. We define $C^k_1$ the same way, by restricting to the complete resolutions of $D$, having 1 on the kth place. The only components of the differential, which are not yet used in our decomposition, are the ones which change 0-resolution on the kth place of $C^k_0$ to 1 on the kth place in $C^k_1$, we denote them $d_{0,1}$. One can easily see from the definition of the Khovanov’s differential (which can be intuitivly described as “all the ways to change 0-resolution in the ith component of the complex to the 1-resolution in the (i+1)st component”), that there is no differential mapping ith component of $C^k_1$ to the (i+1)st component of $C^k_0$. 
[ *Mirror images and adjoints.*]{} Taking the mirror image of the knot dualizes the Khovanov complex. So if we want to invert a cobordism between two knots, we should consider the “dual” cobordism between the mirror images of these knots. [*3.2. Reidemeister and Jacobsson moves.*]{} A cobordism (a surface $S$ embedded into $R^3 \times [0,1]$) between knots $K_0$ and $K_1$ provides a morphism between the corresponding cohomology: $$F_S :Kh^{i,j}(D_0) \rightarrow Kh^{i,j+\chi(S)}(D_1)$$ where $D_0$ and $D_1$ are diagrams of the knots $K_0$ and $K_1$ and $\chi(S)$ is the Euler characteristic of the surface. We will distinguish between two types of cobordisms: first, those corresponding to wall crossing (and changing the type of the knot); and second, those corresponding to nontrivial loops in chambers, which reflect the dependence of Khovanov homology on self-diffeomorphisms of the knot, similar to the Reidemeister moves. In this paragraph we will discuss the second type of cobordism. By a surface $S$ in ${\mathbb{R}}^4$ we mean an oriented, compact surface $S,$ possibly with boundary, properly embedded in ${\mathbb{R}}^3\times [0,1].$ The boundary of $S$ is then a disjoint union $$\partial S = \partial_0 S \sqcup - \partial_1 S$$ of the intersections of $S$ with the two boundary components of ${\mathbb{R}}^3\times [0,1]$: $$\begin{aligned} \partial_0 S & = & (S\cap {\mathbb{R}}^3\times \{ 0\}) \\ - \partial_1 S & = & (S\cap {\mathbb{R}}^3\times \{ 1\}) \end{aligned}$$ Note that $\partial_0 S$ and $\partial_1 S$ are oriented links in ${\mathbb{R}}^3.$ The surface $S$ can be represented by a sequence $J$ of plane diagrams of oriented links where every two consecutive diagrams in $J$ are related either by one of the four Reidemeister moves or by one of the four moves [*birth, death, fusion*]{} described by Carter-Saito \[CS\].
To each Reidemeister move between diagrams $D_0$ and $D_1$ Khovanov \[Kh\] associates a quasi-isomorphism of complexes $C(D_0)\rightarrow C(D_1).$ Given a representation $J$ of a surface $S$ by a sequence of diagrams, we can associate to $J$ a map of complexes $$\varphi_J: C(J_0) \to C(J_1)$$ Any link cobordism can be described as a one-parameter family $D_t, t \in [0,1]$ of planar diagrams, called a [**movie**]{}. The $D_t$ are link diagrams, except at finitely many singular points which correspond to either a Reidemeister move or a Morse modification. Away from these points the diagrams for various $t$ are locally isotopic. Khovanov explained how local moves induce chain maps between complexes, hence homomorphisms between homology groups. The same is true for planar isotopies. Hence, the composition of these chain maps defines a homomorphism between the homology groups of the diagrams of links. In his paper \[Ja\] Jacobsson shows that there are knots such that a [**movie**]{} as above gives a nontrivial morphism of Khovanov homology: [**Theorem \[Ja\]**]{} For oriented links $L_0$ and $L_1$, presented by diagrams $D_0$ and $D_1$, an oriented link cobordism $\Sigma$ from $L_0$ to $L_1$ defines a homomorphism $\mathcal{H}(D_0) \rightarrow \mathcal{H}(D_1)$, invariant up to multiplication by $-1$ under ambient isotopy of $\Sigma$ leaving $\partial \Sigma$ setwise fixed. Moreover, this invariant is non-trivial. Jacobsson constructs a family of derived invariants of link cobordisms with the same source and target, which are analogous to the classical Lefschetz numbers of endomorphisms of manifolds. The Jones polynomial appears as the Lefschetz polynomial of the identity cobordism. From our perspective Jacobsson’s theorem shows that the Khovanov local system will have nontrivial monodromies on the chambers of the space of knots. [ *3.3.
Wall-crossing morphisms.*]{} In 3.2 we described what kinds of modifications can occur in a cobordism when we consider the “movie” consisting only of manifolds of the same topological type. These modifications implied corresponding monodromies of the Khovanov complex. However, the morphisms that are the most important for Vassiliev-type theories are the “wall-crossing” morphisms. We will define them now (locally). Consider two complexes $A^\bullet$ and $B^\bullet$ adjacent to a generic wall of the discriminant. Recall that the discriminant is cooriented (2.2). If $B^\bullet$ is to the “right” of $A^\bullet$ via the coorientation (or “further” from the unknot in the Ghys metric), then we shift the grading of $B^\bullet$ up by one and consider $B^\bullet[1]$: $ A^\bullet | B^\bullet[1]$ In general, and this will be very important for us in subsequent chapters, if the complex $K^\bullet$ is $n$ steps (via the coorientation) away from the unknot, we shift its grading up by $n$. Thus adjacent complexes will have gradings differing by one (as above), determined by the coorientation. Now we want to understand what happens to the Khovanov complex when we change the $k$th over-crossing (in the knot diagram $D$) to an under-crossing. We will illustrate these changes on one of Bar-Natan’s trademark diagrams (with his permission) \[BN1\]. By “I” we mark the arrow connecting components of the complex which will exchange places under wall-crossing morphisms when we change the over-crossing to an under-crossing at the self-intersection point 1; by “II” when we do it for point 2, and by “III” when we do it for point 3: Now recall the theorem proved in (3.1): for any crossing $k$ of the diagram $D$, the Khovanov complex can be split into the sum of two subcomplexes with upper-triangular differential.
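The matrix identity behind this decomposition can be checked mechanically: squaring the upper-triangular differential of Theorem 1 gives $d_0^2=0$, $d_1^2=0$, and $d_0 d_{0,1} + d_{0,1} d_1 = 0$, i.e. the off-diagonal block intertwines the two subcomplex differentials (over GF(2) all signs disappear). A small sketch with explicit matrices, entirely of our own choosing:

```python
def matmul2(A, B):
    """Matrix product over GF(2)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

# Illustrative block data: d0 = d1 with d0^2 = 0, and d01 = identity,
# which satisfies the compatibility d0 @ d01 + d01 @ d1 = 0 (mod 2).
d0 = [[0, 1], [0, 0]]
d1 = [[0, 1], [0, 0]]
d01 = [[1, 0], [0, 1]]

# Assemble d = [[d0, d01], [0, d1]] acting on C = C0 (+) C1.
d = [d0[i] + d01[i] for i in range(2)] + [[0, 0] + d1[i] for i in range(2)]

zero4 = [[0] * 4 for _ in range(4)]
print(matmul2(d, d) == zero4)   # True: the block differential squares to zero
```

Conversely, for any choice of blocks, $d^2=0$ holds exactly when the three identities above hold, which is the algebraic content of the upper-triangular splitting.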
Notice from the diagram above that when we change the $k$th over-crossing to an under-crossing, the 0- and 1-resolutions are exchanged, so $A^\bullet = A^\bullet_0 \oplus A^\bullet_1$, $B^\bullet[1]=B^\bullet_0[1] \oplus B^\bullet_1[1]$; thus for every $k$ we can define the [**wall-crossing morphism**]{} $\omega$ as follows: The map defined as the identity on $ A^\bullet_0 $ and as the trivial map on $A^\bullet_1$: $$\xymatrix@C+0.5cm{\omega: A_0^\bullet \ar[r]^-{ Id} & B_0^\bullet[1] \\ \omega:A_1^\bullet \ar[r]^-{ \emptyset} & B_1^\bullet[1] }$$ is a morphism of complexes. [**Proof.**]{} From Theorem 1 we know that for any crossing $k$ the Khovanov complex can be decomposed as a direct sum with upper-triangular differential: $$d = \left(\begin{array}{cc} d_0&d_{0,1}\\ 0&d_1 \end{array}\right)$$ It is an easy check that the wall-crossing morphism defined as above is indeed a morphism of complexes (i.e. it commutes with the differential): $$\xymatrix@C+0.5cm{ A^\bullet \ar[d]^{d} \ar[r]^-{ \omega} & B^\bullet \ar[d]^{d} \\ A^\bullet \ar[r]^-{ \omega} & B^\bullet }$$ Since we defined the morphism as 0 on $A_1^\bullet$, the diagram above becomes the following commutative diagram: $$\xymatrix@C+0.5cm{ A_0^\bullet \ar[d]^{d_0} \ar[r]^-{Id} & B_0^\bullet[1] \ar[d]^{d_0} \\ A_0^\bullet \ar[r]^-{Id} & B_0^\bullet[1] }$$ [*3.4. The local system of Khovanov complexes on the space of knots.*]{} In this paragraph we introduce the Khovanov local system on the space of knots. [**Definition**]{}. A local system on a locally connected topological space M is a fiber bundle over M whose fibers are abelian groups. The fiber of the bundle depends continuously on the point of the base (so that the group structure on the set of fibers can be extended over small domains in the base). Any local system on M with fiber A defines a representation $\pi_1 (M) \rightarrow Aut(A)$.
To any loop there corresponds a morphism of the fibers of the bundle over the starting point of the loop. The set of isomorphism classes of local systems with fiber $A$ is in one-to-one correspondence with the set of such representations up to conjugation. For example, any representation of an arbitrary group $\pi$ in $Aut(A)$ uniquely (up to isomorphism) defines a local system on the space $K(\pi, 1)$ \[GM\]. Morphisms of local systems are morphisms of fiber bundles preserving the group structure in the fibers. Thus introducing the continuation functions (maps between fibers) over paths in the base will define a local system over the manifold M. Next we set up the Khovanov complexes as a local system on the space of knots. If we were doing it “in coordinates”, we would introduce charts on the chambers of the space of knots and define our local system via transition maps, starting with some “initial” point. This would be a very interesting and realistic approach, since the homotopy models for chambers are understood \[H\], \[B\]; e.g., we would have just one chart for the chamber containing the unknot (since that chamber is contractible), two for a torus knot, four for a hyperbolic one, etc. Then monodromies of the Khovanov local system along nontrivial loops in the chamber will be given via Jacobsson movies. It would also be very interesting to find a unique special point in every chamber of the space $E$ and study monodromies of the local system with respect to this point. A candidate for such a point is introduced in the works of J. O’Hara, who studied the minima of the electrostatic energy function of the knot \[O’H\]: $$E(K)=\int\int |x-y|^{-2}\, dx\, dy$$ It was shown that under some assumptions and for a perturbation of the above functional, its critical points on the space of knots will provide a “distinguished” point in the chamber.
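A naive discretization of this energy already distinguishes round from non-round representatives of the unknot. The sketch below is a toy of our own (a plain Riemann-sum over vertex pairs of a polygonal curve; the continuum integral as written actually diverges and O'Hara's functional regularizes it, which we ignore here):

```python
import math

def discrete_energy(points):
    """Naive discretization of E(K) = iint |x-y|^{-2} dx dy:
    sum 1/d^2 over distinct vertex pairs, normalized by n^2."""
    n = len(points)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                total += 1.0 / d2
    return total / (n * n)

N = 60
circle = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N), 0.0)
          for k in range(N)]
# Squashing the circle brings every pair of sample points at least as close
# together (and many strictly closer), so the discrete energy grows:
squashed = [(x, 0.5 * y, z) for (x, y, z) in circle]

print(discrete_energy(circle) < discrete_energy(squashed))   # True
```

Minimizing such an energy within a chamber is one way to imagine picking out the "distinguished" basepoint mentioned above.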
The first natural question for this setup is: which nontrivial loops in the chamber $E_K$ corresponding to the knot $K$ are distinguished by Khovanov homology and which are not? However, assuming Khovanov’s theorem \[Kh\] (that his homology groups are invariants of the knot, independent of the choices made) and assuming also the results of Jacobsson \[Ja\], it is enough for us to introduce the continuation maps along any path $\gamma$ in a chamber of the space of knots. These methods were developed by several authors (see \[Hu\]): Let $K_1$ and $K_2$ be two knots in the same chamber of the space $E$, let ${K}_i$ be generic, and let $\gamma=\{K_t\mid t\in[0,1]\}$ be any path of equivalent objects in $E$ from $K_1$ to $K_2$. Then a generic path $\gamma$ induces a chain map $$\label{eqn:continuation} F({{\gamma}}): CKh_*({K}_1){\longrightarrow} CKh_*({K}_2)$$ called the “continuation” map, which has the following properties: - 1)[*Homotopy*]{} A generic homotopy rel endpoints between two paths ${\gamma}_1$ and ${\gamma}_2$ with associated chain maps $F_1$ and $F_2$ induces a chain homotopy $$H:CKh_*({K}_1){\longrightarrow}\nonumber CKh_{*+1}({K}_2)$$ $$\partial H + H\partial = F_1 - F_2$$ - 2)[*Concatenation*]{} If the final endpoint of ${\gamma}_1$ is the initial endpoint of ${\gamma}_2$, then $F({{\gamma}_2 {\gamma}_1})$ is chain homotopic to $F({{\gamma}_2}) F({{\gamma}_1})$. - 3)[*Constant*]{} If ${\gamma}$ is a constant path then $F({\gamma})$ is the identity on chains. These three properties imply that if $K_1$ and $K_2$ are equivalent, then $HKh_*({K}_1)\simeq HKh_*({K}_2)$ (Khovanov’s theorem). This isomorphism is generally not canonical, because different homotopy classes of paths may induce different continuation isomorphisms on Khovanov homology (Jacobsson moves). However, up to isomorphism $HKh_*({K})$ depends only on $K$, so we denote it from now on by $HKh_*(K)$.
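The concatenation property is exactly the statement that monodromies compose like matrix products, which is how the representation $\pi_1(M)\rightarrow Aut(A)$ of 3.4 arises. A toy sketch (with entirely made-up monodromies for two generating loops `a` and `b`; fiber $A=\mathbb{Z}^2$):

```python
def matmul(A, B):
    """Product of 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical monodromies rho : pi_1 -> Aut(Z^2) on two generating loops.
rho = {'a': [[1, 1], [0, 1]], 'b': [[1, 0], [1, 1]]}

def monodromy(word):
    """Monodromy of a concatenation of loops, e.g. 'ab' = first a, then b;
    the empty word (a constant loop) gives the identity."""
    M = [[1, 0], [0, 1]]
    for letter in word:
        M = matmul(M, rho[letter])
    return M

print(monodromy('ab'))   # [[2, 1], [1, 1]]
```

The three continuation properties above (homotopy, concatenation, constant) are the chain-level refinement of this picture: on homology they say the assignment loop $\mapsto$ continuation isomorphism is a well-defined homomorphism out of $\pi_1$.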
We now define the [**restriction**]{} of the Khovanov local system to finite-dimensional subspaces of the space of knots. Note that in the original setting our complexes may have had different lengths. For example, the complex corresponding to the standard projection of the unknot will have length 1; however, we can consider very complicated “twisted” projections of the unknot with an arbitrarily large number of crossing points. The corresponding complexes will be quasi-isomorphic to the original one. This construction resembles the definition of Khovanov homology introduced in \[CK\], \[W\]. They define Khovanov homology as a relative theory, where homology groups are calculated relative to the twisted unknots. When considering the restrictions of the Khovanov local system to the subcategories of knots with at most $n$ crossings, we would like [**all**]{} complexes to be of length $n+1$. This can be achieved by “undoing” the local system, starting with the knots of maximal crossing number $n$ and then, using the wall-crossing morphisms, defining complexes of length $n+1$, quasi-isomorphic to the original ones, in all adjacent chambers. We continue this process until it ends, when we reach the chamber containing the unknot. Recall that Khovanov homology is defined for a knot projection (though it is independent of it by Khovanov’s theorem). So we will consider a ramification of the Vassiliev space: a pair consisting of the embedding of the circle into $R^3$ and its projection on the $(x,y)$-plane. Then each chamber will be subdivided into “subchambers” corresponding to nonsingular knot projections, and the “subdiscriminant” will consist of singular projections of the given knot. The local system defined on such a ramification will live on the universal cover of the base, the original Hatcher chamber corresponding to the knot K, and morphisms of the local system between “subchambers” are given by Reidemeister moves.
The composition of such moves may constitute a Jacobsson movie and will give nontrivial monodromies of the local system within the original chamber. [**Note.**]{} As we will see later, if one assigns cones of Reidemeister morphisms to the walls of the “subdiscriminant”, all such cones will be acyclic complexes. This statement in a different form was proved in the original Khovanov \[Kh\] paper. **4. The main definition, invariants of finite type for families.** [*4.1. Some homological algebra.*]{} We describe results and main definitions from category theory and homological algebra which will be used in subsequent chapters. The standard references on this subject are \[GM\], \[Th\]. By constructing the local system of (3.4) we introduced the [**derived category**]{} of Khovanov complexes. The properties of the derived category are summarized in the axiomatics of the [**triangulated category**]{}, which we will discuss in this chapter. [**Definition**]{}. An [*additive*]{} category is a category ${\mathcal{A}}$ such that - Each set of morphisms $Hom(A,B)$ forms an abelian group. - Composition of morphisms distributes over the addition of morphisms given by the abelian group structure, i.e. $f\circ(g+h)=f\circ g+f\circ h$ and $(f+g)\circ h=f\circ h+g\circ h$. - There exist products (direct sums) $A\times B$ of any two objects $A,B$ satisfying the usual universal properties. - There exists a zero object $0$ such that $Hom(0,0)$ is the zero group (i.e. just the identity morphism). Thus $Hom(0,A)=0=Hom(A,0)$ for all $A$, and the unique zero morphism between any two objects is the one that factors through the zero object. An [*abelian*]{} category is an additive category in which every morphism has a kernel and a cokernel, and every monomorphism (resp. epimorphism) is the kernel (resp. cokernel) of some morphism. So in an abelian category we can talk about exact sequences and *chain complexes*, and cohomology of complexes. Additive functors between abelian categories are *exact* (respectively left or right exact) if they preserve exact sequences (respectively short exact sequences $0\to A\to B\to C$ or $A\to B\to C\to0$).
[**Definition**]{}. The [*bounded derived category*]{} $D^b({\mathcal{A}})$ of an abelian category ${\mathcal{A}}$ has as objects bounded (i.e. finite length) ${\mathcal{A}}$-chain complexes, and morphisms given by chain maps with quasi-isomorphisms inverted as follows. We introduce morphisms $f$ for every chain map between complexes $f:\,X_f\to Y_f$, and $g^{-1}:\,Y_g\to X_g$ for every quasi-isomorphism $g:\,X_g\stackrel{\sim\,}{\to}Y_g$. Then form all products of these morphisms such that the range of one is the domain of the next. Finally identify any combination $f_1f_2$ with the composition $f_1\circ f_2$, and $gg^{-1}$ and $g^{-1}g$ with the relevant identity maps id$_{Y_g}$ and id$_{X_g}$. Recall that a triangulated category is an additive category equipped with additional data: [**Definition**]{}. A *triangulated category* is an additive category with a functor $T: X \rightarrow X [1]$ (where $X^i[1] = X^{i+1}$) and a set of *distinguished triangles* satisfying a list of axioms. The triangles include, for all objects $X$ of the category: 1\) the identity morphism $$X \rightarrow X\rightarrow0\rightarrow X[1],$$ 2\) any morphism $f:X\rightarrow Y$ can be completed to a distinguished triangle $$X\rightarrow Y\rightarrow C\rightarrow X[1],$$ 3\) there is also a derived analogue of the 5-lemma, and a compatibility of triangles known as the octahedral lemma, which can be understood as follows: if we naively interpret property 1) as the difference $X - X = 0$ and property 2) as $C = X - Y$, then the octahedral lemma says: $$(X - Y) - Z = C - Z = X - (Y - Z)$$ When topological spaces are considered up to homotopy, there is no notion of kernel or cokernel. The cylinder construction shows that any map $f:\,X\to Y$ is homotopic to an inclusion $X\to\,$cyl$\,(f)=Y\sqcup(X\times[0,1])/f(x)\sim(x,1)$, while the path space construction shows it is also homotopic to a fibration.
The cone $C_f$ on a map $f:\,X\to Y$ is the space formed from $Y\sqcup(X\times[0,1])$ by identifying $X\times\{1\}$ with its image $f(X)\subset Y$, and collapsing $X\times\{0\}$ to a point. It can be considered as a cokernel, i.e. if $f:\,X\to Y$ is an inclusion, then $C_f$ is homotopy equivalent to $Y/X$. Taking the $i$th homology $H_i$ of each term, and using the suspension isomorphism $H_i(\Sigma X)\cong H_{i-1}(X)$, gives a sequence $$H_i(X)\to H_i(Y)\to H_i(Y,X)\to H_{i-1}(X)\to H_{i-1}(Y)\to\ldots$$ which is just the long exact sequence associated to the pair $X\subset Y$. Up to homotopy we can make this into a sequence of simplicial maps, so that taking the associated chain complexes we get a lifting of the long exact sequence of homology to the level of complexes. It exists for all maps $f$, not just inclusions, with $Y/X$ replaced by $C_f$. If $f$ is a fibration, $C_f$ can act as the “kernel” or fibre of the map. If $f:\,X\to$point, then $C_f=\Sigma X$, the suspension of the fibre $X$. Thus $C_f$ acts as a combination of both cokernel and kernel, and if $f:\,X\to Y$ is a map inducing an isomorphism of homology groups of simply connected spaces then the sequence $$H_i(X)\to H_i(Y)\to H_i(C_f)\to H_{i-1}(X)\to H_{i-1}(Y)\to\ldots$$ implies $H_*(C_f)=0$. Then $C_f$ is homotopy equivalent to a point. Thus we can give the following definition. [**Definition**]{}. If $X$ and $Y$ are simplicial complexes, then a simplicial map $f:\,X\to Y$ defines (up to isomorphism) an object in the triangulated category, called the [**cone of the morphism**]{} $f$, denoted $C_f$: $$\renewcommand\arraystretch{1} C_X^{\bullet}\oplus C_Y^{\bullet}[1] \quad\mathrm{with\ differential}\quad d_{C_f}=\left(\!\!\!\begin{array}{cc} d_X & f \\ 0 & d_Y[1] \end{array}\!\!\right),$$ where $[\,n\,]$ means shift a complex $n$ places up.
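The slogan "$f$ is a quasi-isomorphism iff $C_f$ is acyclic" can be tested in coordinates. Below is a sketch of our own over GF(2) (so all signs vanish): a two-term complex $A^0\to A^1$, the map $f=\mathrm{id}$, and the cone assembled from exactly the data in the matrix recipe above; all homology groups of the cone come out zero:

```python
def matmul2(A, B):
    """Matrix product over GF(2)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def rank2(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(x + y) % 2 for x, y in zip(M[i], M[r])]
        r += 1
    return r

# A two-term complex dA : A^0 -> A^1 over GF(2), and f = id : A -> A.
dA = [[0, 1], [0, 0]]
I2 = [[1, 0], [0, 1]]

# Cone: C^{-1} = A^0, C^0 = A^1 (+) A^0, C^1 = A^1, with the differentials
# built from (dA, f); columns index the source summands.
d_minus1 = [dA[0], dA[1], I2[0], I2[1]]              # 4 x 2
d_zero = [I2[0] + dA[0], I2[1] + dA[1]]              # 2 x 4

assert matmul2(d_zero, d_minus1) == [[0, 0], [0, 0]]  # d o d = 0

# Homology dimensions h^i = dim ker d^i - rank d^{i-1}; all vanish here.
h_minus1 = 2 - rank2(d_minus1)
h_zero = (4 - rank2(d_zero)) - rank2(d_minus1)
h_one = 2 - rank2(d_zero)
print(h_minus1, h_zero, h_one)   # 0 0 0
```

Replacing $f=\mathrm{id}$ by a map that kills or creates homology makes some $h^i$ nonzero, recovering the long exact sequence of the triangle numerically. The degree conventions here are one of several equivalent choices.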
Thus we can define the cone $C_f$ on any map of chain complexes $f:\,A^{\bullet}\to B^{\bullet}$ in an abelian category ${\mathcal{A}}$ by the above formula, replacing $C_X^{\bullet}$ by $A^{\bullet}$ and $C_Y^{\bullet}$ by $B^{\bullet}$. If $A^{\bullet}=A$ and $B^{\bullet}=B$ are chain complexes concentrated in degree zero then $C_f$ is the complex $\{A\stackrel{f\,}{\to}B\}$. This has zeroth cohomology $h^0(C_f)=$ker$\,f$, and $h^1(C_f)=$coker$f$, so combines the two (in different degrees). In general it is just the total complex of $A^{\bullet}\to B^{\bullet}$. So what we get in a derived category is not kernels or cokernels, but “exact triangles” $$A^{\bullet}\to B^{\bullet}\to C^{\bullet}\to A^{\bullet}\,[\,1\,].$$ Thus we have long exact sequences instead of short exact ones; taking $i$th cohomology $h^i$ of the above gives the standard long exact sequence $$h^i(A^{\bullet})\to h^i(B^{\bullet})\to h^i(C^{\bullet})\to h^{i+1}(A^{\bullet})\to\ldots$$ The cone will fit into a triangle: $$\xymatrix{ &C\ar[dl]^w_{[1]}&\\ A\ar[rr]^u&&B\ar[ul]^v } \label{eq:tri1}$$ The “$[1]$” denotes that the map $w$ increases the grading of any object by one. In this paragraph we will construct the Khovanov functor from the category of knots into the triangulated category of Khovanov complexes. [**Definition**]{}. The [**category of knots**]{} $\mathcal K$ is the category whose objects are knots $S^1 \rightarrow S^3$ and whose morphisms are cobordisms, i.e. surfaces $ \Sigma$ properly embedded in ${\mathbb{R}}^3\times [0,1]$ with the boundary of $\Sigma$ being the union of two knots $K_1\subset {\mathbb{R}}^3\times \{ 0\} $ and $K_2 \subset {\mathbb{R}}^3\times \{ 1\}$. We denote by $\mathcal K_n$ the [**subcategory**]{} of knots with at most $n$ crossings. (Recall that a knot’s crossing number is the lowest number of crossings of any diagram of the knot.
) Note that our cobordisms (morphisms in the category of knots) are [**directed**]{} via the coorientation of the discriminant of the space of knots. Note that to reverse a cobordism, we can consider the same cobordism between the mirror images of the knots. [**Definition**]{}. The [**nerve**]{} $\mathcal N (C)$ of a category C is a simplicial set constructed from the objects and morphisms of C, i.e. points of $\mathcal N (C)$ are objects of $C$, 1-simplices are morphisms of $C$, 2-simplices are commutative triangles, 3-simplices are commutative tetrahedra of $C$, etc.: $$\mathcal N (C) = \lim \mathcal N^i (C)$$ The geometric realization of the simplicial set $\mathcal N (C)$ is a topological space, called [**the classifying space**]{} of the category C, denoted $B(C)$. The following observation is the main guideline for our constructions: [*sheaves on the classifying space of a category are functors on that category*]{} \[Wi\]. Once we prove that the Vassiliev space of knots is a classifying space of the category $\mathcal K$, our local system will provide a representation of the Khovanov functor. Let C be a category and let Set be the category of sets. For each object A of C let $Hom(A,-)$ be the hom functor which maps an object X to the set $Hom(A,X)$. Recall that a functor $F : C \rightarrow Set$ is said to be [**representable**]{} if it is naturally isomorphic to $Hom(A,-)$ for some object A of C. A representation of F is a pair $(A, \Psi)$ where $$\Psi : Hom(A,-) \rightarrow F$$ is a natural isomorphism. If $E$ is the space of knots, denote by $\mathcal K_E$ the category of knots whose objects are points in $E$ and whose morphisms are $Mor (x,y)= \{ \gamma : [0,1] \rightarrow E \ \text{s.t.} \ \gamma(0)=x, \gamma(1)=y\}$, and by $\mathcal K_K$ the subcategory corresponding to knots of the same isotopy type K. [**Proposition**]{}.
The chamber $E_K$ of the space of long knots, for $K$ the unknot, a torus knot, or a hyperbolic knot, is the classifying space of the category $\mathcal K _K$. [**Proof**]{}. By Hatcher’s theorem \[H\] the chambers of the space of knots $E_K$ corresponding to the unknot, torus or hyperbolic knots are $K(\pi, 1)$. By definition the space of long knots is $E = \{f: R^1 \rightarrow R^3\}$, nonsingular maps which are standard outside a ball of large radius. If $f_1, f_2$ are vector equations giving knots $K_1, K_2$, then $t f_1 +(1-t)f_2$ is a path in the mapping space, defining a knot for each value of $t$. The cobordism between two embeddings is given by equations in $R^3\times I$. All higher cobordisms can be contracted, since there are no higher homotopy groups in $E_K$. So both the classifying space of the category and the chamber of the space of knots are $K(\pi, 1)$ with the same $\pi$; they are the same as simplicial complexes. Note that in the case of hyperbolic knots one can choose a distinguished point in the chamber, corresponding to the hyperbolic metric on the complement of the knot. [ *4.3. Vassiliev derivative as a cone of the wall-crossing morphism.*]{} To be able to construct a categorification of Vassiliev theory, we have to extend the local system, which we defined on the chambers, to the discriminant of the space of knots. Recall that according to the axiomatics of the triangulated category described in (4.1), we assign a new object to every morphism in the category: for a complex $X=(X^i,d_x^i)$ define a complex $X[1]$ by $$(X[1])^i=X^{i+1}, d_{X[1]}=-d_X$$ For a morphism of complexes $f:X \rightarrow Y$ let $f[1]:X[1] \rightarrow Y[1]$ coincide with $f$ componentwise. Let $f:X\rightarrow Y$ be a wall-crossing morphism. The [**cone of f**]{} is the following complex $C(f)$: $$X \rightarrow Y \rightarrow Z=C(f) \rightarrow X[1]$$ i.e.
$$C(f)^i=X[1]^i \oplus Y^i, \quad d_{C(f)}(x^{i+1},y^i)=(-d_X x^{i+1},f(x^{i+1})-d_Yy^i)$$ Recall that we set up the local system on the space of knots (3.4) so that if the complex $X^\bullet$ is $n$ steps (via the coorientation) away from the unknot, we shift its grading up by $n$. So complexes in adjacent chambers will have gradings differing by one, determined by the coorientation. Thus, given a bigraded complex associated to a generic wall of the discriminant, we get two natural specialization maps into the neighbourhoods containing $X^\bullet$ and $Y^\bullet$. So with any morphism $f$ we associate the triangle: $$\xymatrix{ &C_f\ar[dl]^w_{[1]}&\\ X^\bullet \ar[rr]^f&&Y^\bullet \ar[ul]^v }$$ With any commutative square $$\xymatrix@C+0.5cm{\bullet \ar[d]^{\omega} \ar[r]^-{u} & \bullet \ar[d]^{\omega} \\ \bullet \ar[r]^-{ u} & \bullet }$$ (in the space of knots the above picture corresponds to the cobordism around the self-intersection of the discriminant of codimension two), we associate the map between the cones corresponding to the vertical and horizontal walls, and assign it to the point of their intersection: $$\xymatrix@C+0.5cm{C_u \ar[r]^{C_{u \omega}} & C_{\omega}}$$ [**Lemma**]{}. Given four chambers as above, the order of taking cones of morphisms is irrelevant: $C_{u \omega}=C_{\omega u}$. [**Proof**]{}. See \[GM\]. Consider a point of self-intersection of the discriminant of codimension n. There are $2^n$ chambers adjacent to this point. Since the discriminant was resolved by Vassiliev \[V\], this point can be considered as a point of transversal self-intersection of n hyperplanes in $R^n$, or the origin of the coordinate system of $R^n$. Now our local system looks as follows.
On the chambers of our space we have the local system of Khovanov complexes; to any point $t$ of the generic wall between the chambers containing $X^\bullet$ and $Y^\bullet$ (corresponding to a singular knot) we assign the cone of the morphism $X^\bullet \rightarrow Y^\bullet$ (with the specialization maps from the cone to the small neighborhoods of $t$ containing $X^\bullet$ and $Y^\bullet$). To a point of codimension n we assign the nth cone, a $2^n$-graded complex, etc. [**Definition.**]{} The Khovanov homology of a singular knot (with a single double point) is the bigraded complex $$\renewcommand\arraystretch{1} X^{\bullet}\oplus Y^{\bullet}[1] \quad\mathrm{with \ the \ matrix \ differential}\quad d_{C_\omega}=\left(\!\!\!\begin{array}{cc} d_X & \omega \\ 0 & d_Y[1] \end{array}\!\!\right),$$ where $X^{\bullet}$ is the Khovanov complex of the knot with over-crossing, $Y^{\bullet}$ is the Khovanov complex of the knot with under-crossing and $\omega$ is the wall-crossing morphism. In \[S4\] we give a geometric interpretation of the above definition. [*4.4. The definition of a theory of finite type.*]{} Once we have extended the local system to the singular locus, it is natural to ask whether such an extension leads to a categorification of Vassiliev theory. The first natural guess is that a theory, set up on some space of objects, which has quasi-isomorphic complexes on all chambers, is a theory of order zero. Such a theory will consist of trivial distinguished triangles as in 1) of the axiomatics of the triangulated category. When the complexes corresponding to adjacent chambers are quasi-isomorphic, the cone of the morphism is an acyclic complex. [*Baby example of a theory of order 0.*]{} Let M be an n-dimensional compact oriented smooth manifold. Consider the space of functions on M. This is an infinite-dimensional Euclidean space.
The chambers of the space will correspond to Morse functions on M, the walls of the discriminant to simple degenerations when two critical points collide, etc. Let’s consider the Morse complex, generated by the critical points of a Morse function on M. As was shown by many authors, such a complex is chain homotopy equivalent to the CW complex associated with M. Since we are calculating the homology of M via various Morse functions, the complexes may vary, but will have the same homology and Euler characteristic. Then we can proceed according to our philosophy and assign cones of morphisms to the walls and self-intersections of the discriminant. Since the complexes on the chambers of the space of functions are quasi-isomorphic, all cones are acyclic. Now we can introduce the main definition of a Floer-type theory being of finite type n: [**Main Definition**]{}. The local system of (Floer-type) complexes, extended to the discriminant of the space of manifolds via the cone of morphism, is a [**local system of order n**]{} if for any self-intersection of the discriminant of codimension $(n+1)$, its (n+1)st cone is an [**acyclic complex**]{}. How does one show that a $2^n$-graded complex is acyclic? For example, one introduces inverse maps to the wall-crossing morphisms and constructs a homotopy $\mathcal H$ such that $$d \mathcal H - \mathcal H d = I$$ It is easy to check that the existence of such a homotopy $\mathcal H$ implies that the complex has no homology. Suppose $dc=0$, i.e. c is a cycle; then $$d \mathcal H c - \mathcal H d c = d \mathcal H c = c$$ so every cycle is a boundary. [**Example**]{}. Suppose some local system is conjectured to be of finite type 3. How would one check this? By our definition, we should consider the $2^3$ chambers adjacent to every point of self-intersection of the discriminant of codimension 3, and 8 complexes representing the local system in a small neighbourhood of this point.
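This homotopy criterion is easy to test numerically. A sketch of our own over GF(2) (where $+$ and $-$ agree, so $d\mathcal H - \mathcal H d$ and $d\mathcal H + \mathcal H d$ coincide): a differential $D$ on a 4-dimensional total space together with the candidate homotopy $\mathcal H = D^{T}$ satisfying $D\mathcal H + \mathcal H D = I$, which forces acyclicity, i.e. $\operatorname{rank} D = \dim\ker D$:

```python
def matmul2(A, B):
    """Matrix product over GF(2)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def rank2(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(x + y) % 2 for x, y in zip(M[i], M[r])]
        r += 1
    return r

# Total differential D (two copies of GF(2)^2 glued by the identity) and
# the homotopy H = transpose(D).
D = [[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]]
H = [list(row) for row in zip(*D)]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

add2 = lambda A, B: [[(x + y) % 2 for x, y in zip(r, s)] for r, s in zip(A, B)]
assert matmul2(D, D) == [[0] * 4 for _ in range(4)]   # D is a differential
assert add2(matmul2(D, H), matmul2(H, D)) == I4       # DH + HD = I

# Acyclicity: every cycle is a boundary, i.e. dim ker D == rank D.
print(4 - rank2(D) == rank2(D))   # True
```

The example and all matrix choices are illustrative; the point is only that checking $d\mathcal H - \mathcal H d = I$ is a finite matrix computation once the wall-crossing data is given.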
This will correspond to the following commutative cube: $$\xymatrix@C-0.1cm{ & {B^ \bullet} \ar[rr]^{h} \ar'[d][dd]^{b} & & {C^ \bullet} \ar[dd] ^{c}\\ { A^ \bullet} \ar[ur]_{f} \ar[rr]^(0.65){g} \ar[dd]^{a} & & {D^ \bullet} \ar[ur]_{w} \ar[dd]^{e} & \\ & {F^ \bullet} \ar'[r]^-{l}[rr] & & {G^ \bullet} \\ {E^ \bullet} \ar[ru]_{k} \ar[rr]^{m} && {H^ \bullet} \ar[ru]_{n} & }$$ Let us write the homotopy equation in matrix form. Consider the dual maps $f^*, g^*,...,w^*$. Then we get formulas for $d$ and $\mathcal H$ as $8 \times 8$ matrices: $$d = \left( \begin{array}{cccccccc} d_A&f&g&0&a&0&0&0\\ 0&d_B&0&h&0&b&0&0\\ 0&1&d_D&w&0&0&e&0\\ 0&0&1&d_C&0&0&0&c\\ 0&0&0&0&d_E&k&m&0\\ 0&0&0&0&1&d_F&0&l\\ 0&0&0&0&0&1&d_H&n\\ 0&0&0&0&0&0&1&d_G\\ \end{array} \right)$$ $$\mathcal H = \left( \begin{array}{cccccccc} d_A&0&0&1&0&0&0&0\\ f^*&d_B&0&0&0&0&0&0\\ g^*&0&d_D&0&0&0&0&0\\ 0&h^*&w^*&d_C&0&0&0&0\\ a^*&0&0&0&d_E&0&0&1\\ 0&b^*&0&0&k^*&d_F&0&0\\ 0&0&e^*&0&m^*&0&d_H&0\\ 0&0&0&c^*&0&l^*&n^*&d_G\\ \end{array} \right)$$ After substituting these matrices into the equation $d \mathcal H - \mathcal H d =I$ we obtain a diagonal matrix, which must equal the identity matrix: $$\left( \begin{array}{cccccccc} ff^* + gg^*+ aa^*&0&0&0&0&0&0&0\\ 0&-"-&0&0&0&0&0&0\\ 0&0&-"-&0&0&0&0&0\\ 0&0&0&-"-&0&0&0&0\\ 0&0&0&0&-"-&0&0&0\\ 0&0&0&0&0&-"-&0&0\\ 0&0&0&0&0&0&-"-&0\\ 0&0&0&0&0&0&0&cc^* + nn^* + ll^*\\ \end{array} \right)$$ Thus the condition for the local system to be of finite type $n$ can be interpreted as follows. For any self-intersection of the discriminant of codimension $n+1$, consider the $2^n$ complexes forming a commutative cube (the representatives of the local system in the chambers adjacent to the self-intersection point). Then the naive geometric interpretation of the local system being of finite type $n$ is the following: each complex can be “split” into $n+1$ subcomplexes which map quasiisomorphically to its $n+1$ neighbours; at the least, no homology classes die or are created.

[*5.1.
Examples of combinatorially defined theories.*]{}

In the following table we give examples of theories which are categorifications of classical invariants. All these theories fit into our framework and may satisfy the finiteness condition.

  $\lambda$              $\lambda = \chi H^* (M)$
  ---------------------- -----------------------------------------
  Jones polynomial       Khovanov homology \[Kh\]
  Alexander polynomial   Ozsvath-Szabo knot homology \[OS2\]
  $sl(n)$ invariants     Khovanov-Rozansky homology \[KhR\]
  Casson invariant       Instanton Floer homology \[F\]
  Turaev’s torsion       Ozsvath-Szabo 3-manifold theory \[OS1\]
  Vafa invariant         Gukov-Witten categorification \[GW\]

Note that the only theory which is not combinatorially defined is the original instanton Floer homology \[F\]. The fact that its Euler characteristic is Casson’s invariant was proved by C. Taubes \[T\].

[*5.2. Generalization to dimension 3 and 4.*]{}

In our paper \[S1\] we generalized Vassiliev’s construction to the case of 3-manifolds. In \[S2\] we construct the space of parallelizable 4-manifolds and consider the parametrized version of the refined Seiberg-Witten invariant \[BF\].

[*a). The space of 3-manifolds and invariants of finite type*]{}.

Note that all 3-manifolds are parallelizable and therefore carry spin structures.

Following Vassiliev’s approach to the classification of knots, we constructed spaces $E_1$ and $E_2$ of 3-manifolds by a version of the Pontryagin-Thom construction. Our main results are as follows:

[**Theorem \[S1\].**]{} In $E_1-D$ each connected component corresponds to a homeomorphism class of 3-dimensional framed manifold. For any connected framed manifold as above there is one connected component of $E_1-D$ giving its homeomorphism type.

[**Theorem \[S1\].**]{} In $E_2- D$ each connected component corresponds to a homeomorphism class of 3-dimensional spin manifold.
For any connected spin manifold there is one connected component of $E_2-D$ giving its homeomorphism type.

By a spin manifold we understand a pair $(M,\theta)$, where $M$ is an oriented 3-manifold and $\theta$ is a spin structure on $M$. Two spin manifolds $(M,\theta)$ and $(M',\theta')$ are called homeomorphic if there exists a homeomorphism $M\rightarrow M'$ taking $\theta$ to $\theta'$.

The construction of the space naturally leads to the following definition:

[**Definition**]{}. A map $I: (M,\theta) \rightarrow C$ is called a finite type invariant of (at most) order $k$ if it satisfies the condition $$\sum_{ L' \subseteq L}(-1)^{\# L'}I(M_{L'})=0,$$ where $L'$ runs over the framed sublinks of the link $L$ with even framings, $L$ corresponds to a self-intersection of the discriminant of codimension $k+1$, ${\# L'}$ is the number of components of $L'$, and $M_{L'}$ is the spin 3-manifold obtained by surgery on $L'$.

We introduced an example of a Vassiliev invariant of finite order. Given a spin 3-manifold $M^3$, we consider the Euler characteristic of a spin 0-cobordism $W$ and set $I(M,spin) = (sgn (W, spin) -1) \pmod 2$.

[**Theorem \[S1\].**]{} The invariant $I(M,spin)$ is a finite type invariant of order 1.

The construction of the space of 3-manifolds whose chambers correspond to spin 3-manifolds is important for understanding which additional structures one needs in order to build a theory of finite-type invariants for homologically nontrivial manifolds. It suggests that one should consider spin ramifications of known invariants.

In a following paper we will generalize our constructions and the main definition to the case of 3-manifolds. We will construct a local system of Ozsvath-Szabo homologies, extend it to the singular locus via the cone of morphism, and find examples of theories of finite type.

[*b). Stably parallelizable 4-manifolds.*]{}

In this section we modify the previous construction \[S1\] to get the space of parallelizable 4-manifolds.
By definition, a manifold is parallelizable if it admits a global field of frames, i.e., has a trivial tangent bundle. In the case of 4-manifolds this condition is equivalent to the vanishing of the Euler class and the second Stiefel-Whitney class. In particular, the signature and the Euler characteristic of such a manifold are 0. We will use the theorem of Quinn:

[**Theorem** ]{} Any punctured 4-manifold possesses a smooth structure.

Recall also the result of Vidussi, which states that manifolds diffeomorphic outside a point have the same Seiberg-Witten invariants, so one cannot use them to detect eventual inequivalent smooth structures. Thus, for the purposes of constructing the family version of the Seiberg-Witten invariants, it will be sufficient for us to consider “asymptotically flat” 4-manifolds, i.e., manifolds that outside the ball $B_R$ of some large radius $R$ are given as the set of common zeros of a system of linear equations (e.g. $f_i(x_1,...x_{n+4})=x_i$ for $i=1,...n$).

By Gromov’s h-principle, any smooth 4-manifold (with all of its smooth structures and metrics) can be obtained as a common set of zeros of a system of equations in $\mathbb{R}^{N}$ for sufficiently large $N$.

[**Theorem \[S2\].**]{} Any parallelizable smooth 4-manifold can be obtained as a set of zeros of $n$ functions on the trivial $(n+4)$-bundle over $S^n$. Each manifold will be represented by $|H^1(M,\mathbb{Z}_2) \oplus H^3(M,\mathbb{Z})|$ chambers.

There is another theory which fits into our template: Ozsvath-Szabo homology for 3-manifolds, whose Euler characteristic is Turaev’s torsion. It would be interesting to show that this theory is also of finite type or decomposes as Khovanov theory does.

[*c). Ozsvath-Szabo theory as a triangulated category*]{}.

In \[S3\] we put the theory developed by P. Ozsvath and Z.
Szabo into the context of homological algebra by considering a local system of their complexes on the space of 3-manifolds and extending it to the singular locus. We show that for the restricted category the Heegaard Floer complex $CF^{\infty}$ is of finite type one. For other versions of the theory we will use the new combinatorial formulas obtained in \[SW\].

Recall that categorification is the process of replacing sets with categories, functions with functors, and equations between functions with natural isomorphisms between functors. One would hope that, after establishing this correspondence, homological algebra will provide the algebraic structures which one should assign to geometrical objects, without going into the specifics of a given theory. One can see that this approach is very useful in the topological category; in particular, we will obtain the knot and link invariants of Ozsvath and Szabo after setting up their local system on the space of 3-manifolds.

[**Note**]{} Floer homology can also be considered as an invariant for families, so it would be interesting to connect our work to that of M. Hutchings \[Hu\]. His work can be interpreted as the construction of local systems corresponding to various Floer-type theories on the chambers of our spaces. We then extend them to the discriminant and classify them according to our definition.

[ *6.3. Further directions.*]{}

1\. There are a number of immediate questions from the finite-type invariants story:

a). What will substitute for the notion of the chord diagram? What is the “basis” in the theories of finite type? b). What are the “dimensions” of the spaces of theories of order $n$?

2\. What is the representation-theoretic meaning of the theory of finite type?

a). Is it possible to construct a “universal” knot homology theory in the sense of T. Lee \[L\]? b). Is it possible to raise such a “universal” knot homology theory to a Floer-type theory of 3-manifolds?

3\.
There are “categorifications” of other knot invariants: the Alexander polynomial \[OS2\] and the HOMFLY polynomial \[DGR\]. These theories also fit into our setting, and it will be interesting to show that they decompose into series of theories of finite type or that their truncations are of finite type.

4\. The next step in our program \[S3\] is the construction of the local system of Ozsvath-Szabo homologies on the space of 3-manifolds introduced in \[S1\]. We also plan to raise Khovanov theory to a homological Floer-type theory of 3-manifolds.

5\. It should also be possible to generalize our program to the study of the diffeomorphism group of a 4-manifold by considering the Gukov-Witten \[GW\] categorification of the Vafa invariant on the moduli space constructed in \[S2\].

\[BF\] Bauer S., Furuta M., A stable cohomotopy refinement of Seiberg-Witten invariants: I, II, math.DG/0204340.

\[BN1\] Bar-Natan D., On Khovanov’s categorification of the Jones polynomial, math.QA/0201043.

\[BN2\] Bar-Natan D., Vassiliev and Quantum Invariants of Braids, q-alg/9607001.

\[Bu\] Budney R., Topology of spaces of knots in dimension 3, math.GT/0506524.

\[CJS\] Cohen R., Jones J., Segal G., Morse theory and classifying spaces, preprint 1995.

\[CKV\] Champanerkar A., Kofman I., Viro O., Spanning trees and Khovanov homology, preprint.

\[D\] Donaldson S., The Seiberg-Witten equations and 4-manifold topology, Bull. AMS, v. 33, no. 1, 1996.

\[DGR\] Dunfield N., Gukov S., Rasmussen J., The superpolynomial for knot homologies, math.GT/0505662.

\[F\] Floer A., Morse theory for Lagrangian intersections, J. Differ. Geom. 28 (1988), 513–547.

\[Fu\] Fukaya K., Morse homotopy, $A^{\infty}$-categories and Floer homologies, Proc. of the 1993 GARC Workshop on Geometry and Topology, v. 18 of Lecture Notes Series, p. 1–102, Seoul Nat. Univ., 1993.

\[G-M\] Gelfand S., Manin Yu., Methods of Homological Algebra, Springer, 1996.
\[Gh\] Ghys E., Braids and signatures, preprint 2004.

\[Ha\] Hatcher A., Spaces of knots, math.GT/9909095.

\[HaM\] Hatcher A., McCullough D., Finiteness of classifying spaces of relative diffeomorphism groups of 3-manifolds, Geom. Top., 1 (1997).

\[Hu\] Hutchings M., Floer homology of families I, math.SG/0308115.

\[Ja\] Jacobsson M., An invariant of link cobordisms from Khovanov homology, Algebraic & Geometric Topology, v. 4 (2004), 1211–1251.

\[Kh\] Khovanov M., A categorification of the Jones polynomial, Duke Math. J. 101 (2000), no. 3, 359–426.

\[KhR\] Khovanov M., Rozansky L., Matrix factorizations and link homology, math.QA/0401268.

\[K\] Kontsevich M., Feynman diagrams and low-dimensional topology, First European Congress of Mathematics, Vol. 2 (Paris, 1992), Progr. Math., 120, Birkhauser, 1994.

\[L\] Lee T., An invariant of integral homology 3-spheres which is universal for all finite type invariants, q-alg/9601002.

\[MOS\] Manolescu C., Ozsvath P., Sarkar S., A combinatorial description of knot Floer homology, math.GT/0607691.

\[O’H\] O’Hara J., Energy of Knots and Conformal Geometry, World Scientific Publishing, 2000.

\[O\] Ohtsuki T., Finite type invariants of integral homology 3-spheres, J. Knot Theory and its Ramifications 5 (1996).

\[OS1\] Ozsvath P., Szabo Z., Holomorphic disks and three-manifold invariants: properties and applications, math.GT/0006194.

\[OS2\] Ozsvath P., Szabo Z., Holomorphic disks and knot invariants, math.GT/0209056.

\[R\] Ruberman D., A polynomial invariant of diffeomorphisms of 4-manifolds, Geometry and Topology, v. 2, Proceedings of the Kirbyfest, p. 473–488, 1999.

\[S1\] Shirokova N., The space of 3-manifolds, C. R. Acad. Sci. Paris, t. 331, Serie 1, p. 131–136, 2000.

\[S2\] Shirokova N., On parallelizable 4-manifolds and invariants for families, preprint 2005.
\[S3\] Shirokova N., The constructible sheaf of Heegaard Floer homology on the space of 3-manifolds, in preparation.

\[S4\] Shirokova N., The finiteness result for Khovanov homology, preprint 2006.

\[SW\] Sarkar S., Wang J., A combinatorial description of some Heegaard Floer homologies, math.GT/0607777.

\[T\] Taubes C., Casson’s invariant and gauge theory, J. Diff. Geom. 31 (1990), 547–599.

\[Th\] Thomas R. P., Derived categories for the working mathematician, math.AG/0001045.

\[V\] Vassiliev V., Complements of Discriminants of Smooth Maps, Transl. Math. Monographs 98, Amer. Math. Soc., Providence, 1992.

\[Vi\] Viro O., Remarks on the definition of Khovanov homology, math.GT/0202199.

\[W\] Wehrli S., A spanning tree model for Khovanov homology, math.GT/0409328.

nadya@math.stanford.edu
--- abstract: 'Indirect reciprocity based on reputation is a leading mechanism driving human cooperation, where monitoring of behaviour and sharing reputation-related information are crucial. Because collecting information is costly, a tragedy of the commons can arise, with some individuals free-riding on information supplied by others. This can be overcome by organising monitors that aggregate information, supported by fees from their information users. We analyse a co-evolutionary model of individuals playing a social dilemma game and monitors watching them; monitors provide information and players vote for a more beneficial monitor. We find that (1) monitors that simply rate defection badly cannot stabilise cooperation—they have to overlook defection against ill-reputed players; (2) such overlooking monitors can stabilise cooperation if players vote for monitors rather than to change their own strategy; (3) STERN monitors, who rate cooperation with ill-reputed players badly, stabilise cooperation more easily than MILD monitors, who do not do so; (4) a STERN monitor wins if it competes with a MILD monitor; and (5) STERN monitors require a high level of surveillance and achieve only lower levels of cooperation, whereas MILD monitors achieve higher levels of cooperation with loose and thus lower cost monitoring.' author: - Mitsuhiro Nakamura - Ulf Dieckmann bibliography: - 'refs.bib' title: Voting by Hands Promotes Institutionalised Monitoring in Indirect Reciprocity --- Introduction {#sec:introduction} ============ The evolution of cooperation is a universal problem across species [@MaynardSmith1997; @Axelrod1984; @Nowak2006]. To achieve cooperation, individuals often need to overcome a social dilemma: for the population, all-out cooperation is the best, whereas for each individual, it is better to free ride on the contributions of others [@Ostrom1990; @Colman2006]. 
Indirect reciprocity, among several other mechanisms, is a leading explanation for the evolution of human cooperation [@Trivers1971; @Alexander1987; @Nowak1998a; @Nowak2005; @Sigmund2012]. In indirect reciprocity, an individual helping another will be helped in the future; cooperative individuals are highly valued and obtain help from others because of their good reputation. Indirect reciprocity fundamentally depends on the individuals’ ability to evaluate others and to share information about their reputation (e.g., via gossip). This requires an individual to obtain information about the others’ reputation. However, doing so is usually costly. It demands considerable cognitive capacity to recognise and memorise others’ past actions [@Milinski1998; @Milinski2001; @Suzuki2013]. Gossip-based information sharing is vulnerable to liars who strategically spread fake information [@Nakamaru2004]. As a recently emerging example, electronic marketplaces are adopting feedback mechanisms to assess each seller. However, customers often fail to submit such feedback, as this involves extra work [@Gazzale2005; @Gazzale2011; @Masclet2012; @Rockenbach2012]. Consequently, the availability and reliability of information suffer from a tragedy of the commons [@Rockenbach2012; @Rand2013]. An important difference between a material good and information is that information can be copied and distributed among many individuals at negligible cost (even though its acquisition may be costly). Therefore, as Arrow wrote, ‘it does not pay that everyone in a society acquires this information, but only a number needed to supply the necessary services’ [@Arrow2010]. In human societies, such specialised servicing organisations gathering and providing reputation information, e.g., modern credit companies and online marketplaces, have played a major role [@Fujiwara-Greve2012; @Resnick2002].
These organisations are maintained by their information users; the users demand the supply of information and contribute fees in return. This can be understood as a mutualism between monitoring services and information users. As far as we know, this mutualism has not been explored in the context of indirect reciprocity. In this study, we apply evolutionary game theory to the analysis of the mutualism between users of reputation-related information (i.e., the players) and information-providing services (i.e., the monitors) in the context of indirect reciprocity. We present a co-evolutionary model in which players and monitors seek to adapt their strategies through social learning. The population of players is engaged in a social dilemma game called the donation game; from time to time, one player can decide whether or not to help another player. The strategy can be unconditional: to always help, or to always refuse to help. In this case, cooperation loses out. But players can also use a conditional strategy and help only those players who have a good reputation. We analyse whether competition between information providers can lead to cooperation in the population of players. In our evolutionary model, players can occasionally change their behaviour, which fits into one of the aforementioned three types: conditional cooperation, unconditional cooperation, or unconditional defection. The conditional cooperators are further permitted to select a better monitor by voting; the voters display their preference for a better monitor, from which the monitors anticipate their potential future payoff if they continue to follow their present strategy. We shall see that a cooperative mutualism is achieved if the voters are ready to select a better monitor in voting rather than change their own behaviour in the donation game. A frequently studied issue in indirect reciprocity is the evolution of moral assessment rules, which determine what kind of behaviour leads to a good reputation [@Sigmund2012].
Well-known assessment rules are SCORING, MILD, and STERN. The SCORING rule is the simplest assessment rule: cooperation is good and defection is bad. Under the MILD and STERN rules, defection against players of bad reputation (cheaters) is good. The only disagreement between the MILD and STERN rules is that STERN prescribes punishing players of bad reputation by withholding help, whereas the MILD rule leaves both cooperation and defection options open. The SCORING rule cannot achieve stable cooperation if players simply interact with one another in random matching games (though the SCORING rule is also known to stabilise cooperation with some additional assumptions such as players’ growing social networks, multiple reputation states, and assortment in interactions [@Brandt2005; @Tanabe2013; @Nax2015]). The MILD and STERN rules belong to the few that achieve stable cooperation in random matching games [@Brandt2004; @Ohtsuki2004; @Ohtsuki2007]. We study the three above-mentioned assessment rules and find that SCORING monitors cannot establish cooperative populations, whereas MILD and STERN monitors can. When comparing MILD and STERN rules, we find that cooperation has a broader basin of attraction with the STERN rule. Moreover, STERN wins when MILD and STERN monitors compete. However, the MILD rule realises a more cooperative population with less frequent (and hence, less costly) monitoring than the STERN rule. This slight difference in the two assessment rules implies a trade-off: STERN is more stable, but MILD is more efficient. MILD always wins against SCORING, but SCORING can displace STERN (and thus subvert cooperation). Methods {#sec:methods} ======= Here we summarise the model by which we numerically simulate the co-evolutionary dynamics. The derivation of the dynamics is described in more detail in the supporting information (SI text, Sec. S1). 
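As a minimal illustrative sketch (not the authors' code), the three assessment rules described above can be written as a function mapping the donor's action and the recipient's reputation to the donor's new reputation. The function names and the string encodings (`G`/`B` for reputations, `C`/`D` for actions) are our own conventions; the error step models the assessment error probability $\mu$ that appears in the Methods, under the assumption that an error simply flips the assigned reputation.

```python
import random

GOOD, BAD = "G", "B"  # reputations
C, D = "C", "D"       # cooperate, defect

def assess(rule, action, recipient_rep):
    """Reputation a monitor assigns to the donor.

    SCORING: cooperation is good, defection is bad, regardless of recipient.
    MILD:    additionally, defection against a bad recipient is good,
             and cooperation with a bad recipient is also good.
    STERN:   like MILD, except cooperation with a bad recipient is bad.
    """
    if rule == "SCORING":
        return GOOD if action == C else BAD
    # MILD and STERN agree with SCORING when the recipient is good
    if recipient_rep == GOOD:
        return GOOD if action == C else BAD
    # recipient has a bad reputation
    if action == D:  # justified defection (the DB column)
        return GOOD
    # the CB column is where MILD and STERN disagree
    return GOOD if rule == "MILD" else BAD

def assess_with_error(rule, action, recipient_rep, mu, rng=random):
    """With probability mu, the monitor records the opposite reputation."""
    rep = assess(rule, action, recipient_rep)
    if rng.random() < mu:
        return GOOD if rep == BAD else BAD
    return rep
```

In this sketch, the single branch distinguishing MILD from STERN (the CB case) is what later drives STERN's need for accurate information, as the Results discuss.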
Population structure, the donation game, and the behaviour of players --------------------------------------------------------------------- We consider a large, well-mixed population of players (see Fig. \[fig:schema\]). From time to time, the players interact with each other in a social dilemma game called the donation game [@Nowak1998a; @Nowak2005]. In a (one-shot) donation game, two players are selected at random from the population, and one of them, called the donor, decides whether or not to help the other, called the recipient. These two alternatives are called cooperation ($\C$) and defection ($\D$), respectively. A donor who cooperates pays a cost $c$ ($> 0$) to increase the recipient’s payoff by an amount $b$ ($> c$). Each player adopts one of three strategies: unconditional cooperation, unconditional defection, or conditional cooperation. An unconditional cooperator or defector always selects $\C$ or $\D$, respectively. By contrast, a conditional cooperator selects $\C$ or $\D$ depending on whether a recipient has a good ($\G$) or bad ($\B$) reputation, respectively. This reputation information comes at a price $\beta$ ($\ge 0$). Behaviour of monitors --------------------- A monitor, or information provider, asks a fee, $\beta$, for its service. It observes each interaction with a probability $q$, for which it has to pay a cost $C(q) \ge 0$, and updates the record of the player’s reputation accordingly. We assume that $C(q)$ is a monotonically increasing convex function such that the cost is zero with no observation and is infinite with complete observation. The cost function is proportional to a parameter $\gamma \ge 0$ (see SI text, Sec. S1.5). With probability $1-q$, the monitor records fake information randomly based on the average ratio of good and bad players in the population. 
For example, if 90% of the players have a good reputation, then a faking monitor assigns a good reputation to the recipient with a probability of 90%, irrespective of the recipient’s actual behaviour. We assume that faking incurs no cost to the monitor.

Assessment rules: SCORING, MILD, and STERN
------------------------------------------

A monitor assesses the donor’s behaviour according to an assessment rule, which determines whether the donor obtains a good or a bad reputation (G or B). We consider three assessment rules called SCORING, MILD, and STERN (see Tab. \[tab:morals\]). The SCORING rule simply considers that cooperation and defection are good and bad, respectively, irrespective of the recipient’s reputation. The MILD and STERN rules follow the same assessment when the recipient has a good reputation, whereas they consider that defection against bad players is justified, i.e., a good behaviour (see the $\DB$ column in Tab. \[tab:morals\]). The MILD and STERN rules differ when a donor helps a bad recipient. Such a behaviour is regarded as good by the MILD rule, whereas it is regarded as bad by the STERN rule (see the $\CB$ column in Tab. \[tab:morals\]). We introduce errors in the monitors’ assessments: with a small probability $\mu$, a monitor may assign a reputation opposite to that intended. Moreover, we assume that all players have a good reputation to begin with.

Social learning among players
-----------------------------

We study the co-evolution of players and monitors by combining pairwise comparison and adaptive dynamics, both well-established techniques in evolutionary game theory [@Sandholm2010; @Hofbauer1998]. The players gradually change the relative frequencies of their strategies, denoted by $(x_\C, x_\D, x_\R)$, where the subscripts denote unconditional cooperators (C), unconditional defectors (D), and conditional cooperators (R, for ‘reciprocators’).
Their evolution is driven by an imitation process based on a pairwise payoff comparison with random exploration, given by $$\label{eq:player-dynamics} \dot{x}_\sigma = \epsilon \left[\frac{1}{3} - x_\sigma\right] + \left(1-\epsilon\right) x_\sigma \sum_{\sigma^\prime} x_{\sigma^\prime} \tanh\left[\frac{w}{2} \left(\pi_\sigma - \pi_{\sigma^\prime}\right)\right]$$ for each strategy $\sigma \in \{\C, \D, \R\}$, where $\pi_\sigma$ represents the payoff of players obeying strategy $\sigma$ (see SI text, Sec. S1.4 for its derivation). The first term on the right-hand side of Eq.  represents random exploration; with a small probability $\epsilon$, the players explore different strategies in a uniformly random manner. The second term on the right-hand side of Eq.  represents imitation based on a pairwise payoff comparison; with probability $1-\epsilon$, a randomly selected player compares her payoff with another randomly selected player’s payoff, and imitates the latter player’s strategy with a probability given by a sigmoid function, $1/\left[1 + \exp(-w \Delta)\right]$, where $\Delta$ is the payoff difference [@Traulsen2006]. Equation  is tuned by a parameter $w > 0$, which represents the speed with which players switch to a better strategy [@Traulsen2006].

Voting between monitors and their adaptive dynamics
---------------------------------------------------

The monitors’ evolution is driven by voting by their clients (i.e., conditional cooperators). We assume for simplicity that only two monitors, denoted by 1 and 2, are competing. Most of the time, the two monitors behave alike. Occasionally, one monitor (monitor 1) slightly changes the parameter values from $(q, \beta)$ to $(q^\prime, \beta^\prime)$ at random. The clients of the monitors compare their payoffs, which differ between the two monitors, and ‘vote with their hands’ on which monitor is better.
That is, the clients show the monitors how many of them will move to a better monitor, given by $$\label{eq:player-softmax-selection} \frac{x^\prime_{\R_i}}{x_\R} = \frac{ \mathrm{e}^{\alpha\pi^\prime_{\R_i}} }{ \mathrm{e}^{\alpha\pi^\prime_{\R_1}} + \mathrm{e}^{\alpha\pi^\prime_{\R_2}} }$$ for monitor $i \in \{1, 2\}$, if the monitors continue to use the slightly changed parameter values (i.e., $(q, \beta)$ and $(q^\prime, \beta^\prime)$). Here, $x^\prime_{\R_i} / x_\R$ is the frequency of clients that vote for monitor $i$ (numerator) relative to the total frequency of clients (denominator), and $\pi^\prime_{\R_i}$ represents the payoff of clients that use monitor $i$. Moreover, the parameter $\alpha > 0$ represents how strongly the clients vote for the monitor whose clients do better. This parameter corresponds to how nimbly the monitors evolve their parameters. On receiving the results of the voting, a less popular monitor, who would lose some clients in the future if it continued to use the present parameter values, will quickly follow suit and adopt the more popular monitor’s parameter values. This process can be modelled by adaptive dynamics (see SI text, Sec. S1.5) [@Hofbauer1990]. The voting is assumed to be much faster than the change in the players’ behaviour from conditional to unconditional cooperation or defection.

Results {#sec:results}
=======

The SCORING rule cannot stabilise cooperation {#sec:results:1st-order-fails}
---------------------------------------------

When both monitors adopt the SCORING rule, the system cannot reach stable cooperation, even if the initial population of players consists entirely of conditional cooperators. Figure \[fig:examples\](a) displays a typical example of the failure of the SCORING rule. The frequency of monitoring, i.e., of $x_\R$ and of $q$, first increases.
Then, because the SCORING rule does not distinguish defection against bad players (i.e., so-called justified defection) from defection against good players, the fraction of good conditional cooperators decreases rapidly, as shown by the decrease of the frequency of cooperation in Fig. \[fig:examples\](a). This implies that monitoring harms the population in the case of the SCORING rule, so the frequency of monitoring begins to decrease. Finally, monitoring vanishes and unconditional defectors invade and take over.

STERN and MILD rules can stabilise cooperation if voters strongly support a beneficial monitor {#sec:results:red-king-effect}
----------------------------------------------------------------------------------------------

When the monitors adopt the MILD or STERN rule, they can secure stable cooperation supported by frequent monitoring, provided the initial fraction of conditional cooperators is sufficiently large (Fig. \[fig:examples\](b–e)). Interestingly, this mutualism between conditional cooperators and monitors is achieved even if the initial frequency of monitoring is zero, i.e., $q = 0$. A bootstrapping process allows the monitoring frequency to increase quickly (see Fig. \[fig:examples\](c,e)). What controls this growth of monitoring is the intensity with which players select a better monitor in voting (i.e., $\alpha$) relative to that with which they change their own strategy (i.e., $w$). We numerically find the minimum fraction of conditional cooperators (i.e., the minimum $x_\R$) needed to establish a stable mutualism for various values of $\alpha$ and $w$ (Fig. \[fig:bootstrap\]). In the case of the SCORING rule, as expected, the monitors cannot sustain their monitoring frequency even if the population consists entirely of conditional cooperators (Fig. \[fig:bootstrap\](a,d)). For the MILD and STERN rules, we find that a stable mutualism can be reached if $\alpha$ is sufficiently large (Fig.
\[fig:bootstrap\](b,c,e,f)); a strong competition between monitors is essential. Moreover, the required initial fraction of conditional cooperators decreases as $w$ becomes smaller, provided that the benefit-to-cost ratio of cooperation (i.e., $b/c$) is sufficiently large (Fig. \[fig:bootstrap\](e,f)). These two observations together imply that if the voters (i.e., conditional cooperators) select monitors faster than they switch strategies, then the monitors are forced to establish reliable monitoring, and thereby the users enjoy a cooperative society supported by the monitoring system.

The STERN rule establishes cooperation more easily than the MILD rule {#sec:results:stability}
---------------------------------------------------------------------

Furthermore, we observe a difference between MILD and STERN; the region leading to a cooperative mutualism is larger under the STERN rule than under the MILD rule (compare Fig. \[fig:bootstrap\](b,e) and Fig. \[fig:bootstrap\](c,f)). The intensity of competition between monitors (i.e., $\alpha$) required to reach the cooperative equilibria is larger with the MILD rule than with the STERN rule. That is, with a STERN assessment, the system can more easily succeed in establishing the mutualism, even when the competition between the monitors is relatively weak.

STERN is dominant if STERN and MILD rules compete {#sec:results:competition}
-------------------------------------------------

So far, we have assumed that the two monitors adopt the same assessment rule. What if different assessment rules compete? Let us assume that, after a long time over which the two monitors use the same assessment rule, one of them adopts a different rule, but both monitors still use the same parameters $q$ and $\beta$. We can easily see that the payoff to the STERN monitor is always higher than that to the MILD monitor (see SI text, Sec. S2).
This is because conditional cooperators using the STERN monitor’s information (STERN users) gain relatively higher payoffs than those using the MILD monitor (MILD users); when they interact, MILD users cooperate more with STERN users, whereas STERN users cooperate less with MILD users [@Uchida2010a]. Thus, STERN is again more robust than MILD, in the sense of the competition between the two assessment rules [@Pacheco2006; @Uchida2010a]. The STERN rule achieves lower cooperation with severe surveillance, whereas the MILD rule achieves higher cooperation with loose monitoring {#sec:results:efficiency} ------------------------------------------------------------------------------------------------------------------------------------------- Given a population that has established a stable mutualism, it is interesting to see whether monitoring is severe or not and how cooperative the players are. To study this, we numerically observe the equilibrium states of populations varying in the benefit-to-cost ratio of cooperation in the donation game (i.e., $b/c$) and in the ratio between monitoring cost and cooperation cost (i.e., $\gamma/c$) under the two assessment rules MILD and STERN. The characteristics of equilibria under the three assessment rules differ qualitatively with respect to the frequency of monitoring (Fig. \[fig:equilibria\](a,b,c)) and the cooperativeness of the players (Fig. \[fig:equilibria\](d,e,f)). In the case of the SCORING rule, again, the monitors cannot increase their monitoring frequency and the players fail to establish cooperative populations (Fig. \[fig:equilibria\](a,d)). In contrast, MILD and STERN rules succeed in establishing cooperative populations under a wide range of parameter settings (Fig. \[fig:equilibria\](b,c,e,f)). The equilibrium frequencies of monitoring under MILD and STERN rules are the same (100%) when monitoring is cost free (i.e., when $\gamma = 0$; see the left edges of the panels in Fig. \[fig:equilibria\](b,c)).
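For concreteness, the three assessment rules compared throughout can be transcribed directly from Tab. \[tab:morals\] as predicates mapping the donor's action and the recipient's reputation to the donor's new reputation. This is an illustrative sketch in Python, not the authors' simulation code; the function names are ours.

```python
# Transcription of the assessment rules in Tab. [tab:morals].
# Arguments: did the donor cooperate? is the recipient good?
# Return value: True = the donor is assigned a good reputation.

def scoring(cooperated, recipient_good):
    # CG: G, DG: B, DB: B, CB: G -- only the action matters
    return cooperated

def mild(cooperated, recipient_good):
    # CG: G, DG: B, DB: G, CB: G -- only unjustified defection is bad
    return cooperated or not recipient_good

def stern(cooperated, recipient_good):
    # CG: G, DG: B, DB: G, CB: B -- any "wrong" action is bad
    return cooperated == recipient_good
```

The three rules agree on the $\CG$ and $\DG$ scenarios and differ only in how they judge behaviour towards bad recipients.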
When monitoring is costly (i.e., when $\gamma > 0$), one might expect that the frequency of monitoring would diminish as the cost increases. This prediction is verified for the MILD rule (Fig. \[fig:equilibria\](b)), but fails for the STERN rule (Fig. \[fig:equilibria\](c)); in the latter case, information users still need accurate information even though the cost of monitoring is large. Why does this happen? Consider that two STERN monitors have conflicting opinions about a player’s reputation; one monitor (monitor 1) regards the player (player A) as good but the other monitor (monitor 2) regards the player as bad. In a donation game, a donor (player B, a conditional cooperator) is informed about player A’s reputation by, say, monitor 1. Player B helps player A, because player A has a good reputation according to monitor 1. In this situation, monitor 1 assigns a good reputation to player B, because the monitor thinks that the game is in the $\CG$ scenario (see Tab. \[tab:morals\]). However, monitor 2 assigns a bad reputation to player B, because it thinks that the game is in the $\CB$ scenario. In this process, the existence of player A, who has conflicting reputations in the eyes of the two monitors, yields another player who also has conflicting reputations. Thus the number of players with conflicting reputations inexorably grows [@Nakamura2012]. As a consequence, the degree of cooperation under the STERN rule becomes significantly smaller than that under the MILD rule (Fig. \[fig:equilibria\](e,f)). To avoid mistakenly cooperating with players that have conflicting reputations, conditional cooperators need accurate information and require severe surveillance under the STERN rule. Another difference between the MILD and STERN rules is that in the case of the MILD rule, as the cost of monitoring increases, the minimum benefit-to-cost ratio (i.e., $b/c$) required for sustaining mutualism becomes larger, whereas in the case of the STERN rule, it does not change (compare Fig.
\[fig:equilibria\](b,e) with Fig. \[fig:equilibria\](c,f)). Mutualism under the STERN rule is easier to establish than under the MILD rule, as previously shown in Fig. \[fig:bootstrap\]. Finally, we mention that if a SCORING monitor competes with a STERN monitor (both having the same ($q, \beta$)-values), then it may happen that SCORING wins, thus subverting cooperation (see SI text, Sec. S3). This holds if the number of unconditional defectors is sufficiently high. It follows that under certain conditions, we encounter a rock-paper-scissors type of competition for the three assessment rules: SCORING beats STERN, MILD beats SCORING, and STERN beats MILD. Robustness checks {#sec:results:robustness-checks} ----------------- For the results of comparisons between different initial states of players (i.e., $(x_\C, x_\D, x_\R)$) and different shapes of the cost function for monitoring (i.e., $C(q)$), see the SI text, Secs. S3 and S4, respectively. Neither consideration changes our results qualitatively. In a few parameter sets under the MILD rule, we observed stable periodic oscillations (see the SI text, Sec. S6 for details). Discussion {#sec:discussion} ========== We have studied a co-evolutionary model of indirect reciprocity in which players request information about reputations and monitors supply it. Thus players and monitors mutually benefit from using and providing information. We compared three different assessment rules called SCORING, MILD and STERN, and found that only the MILD and STERN rules can establish a cooperative mutualism. We confirmed that the SCORING rule fails to foster cooperation (Sec. \[sec:results:1st-order-fails\]). Mutualism can emerge and be stabilised in the case of the MILD or STERN rule if the initial frequency of conditional cooperators is sufficiently high and if they strongly support a better monitor rather than rapidly changing their strategy; the slow speed of evolution of the players’ strategies relative to that of the monitors’ is important (Sec.
\[sec:results:red-king-effect\]). The STERN and the MILD rules differ in their stability. The STERN rule is more robust than the MILD rule in admitting a larger basin of attraction leading to cooperation (Sec. \[sec:results:stability\]). The intensity of competition between monitors (i.e., $\alpha$) can be smaller in the case of the STERN rule than in the case of the MILD rule. The STERN rule is more robust than the MILD rule in another sense: the competition between two monitors, one STERN and one MILD, always leads to victory by the STERN rule (Sec. \[sec:results:competition\]). Moreover, the difference between the MILD and the STERN rules substantially affects the outcome of co-evolution. With MILD monitors, players achieve more cooperative states under less-frequent monitoring, whereas with STERN monitors, players achieve less cooperative states and are under severe surveillance, i.e., $q \approx 1$ (Sec. \[sec:results:efficiency\]). However, cooperative mutualism can be more easily obtained with STERN monitors than with MILD monitors in the sense that the cost-to-benefit ratio and the cost for monitoring can be larger. In evolutionary studies of symbiosis, the so-called Red Queen’s hypothesis is often invoked. It says that competing species are exposed to arms races and therefore those evolving faster are advantaged [@VanValen1973; @Dawkins1979]. However, recent theoretical studies have found that sometimes the species evolving slowly can win. This is called the Red King effect [@Bergstrom2003; @Damore2011]. In the Red King effect, immobility can be a form of commitment that obliges other species to give way. In the present study, a similar effect enables a stable mutualism between players and monitors; players are the hosts that evolve slowly and promote the monitors’ costly monitoring. Several works in economics have studied repeated games with costly monitoring of opponents’ actions [@Ben-Porath2003; @Miyagawa2008; @Flesch2009; @Fujiwara-Greve2012].
These studies focused on the individual trade-off between the value of information and the cost of its acquisition and did not consider how to promote costly sharing of information among individuals. Gazzale presented a model of seller–buyer transactions in which buyers can report information about sellers to a rating system and their reporting is visible to sellers, and Gazzale and Khopkar experimentally studied how this mechanism promotes costly sharing of information [@Gazzale2005; @Gazzale2011]. In their model, a buyer’s costly reporting of information about a seller builds the buyer’s reputation as an information spreader. This increases the effort level of the buyer’s future partners, who are afraid of receiving a bad reputation, and thus buyers have an incentive to report information even if it is costly to do so. In our model, instead, monitors make an effort because by doing so their information users reward them. The above-mentioned studies did not assume that the reported information may be fake and that deceivers who shirk costly monitoring gain more than serious information providers. This problem of spreading false information about reputations was, as far as we know, first studied in biology by Nakamaru and Kawata [@Nakamaru2004]. In their study, a ‘conditional advisor’ was capable of detecting and suppressing free-riding liars. This is a strategy by which a player (player A) spreads reputation information about others, received from another player (player B), only when B had previously cooperated with A. The conditional advisor strategy, therefore, needs a large amount of information acquisition for the verification of reputation information. In contrast, our model does not require individuals to verify their information; they only need to select a more beneficial monitor. This implies that information users can trust information providers more easily when the providers are exposed to competition with each other.
Rockenbach and Sadrieh conducted a behavioural experiment on the subject of costly information spreading [@Rockenbach2012]. They demonstrated that people tend to share helpful information with others even if reporting it provides no individual benefit. Such an instinct for the acquisition and sharing of information could evolve if it is usually rewarded [@Rand2013]. In our model, we assumed that all individuals including players and monitors are only motivated by self-interest. We demonstrated theoretically that the reward for reporting helpful information can overcome the problem of costly information acquisition, even if individuals have no social preferences other than pure self-interest. The present study is restricted to a simple model, and the following extensions would provide further insights. First, we studied competition between two monitors only, rather than between many. In real life, situations with more than two competitors are common, and ‘hub’ individuals with huge numbers of connections on social networks are observed [@Newman2003]. Whether a hub information provider emerges from competition among many monitors or not is an interesting question. Second, we assumed that when monitors fail to engage in costly observation, they deceive client players by faking random information. In real life, such falsification might be strategic; for example, monitors might be corrupted by players offering them money for reporting a good reputation [@Masclet2012]. Third, we showed that the competition between monitors driven by their clients’ voting ‘by hands’ rather than ‘by feet’ enables cooperation; clients only show their preference over monitors under voting by hands, whereas they actually move to a better monitor under voting by feet. This is in contrast to most studies of evolutionary dynamics, which typically assume voting by feet. 
If monitors compete under voting by feet, it seems likely that one monitor could capture all of the clients, even if both monitors used the same parameters. Therefore, it is important to study whether cooperation emerges if clients vote with their feet, as well as the difference between the two types of voting. Fourth, our model assumed that social learning among players occurs in a well-mixed manner, i.e., that the population does not have structure. However, it could be the case that a population has a structure; people may learn from their neighbours [@Perc2009]. In that case, cooperation might be established even if the initial fraction of conditional cooperators is smaller than in the present result (see Fig. \[fig:bootstrap\]). This is because a structure increases clustering of players having the same strategy and helps cooperation [@Nowak2010]. Fifth, in our model, we only introduced errors in the monitors’ assessments, which yielded conflicting opinions about a player’s reputation, and thus players under the STERN rule were less cooperative than those under the MILD rule. Introducing other types of errors, e.g., errors in each player’s perception of reputation-related information, would increase such conflicting opinions and could therefore reduce cooperation further. An important characteristic of human behaviour is the ability to establish large-scale cooperation [@Fehr2004]. Such large-scale cooperation partially depends upon the development of large-scale information sharing, which suffers from a tragedy of the commons. As we have discussed, one possibility for overcoming this dilemma is to introduce competition between information sharing systems. We hope that this study helps to build understanding of sustainable mechanisms for information provision under indirect reciprocity. Authors’ contributions {#authors-contributions .unnumbered} ====================== MN carried out the mathematical analysis.
MN and UD conceived of the study, designed the study, and wrote the manuscript. All authors gave final approval for publication. Competing interests {#competing-interests .unnumbered} =================== We have no competing interests. Funding {#funding .unnumbered} ======= MN gratefully acknowledges support by JSPS KAKENHI Grant No. 13J05595. UD gratefully acknowledges support by the Austrian Science Fund (FWF), through a grant for the research project [*The Adaptive Evolution of Mutualistic Interactions*]{} (TECT I-106 G11) as part of the multi-national collaborative research project [*Mutualisms, Contracts, Space, and Dispersal*]{} (BIOCONTRACT) selected by the European Science Foundation as part of the European Collaborative Research (EUROCORES) Programme [*The Evolution of Cooperation and Trading*]{} (TECT). UD gratefully acknowledges additional support by the European Science Foundation (ESF). Acknowledgments {#acknowledgments .unnumbered} =============== We thank Karl Sigmund for valuable discussions throughout this work. Figures {#figures .unnumbered} ======= ![ [**Schematic overview of the model.**]{} We consider donation games among three types of players: unconditional cooperators, unconditional defectors, and conditional cooperators. Unconditional cooperators always cooperate (C), unconditional defectors always defect (D), and conditional cooperators cooperate and defect towards recipients with good and bad reputations, respectively. The reputation information thus required by conditional cooperators is provided to them by monitors in exchange for a fee. To allow for competition among different monitoring strategies, we consider two monitors who independently observe the players (at a cost to the observing monitor) and provide reputation information accordingly (at a cost to the requesting conditional cooperator).
Monitors differ in the fractions of players they observe and in the fees they charge for providing information. A monitor asked for reputation information about a player who was not observed provides a random answer, and each conditional cooperator selects either one of the two monitors by comparing the resultant long-term payoffs obtained by the monitor’s clients. []{data-label="fig:schema"}](img/schema) ![ [**Failures and successes in the bootstrapping of institutionalised monitoring.**]{} Bootstrapping occurs when a group without any monitoring gradually evolves to exhibit stable and finite levels of monitoring and cooperation. Panels show how the frequencies of unconditional cooperators, unconditional defectors, and conditional cooperators (blue, red, and green curves, respectively), as well as those of monitoring (by monitors; cyan curve) and of cooperation (by unconditional or conditional cooperators; black curve) evolve from different initial conditions. (a) With the SCORING rule, bootstrapping always fails, even for groups initially comprised entirely of conditional cooperators. (b) With the MILD rule, bootstrapping fails if the initial frequency of conditional cooperators is too low (inside the green band). (c) With the MILD rule, bootstrapping succeeds if the initial frequency of conditional cooperators is high enough (outside of the green band). (d) With the STERN rule, bootstrapping fails if the initial frequency of conditional cooperators is too low (inside the green band). (e) With the STERN rule, bootstrapping succeeds if the initial frequency of conditional cooperators is high enough (outside of the green band). Within one unit of time, on average, the reputations of all players are updated. The time axes are scaled logarithmically to show short-term and long-term changes together. Parameters: $w = 0.01, \alpha = 10, \mu = 0.1, \epsilon = 0.001, \gamma = 0.01, \kappa = 2, c = 1$, and $b = 10$. 
Initial conditions: $q = 0, \beta = 0, x_\C= 0, x_\D = 1-x_\R$, and $x_\R = 1$ (a), $x_\R= 0.3$ (b, d), or $x_\R = 0.5$ (c, e). []{data-label="fig:examples"}](img/examples) ![ [**The bootstrapping of institutionalised monitoring is facilitated by slowly evolving players and nimbly adapting monitors.**]{} Bootstrapping occurs when a group without any monitoring gradually evolves to exhibit stable and finite levels (larger than 10%) of monitoring and cooperation. Panels show how the minimum fraction of conditional cooperators required for bootstrapping changes with the intensity $w$ of imitation among players and the intensity $\alpha$ of competition between monitors. Higher intensities imply faster adaptation. Low thresholds facilitating bootstrapping are shown in green, and high thresholds impeding bootstrapping are shown in red. Fully red colouration indicates that bootstrapping is impossible. (a,b,c) Low benefit-to-cost ratio of cooperation, $b/c = 5$. (d,e,f) High benefit-to-cost ratio of cooperation, $b/c= 10$. (a,d) The SCORING rule. (b,e) The MILD rule. (c, f) The STERN rule. Under the SCORING rule, the frequency of monitoring always declines to 0, so institutionalised monitoring cannot be established. Under the MILD and the STERN rules, bootstrapping is possible and is easiest, i.e., requires the lowest fraction of conditional cooperators, when players adapt slowly and monitors adapt quickly. Parameters: $\mu= 0.1, \epsilon= 0.001, \gamma = 0.01, \kappa = 2$, and $c = 1$. Initial conditions: $q = 0, \beta = 0, x_\C = 0$, and $x_\D = 1 - x_\R$.
[]{data-label="fig:bootstrap"}](img/bootstrap) ![ [**The MILD rule establishes higher cooperation while requiring only loose surveillance, whereas the STERN rule establishes lower cooperation while requiring severe surveillance.**]{} Panels show how the equilibrium frequencies of (a,b,c) monitoring and (d,e,f) cooperation vary with the ratio $\gamma/c$ between observation cost and cooperation cost and the benefit-to-cost ratio $b/c$ of cooperation. (a,d) The SCORING rule. (b,e) The MILD rule. (c,f) The STERN rule. Under the SCORING rule, the frequency of monitoring always declines to $0$, so institutionalised monitoring cannot be established. Under the MILD rule, monitor evolution equilibrates at infrequent monitoring (loose surveillance) while enabling high frequencies of cooperation. Under the STERN rule, monitor evolution equilibrates at frequent monitoring (severe surveillance) while enabling only intermediate frequencies of cooperation. In comparison with the MILD rule, the STERN rule is more robust against increasing the ratio $\gamma/c$ between observation cost and cooperation cost. Parameters: $w = 0.01, \alpha = 100, \mu = 0.1, \epsilon = 0.001, \kappa = 2$, and $c = 1$. Initial conditions: $q = 0, \beta = 0, x_\C = 0, x_\D = 0$, and $x_\R = 1$. []{data-label="fig:equilibria"}](img/equilibria) Tables {#tables .unnumbered} ======

              $\CG$   $\DG$   $\DB$   $\CB$
    --------- ------- ------- ------- -------
    SCORING   Good    Bad     Bad     Good
    MILD      Good    Bad     Good    Good
    STERN     Good    Bad     Good    Bad
    --------- ------- ------- ------- -------

  : [**Assessment rules.**]{} When observing a donation game, each monitor assigns a reputation, either good (G) or bad (B), to the participating donor according to an assessment rule (SCORING, MILD, or STERN). These assessment rules differ in the four social scenarios: $\CG$, $\DG$, $\DB$, and $\CB$.
In the $\CG$ scenario, a donor cooperates with a good recipient, in the $\DG$ scenario, a donor defects against a good recipient, in the $\DB$ scenario, a donor defects against a bad recipient, and in the $\CB$ scenario, a donor cooperates with a bad recipient. In the table, each cell represents the reputation that the donor receives in each scenario under the three assessment rules. The SCORING rule regards cooperating ($\CG$ and $\CB$) donors as good and defecting ($\DG$ and $\DB$) donors as bad. The MILD and the STERN rules are the same except for the cell $\CB$; they regard the donor in this scenario as good and bad, respectively. []{data-label="tab:morals"}
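The growth of conflicting reputations under the STERN rule (Sec. \[sec:results:efficiency\]) can be illustrated with a minimal two-monitor simulation. This is a toy sketch, not the authors' model: the population size, number of rounds, and error rate are illustrative choices of ours, and every donor is assumed to consult monitor 1. Both monitors apply the STERN rule with a small assessment-error rate, and disagreements between their opinion vectors accumulate, as described in the text.

```python
import random

def stern(cooperated, recipient_good):
    # STERN assessment: CG -> good, DG -> bad, DB -> good, CB -> bad.
    return cooperated == recipient_good

def count_conflicts(n_players=200, rounds=20000, eps=0.01, seed=1):
    # Each monitor keeps its own, initially unanimous, opinion vector.
    opinion = [[True] * n_players, [True] * n_players]
    rng = random.Random(seed)
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_players), 2)
        # The donor (a conditional cooperator) acts on monitor 1's opinion.
        act = opinion[0][recipient]
        for mon in (0, 1):
            judged = stern(act, opinion[mon][recipient])
            if rng.random() < eps:  # rare assessment error seeds disagreement
                judged = not judged
            opinion[mon][donor] = judged
    # Players whose reputation differs between the two monitors:
    return sum(a != b for a, b in zip(*opinion))

conflicts = count_conflicts()
```

Once a player has conflicting reputations, any donor judged against that player inherits the conflict, so rare errors suffice to build up a sizeable disagreeing fraction.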
--- abstract: 'We introduce a method to estimate the complexity function of symbolic dynamical systems from a finite sequence of symbols. We test such a complexity estimator on several symbolic dynamical systems whose complexity functions are known exactly. We use this technique to estimate the complexity function for genomes of several organisms under the assumption that a genome is a sequence produced by an (unknown) dynamical system. We show that the genomes of several organisms share the property that their complexity functions behave exponentially for words of small length $\ell$ ($0\leq \ell \leq 10$) and linearly for word lengths in the range $11 \leq \ell \leq 50$. It is also found that species which are phylogenetically close to each other have similar complexity functions calculated from a sample of their corresponding coding regions.' author: - 'R. Salgado-García' - 'E. Ugalde' bibliography: - 'StructGen.bib' nocite: '[@*]' title: 'Symbolic Complexity for Nucleotide Sequences: A Sign of the Genome Structure' --- During the last decade there has been an intense debate about what complexity means for biological organisms and how it has evolved. Moreover, the problem of how to measure such complexity at the level of nucleotide sequences has become a challenge for geneticists [@lynch2003origins; @adami2002complexity; @adami2000evolution]. Even having some well defined mathematical measures of complexity (most of them coming from dynamical systems theory), there are several problems in implementing such measures in real scenarios. The main difficulty lies in the fact that, due to the finiteness of the sample, the statistical errors are generally very large and convergence in many cases cannot be reached (see Ref. [@koslicki2011topological] and references therein).
Here we will be concerned with the complexity function $C(\ell)$ (particularly for genomic sequences) defined as the number of sub-words of length $\ell$ (let us call them $\ell$-words hereafter) occurring in a given finite string. The importance of estimating such a quantity lies in the fact that it should give some information about the structure of the considered string, or, in other words, the mechanisms that *produce* such a string. The problem of determining the complexity function for finite sequences (and in particular of genomic sequences) has been previously considered by several authors [@koslicki2011topological; @colosimo2000special]. It was found that the complexity function for a finite string has a profile which is independent of how the string was produced [@koslicki2011topological; @colosimo2000special]. For small values of $\ell$ (approximately $\ell \leq 10$ for nucleotide sequences) the complexity is an increasing function of $\ell$; after that, it becomes nearly constant on a large domain, eventually becoming a decreasing function that reaches zero at some finite $\ell$. This behavior is actually a finite size effect. Indeed, if we would like to compute the complexity function for the string, we would need a very large sample in order to obtain a good estimation. Assume, for the sake of definiteness, that we are producing a random sequence, from a finite alphabet, as a fair Bernoulli trial (i.e., with the invariant measure of maximal entropy on the *full shift* [^1]). If the produced word $\mathbf{x}$ were of infinite length, then all the words of all lengths would *typically* be present. Indeed, counting directly the number of different $\ell$-words appearing in $\mathbf{x}$ we would obtain $\#\mathcal{A}^\ell$ *almost always*, where $\#\mathcal{A}$ stands for the cardinality of the alphabet $\mathcal{A}$.
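The finite-size profile just described is easy to reproduce by direct counting. The sketch below (our illustration, not the authors' code) counts the distinct $\ell$-words in a single random string over a four-letter alphabet: for small $\ell$ all $4^\ell$ words appear, while for large $\ell$ the count is capped by the number $N-\ell+1$ of available windows.

```python
import random

def empirical_complexity(s, ell):
    # Number of distinct ell-words occurring in the finite string s.
    return len({s[i:i + ell] for i in range(len(s) - ell + 1)})

random.seed(0)
g = "".join(random.choice("ACGT") for _ in range(100000))

small = empirical_complexity(g, 3)   # all 4**3 = 64 words are present
large = empirical_complexity(g, 30)  # capped near N - ell + 1 = 99971,
                                     # far below the true C(30) = 4**30
```

The saturation of `large` at the window count, rather than at $4^{30}$, is exactly the finite-size effect that motivates the estimator introduced below.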
However, if the produced sequence $\mathbf{g}$ has a finite length (which occurs when we stop the process at some finite time) then the number of $\ell$-words appearing in $\mathbf{g}$ should be regarded as a random variable which depends on the number of trials. Then, to compute the value of the complexity from a finite sequence we need a large enough sample in order to have an accurate estimation. For example, if $\ell = 20$ and the alphabet has four elements, then, as we know for random sequences, the complexity is $C(20) = 4^{20} \approx 10^{12}$. This means that to estimate this number, we would need a string of at least $10^{12}$ symbols. This example makes clear that the difficulty we face when we try to estimate the complexity function is the size of the sample. Below we will show that, even with a small sample, we can give accurate estimations of the symbolic complexity by using an appropriate estimator. The point of view that we adopt here is to regard the complexity as an unknown property of a given stochastic system. Hence, this property has to be estimated from the realization of a random variable. The latter will be defined below and has a close relation with the number of different $\ell$-words occurring in a sample of size $m$. In this way the proposed estimator lets us obtain accurate estimations of the complexity function of symbolic dynamical systems. We use this technique to give an estimation of this symbolic complexity for coding DNA sequences. In Fig. \[fig:Complexity\_Hominidae\], we compare the symbolic complexity obtained from coding sequences of $6\times 10^6$ bp long (of the first chromosomes) of *Homo sapiens*, *Pan troglodytes*, *Gorilla gorilla gorilla*, *Pongo abelii* and *Macaca mulatta* taken from the GenBank database [@benson1997genbank]. From every sequence we took a sample of $10^5$ words of lengths in the range $1-50$ bp.
Then we calculated the corresponding values of $K$ for every $\ell$, which is our estimation of the symbolic complexity (see Eq.  below). In this figure we appreciate that the human coding sequences have the lowest complexity of all the species analyzed. From the same figure, we should also notice the progressive increase of complexity as the species get farther from humans, in the phylogenetic sense, according to the reported phylogenetic trees [@nei2000molecular]. In such a figure we can also appreciate a behavior which seems common to all organisms analyzed. First, we can observe that almost all the “genomic words” in the range $1-10$ are present in the (coding) nucleotide sequences analyzed. This is clear from the exponential growth of words in this range, which fits $C(\ell) \approx 3.94^\ell$ with a correlation coefficient $0.99$. Beyond the range $1-10$, our estimations let us conclude that the behavior of the complexity becomes linear. The latter suggests that the genomic sequences are highly ordered, or, in other words, that the process producing these sequences is a (quasi-)deterministic one. In the literature it can be found that several symbolic dynamical systems having a linear complexity are actually the result of a substitutive process, like Thue-Morse, Toeplitz or Cantor sequences among others [@allouche1994complexite; @ferenczi1999complexity]. Actually, the fact that DNA could be the result of a random substitutive process has been suggested by several authors [@li1991expansion; @zaks2002multifractal; @hsieh2003minimal; @koroteev2011scale]. Now let us state the setting under which we give the estimator for the complexity. Assume that a genome is produced by some stochastic process on a given symbolic dynamical system $(Y, \sigma)$. Here $Y \subset \mathcal{A}^\mathbb{N}$ is a subset of semi-infinite symbolic sequences, made up from a finite alphabet $ \mathcal{A}$, which is invariant under the shift mapping $\sigma$.
Although the underlying dynamics producing the genome of a given individual is not known, we can assume that the set of allowed realizations of the genome $Y$ (the “attractor” of such a dynamics) can be characterized by a *language* [@lind1995introduction]. The language of a symbolic dynamical system is defined as the set of all the words of all sizes appearing in any point belonging to $Y$. If $\mathcal{A}_\ell$ is the set of all the $\ell$-words appearing in any point $\mathbf{x}\in Y$, then the *language* of $Y$ is $\cup_{n\in \mathbb{N}} \mathcal{A}_n$. The symbolic complexity of $Y$ is then given by the cardinality of $\mathcal{A}_\ell$, i.e., $ C(\ell) := \# \mathcal{A}_\ell$. Within this framework, a genome $\mathbf{g}$ of an individual can be considered as the observation of a point $\mathbf{x} \in Y$ with a finite precision. Moreover, from such a point we can reconstruct the (truncated) orbit of $\mathbf{x}$ by applying successively the shift map to $\mathbf{g}$. If the observed sequence $\mathbf{g}$ is assumed to be typical with respect to some ergodic measure defined on the dynamical system (possibly an invariant measure of maximal entropy fully supported on $Y$), we can assume that the orbit generated by $\mathbf{g}$ explores the whole attractor $Y$. Then, $\mathbf{g}$ must carry information about the structure of $Y$, and in particular about its symbolic complexity. As we saw above, the direct counting of words of a given length as a measure of the complexity function requires a large sample to have an accurate enough estimation. The problem we face can be stated as follows: given a sample of size $m$ of words of length $\ell$, we need to estimate the complexity $C(\ell)$ with the restriction $m < C(\ell)$ (and very often $m \ll C(\ell)$). To this purpose, let us assume that the words in the sample are randomly collected and that the realization of every word in the sample is independent from the rest.
Let $Q$ be a random variable that counts the number of different words in the sample. It is clear that $1 \leq Q \leq m$. Under the assumption that all the words are equally probable to be realized in the sample, the probability function for $Q$ can be calculated exactly by elementary combinatorics, $$f_Q(x) = \frac{\binom{m-1}{x-1}\binom{C}{x}}{\binom{C+m-1}{m-1}}, \label{eq:distributionQ}$$ and the expected value of $Q$ can be calculated straightforwardly to give, $$\mathbb{E}[Q] = \frac{Cm}{C + m -1}. \label{eq:expectedQ}$$ From the above we can see that, whenever the sample size $m$ is large enough compared to the complexity $C$ (the number of words of size $\ell$), the expected value of the random variable tends to the complexity $C$. The variance of $Q$ can also be calculated in closed form, giving $$\mbox{Var}[Q] = \frac{C m (C-1)(m-1) }{ (C+m-1)^2(C+ m -2)}. \label{eq:varQ}$$ From this expression we should notice that the variance of $Q$ is small whenever $C\gg m$, and actually it goes as $\mbox{Var}[Q] \approx m^2/C$. This means that the deviations of $Q$ from its expected value are of the order of $m/\sqrt{C}$. In this regime, the expected value of $Q$ is approximately $m - \frac{m(m-1)}{C}$. We should notice from these asymptotic expressions that there is a regime in which the typical deviation of $Q$ from its mean (of order $m/\sqrt{C}$) is small compared with the difference $m-\mathbb{E}[Q]$, namely, when $ \sqrt{C}/m \ll 1$. In this regime almost any realization of $Q$ results in a value which does not deviate significantly from $\mathbb{E}[Q] \approx m - \frac{m(m-1)}{C} $ due to “random fluctuations”. The latter is important since, as we can appreciate, it carries information about the complexity, which is in this case unknown.
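As a quick numerical check (our illustration), the closed forms in Eqs. \[eq:expectedQ\] and \[eq:varQ\] can be evaluated and compared against the asymptotic expressions valid for $C \gg m$:

```python
def expected_Q(C, m):
    # Eq. (expectedQ): E[Q] = C m / (C + m - 1)
    return C * m / (C + m - 1)

def var_Q(C, m):
    # Eq. (varQ)
    return C * m * (C - 1) * (m - 1) / ((C + m - 1) ** 2 * (C + m - 2))

C, m = 10**6, 10**3  # regime C >> m

# Asymptotics for C >> m: E[Q] ~ m - m(m-1)/C and Var[Q] ~ m^2/C.
assert abs(expected_Q(C, m) - (m - m * (m - 1) / C)) < 0.01
assert abs(var_Q(C, m) - m**2 / C) < 0.01 * (m**2 / C)
```

Here $m/\sqrt{C} = 1$, matching the standard deviation $\sqrt{\mbox{Var}[Q]} \approx 1$ returned by the exact formula.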
From this reasoning we propose the following estimator for the symbolic complexity $C$, $$K = \frac{m Q}{ m+1 - Q}. \label{eq:estimator}$$ A few calculations show that the expected value of $K$ is given by $$\mathbb{E}[K] = C + \frac{m^2-C^2}{m}\mathbb{P}(\{Q = m\}),$$ from which it is easy to see that the proposed estimator $K$ is unbiased if $m>C$. We can see that, in the case in which the probability that all the words in the sample are different is small, any realization of $K$ lies near $C$. Now, to implement this estimator to calculate the complexity we need to state how to meet the conditions imposed for the validity of the distribution given in Eq. (\[eq:distributionQ\]). We have to satisfy two main conditions: ($i$) that the words obtained in the sample are independent, and ($ii$) that words of the same length have equal probability to occur. Let us assume that $\mathbf{g}$ is a symbolic sequence of length $N$ obtained from some dynamical system. The orbit under the shift mapping generated by $\mathbf{g}$ can be written as $\mathcal{O}(\mathbf{g}) = \{ \mathbf{g}, \sigma(\mathbf{g}), \sigma^2(\mathbf{g}), \dots, \sigma^{N-1}(\mathbf{g}) \} $. A sample of words of length $\ell$ can be obtained from each point in the orbit by taking the first $\ell$ symbols. However, it is clear that the words obtained in this way are not independent. This is due to the correlations between words generated by the overlapping that occurs when shifting to obtain the points in the orbit, and by the probability measure naturally present in the system, which causes correlations even when two sampled words do not overlap. Thus, the sample should be taken from the orbit in such a way that the words are separated as much as possible along the orbit. Using this criterion we estimated the complexity for well-known symbolic dynamical systems.
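Substituting the expected value of $Q$ into the estimator gives an exact closed form, $K(\mathbb{E}[Q]) = Cm^2/(m^2+C-1)$, so the estimator recovers $C$ up to a relative error of order $C/m^2$ even when $m \ll C$. A minimal sketch of this observation (ours, with illustrative values of $C$ and $m$):

```python
def estimator_K(Q, m):
    """Complexity estimator K = m Q / (m + 1 - Q)."""
    return m * Q / (m + 1 - Q)

def expected_Q(C, m):
    """Expected number of distinct words, E[Q] = C m / (C + m - 1)."""
    return C * m / (C + m - 1)

# Evaluating K at Q = E[Q] gives exactly K = C m^2 / (m^2 + C - 1),
# so K ~ C whenever m^2 >> C, even though m itself may be << C.
for C, m in [(10**6, 10**5), (10**4, 500)]:
    K = estimator_K(expected_Q(C, m), m)
    rel_err = abs(K - C) / C     # ~ (C - 1) / (m^2 + C - 1)
```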
First we produced long sequences of $6\times 10^6$ symbols from three different systems: the full shift (random sequences), the Fibonacci shift (sequences with the forbidden word $\mathrm{00}$), and the run-length limited shift (a *sofic* shift, with a countably infinite set of forbidden words [@lind1995introduction]). In every case the sequences were produced at random with the probability measure of maximal entropy. Then we took a sample of $10^5$ words of lengths ranging from $1$ to $50$ from every sequence. Sampling in this way we have a separation of $10$ symbols between neighboring words of the maximal length analyzed, $\ell = 50$. In Fig. \[fig:Complexity\_Shifts\] we show the values obtained for the random variables $Q$ and $K$ as functions of $\ell$ using the sample described above. From this figure we see that the values obtained for $Q$ as a function of $\ell$ exhibit a “kink”, which has been previously observed in Refs. [@koslicki2011topological; @colosimo2000special]. This behavior is consistent with that predicted by Eq. (\[eq:expectedQ\]), which can be calculated for these cases since we know the exact value of $C(\ell)$. Then, from the values of $Q$ we can obtain the values of $K$ which, as stated in Eq. (\[eq:estimator\]), gives an estimate of $C(\ell)$. It is known that $C(\ell)$ behaves exponentially in all the cases analyzed, i.e., $C (\ell) \asymp \exp( h \ell)$, where $h$ is the topological entropy. The respective topological entropies are $h_{\mathrm{RLL}} = \ln(t^*)\approx 0.382$ for the (1,3)-run-length limited shift (where $t^*$ is the largest solution of $t^4-t^2 -t -1 = 0$), $h_{\mathrm{fib}} = \ln(\phi) \approx 0.481$ for the Fibonacci shift (where $\phi $ is the golden ratio), and $h_{\mathrm{rand} } = \ln(2) \approx 0.693$ for the full shift [@lind1995introduction].
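These reference values can be reproduced with a few lines of code. The following sketch (ours) counts the allowed words of the Fibonacci shift by brute force, recovers $h_{\mathrm{fib}} = \ln\phi$ from the growth of $C(\ell)$, and solves $t^4 - t^2 - t - 1 = 0$ by bisection:

```python
from itertools import product
from math import log, sqrt

def complexity(forbidden, max_len):
    """Brute-force complexity C(ell): the number of binary words of each
    length 1..max_len avoiding every block in `forbidden` (only feasible
    for small ell, but enough to see the exponential growth)."""
    counts = []
    for ell in range(1, max_len + 1):
        counts.append(sum(1 for w in map(''.join, product('01', repeat=ell))
                          if not any(f in w for f in forbidden)))
    return counts

# Fibonacci shift (forbidden word '00'): C(ell) are Fibonacci numbers,
# so C(ell) ~ phi^ell and the topological entropy is h = ln(phi).
C_fib = complexity(['00'], 12)
h_fib = log(C_fib[-1] / C_fib[-2])   # ratio of consecutive counts -> ln(phi)

# (1,3)-RLL shift: t* is the largest root of t^4 - t^2 - t - 1 in [1, 2]
lo, hi = 1.0, 2.0
for _ in range(200):                  # bisection
    mid = 0.5 * (lo + hi)
    if mid**4 - mid**2 - mid - 1 > 0:
        hi = mid
    else:
        lo = mid
h_rll = log(lo)                       # ~0.382
h_rand = log(2)                       # full shift, ~0.693
```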
From the curves for $K$ shown in the referred figure, we obtained the corresponding estimates for the topological entropies by means of the least squares method: $\hat h_{\mathrm{RLL} } = 0.384 \pm 0.0012$, $\hat h_{\mathrm{fib} } = 0.461 \pm 0.0014$, and $\hat h_{\mathrm{rand} } = 0.721 \pm 0.0025$. From these results we observe that the best estimate corresponds to the system with the lowest topological entropy. This is clear from Fig. \[fig:Complexity\_Shifts\] since, due to the large number of words (especially in the full shift), the random variable $Q$ “saturates” rapidly, i.e., above some $\ell^*$ the expected value of $Q$ differs by less than one from the sample size $m$. The reason we used coding DNA to estimate the complexity is that correlations in this kind of genomic sequence are practically absent in coding regions in the range 10-100 bp [@buldyrev1995long; @arneodo1998nucleotide; @arneodo2011multi]. This means that our hypothesis that the words in the sample are independent is at least fulfilled in the sense of correlations. Moreover, when we observe the behavior of the complexity in other regions of the genome (see Fig. \[fig:complex\_human\_chimp\] for the complexity functions of several chromosomes of *Homo sapiens* and *Pan troglodytes*), we find that the estimated complexity does not vary significantly from chromosome to chromosome. This also indicates that, at least on average, the coding regions seem to have a well-defined complexity and therefore a definite *grammatical* structure in the sense of symbolic dynamics. In conclusion, we have proposed an estimator for the complexity function of symbolic dynamical systems. We tested this estimator by calculating the complexity function of several symbolic dynamical systems whose complexity functions are well known.
Using this estimator we obtained the symbolic complexity for nucleotide sequences of coding regions of genomes of four species belonging to the *Hominidae* family. This study gave us information about the structure of the genome, which seems to be ubiquitous at least for all the species analyzed here. The main characteristic we found is that the complexity function behaves as a mixture of an exponential behavior (for words in the range $1$-$10$ bp) and a linear one (for words in the range $11$-$50$ bp). This behavior is in some way consistent with several proposed evolution models that include a substitutive process, since linear complexity (which we observe for large genomic words) is a common characteristic of substitutive dynamical systems [@allouche1994complexite; @ferenczi1999complexity]. Moreover, the fact that the complexity does not vary significantly from chromosome to chromosome suggests that there may exist a global architecture (a *language* in the symbolic dynamics sense) for the coding region of the genome. It would be interesting to look for the (biological or dynamical) mechanisms responsible for the structure we found in the genomes of the *Hominidae* family, and to ask whether this structure is also present in the genomes of other organisms. In particular, we found that the symbolic complexity correlates with the phylogenetic trees reported for these species. We believe that analyzing the common features of the symbolic complexity of several species could potentially help in developing whole-genome-based phylogenetic reconstruction techniques. This work was supported by CONACyT through grant no. CB-2012-01-183358. R.S.-G. thanks F. Vázquez for carefully reading the manuscript and for giving useful comments on this work. [^1]: The *full shift* is the set of all the infinite, or semi-infinite, sequences of symbols.
--- abstract: 'We first extract the binding energy $\bar \Lambda$ and decay constants of the D wave heavy meson doublets $(1^{-},2^{-})$ and $(2^{-},3^{-})$ with the QCD sum rule in the leading order of heavy quark effective theory. Then we study their pionic $(\pi, K, \eta)$ couplings using the light cone sum rule, from which the parameter $\bar \Lambda$ can also be extracted. We then calculate the pionic decay widths of the strange/non-strange D wave heavy $D/B$ mesons and discuss the possible candidates for the D wave charm-strange mesons. Further experimental information, such as the ratio between the $D_s\eta$ and $DK$ modes, will be very useful to distinguish various assignments for $D_{sJ}(2860, 2715)$.' author: - Wei Wei - Xiang Liu - 'Shi-Lin Zhu' title: D Wave Heavy Mesons --- Introduction ============ Recently BaBar reported two new $D_{s}$ states, $D_{sJ}(2860)$ and $D_{sJ}(2690)$, in the $DK$ channel. Their widths are $\Gamma=48\pm7\pm10 $ MeV and $\Gamma=112\pm7\pm36 $ MeV respectively [@babar]. For $D_{sJ}(2860)$ the significance of the signal is $5\sigma$ in the $D^0K^+$ channel and $2.8\sigma$ in the $D^+K_s^0$ channel. Belle observed another state, $D_{sJ}(2715)$ with $J^P=1^{-}$, in $B^+\rightarrow \bar{D^0}D_{sJ} \rightarrow\bar{D^0}D^0K^+$ [@belle]. Its width is $\Gamma=115\pm20$ MeV. No $D^{*}K$ or $D_s\eta$ mode has been detected for any of them. The $J^P$ of $D_{sJ}(2860)$ and $D_{sJ}(2690)$ can be $0^+,1^-, 2^+,3^-,\cdots$ since they decay to two pseudoscalar mesons. $D_{sJ}(2860)$ was proposed as the first radial excitation of $D_{sJ}(2317)$ based on a coupled channel model [@rupp] or an improved potential model [@close]. Colangelo et al. considered $D_{sJ}(2860)$ as the D wave $3^{-}$ state [@colangelo]. The mass of $D_{sJ}(2715)$ or $D_{sJ}(2690)$ is consistent with the potential model prediction for the radially excited $c\bar{s}$ $2^3S_1$ state [@isgur; @close].
Based on chiral symmetry considerations, a D wave $1^-$ state with mass $M=2720 $ MeV is also predicted if the $D_{sJ}(2536)$ is taken as the P wave $1^+$ state [@nowak]. The strong decay widths for these states are discussed using the $^{3}P_{0}$ model in [@bozhang]. The heavy quark effective theory (HQET) provides a systematic expansion in terms of $1/ m_Q$ for hadrons containing a single heavy quark, where $m_Q$ is the heavy quark mass [@grinstein]. In HQET the heavy mesons can be grouped into doublets with definite $j_{\ell}^P$ since the angular momentum of the light components, $j_{\ell}$, is a good quantum number in the $m_Q\to\infty$ limit. They are the $\frac{1}{2}^-$ doublet $(0^-, 1^-)$ with orbital angular momentum $L=0$, and the $\frac{1}{2}^+$ doublet $(0^+,1^+)$ and $\frac{3}{2}^+$ doublet $(1^+,2^+)$ with $L=1$. For $L=2$ there are the $(1^{-},2^{-})$ and $(2^{-},3^{-})$ doublets with $j_{\ell}^P=\frac{3}{2}^-$ and $\frac{5}{2}^-$ respectively. States with the same $J^P$, such as the two $1^-$ and two $1^+$ states, can be distinguished in the $m_Q\to\infty$ limit, which is one of the advantages of working in HQET. The D wave heavy mesons (($B_1^{*'} ,B_2^{*}$), ($B_2^{*'}, B_3$)) were considered in the quark model [@quark; @model]. The semileptonic decay of the $B$ meson to the D wave doublets was calculated using the three-point QCD sum rule [@colangelo2]. The decay properties of the heavy mesons up to $L=2$ were calculated using the $^{3}P_{0}$ model in [@close2]. The light cone QCD sum rule (LCQSR) has proven very useful in extracting hadronic form factors and coupling constants in the past decade [@light; @cone]. Unlike the traditional SVZ sum rule [@svz], it is based on the twist expansion on the light cone. The strong couplings and semileptonic decay form factors of the low lying heavy mesons have been calculated using this method both in full QCD and in HQET [@lc]. In this paper we first extract the mass parameters and decay constants of the D wave doublets in section \[mass\].
Then we study the strong couplings of the D wave heavy doublets with the light pseudoscalar mesons $\pi$, $K$ and $\eta$ in section \[lcqsr\]. We work in the framework of LCQSR in the leading order of HQET. We present our numerical analysis in section \[numerical\]. In section \[width\] we calculate the strong decay widths to light hadrons and discuss the possible D wave charm-strange heavy meson candidates. The results are summarized in section \[summary\]. Two-point QCD sum rules {#mass} ======================= The proper interpolating currents $J_{j,P,j_{\ell}}^{\alpha_1\cdots\alpha_j}$ for the states with the quantum numbers $j$, $P$, $j_{\ell}$ in HQET were given in [@huang], with $j$ the total spin of the heavy meson, $P$ the parity and $j_{\ell}$ the angular momentum of the light components. In the $m_Q\to\infty$ limit, the currents satisfy the following conditions $$\begin{aligned} \label{decay} &&\langle 0|J_{j,P,j_{\ell}}^{\alpha_1\cdots\alpha_j}(0)|j',P',j_{\ell}^{'}\rangle= f_{P,j_{\ell}}\delta_{jj'} \delta_{PP'}\delta_{j_{\ell}j_{\ell}^{'}}\eta^{\alpha_1\cdots\alpha_j}\;,\nonumber\\ \label{corr}&&i\:\langle 0|T\left (J_{j,P,j_{\ell}}^{\alpha_1\cdots\alpha_j}(x)J_{j',P',j_{\ell}'}^{\dag \beta_1\cdots\beta_{j'}}(0)\right )|0\rangle= \delta_{jj'}\delta_{PP'}\delta_{j_{\ell}j_{\ell}'}\nonumber\\ &&\times\:(-1)^j\:{\cal S}\:g_t^{\alpha_1\beta_1}\cdots g_t^{\alpha_j\beta_j} \int \,dt\delta(x-vt)\:\Pi_{P,j_{\ell}}(x)\;,\nonumber\\\end{aligned}$$ where $\eta^{\alpha_1\cdots\alpha_j}$ is the polarization tensor for the spin $j$ state and $v$ denotes the velocity of the heavy quark. The transverse metric tensor is $g_t^{\alpha\beta}=g^{\alpha\beta}-v^{\alpha}v^{\beta}$. ${\cal S}$ denotes symmetrizing the indices and subtracting the trace terms separately in the sets $(\alpha_1\cdots\alpha_j)$ and $(\beta_1\cdots\beta_{j})$. $f_{P,j_{\ell}}$ is a constant and $\Pi_{P,j_{\ell}}$ is a function of $x$; both depend only on $P$ and $ j_{\ell}$.
The interpolating currents are [@huang] $$\begin{aligned} \label{curr1} &&J^{\dag\alpha}_{1,-,{3\over 2}}=\sqrt{\frac{3}{4}}\:\bar h_v(-i)\left( {\cal D}_t^{\alpha}-\frac{1}{3}\gamma_t^{\alpha}{\cal {D}\!\!\!\slash}_t\right)q\;,\\ \label{curr2} &&J^{\dag\alpha_1,\alpha_2}_{2,-,{3\over 2}}=\sqrt{\frac{1}{2}}\:T^{\alpha_1,\alpha_2;\;\beta_1,\beta_2}\bar h_v (-i)\nonumber\\ &&\qquad\quad\times\:\left({\cal D}_{t \beta_1}{\cal D}_{t \beta_2}-\frac{2}{5}{\cal D}_{t \beta_1} \gamma_{t \beta_2} {\cal D\!\!\!\slash}_t\right)q\;,\\ \label{curr3} &&J^{\dag\alpha_1,\alpha_2}_{2,-,{5\over 2}}=-\sqrt{\frac{5}{6}}\:T^{\alpha_1,\alpha_2;\;\beta_1,\beta_2}\bar h_v \gamma^5 \nonumber\\ &&\qquad\quad\times\:\left({\cal D}_{t \beta_1}{\cal D}_{t \beta_2}-\frac{2}{5}{\cal D}_{t\beta_1} \gamma_{t\beta_2} {\cal D\!\!\!\slash}_t\right)q\;,\\ \label{curr4}&&J^{\dag\alpha,\beta,\lambda}_{3,-,{5\over 2}}=-\sqrt{\frac{1}{2}}\:T^{\alpha,\beta,\lambda;\;\mu,\nu,\sigma}\bar h_v \gamma_{t\mu}{\cal D}_{t\nu}{\cal D }_{t\sigma} q\;,\\ \label{curr5} &&J^{\dag\alpha}_{1,-,{1\over 2}}=\sqrt{\frac{1}{2}}\:\bar h_v\gamma_t^{\alpha} q\;,\hspace{0.4cm} J^{\dag}_{0,-,{1\over 2}}=\sqrt{\frac{1}{2}}\:\bar h_v\gamma_5q\;,\\ \label{curr6} &&J^{\dag\alpha}_{1,+,{1\over 2}}={\sqrt{\frac{1}{2}}}\:\bar h_v\gamma^5\gamma^{\alpha}_tq\;,\\ \label{curr7} &&J^{\dag\alpha}_{1,+,{3\over 2}}=\sqrt{\frac{3}{4}}\:\bar h_v\gamma^5(-i)\left( {\cal D}_t^{\alpha}-\frac{1}{3}\gamma_t^{\alpha}{\cal D\!\!\!\slash}_t\right)q\;,\\ \label{curr8} &&J^{\dag\alpha_1,\alpha_2}_{2,+,{3\over 2}}=\sqrt{\frac{1}{2}}\:\bar h_v \frac{(-i)}{2}\nonumber\\ &&\qquad\quad\times\:\left(\gamma_t^{\alpha_1}{\cal D}_t^{\alpha_2}+ \gamma_t^{\alpha_2}{\cal D}_t^{\alpha_1}-{2\over 3}g_t^{\alpha_1\alpha_2} {\cal D\!\!\!\slash}_t\right)q\;,\end{aligned}$$ where $h_v$ is the heavy quark field in HQET and $\gamma_{t\mu}=\gamma_\mu-v_\mu {v}\!\!\!\slash$.
The definitions of $T^{\alpha,\beta;\;\mu,\nu}$ and $T^{\alpha,\beta,\lambda;\;\mu,\nu,\sigma}$ are given in Appendix \[appendix1\]. We first study the two-point sum rules for the ($1^-, 2^-$) and ($2^-, 3^-$) doublets. We consider the following correlation functions: [$$\begin{aligned} &&\label{correlator 1} i \int d^4x\: e^{ikx}\langle 0|T(J^{\alpha}_{1,-,{3\over 2}}(x)J^{\beta}_{1,-,{3\over 2}})|0\rangle=-g_t^{\alpha\beta}\Pi_{-,{3\over 2}}(\omega)\;, \nonumber\\ ~\\ && \label{correlator 2}i \int d^4 x e^{ikx}\langle 0|T(J^{\alpha_1\alpha_2}_{2,-,{5\over 2}}(x)J^{\beta_1\beta_2}_{2,-,{5\over 2}})|0\rangle=\frac{1}{2}(g_t^{\alpha_1\beta_1}g_t^{\alpha_2\beta_2} \nonumber\\ &&\qquad +g_t^{\alpha_1\beta_2}g_t^{\alpha_2\beta_1}-\frac{2}{3}g_t^{\alpha_1\alpha_2}g_t^{\beta_1\beta_2})\Pi_{-,{5\over 2}}(\omega)\;,\end{aligned}$$]{} where $\omega=2v\cdot k$. At the hadron level, $$\Pi_{P,j_{\ell}}=\frac{f^2_{P,j_{\ell}}}{2\bar{\Lambda}_{P,j_{\ell}}-\omega} +\cdots\;.$$ At the quark-gluon level it can be calculated with the leading order Lagrangian in HQET. Invoking quark-hadron duality and making the Borel transformation, we get the following sum rules from eqs. (\[correlator 1\]) and (\[correlator 2\]) $$\begin{aligned} &&f_{-,{3\over2}}^2\exp\Big[{-{2\bar\Lambda_{-,{3\over2}}\over T}}\Big]\nonumber\\&&={1\over 2^6\pi^2}\int_0^{\omega_c}\omega^4e^{-\omega/{T}}d\omega +\frac{1}{16}\:m_0^2\:\langle\bar qq\rangle -{1\over 2^5}\langle{\alpha_s\over\pi}G^2\rangle T,\nonumber\\\label{form1} &&f_{-,{5\over2}}^2\exp\Big[{-{2\bar\Lambda_{-,{5\over2}}\over T}}\Big]\nonumber\\&&={1\over 2^7\cdot 5\pi^2}\int_0^{\omega_c}\omega^6e^{-\omega/{T}}d\omega+{1\over 120}\langle{\alpha_s\over\pi}G^2\rangle T^3\;\label{form2}.\end{aligned}$$ Here $m_0^2\,\langle\bar qq\rangle=\langle\bar qg\sigma_{\mu\nu}G^{\mu\nu}q\rangle$. Only terms of the lowest order in $\alpha_s$ and operators with dimension less than six have been included.
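As an independent numerical sanity check (ours, not part of the original analysis), the sum rule (\[form1\]) can be evaluated with the condensate values $\langle\bar qq\rangle=-(0.24~\mbox{GeV})^3$, $\langle\alpha_s GG\rangle=0.038~\mbox{GeV}^4$ and $m_0^2=0.8~\mbox{GeV}^2$ at a representative point $T=0.9$ GeV, $\omega_c=3.4$ GeV of the stability window; differentiating the logarithm of the right-hand side with respect to $T$ eliminates $f^2$ and isolates $\bar\Lambda_{-,3/2}$:

```python
from math import pi, exp, log

QQ = -(0.24) ** 3       # <qbar q> condensate, GeV^3
G2 = 0.038 / pi         # <(alpha_s/pi) G^2>, GeV^4
M0SQ = 0.8              # m_0^2, GeV^2

def rhs(T, omega_c=3.4, n=4000):
    """Right-hand side of the Borel sum rule f^2 exp(-2*Lambda_bar/T)
    for the 3/2^- doublet: perturbative term plus condensates."""
    d = omega_c / n
    pert = sum((i * d) ** 4 * exp(-(i * d) / T) * d for i in range(1, n + 1))
    return pert / (2 ** 6 * pi ** 2) + M0SQ * QQ / 16 - G2 * T / 2 ** 5

def lambda_bar(T, eps=1e-4):
    """Lambda_bar = (T^2/2) d ln(rhs)/dT, which eliminates f^2."""
    return 0.5 * T ** 2 * (log(rhs(T + eps)) - log(rhs(T - eps))) / (2 * eps)

T = 0.9                                   # GeV, inside the stability window
lam = lambda_bar(T)                       # ~1.4 GeV
f32 = (rhs(T) * exp(2 * lam / T)) ** 0.5  # ~0.39 GeV^(5/2)
```

With these inputs the sketch reproduces values compatible with the quoted $\bar\Lambda_{-,3/2}\approx 1.42$ GeV and $f_{-,3/2}\approx 0.39$ GeV$^{5/2}$.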
For the ${5\over 2}^-$ doublet there is no contribution from the mixed condensate due to the higher derivatives. We use the following values for the QCD parameters: $\langle\bar qq\rangle=-(0.24 ~\mbox{GeV})^3$, $\langle\alpha_s GG\rangle=0.038 ~\mbox{GeV}^4$, $ m_0^2=0.8 ~\mbox{GeV}^2$. Requiring that the high-order power corrections are less than $30\%$ of the perturbation term without the cutoff $\omega_c$ and that the contribution of the pole term is larger than $35\%$ of the continuum contribution given by the perturbation integral in the region $\omega > \omega_c$, we arrive at the stability region of the sum rules: $\omega_c=3.2-3.6$ GeV, $T=0.8-1.0$ GeV. The results for the $\bar\Lambda$’s are $$\begin{aligned} \label{result1} \bar\Lambda_{-,{3\over2}}&=&1.42 \pm 0.08 ~~\mbox{GeV}\;,\\ \bar\Lambda_{-,{5\over2}}&=&1.38 \pm 0.09 ~~\mbox{GeV}\;.\end{aligned}$$ The errors are due to the variation of $T$ and the uncertainty in $\omega_c$. In Figs. \[fig1\] and \[fig2\], we show the variations of the masses with $T$ for different $\omega_c$. The masses of the D wave mesons in the quark model are around $2.8~ \mbox{GeV}$ for the $D$ meson and $6 ~\mbox{GeV}$ for the $B$ meson [@quark; @model]. The $1/m_Q$ correction may be quite important for D wave heavy mesons, which will be investigated in a subsequent work. In the following sections we also need the values of the $f$’s: $$\begin{aligned} f_{-, {3\over2}}&=&0.39\pm 0.03 ~\mbox{GeV}^{5/2}\;,\\ f_{-, {5\over2}}&=&0.33\pm 0.04 ~\mbox{GeV}^{7/2}\;.\end{aligned}$$ Sum rules for decay amplitudes {#lcqsr} =============================== Now let us consider the strong couplings of the D wave doublets ${3\over 2}^-$ and ${5 \over 2}^-$ with light hadrons. When the light quark is a $u$ (or $d$) quark, the D wave heavy meson decays to a pion, while when the light quark is a strange quark, it can decay either to $BK$ or to $B_s\eta$. In the following we will generically denote the light meson as the pion and discuss all three cases.
The strong decay amplitudes for the D wave $1^-$ and $3^-$ states to the ground doublet $(0^-,1^-)$ are $$\begin{aligned} &&M(B_1^{*'}\rightarrow B\pi)=I \epsilon^{\mu}q_{t\mu}g(B_1^{*'}B)\;,\nonumber\\ &&M(B_1^{*'}\rightarrow B^{*}\pi) =I\:i\epsilon^{\mu\nu\rho\sigma}\epsilon_{\mu}\epsilon^{'*}_{\nu}v_{\rho}q_{t\sigma}g(B_1^{*'}B^{*})\;,\nonumber\\ &&M(B_3\rightarrow B\pi) =I\:\epsilon^{\alpha\beta\lambda}(q_{t\alpha}q_{t\beta}q_{t\lambda}- \frac{1}{6}q^2_t(g_{t\alpha\beta}q_{t\lambda} \nonumber\\ &&\qquad\qquad\qquad\qquad + g_{t\alpha\lambda}q_{t\beta}+ {4\over 3}g_{t\beta\lambda}q_{t\alpha} ))g(B_3B)\;,\nonumber\\ &&M(B_3\rightarrow B^{*}\pi) =I\:i\epsilon^{\mu\nu\sigma\alpha}\:\epsilon_{\alpha\beta\lambda}\:\epsilon^{*}_{\mu}\:v_{\sigma}\;,\nonumber\\ &&\qquad\qquad\qquad\qquad \times \bigg[q_{t\nu}\:q_{t}^{\beta}\:q_{t}^{\lambda}-\frac{1}{6}q^2_t\bigg(g_{t\nu}^{\beta}q_t^{\lambda} + g_{t\nu}^{\lambda}q_t^{\beta}\nonumber\\ &&\qquad\qquad\qquad\qquad + {4\over 3}g_t^{\beta\lambda}q_{t\nu}\bigg)\bigg]\:g(B_3B^{*})\;,\label{amp}\end{aligned}$$ where $\epsilon^{\alpha\beta\lambda}$, $\epsilon^{\mu}$, and $\epsilon^{'}_{\nu}$ are the polarizations of the D wave $3^-$ state, the D wave $1^-$ state and the ground state $1^-$ meson, respectively. $I=1, \frac{1}{\sqrt{2}}$ for charged and neutral pions respectively, while for the $K$ and $\eta$ mesons it equals one. $g(B_1^{*'}B)$ etc. are the coupling constants in HQET and are related to those in full QCD by $$g^{\mbox{\tiny{full QCD}}}(B_1^{*'}B)=\sqrt{m_{B_1^{*'}}m_{B}}\;g^{\mbox{\tiny{HQET}}}(B_1^{*'}B)\;.$$ Because of heavy quark symmetry, the coupling constants in eq.
(\[amp\]) satisfy $$\begin{aligned} g(B_1^{*'}B)&=&g(B_1^{*'}B^{*})\;,\nonumber\\ g(B_3B)&=&g(B_3B^{*})\;.\end{aligned}$$ In order to derive the sum rules for the coupling constants we consider the correlators $$\begin{aligned} && \int d^4x\;e^{ik\cdot x}\langle\pi(q)|T\left(J^{\alpha}_{1,-,\frac{3}{2}}(x) J^{\dagger}_{0,-,\frac{1}{2}}(0)\right)|0\rangle = q_t^{\alpha}I\;G_{1}(\omega,\omega')\;,\label{c1}\\ && \int d^4x\;e^{ik\cdot x}\langle\pi(q)|T\left(J^{\alpha}_{1,-,\frac{3}{2}}(x) J^{\dagger \beta}_{1,+,\frac{1}{2}}(0)\right)|0\rangle = (q_t^{\alpha}q_t^{\beta}-\frac{1}{3}g_t^{\alpha\beta}q_t^2)I\;G_{2}(\omega,\omega')\;,\label{c2}\\ && \int d^4x\;e^{ik\cdot x}\langle\pi(q)|T\left(J^{\beta}_{1,+,\frac{3}{2}}(x) J^{\dagger\alpha}_{1,-,\frac{3}{2}}(0)\right)|0\rangle = (q_t^{\alpha}q_t^{\beta}-\frac{1}{3}g_t^{\alpha\beta}q_t^2)I\;G_{3}^d(\omega,\omega') +g_t^{\alpha\beta}I G_{3}^s\;,\label{c3}\\ && \int d^4x\;e^{ik\cdot x}\langle\pi(q)|T\left(J^{\alpha}_{1,-,\frac{3}{2}}(x) J^{\dagger\alpha_1\alpha_2}_{2,-,\frac{3}{2}}(0)\right)|0\rangle =\Big[\frac{1}{2}(g_t^{\alpha\alpha_1}q_t^{\alpha_2}+g_t^{\alpha\alpha_2}q_t^{\alpha_1}) -\frac{1}{3}g_t^{\alpha_1\alpha_2}q_t^{\alpha}\Big]I \;G_{4}^p(\omega,\omega')\nonumber\\ && \qquad \qquad \qquad \qquad \qquad \qquad\qquad \qquad +\Big[ q_t^{\alpha}q_t^{\alpha_1}q_t^{\alpha_2}- \frac{1}{6}q^2_t(g_t^{\alpha\alpha_1}q_t^{\alpha_2} + g_t^{\alpha\alpha_2}q_t^{\alpha_1} + {4\over 3}g_t^{\alpha_1\alpha_2}q_t^{\alpha} )\Big]I\; G_{4}^f(\omega,\omega')\;,\end{aligned}$$ for the $j_{\ell}^P=\frac{3}{2}^-$ doublet; $$\begin{aligned} && \int d^4x\;e^{ik\cdot x}\langle\pi(q)|T\left(J^{\alpha\beta\lambda}_{3,-,\frac{5}{2}}(x) J^{\dagger}_{0,-,\frac{1}{2}}(0)\right)|0\rangle = \Big[q_t^{\alpha}q_t^{\beta}q_t^{\lambda}- \frac{1}{6}q^2_t(g_t^{\alpha\beta}q_t^{\lambda} + g_t^{\alpha\lambda}q_t^{\beta} + {4\over 3}g_t^{\beta\lambda}q_t^{\alpha} )\Big ]I\;G_{5}(\omega,\omega')\;, \\ && \int d^4x\;e^{ik\cdot
x}\langle\pi(q)|T\left(J^{\alpha\beta}_{2,-,\frac{5}{2}}(x) J^{\dagger }_{0,+,\frac{1}{2}}(0)\right)|0\rangle = (q_t^{\alpha}q_t^{\beta}-\frac{1}{3}g_t^{\alpha\beta}q_t^2)I\;G_{6}(\omega,\omega')\;,\label{c6} \end{aligned}$$ $$\begin{aligned} &&\int d^4x\;e^{ik\cdot x}\langle\pi(q)|T\left(J^{\alpha\beta}_{2,-,\frac{5}{2}}(x) J^{\dagger \gamma}_{1,+,\frac{3}{2}}(0)\right)|0\rangle = \frac{1}{2}i(\epsilon^{\beta\gamma\mu\nu}q_t^{\alpha}+\epsilon^{\alpha\gamma\mu\nu}q_t^{\beta}) q_{t\mu}v_{\nu}I \;G_{7}(\omega,\omega')\;,\label{c7}\end{aligned}$$ $$\begin{aligned} &&\int d^4x\;e^{ik\cdot x}\langle\pi(q)|T\left(J^{\alpha\beta}_{2,-,\frac{5}{2}}(x) J^{\dagger\lambda}_{1,-,\frac{3}{2}}(0)\right)|0\rangle =\Big[\frac{1}{2}(g_t^{\alpha\lambda}q_t^{\beta} + g_t^{\beta\lambda}q_t^{\alpha})-\frac{1}{3}g_t^{\alpha\beta}q_t^{\lambda}\Big]I\;G^p_{8}(\omega,\omega')\nonumber\\ &&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\qquad +\Big[q_t^{\alpha}q_t^{\beta}q_t^{\lambda}- \frac{1}{6}q^2_t(g_t^{\alpha\lambda}q_t^{\beta} + g_t^{\beta\lambda}q_t^{\alpha} + {4\over 3}g_t^{\alpha\beta}q_t^{\lambda} )\Big] I\;G^f_{8}(\omega,\omega')\;,\label{c8}\\ && \int d^4x\;e^{ik\cdot x}\langle\pi(q)|T\left(J^{\alpha\beta}_{2,-,\frac{5}{2}}(x) J^{\dagger\mu\nu\sigma}_{3,-,\frac{5}{2}}(0)\right)|0\rangle =T^g I\; G_{9}^g(\omega,\omega')+T^{f} I\;G_{9}^{f }(\omega,\omega')+ T^{p1} I\; G_{9}^{p1}(\omega,\omega')\nonumber\\ &&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\qquad + T^{p2}I\; G_{9}^{p2}(\omega,\omega')\;,\label{c9}\end{aligned}$$ for the $j_{\ell}^P=\frac{5}{2}^-$ doublet, where $k^{\prime}=k+q$, $\omega=2v\cdot k$, $\omega^{\prime}=2v\cdot k^{\prime}$. Note that the two P wave couplings between $({2,-,\frac{5}{2}})$ and $({3,-,\frac{5}{2}})$ in eq. (\[c9\]) are not independent and satisfy the relation $g_9^{p2}=-\frac{1}{3}g_9^{p1}$. First let us consider the function $G_1(\omega,\omega^{\prime})$ in eq. (\[c1\]). 
As a function of two variables, it has the following pole terms from the double dispersion relation $$\begin{aligned} \label{pole} {f_{-,{1\over 2}}f_{-,{3\over 2}}g_1\over (2\bar\Lambda_{-,{1\over 2}} -\omega')(2\bar\Lambda_{-,{3\over 2}}-\omega)}+{c\over 2\bar\Lambda_{-,{1\over 2}} -\omega'}+{c'\over 2\bar\Lambda_{-,{3\over 2}}-\omega}\;,~~\nonumber\end{aligned}$$ where $f_{P,j_\ell}$ denotes the decay constant defined in eq. (\[decay\]). $\bar\Lambda_{P,j_\ell}=m_{P,j_\ell}-m_Q$. We calculate the correlator (\[c1\]) on the light-cone to the leading order of ${\cal O}(1/ m_Q)$. The expression for $G_1(\omega, \omega')$ reads $$\begin{aligned} \label{f1} &&-{\sqrt{6}\over 8}i \int_0^{\infty} dt \int dx e^{ikx} \delta (x-vt){\rm Tr} \Big[ (\mathcal{D}^t_\alpha -{1\over 3}\gamma^t_\alpha { \mathcal{D}\!\!\!\slash}^t )\nonumber\\ &&\qquad\qquad\qquad \times (1+{v\!\!\!\slash})\gamma_5 \langle \pi (q)|u(0) {\bar d}(x) |0\rangle \Big]\; .\end{aligned}$$ The pion (or $K$/$\eta$) distribution amplitudes are defined as the matrix elements of nonlocal operators between the vacuum and pion state. Up to twist four they are [@ball; @ball2]: $$\begin{aligned} &&\langle\pi(q)| {\bar d} (x) \gamma_{\mu} \gamma_5 u(0) |0\rangle=-i f_{\pi} q_{\mu} \int_0^1 du \; e^{iuqx}\Big[\varphi_{\pi}(u)\nonumber\\ &&\quad +\frac{1}{16}m_{\pi}^2x^2 A(u)\Big]-\frac{i}{2} f_\pi m_{\pi}^2 {x_\mu\over q x} \int_0^1 du \; e^{iuqx} B(u)\;,\nonumber\\ &&\langle\pi(q)| {\bar d} (x) i \gamma_5 u(0) |0\rangle = f_{\pi} \mu_{\pi} \int_0^1 du \; e^{iuqx} \varphi_P(u)\;, \nonumber\\ &&\langle\pi(q)| {\bar d} (x) \sigma_{\mu \nu} \gamma_5 u(0) |0\rangle ={i\over 6}(q_\mu x_\nu-q_\nu x_\mu) f_{\pi} \mu_{\pi}\nonumber\\ &&\qquad\qquad\qquad\qquad\times \int_0^1 du \; e^{iuqx} \varphi_\sigma(u)\;.\end{aligned}$$ The expressions for the light cone wave functions $\varphi_{\pi}(u)$ etc are presented in Appendix \[appendix2\] together with the relevant parameters for $\pi$, $K$ and $\eta$. Expressing eq. 
(\[f1\]) with the light cone wave functions, we get the expression for the correlation function at the quark-gluon level $$\begin{aligned} \label{q1} &&G_1(\omega, \omega')= -i{\sqrt{6}\over 12}f_\pi\int_0^{\infty} dt \int_0^1 du e^{i (1-u) {\omega t \over 2}} e^{i u {\omega' t \over 2}} u \nonumber\\ &&\quad\times\Big \{ {i \over t}[u\varphi_{\pi} (u)]'+{1\over 16}m_{\pi}^2[uA(u)]'+{1\over 2}B(u)\Big[{iu\over q\cdot v}\nonumber\\ &&\quad -{1\over (q\cdot v)^2t}\Big]+\mu_{\pi}u\varphi_p(u)+{1 \over 6}\mu_{\pi}\varphi_{\sigma}(u) \Big \} +\cdots\;.\end{aligned}$$ After performing a Wick rotation and the double Borel transformation with respect to the variables $\omega$ and $\omega'$, the single-pole terms in eq. (\[pole\]) are eliminated and we arrive at the following result: $$\begin{aligned} \label{g1} &&g_1 f_{-,{1\over 2} } f_{-, {3\over 2} }\nonumber\\&&={\sqrt{6}\over 6}f_{\pi} \exp\Big[{ { \bar\Lambda_{-,{1\over 2} } +\bar\Lambda_{-,{3\over 2} } \over T }}\Big] \bigg\{ \frac{1}{2}[u\varphi_{\pi} (u)]^{'}T^2 f_1\Big({\omega_c\over T}\Big)\nonumber\\ &&-\frac{1}{8}m_{\pi}^2[uA(u)]^{'}+m_{\pi}^2[G_1(u)+G_2(u)]\nonumber\\ &&-\mu_{\pi} [u\varphi_P (u)+\frac{1}{6}\varphi_{\sigma}(u)]T f_0\Big({\omega_c\over T}\Big)\bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ where $u_0={T_1\over T_1+T_2}$ and $T={T_1T_2\over T_1+T_2}$, with $T_1$, $T_2$ the Borel parameters, and $f_n(x)=1-e^{-x}\sum\limits_{k=0}^{n}{x^k\over k!}$. The factor $f_n$ is used to subtract the integral $\int_{\omega_c}^\infty s^n e^{-{s\over T}} ds$ as a contribution of the continuum. The sum rules we have obtained from the correlators (\[c1\])-(\[c9\]) are collected in Appendix \[appendix3\] together with the definitions of $G_1$ etc.
Numerical results {#numerical} ================= For the ground states and P wave heavy mesons, we will use [@braun; @slz]: $$\begin{aligned} \label{fvalue} &&\bar\Lambda_{-,{1\over2}}=0.5 ~\mbox{GeV}\;,\hspace{0.8cm} f_{-,{1\over2}}=0.25 ~\mbox{GeV}^{3/2}\;,\nonumber\\ &&\bar\Lambda_{+,{1\over2}}=0.85 ~\mbox{GeV}\;,\hspace{0.6cm} f_{+,{1\over2}}=0.36\pm 0.10 ~\mbox{GeV}^{1/2}\;,\nonumber\\ &&\bar\Lambda_{+,{3\over2}}=0.95 ~\mbox{GeV}\;,\hspace{0.6cm} f_{+,{3\over2}}=0.26\pm 0.06 ~\mbox{GeV}^{5/2}\;.\nonumber\end{aligned}$$ The mass parameters and decay constants for the D wave doublets have been obtained in section \[mass\] from the two-point sum rule: $$\begin{aligned} \label{mass constant} &&\bar\Lambda_{-,{3\over2}}=1.42 ~\mbox{GeV}\;,\hspace{0.2cm}f_{-,{3\over2}}=0.39\pm 0.03 ~\mbox{GeV}^{5/2}\;,\nonumber\\ &&\bar\Lambda_{-,{5\over2}}=1.38 ~\mbox{GeV}\;,\hspace{0.2cm}f_{-,{5\over2}}=0.33\pm 0.04 ~\mbox{GeV}^{7/2}\;.\nonumber\end{aligned}$$ We choose to work at the symmetric point $T_1 = T_2 = 2T$, i.e., $u_0 = 1/2$, as traditionally done in the literature [@lc]. The working region for $T$ can be obtained by requiring that the higher twist contribution is less than $30\%$ and the continuum contribution is less than $40 \%$ of the whole sum rule; we then get $\omega_c=3.2-3.6$ GeV and the working region $2.0<T<2.5$ GeV for eqs. (\[b1\]), (\[b2\]) and (\[b3\]) in Appendix \[appendix3\] and $1.2<T<2.0$ GeV for the others. The working regions for the first three sum rules are higher than those for the others because these sum rules have zero points between $1$ and $2$ GeV, and stability develops only for $T$ above $2$ GeV. From eq. (\[g1\]) the coupling reads $$\begin{aligned} &&g_{1\pi}f_{-,{1\over 2} } f_{-, {3\over 2} } =(0.17\pm 0.04)~\mbox{GeV}^{3}\;.\end{aligned}$$ We use the central values for the mass parameters, and the error is due to the variation of $T$ and the uncertainty of $\omega_c$. The central value corresponds to $T=1.6$ GeV and $\omega_c=3.4$ GeV.
There is cancellation between the twist 2 and twist 3 contributions in the sum rule. For D wave heavy mesons with a strange quark, the couplings can be obtained in the same way. Notice that in the $\eta$ case, $f_{\pi}$ should be replaced by $-{2 \over \sqrt{6}}f_{\eta}$ due to the quark components of the $\eta$ meson, where $f_{\eta}=0.16$ GeV is the decay constant of the $\eta$ meson. From eq. (\[g1\]) we can get the couplings between the ground state doublet and the D wave doublet with a strange quark, $$\begin{aligned} &&g_{1K}f_{-,{1\over 2} } f_{-, {3\over 2} } =(0.19\pm 0.06)~\mbox{GeV}^{3}\;,\nonumber\\ &&g_{1\eta}f_{-,{1\over 2} } f_{-, {3\over 2} } =(0.28\pm 0.06)~\mbox{GeV}^{3}\;.\end{aligned}$$ The couplings between the $\frac{3}{2}^-$ and $\frac{5}{2}^-$ doublets and other doublets are collected in Table \[table\]. We can see that the $SU(3)_f$ breaking effect is not very big here. $\frac{3}{2}^-$ $\frac{1}{2}^-$ $\frac{1}{2}^+$ ${\frac{3}{2}}^+_d$ ${\frac{3}{2}}^+_s$ ${\frac{3}{2}}^-_f$ ${\frac{3}{2}}^-_p$ ----------------- ----------------- ----------------- --------------------- --------------------- --------------------- --------------------- --------------------- --------------------- -- -- -- $\pi$ 0.17 0.086 0.16 0.10 0.056 0.071 $K$ 0.19 0.09 0.24 0.18 0.057 0.10 $\eta$ 0.28 0.046 0.22 0.11 0.030 0.078 $\frac{5}{2}^-$ $\frac{1}{2}^-$ $\frac{1}{2}^+$ $\frac{3}{2}^+$ ${\frac{3}{2}}^-_f$ ${\frac{3}{2}}^-_p$ ${\frac{5}{2}}^-_g$ ${\frac{5}{2}}^-_f$ ${\frac{5}{2}}^-_p$ $\pi$ 0.11 0.36 0.072 0.13 0.12 0.015 0.05 0.01 $K$ 0.14 0.48 0.083 0.11 0.16 0.015 0.09 0.02 $\eta$ 0.12 0.42 0.074 0.10 0.14 0.008 0.08 0.01 : \[table\]The pionic couplings between the $\frac{3}{2}^-$ and $\frac{5}{2}^-$ doublets and other doublets. The values are the products of the coupling constants and the decay constants of the initial and final heavy mesons.
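The quoted coupling constants follow from dividing the tabulated products $g\,f\,f$ by the central values of the decay constants $f_{-,1/2}=0.25$ GeV$^{3/2}$, $f_{-,3/2}=0.39$ GeV$^{5/2}$ and $f_{-,5/2}=0.33$ GeV$^{7/2}$. A quick consistency sketch (ours), using the first column of each block of the table:

```python
f_ground = 0.25   # f_{-,1/2} in GeV^(3/2)
f_32 = 0.39       # f_{-,3/2} in GeV^(5/2)
f_52 = 0.33       # f_{-,5/2} in GeV^(7/2)

# central values of the products g * f * f for couplings to the 1/2^- doublet
g1_pi  = 0.17 / (f_ground * f_32)   # ~1.74 GeV^-1
g1_K   = 0.19 / (f_ground * f_32)   # ~1.95 GeV^-1
g1_eta = 0.28 / (f_ground * f_32)   # ~2.87 GeV^-1
g5_pi  = 0.11 / (f_ground * f_52)   # ~1.33 GeV^-3
g5_K   = 0.14 / (f_ground * f_52)   # ~1.70 GeV^-3
g5_eta = 0.12 / (f_ground * f_52)   # ~1.45 GeV^-3
```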
With the central values of the $f$'s, we get the absolute values of the coupling constants: $$\begin{aligned} &&g_{1\pi} =(1.74\pm 0.43)~\mbox{GeV}^{-1}\;,\nonumber\\ &&g_{1K} =(1.95\pm 0.63)~\mbox{GeV}^{-1}\;,\nonumber\\ &&g_{1\eta} =(2.87\pm 0.65)~\mbox{GeV}^{-1}\;.\end{aligned}$$ For the ${5 \over 2}^-$ doublet we have $$\begin{aligned} &&g_{5\pi} =(1.33\pm 0.29)~\mbox{GeV}^{-3}\;,\nonumber\\ &&g_{5K} =(1.70\pm 0.42)~\mbox{GeV}^{-3}\;,\nonumber\\ &&g_{5\eta} =(1.45\pm 0.30)~\mbox{GeV}^{-3}\;.\end{aligned}$$ We do not include the uncertainties due to the $f$'s here. We can also extract the mass parameter from the strong coupling formulas obtained in the last section. Moving the exponential factor in eq. (\[b2\]) to the left-hand side and differentiating with respect to $T$, one obtains $$\bar{\Lambda}_{-,{3\over 2} }={{T^2}\over 2} {{\rm d}[\varphi_{\pi}(u_0)Tf_0({\omega_c\over T})-\frac{1}{4}m_{\pi}^2A(u_0){1 \over T}]/{\rm d T} \over [\varphi_{\pi}(u_0)Tf_0({\omega_c\over T})-\frac{1}{4}m_{\pi}^2A(u_0){1 \over T}-\frac{1}{3}\mu_{\pi}\varphi_{\sigma}(u_0)]}\;.$$ With $\omega_c=3.2-3.6$ GeV and the working region $2.0<T<2.5$ GeV, we get $$\bar\Lambda_{-,{3\over2}}=1.36-1.56 ~\mbox{GeV}\;,$$ which is consistent with the value obtained from the two-point sum rule. We present the variation of the mass with $T$ and $\omega_c$ in Fig. \[fig3\]. Strong decay widths for D wave heavy mesons {#width} =========================================== Having calculated the coupling constants, one can obtain the pionic decay widths of D wave heavy mesons.
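As a numerical cross-check of the widths quoted in the tables below (a sketch, not the paper's code: it uses the $1^-\to 0^-$ formula $\Gamma=\frac{1}{24\pi}\frac{M_{B}}{M_{B_{1}^{*'}}}g_{1}^2|\bm{p}_1|^3$ given below with charm masses, the central $g_{1\pi}=1.74$ GeV$^{-1}$, and an assumed isospin factor $3/2$ for summing the charged and neutral pion modes):

```python
import math

def p_cm(M, m1, m2):
    """Momentum of the final-state particles in the rest frame of a parent of
    mass M decaying to masses m1, m2 (Kallen triangle function)."""
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * M)

M_D1p, M_D, m_pi = 2.8, 1.87, 0.1396   # GeV; D-wave charm mass taken as 2.8 GeV
g1 = 1.74                               # GeV^-1, central value from above

p1 = p_cm(M_D1p, M_D, m_pi)             # about 0.77 GeV
gamma_charged = (1.0 / (24 * math.pi)) * (M_D / M_D1p) * g1**2 * p1**3
gamma_total = 1.5 * gamma_charged       # charged + neutral modes (assumed factor)
# gamma_total is about 18 MeV, near the middle of the 9-27 MeV range in Table [table2]
```

Varying $g_1$ over its quoted error band spreads this central value over roughly the tabulated range.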
The widths for D wave states decaying to $0^-$, $1^-$, $1^+$ states are $$\begin{aligned} &&\Gamma(B_{1}^{*'}\rightarrow B^0\pi^-)=\frac{1}{24\pi}\frac{M_{B}}{M_{B_{1}^{*'}}}g_{1}^2|\bm{p}_1|^3\;,\nonumber\\ &&\Gamma(B_{1}^{*'}\rightarrow B^{*0}\pi^-)=\frac{1}{12\pi}\frac{M_{B^{*}}}{M_{B_{1}^{*'}}}g_{1}^2|\bm{p}_1|^3\;,\nonumber\\ &&\Gamma(B_{1}^{*'}\rightarrow B_1^0\pi^-)=\frac{1}{36\pi}\frac{M_{B_1}}{M_{B_{1}^{*'}}}g_{2}^2|\bm{p}_1|^5\;,\nonumber\\ &&\Gamma(B_{2}^{*}\rightarrow B^{*0}\pi^-)=\frac{1}{36\pi}\frac{M_{B^{*}}}{M_{B_2}}g_{1}^2|\bm{p}_1|^3\;,\nonumber\\ &&\Gamma(B_{3}\rightarrow B^0\pi^-)=\frac{1}{140\pi}\frac{M_{B}}{M_{B_{3}}}g_{5}^2|\bm{p}_1|^7\;,\nonumber\\ &&\Gamma(B_{3}\rightarrow B^{*0}\pi^-)=\frac{1}{105\pi}\frac{M_{B^{*}}}{M_{B_{3}}}g_{5}^2|\bm{p}_1|^7\;,\end{aligned}$$ where $|\bm{p}_1|$ is the momentum of the final-state $\pi$. Note that $g(B_2^{*}B^{*})=\sqrt{\frac{2}{3}}\;g({B_1^{*'}B^{*}})=\sqrt{\frac{2}{3}}\;g_1$. Nonstrange case --------------- We take 2.8 GeV and 6.2 GeV for the masses of the D wave charmed and bottomed mesons, respectively. $M_D=1.87$ GeV, $M_{D^*}=2.01$ GeV, $M_{D_1}=2.42$ GeV, $M_B=5.28$ GeV, $M_{B^*}=5.33$ GeV [@pdg] and $M_{B_1}=5.75$ GeV from the quark model prediction [@quark; @model]. After summing over the charged and neutral modes, we get the results listed in Table \[table2\].

                            $D\pi$   $D^{*}\pi$   $D_1\pi$                             $D^{*}\pi$
  ------------------------- -------- ------------ ---------- ------------------------ ------------
  $D_{1}^{*'}\rightarrow$   9-27     13-39        0.2        $D_{2}^{*}\rightarrow$   5-13
                            $B\pi$   $B^{*}\pi$   $B_1\pi$                             $B^{*}\pi$
  $B_{1}^{*'}\rightarrow$   16-46    27-79        0.3        $B_{2}^{*}\rightarrow$   9-27

  : \[table2\]The decay widths (in units of MeV) of the charmed and bottomed D wave ($1^-, 2^-$) states to the ground doublets and $\pi$.

Strange case ------------ We use $M_{D_s}=1.97$ GeV, $M_{D_s^{*}}=2.11$ GeV, $M_{B_s}=5.37$ GeV, $M_{B_s^{*}}=5.41$ GeV [@pdg]. Then for the charm-strange sector we have Table \[table3\].
                             $DK$    $D^{*}K$   $D_s\eta$   $D_s^{*}\eta$
  -------------------------- ------- ---------- ----------- ---------------
  $D_{s1}^{*'}\rightarrow$   8-28    10-48      8-22        8-20
  $D_{s2}^{*}\rightarrow$            4-16                   3-7
                             $BK$    $B^{*}K$   $B_s\eta$   $B_s^{*}\eta$
  $B_{s1}^{*'}\rightarrow$   12-52   18-84      11-27       16-42
  $B_{s2}^{*}\rightarrow$            6-28                   6-14

  : \[table3\]The decay widths (in units of MeV) of the charm-strange and bottom-strange D wave ($1^-, 2^-$) states to the ground doublets and $K$/$\eta$.

We do not consider the $D K^{*}$ mode and three-body modes in the present work. For the $2^-$, $3^-$ states with $j_{\ell}={5\over2}$, we find that the widths are quite small, so the branching fractions are perhaps more useful. In the charm-strange sector the ratio of widths (central values) for the $DK$, $D^{*}K$, $D_s\eta$ and $D_s^{*}\eta$ modes is $1:0.4:0.1:0.02$. Conclusion {#summary} ========== In this work we extract the masses and decay constants using the traditional two-point sum rule and calculate the strong couplings of the D wave heavy meson doublets with the light hadrons $\pi$, $K$ and $\eta$ using LCQSR in the leading order of HQET. We also extract the mass parameter from the LCQSR for the coupling within the same D wave doublet. The mass parameters extracted from the two approaches are consistent with each other. We then calculate the widths of D wave heavy mesons decaying to light hadrons. We have not considered the $1/m_{Q}$ correction and radiative corrections. The heavy quark expansion works well for $B$ mesons, where the $1/m_b$ correction is under control and not so large. However, the $1/m_{c}$ correction is not so small for the charmed mesons. It will be desirable to consider both the $1/m_{Q}$ and radiative corrections in future investigations. According to our present calculation, ratios such as ${\Gamma(D_{sJ}(2860)\rightarrow DK)\over \Gamma(D_{sJ}(2860)\rightarrow D_s\eta)}$ are useful in distinguishing various interpretations of $D_{sJ}(2860)$ and $D_{sJ}(2715)$.
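As an illustrative sketch of how such a ratio arises (assumptions: the extracted central couplings $g_{1K}=1.95$ GeV$^{-1}$ and $g_{1\eta}=2.87$ GeV$^{-1}$ enter the $1^-\to 0^-0^-$ width formula directly, the two $DK$ charge modes are summed, and the D-wave mass is taken as 2.86 GeV; this is not the paper's error-propagated calculation):

```python
import math

def p_cm(M, m1, m2):
    # two-body decay momentum in the parent rest frame
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * M)

M = 2.86                     # GeV, assumed mass of the 1^- D-wave D_s state
M_D, m_K = 1.87, 0.494       # GeV
M_Ds, m_eta = 1.97, 0.548    # GeV
g1K, g1eta = 1.95, 2.87      # GeV^-1, central values from above

# Gamma proportional to (M_final / M_initial) * g^2 * |p|^3 for each mode
w_DK  = 2 * M_D * g1K**2 * p_cm(M, M_D, m_K)**3      # factor 2: D0 K+ and D+ K0
w_eta = M_Ds * g1eta**2 * p_cm(M, M_Ds, m_eta)**3
ratio = w_DK / w_eta
# ratio is about 1.5, inside the 0.4-2.2 range quoted for the 1^- interpretation
```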
Treating $D_{sJ}(2860)$ as a D wave $1^-$ state, we find the above ratio is $0.4-2.2$. If it is the radial excitation of $D_s^\ast$, this ratio is 0.09 [@bozhang]. The widths of the D wave states into light pseudoscalars are not very large. With a mass of $2.86$ GeV, the partial decay width of the $1^-$ D wave $D_s$ state into the $DK$ and $D_s\eta$ modes is $34-118$ MeV. With a mass of 2.715 GeV its width into these modes is $15-57$ MeV. Note that $DK^\ast$ modes may be equally important. So detection of other decay channels, such as the $D_s\eta$ and $D^{*}K$ modes, will be very helpful in the classification of these new states. [**Acknowledgments:**]{} W. Wei thanks P. Z. Huang for discussions. This project is supported by the National Natural Science Foundation of China under Grants 10375003, 10421503 and 10625521, the Ministry of Education of China, FANEDD and the Key Grant Project of the Chinese Ministry of Education (No. 305001). X.L. thanks the support from the China Postdoctoral Science Foundation (No. 20060400376). Tensor structures {#appendix1} ================= The $T^{\alpha,\beta;\;\mu,\nu}$ and $T^{\alpha,\beta,\lambda;\;\mu,\nu,\sigma}$ are defined as $$\begin{aligned} T^{\alpha,\beta;\;\mu,\nu}&=&\frac{1}{2}(g^{\alpha\mu}g^{\beta\nu}+g^{\alpha\nu}g^{\beta\mu}) -\frac{1}{3}g_{t}^{\alpha\beta}g_{t}^{\mu\nu} \; ,\label{ax} \nonumber\\ T^{\alpha,\beta,\lambda;\;\mu,\nu,\sigma}&=& \frac{1}{6}(g^{\alpha\mu}g^{\beta\nu}g^{\lambda\sigma}+g^{\alpha\mu}g^{\beta\sigma}g^{\lambda\nu} +g^{\alpha\nu}g^{\beta\mu}g^{\lambda\sigma}\nonumber\\ &+&g^{\alpha\nu}g^{\beta\sigma}g^{\lambda\mu}+g^{\alpha\sigma}g^{\beta\mu}g^{\lambda\nu} +g^{\alpha\sigma}g^{\beta\nu}g^{\lambda\mu})\nonumber\\ &-&\frac{1}{9}(g_{t}^{\alpha\beta}g_{t}^{\mu\nu}g_{t}^{\lambda\sigma} +g_{t}^{\alpha\lambda}g_{t}^{\mu\nu}g_{t}^{\beta\sigma}+g_{t}^{\beta\lambda}g_{t}^{\mu\nu}g_{t}^{\alpha\sigma})\;.\nonumber \label{pscal}\end{aligned}$$ The tensor structures for G wave, F wave and two P wave decays in eq.
(\[c9\]) for the coupling between the two $\frac{5}{2}^{-}$ states are $$\begin{aligned} &&T^g=T^{\mu,\nu,\sigma; \mu_1,\nu_1,\sigma_1}T^{\alpha,\beta;\alpha_1,\beta_1}q_{\mu_1}q_{\nu_1}q_{\sigma_1}q_{\alpha_1}q_{\beta_1}\;,\nonumber\\ &&T^{f}=\frac{1}{3}\Big\{\Big[q_t^{\mu}q_t^{\alpha}q_t^{\beta}- \frac{1}{6}q^2_t(g_t^{\mu\alpha}q_t^{\beta} + g_t^{\mu\beta}q_t^{\alpha} \nonumber\\ &&\qquad + {4\over 3}g_t^{\alpha\beta}q_t^{\mu})\Big]g_t^{\nu\sigma}+(\mu,\nu,\sigma)\Big\}\;,\nonumber\\ &&T^{p1}=\frac{1}{6}\Big[g_t^{\mu\alpha}g_t^{\nu\sigma} q_t^{\beta}+g_t^{\mu\beta}g_t^{\nu\sigma}q_t^{\alpha}+(\mu,\nu,\sigma)\Big]\;,\nonumber\\ && T^{p2}=\frac{1}{3}(q_t^{\mu}g_t^{\nu\sigma}+q_t^{\nu}g_t^{\mu\sigma}+q_t^{\sigma}g_t^{\mu\nu})g_t^{\alpha\beta}\;.\nonumber\end{aligned}$$ Light-cone distribution amplitudes {#appendix2} ================================== The distribution amplitudes $\varphi_{\pi}$ etc. can be parameterized as [@ball; @ball2] $$\begin{aligned} \varphi_{\pi}(u) &=& 6u\bar{u}\bigg[1+a_1C^{3/2}_1(\zeta)+a_2C^{3/2}_2(\zeta)\bigg]\; ,\nonumber\\ \phi_p(u) &=& 1+\bigg[30\eta_3-\frac 52 \rho_{\eta}^2\bigg] C^{1/2}_2(\zeta)+\bigg[-3\eta_3\omega_3\nonumber\\&&-\frac{27}{20}\rho_{\eta}^2-\frac {81}{10}\rho_{\eta}^2a_2\bigg] C^{1/2}_4(\zeta) \; ,\nonumber\end{aligned}$$ $$\begin{aligned} \phi_{\sigma}(u)&=& 6u(1-u)\bigg\{1+\bigg(5\eta_3-\frac{1}{2}\eta_3\omega_3-\frac{27}{20}\rho_{\eta}^2\nonumber\\ &&-\frac {3}{5}\rho_{\eta}^2a_2\bigg) C^{3/2}_2(\zeta)\bigg\} \; ,\nonumber\\ g_{\pi}(u)&=& 1+\Big[1+\frac{18}{7}a_2+60\eta_3+\frac{20}{3}\eta_4\Big] C^{1/2}_2(\zeta)\nonumber\\ &&+\Big[-\frac{9}{28}a_2-6\eta_3\omega_3\Big]C^{1/2}_4(\zeta) \;,\nonumber\end{aligned}$$ $$\begin{aligned} A(u) &=& 6u\bar u \bigg\{ \frac{16}{15} + \frac{24}{35}a_2 + 20 \eta_3 + \frac{20}{9}\eta_4 + \Big[-\frac{1}{15} + \frac{1}{16}\nonumber\\ &&-\frac{7}{27} \eta_3 \omega_3 - \frac{10}{27}\eta_4 \Big]C_2^{3/2}(\zeta) + \Big[ -\frac{11}{210} a_2 \nonumber\\&&- \frac{4}{135}\eta_3\omega_3 \Big]C_4^{3/2}(\zeta) \bigg\} + \Big(-\frac{18}{5} a_2 + 21\eta_4\omega_4
\Big)\nonumber\\&&\bigg\{ 2 u^3 (10-15 u + 6 u^2) \ln u + 2\bar u^3 (10-15\bar u\nonumber\\&& + 6 \bar u^2)\ln\bar u + u \bar u (2 + 13u\bar u)\bigg\} \; ,\end{aligned}$$ where $\bar{u} \equiv 1-u,\; \zeta \equiv 2u-1$. The $C^{3/2,1/2}_{n}(\zeta)$ are Gegenbauer polynomials. Here $g_{\pi}(u)=B(u)+\varphi_{\pi}(u)$. $a_1^{\pi,\eta}=0$, $a_1^{K}=0.06$, $a_2^{\pi,K,\eta}=0.25$, $\eta_3^{\pi,K}=0.015$, $\eta_3^{\eta}=0.013$, $\omega_3^{\pi,K,\eta}=-3$, $\eta_4^{\pi}=10$, $\eta_4^K=0.6$, $\eta_4^{\eta}=0.5$, $\omega_4^{\pi, K, \eta}=0.2$. $\rho_{\pi}^2$ etc. give the mass corrections and are defined as $\rho_{\pi}^2=\frac {(m_u+m_d)^2}{m_{\pi}^2}$, $\rho_{K}^2=\frac{m_s^2}{m_K^2}$, $\rho_{\eta}^2=\frac {m_s^2}{m_{\eta}^2}$. $m_s=0.125$ GeV. $\mu_{\pi}={m_{\pi}^2\over m_u+m_d}(1-\rho_{\pi}^2)$, $\mu_{K,\eta}={m_{K,\eta}^2\over m_s}(1-\rho_{K,\eta}^2)$. $f_{\pi}=0.13$ GeV, $f_{K}=0.16$ GeV, $f_{\eta}=0.156$ GeV. All of them are scaled at $\mu=1~\mbox{GeV}$. Sum rules for the strong couplings {#appendix3} ================================== In this appendix we collect the sum rules we have obtained for the strong couplings of the D wave heavy doublets with light hadrons.
$$\begin{aligned} g_1 f_{-,{1\over 2} } f_{-, {3\over 2} }&=&{\sqrt{6}\over 6}f_{\pi} \exp\bigg[ { \Lambda_{-,{1\over 2} } +\Lambda_{-,{3\over 2} } \over T }\bigg] \bigg\{ \frac{1}{2}[u\varphi_{\pi} (u)]^{'}T^2 f_1\Big({\omega_c\over T}\Big) -\frac{1}{8}m_{\pi}^2[uA(u)]^{'}+m_{\pi}^2[G_1(u)+G_2(u)]\nonumber\\ &&-\mu_{\pi} \Big[u\varphi_{p}(u)+\frac{1}{6}\varphi_{\sigma}(u)\Big]T f_0\Big({\omega_c\over T}\Big)\bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ $$\begin{aligned} g_2 f_{+,{1\over 2} } f_{-, {3\over 2} }&=&{\sqrt{6}\over 4}f_{\pi} \exp\bigg[ { \Lambda_{+,{1\over 2} } +\Lambda_{-,{3\over 2} } \over T }\bigg] u\bigg\{ \varphi_{\pi}(u)Tf_0\Big({\omega_c\over T}\Big)-\frac{1}{4}m_{\pi}^2A(u)\frac{1}{T} -\frac{1}{3}\mu_{\pi}\varphi_{\sigma}(u)\bigg\}\bigg|_{u=u_0}\;,\label{b1}\end{aligned}$$ $$\begin{aligned} g_3^d f_{+,{3\over 2}}f_{-,{3\over 2} }&=&\frac{1}{8}f_{\pi} \exp\bigg[ { \Lambda_{+,{3\over 2} } +\Lambda_{-,{3\over 2} } \over T }\bigg] \bigg\{ [u(1-u)\varphi_{\pi}(u)]^{'}T^2f_1\Big({\omega_c\over T}\Big) -\frac{1}{4}m_{\pi}^2[u(1-u)A(u)]^{'}\nonumber\\&& +2m_{\pi}^2\big[G_3(u)+G_4(u)+2G_5(u)\big] +2\mu_{\pi}\big[u(1-u)\varphi_{p}(u)+\frac{1}{6}\varphi_{\sigma}(u)\big]Tf_0\Big( {\omega_c\over T}\Big)\bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ $$\begin{aligned} g_3^s f_{+,{3\over 2}}f_{-,{3\over 2} }&=&-\frac{1}{48}f_{\pi} \exp\bigg[ { \Lambda_{+,{3\over 2} } +\Lambda_{-,{3\over 2} } \over T }\bigg] \bigg\{ [u(1-u)\varphi_{\pi}(u)]^{'''}T^4f_3\Big({\omega_c\over T}\Big)+2\mu_{\pi}\bigg[\Big(u(1-u)\varphi_{p}(u)\Big)^{''}\nonumber\\ && +\frac{1}{6}\varphi_{\sigma}(u)^{''}\bigg]T^3f_2\Big({\omega_c\over T}\Big) -m_{\pi}^2\Big[4\big(u(1-u)\varphi_{\pi}(u)\big)^{'}+\frac{1}{4}\big(u(1-u)A(u)\big)^{'''}\nonumber\\ && +\frac{3}{2}A(u)^{'}-2(1-2u)B(u) -2\big(u(1-u)B(u)\big)^{'}-4G_6(u)\Big]T^2f_1\Big({\omega_c\over T}\Big)\;\nonumber\\&& -8m_{\pi}^2\mu_{\pi}\;\Big[u(1-u)\varphi_{p}(u)+\frac{1}{6}\varphi_{\sigma}(u)\Big]\;Tf_0\Big({\omega_c\over
T}\Big)\bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ $$\begin{aligned} g_4^p f_{-,{3\over 2}}f_{-,{3\over 2}}&=&{\sqrt{6}\over 96}f_{\pi} \exp\bigg[{ { \Lambda_{-,{3\over 2} } +\Lambda_{-,{3\over 2} }\over T }}\bigg] \bigg\{m_{\pi}^2A(u)Tf_0\Big({\omega_c\over T}\Big)-\frac{2}{3}\mu_{\pi}\big[(1-u)\varphi_{\sigma}(u)\big]^{'}T^2f_1\Big({\omega_c\over T}\Big)\bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ $$\begin{aligned} g_4^f f_{-,{3\over 2}}f_{-,{3\over 2}}&=&{\sqrt{6}\over 4}f_{\pi} \exp\bigg[{ { \Lambda_{-,{3\over 2} } +\Lambda_{-,{3\over 2} } \over T }}\bigg] u(1-u)\bigg\{\varphi_{\pi}(u)Tf_0\Big({\omega_c\over T}\Big)-\frac{1}{4T}m_{\pi}^2A(u)-\frac{1}{3}\mu_{\pi}\varphi_{\sigma}(u) \bigg\}\bigg|_{u=u_0}\label{b2}\;,\end{aligned}$$ $$\begin{aligned} g_5 f_{-,{1\over 2} } f_{-, {5\over 2} }&=&{1\over 2}f_{\pi} \exp\bigg[{ { \Lambda_{-,{1\over 2} } +\Lambda_{-,{5\over 2} } \over T }} \bigg]u^2\bigg\{\varphi_{\pi}(u)Tf_0\Big({\omega_c\over T}\Big)-\frac{1}{4T}m_{\pi}^2A(u)+\frac{1}{3}\mu_{\pi}\varphi_{\sigma}(u) \bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ $$\begin{aligned} g_6 f_{+,{1\over 2}}f_{-,{5\over 2} }&=&-{\sqrt{15}\over 20}f_{\pi} \exp\bigg[{ { \Lambda_{+,{1\over 2} } +\Lambda_{-,{5\over 2} } \over T }}\bigg]\bigg\{ \big[u^2\varphi_{\pi}(u)\big]^{'}T^2f_1\Big({\omega_c\over T}\Big)-\frac{1}{4}m_{\pi}^2\big[u^2A(u)\big]^{'}\nonumber\\ && -\frac{1}{2}m_{\pi}^2\big[G_7(u)+2G_8(u)+2G_5(u)\big] +2\mu_{\pi}u\Big[u\varphi_{p}(u)+\frac{1}{3}\varphi_{\sigma}(u)\Big]Tf_0\Big({\omega_c\over T}\Big)\bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ $$\begin{aligned} g_7 f_{+,{3\over 2}}f_{-,{5\over 2}}&=&-{\sqrt{10}\over 30}f_{\pi} \exp\Big[{ { \Lambda_{+,{3\over 2} } +\Lambda_{-,{5\over 2} } \over T }}\Big] \bigg\{-\frac{1}{4}\big[u^2(1-u)\varphi_{\pi}(u)\big]^{''}T^3f_2\Big({\omega_c\over T}\Big)-\frac{1}{12}\mu_{\pi}\Big[7\big(u(1-\frac{2}{7}u)\varphi_{\sigma}(u)\big)^{'}\nonumber\\&& +\big(u^2(1-u)\varphi_{\sigma}(u)\big)^{''}\Big]T^2f_1\Big({\omega_c\over T}\Big)
+m_{\pi}^2\Big[u^2(1-u)\varphi_{\pi}(u)+\frac{7}{8}u A(u)\nonumber\\&& +\frac{1}{16}\big(u^2(1-u)A(u)\big)^{''} \Big]Tf_0\Big({\omega_c\over T}\Big) +\frac{1}{3}m_{\pi}^2\mu_{\pi}u^2(1-u)\varphi_{\sigma}(u) \;\bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ $$\begin{aligned} g_8^p f_{-,{3\over 2}}f_{-,{5\over 2} }&=&{\sqrt{10}\over 18}f_{\pi} \exp\Big[{ { \Lambda_{-,{3\over 2} } +\Lambda_{-,{5\over 2} } \over T }}\Big] \bigg\{\frac{1}{8}\big[u^2(1-u)\varphi_{\pi}(u)\big]^{'''}T^4f_3\Big({\omega_c\over T}\Big)+\frac{1}{4}\mu_{\pi}\Big[\big(u^2(1-u)\varphi_{p}(u)\big)^{''}\nonumber\\&& +\frac{1}{3}\big(u(1-\frac{5}{2}u)\varphi_{\sigma}(u)\big)^{''}\Big]T^3f_2\Big({\omega_c\over T}\Big)-\frac{1}{2}m_{\pi}^2\Big[\big(u^2(1-u)\varphi_{\pi}(u)\big)^{'}+\frac{27}{40}\big(u A(u)\big)^{'}\nonumber\\&& +\frac{1}{16}\big(u^2(1-u)A(u)\big)^{'''}-\frac{1}{5}u\big(1-\frac{3}{2}u\big) B(u)-\frac{1}{2}\big(u^2(1-u)B(u)\big)^{'} -G_9(u)\nonumber\\&& +\frac{9}{5}G_{2}(u)\Big]T^2f_1\Big({\omega_c\over T}\Big) -m_{\pi}^2\mu_{\pi}\Big[u^2(1-u)\varphi_{p}(u)+\frac{1}{3}\big(1-\frac{5}{2}u\big)\varphi_{\sigma}(u)\Big]Tf_0\Big({\omega_c\over T}\Big)\bigg\}\bigg|_{u=u_0} \;,\end{aligned}$$ $$\begin{aligned} g^f_8 f_{-,{3\over 2}}f_{-,{5\over 2} }&=&{\sqrt{10}\over 15}f_{\pi} \exp\Big[{ { \Lambda_{1,-,{3\over 2} } +\Lambda_{2,-,{5\over 2} }\over T }}\Big] \bigg\{-\frac{1}{2}\big[u^2(1-u)\varphi_{\pi}(u)\big]^{'}T^2f_1\Big({\omega_c\over T}\Big)+\frac{1}{8}m_{\pi}^2\big[u^2(1-u)A(u)\big]^{'}\nonumber\\ && +m_{\pi}^2\big[G_{10}(u)-2G_{11}(u)+2G_{12}(u)-6G_{13}(u)\big]\nonumber\\ && -\mu_{\pi}u(1-u)\Big[u\varphi_{p}(u)+\frac{1}{12}\varphi_{\sigma}(u)\Big]Tf_0\Big({\omega_c\over T}\Big)\bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ $$\begin{aligned} g_9^g f_{-,{5\over 2}}^{2}&=&{\sqrt{15}\over 6}f_{\pi} \exp\Big[{ { \Lambda_{-,{5\over 2} } +\Lambda_{-,{5\over 2} } \over T }}\Big] u^2(1-u)^2\bigg\{\varphi_{\pi}(u)Tf_0\Big({\omega_c\over T}\Big)-\frac{1}{4}m_{\pi}^2A(u){1 \over 
T}-\frac{1}{3}\mu_{\pi}\varphi_{\sigma}(u) \bigg\}\bigg|_{u=u_0}\;,\nonumber\\\label{b3}\end{aligned}$$ $$\begin{aligned} g_9^f f_{-,{5\over 2}}^{2}&=&{\sqrt{15}\over 45}f_{\pi} \exp\Big[{ { \Lambda_{-,{5\over 2} } +\Lambda_{-,{5\over 2} } \over T }}\Big] \bigg\{-\frac{1}{4}\big[u^2(1-u)^2\varphi_{\pi}(u)\big]^{''}T^3f_2\Big({\omega_c\over T}\Big)+\frac{1}{12}{\mu}_{\pi}\Big[\big(u^2(1-u)^2\varphi_{\sigma}(u)\big)^{''}\;,\nonumber\\&& -\frac{3}{8}\big(u(1-u)^2\varphi_{\sigma}(u)\big)^{'}\Big]T^2f_1\Big({\omega_c\over T}\Big)+m_{\pi}^2\Big[u^2(1-u)^2\varphi_{\pi}(u)-\frac{1}{8}u(3+7u)A(u)\nonumber\\&& -3\big(G_{10}(u)+2G_{11}(u)+2G_{12}(u)-6G_{13}(u)\big)\Big]Tf_0\Big({\omega_c\over T}\Big) -\frac{1}{3}m_{\pi}^2\mu_{\pi}u^2(1-u)^2\varphi_{\sigma}(u)\bigg\}\bigg|_{u=u_0} ,\end{aligned}$$ $$\begin{aligned} g_9^{p1}f_{-,{5\over 2}}^{2}&=&-{\sqrt{15}\over 45}f_{\pi} \exp\Big[{ { \Lambda_{2,-,{5\over 2} } +\Lambda_{3,-,{5\over 2} } \over T }}\Big] \bigg\{\frac{1}{24}\mu_{\pi}\big[u^2(1-u)\varphi_{\sigma}(u)\big]^{'''}T^4f_3\Big({\omega_c\over T}\Big)\nonumber\\&& -\frac{1}{8}m_{\pi}^2\Big[\frac{3}{2}(u(1-2u)A(u))^{''}+u(2-3u)B(u)+\big(u^2(1-u)B(u)\big)^{'}\nonumber\\&& +2G_{9}(u)-6G_2(u)\Big]T^3f_2\Big({\omega_c\over T}\Big) -\frac{1}{6}m_{\pi}^2\mu_{\pi}\big[u^2(1-u)^2\varphi_{\sigma}(u)\big]^{'}T^2f_1\Big({\omega_c\over T}\Big)\bigg\}\bigg|_{u=u_0}\;,\end{aligned}$$ The $G$’s are defined as integrals of light cone wave function $B(u)$ $$\begin{aligned} &&G_1 (u)\equiv \int_0^{u} t B(t)dt,\hspace{0.4cm} G_2 (u)\equiv \int_0^{u}dx\int_0^xB(t)dt\;,\nonumber\\ &&G_3 (u)\equiv \int_0^{u} t(1-t)B(t)dt,\hspace{0.4cm} G_4 (u)\equiv \int_0^{u}dx\int_0^x(1-2t)B(t)dt\;,\nonumber\\ &&G_5 (u)\equiv \int_0^{u}dx\int_0^xdy\int_0^y B(t)dt\;, \hspace{0.4cm} G_6 (u)\equiv \int_0^{u} B(t)dt\;,\nonumber\\ &&G_7 (u)\equiv \int_0^{u} t^2 B(t)dt\;, \hspace{0.4cm}G_8 (u)\equiv \int_0^{u}dx\int_0^x t B(t)dt\;,\nonumber\\ &&G_{9} (u)\equiv \int_0^{u} (1-3t) B(t)dt\;,\hspace{0.4cm}G_{10} 
(u)\equiv \int_0^{u} t^2(1-t) B(t)dt\;,\nonumber\\ &&G_{11} (u)\equiv \int_0^{u}dx\int_0^x t(1-\frac{3}{2}t)B(t)dt\;,\hspace{0.4cm}G_{12} (u)\equiv \int_0^{u}dx\int_0^xdy\int_0^y (1-3t)B(t)dt\;,\nonumber\\ &&G_{13} (u)\equiv \int_0^{u}dx\int_0^xdy\int_0^ydz\int_0^z B(t)dt\;.\nonumber\end{aligned}$$ [99]{} BABAR Collaboration, B. Aubert et al., arXiv: [hep-ex/0607082]{}. Belle Collaboration, K. Abe et al., arXiv: [hep-ex/0608031]{}. E. van Beveren and G. Rupp, Phys. Rev. Lett. [**97**]{}, 202001 (2006). F.E. Close, C.E. Thomas, O. Lakhina and E.S. Swanson, arXiv: [hep-ph/0608139]{}. P. Colangelo, F.D. Fazio and S. Nicotri, Phys. Lett. [**B 642**]{}, 48 (2006). S. Godfrey and N. Isgur, Phys. Rev. [**D 32**]{}, 189 (1985). M.A. Nowak, M. Rho, I. Zahed, Acta Phys. Polon. [**B 35**]{}, 2377 (2004). B. Zhang, X. Liu, W. Deng and S.L. Zhu, arXiv: [hep-ph/0609013]{}. B. Grinstein, Nucl. Phys. [**B 339**]{}, 253 (1990); E. Eichten and B. Hill, Phys. Lett. [**B 234**]{}, 511 (1990); A.F. Falk, H. Georgi, B. Grinstein and M.B. Wise, Nucl. Phys. [**B 343**]{}, 1 (1990); F. Hussain, J.G. Körner, K. Schilcher, G. Thompson and Y.L. Wu, Phys. Lett. [**B 249**]{}, 295 (1990); J.G. Körner and G. Thompson, Phys. Lett. [**B 264**]{}, 185 (1991). A. Le Yaouanc, L. Oliver, O. Pène and J.C. Raynal, Phys. Rev. [**D 8**]{}, 2223 (1973); [**D 11**]{}, 1272 (1975); S. Godfrey and N. Isgur, Phys. Rev. [**D 32**]{}, 189 (1985); E.J. Eichten, C.T. Hill and C. Quigg, Phys. Rev. Lett. [**71**]{}, 4116 (1993). P. Colangelo, F.D. Fazio and G. Nardulli, Phys. Lett. [**B 478**]{}, 408 (2000). F.E. Close, E.S. Swanson, Phys. Rev. [**D 72**]{}, 094004 (2005). I.I. Balitsky, V.M. Braun and A.V. Kolesnichenko, Nucl. Phys. [**B 312**]{}, 509 (1989); V.M. Braun and I.E. Filyanov, Z. Phys. [**C 44**]{}, 157 (1989); V.L. Chernyak and I.R. Zhitnitsky, Nucl. Phys. [**B 345**]{}, 137 (1990). M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. [**B 174**]{}, 385, 448, 519 (1979). V.M. Belyaev, V.M.
Braun, A. Khodjamirian and R. Rückl, Phys. Rev. [**D 51**]{}, 6177 (1995); P. Colangelo et al., Phys. Rev. [**D 52**]{}, 6422 (1995); T.M. Aliev, N.K. Pak and M. Savci, Phys. Lett. [**B 390**]{}, 335 (1997); P. Colangelo and F.D. Fazio, Eur. Phys. J. [**C 4**]{}, 503 (1998); Y.B. Dai and S.L. Zhu, Eur. Phys. J. [**C 6**]{}, 307 (1999); Y.B. Dai et al., Phys. Rev. [**D 58**]{}, 094032 (1998); Erratum-ibid. [**D 59**]{}, 059901 (1999); W. Wei, P.Z. Huang and S.L. Zhu, Phys. Rev. [**D 73**]{}, 034004 (2006). Y.B. Dai, C.S. Huang, M.Q. Huang and C. Liu, Phys. Lett. [**B 390**]{}, 350 (1997); Y.B. Dai, C.S. Huang and M.Q. Huang, Phys. Rev. [**D 55**]{}, 5719 (1997). E. Bagan, P. Ball, V.M. Braun and H.G. Dosch, Phys. Lett. [**B 278**]{}, 457 (1992); M. Neubert, Phys. Rev. [**D 45**]{}, 2451 (1992); D.J. Broadhurst and A.G. Grozin, Phys. Lett. [**B 274**]{}, 421 (1992). S.L. Zhu and Y.B. Dai, Mod. Phys. Lett. [**A 14**]{}, 2367 (1999). Particle Data Group, W.M. Yao et al., J. Phys. [**G 33**]{}, 1 (2006). P. Ball, JHEP [**9901**]{}, 010 (1999). P. Ball, V. Braun and A. Lenz, JHEP [**0605**]{}, 004 (2006).
--- abstract: 'The hunt for high temperature superfluidity has received new impetus from the discovery of atomically thin stable materials. Electron-hole superfluidity in coupled MoSe$_2$-WSe$_2$ monolayers is investigated using a mean-field multiband model that includes the band splitting caused by the strong spin-orbit coupling. This splitting leads to a large energy misalignment of the electron and hole bands which is strongly modified by interchanging the doping of the monolayers. The choice of doping determines whether the superfluidity can be tuned from one to two components. The electron-hole pairing is strong, with high transition temperatures in excess of $T_c\sim 100$ K.' address: - 'Mathematical Institute, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany' - 'CNRS LIAFA Universite Denis Diderot - Paris 7, Case 7014, 75205 Paris Cedex 13, France' - 'Hamilton Mathematics Institute & School of Mathematics, Trinity College, Dublin 2, Ireland' author: - 'Sara Conti$^{1,2}$, Matthias Van der Donck$^{2}$, Andrea Perali$^{3}$, Francois M. Peeters$^{2}$, and David Neilson$^{1,2}$' title: | A doping-dependent switch from one- to two-component superfluidity at\ high temperatures in coupled electron-hole van der Waals heterostructures --- Recently, a strong signature of electron-hole superfluidity was reported in double bilayer graphene (DBG) [@Burg2018], in which an $n$-doped bilayer graphene was placed in close proximity to a $p$-doped bilayer graphene, separated by a very thin insulating barrier to block recombination. The transition temperature is very low, $T_c\sim 1$ K. This can be traced back to the very strong interband screening [@Conti2019] due to bilayer graphene's tiny band gap [@Zhang2009]. Monolayers of the Transition Metal Dichalcogenides (TMDC) MoS$_2$, MoSe$_2$, WS$_2$, and WSe$_2$ are semiconductors with large, direct band gaps, $E_g\gtrsim1$ eV [@Mak2010; @Jiang2012], which make interband processes and screening negligible. The effective masses in their low-lying, nearly parabolic bands are larger than in bilayer graphene, also resulting in much stronger coupling of the electron-hole pairs [@Fogler2014].
Because of the strong spin-orbit coupling, the heterostructure MoSe$_2$-hBN-WSe$_2$, with one TMDC monolayer $n$-doped and the other $p$-doped, is an interesting platform for investigating novel multicomponent effects for electron-hole superfluidity [@Rivera2015; @Ovesen2019; @Forg2019]. The hexagonal Boron Nitride (hBN) insulating layer inhibits electron-hole recombination [@Britnell2012a], and avoids hybridization between the MoSe$_2$ and WSe$_2$ bands. Table \[table:TMDs\] gives the parameters for the MoSe$_2$ and WSe$_2$ monolayers, and Fig. \[fig:TMD\_bands\] shows their low-lying band structures. The splitting of the conduction and valence bands by spin-orbit coupling into multibands consisting of two concentric parabolic spin-polarised subbands makes superfluidity in double TMDC monolayers resemble high-$T_c$ multiband superconductivity. Multiband superconductivity is emerging as a complex quantum coherent phenomenon with physical outcomes radically different from, or even absent in, its single-band counterpart [@Bianconi2013]. There are close relations with multiband superfluidity in ultracold Fermi gases [@Shanenko2012] and with electric-field induced superconductivity at oxide surfaces [@Mizohata2013; @Singh2019].

  TMDC       a (nm)   t (eV)   $E_g$ (eV)   $\lambda_c$ (eV)   $\lambda_v$ (eV)
  ---------- -------- -------- ------------ ------------------ ------------------
  MoSe$_2$   0.33     0.94     1.47         -0.021             0.18
  WSe$_2$    0.33     1.19     1.60         0.038              0.46

  : TMDC monolayer lattice constant (a), hopping parameter (t), band gap ($E_g$), and splitting of the conduction band ($\lambda_c$) and valence band ($\lambda_v$) by spin-orbit coupling [@Xiao2012; @Zhu2011; @Kosmider2013].
[]{data-label="table:TMDs"} Table \[table:TMDs\] shows that the spin splitting of the valence bands $\lambda_v$ is an order of magnitude larger than the spin splitting of the conduction bands $\lambda_c$. This results in a misalignment between the electron and hole bands, as shown in Fig. \[fig:systems\]. (For the $p$-doped monolayer, we are using the standard particle-hole mapping of the valence band to a conduction band, with positively charged holes filling conduction band states up to the Fermi level. Thanks to the large band gaps, we only need to consider conduction band processes [@Conti2017; @Conti2019].) A Coulomb pairing interaction, in contrast with conventional BCS pairing, has no dependence on the electron and hole spins. Therefore, for each monolayer, we label the bottom and top conduction subbands by $\beta=b$ and $\beta=t$. Due to the large valley separation in momentum space, intervalley scattering is negligible, so the effect of the two valleys appears only in a valley degeneracy factor, $g_v=2$. We will find that the misalignment strongly affects the electron-hole pairing processes, and that due to the very different misalignments of the bands (Fig. \[fig:systems\]), the $n$-doped MoSe$_2$ with $p$-doped WSe$_2$ (denoted as system A) has markedly different properties from the $p$-doped MoSe$_2$ with $n$-doped WSe$_2$ (system B). The multiband electron-hole Hamiltonian is $$\begin{aligned} &H=\sum_{k,\beta} \;\left\{\xi^{(e)}_\beta(k) \,c^{\dagger}_{\beta,k} \, c_{\beta,k} + \xi^{(h)}_\beta(k) \,d^{\dagger}_{\beta,k} \, d_{\beta,k} \right\} \\ &+ \!\!\!\sum_{\substack{k,k',q\\ \beta,\beta'}} \!\!\!
V^D_{k\, k'} \,c^{\dagger}_{\beta,k+ q/2} \,d^{\dagger}_{\beta,-k+ q/2} \,c_{\beta',k'+ q/2} \,d_{\beta',-k'+ q/2} \end{aligned} \label{eq:Hamiltonian}$$ For the $n$-doped monolayer, $c^{\dagger}_{\beta,k}$ and $c_{\beta,k}$ are the creation and annihilation operators for electrons in conduction subband $\beta$, while for the $p$-doped monolayer, $d^{\dagger}_{\beta,k}$ and $d_{\beta,k}$ are the corresponding operators for holes. The kinetic energy terms are $\xi^{(i)}_\beta(k) = \varepsilon^{(i)}_\beta(k)-\mu^{(i)}$, where $\varepsilon^{(i)}_\beta(k)$ is the energy dispersion for the $i=e,h$ monolayer [@Vanderdonck2018]. Because of the small difference between the electron and hole effective masses, we assume bands of the same curvature. Since we consider only equal carrier densities $n^e=n^{h}=n$, the chemical potentials then satisfy $\mu^{(e)}=\mu^{(h)}\equiv \mu$. $V^D_{k\, k'}$ is the bare attractive Coulomb interaction between electrons and holes in opposite monolayers separated by a barrier of thickness $d$, $$V^D_{k\, k'} = -V^S_{k\, k'} e^{-d|\textbf{k}-\textbf{k}'|}\ , \quad V^S_{k\, k'} = \frac{2\pi e^2}{\epsilon}\frac{1}{|\textbf{k}-\textbf{k}'|}\ , \label{eq:bare_interacions}$$ where $V^S_{k\, k'}$ is the bare repulsive Coulomb interaction between carriers in the same monolayer. In principle there are four possible electron-hole pairings, corresponding to four superfluid condensates $\{\beta\beta'\}$ [@Shanenko2015]. The first index $\beta$ refers to the electron subbands and the second $\beta'$ to the hole subbands. We find that the $\{bt\}$ and $\{tb\}$ cross-pairings make negligible contributions to the condensates, so for simplicity, we confine our attention to the mean-field equations for the superfluid gaps $\Delta_{bb}(k)$ and $\Delta_{tt}(k)$. Since there are no spin-flip scattering processes, Josephson-like pair transfer is forbidden.
At zero temperature these gap equations are (see Appendix), $$\begin{aligned} \Delta_{bb}(k) &=-\frac{1}{L^2}\sum_{k'} F^{bb}_{kk'} \, V^{eh}_{k\, k'} \,\frac{\Delta_{bb}(k')}{2 E_b(k')} \quad , \vspace*{2mm} \label{eq:gapbb} \\ \Delta_{tt}(k) &=-\frac{1}{L^2}\sum_{k'} F^{tt}_{kk'} \, V^{eh}_{k\, k'} \,\frac{\Delta_{tt}(k')}{2 E_t(k')} \theta[E^-_t(k')] \ . \label{eq:gaptt}\end{aligned}$$ $E_\beta(k)=\sqrt{\xi_\beta(k)^2 + \Delta^2_{\beta\beta}(k)}$ is the quasi-particle excitation energy for subband $\beta$, with $\xi_\beta(k)=(\xi^{(e)}_\beta+\xi^{(h)}_\beta)/2$. $E^\pm_t(k)=E_t(k)\pm\delta \lambda$, with $\delta\lambda= (\lambda_h-\lambda_e)/2$. $\lambda_h$ is the spin-splitting of the conduction band of the $p$-doped monolayer, and $\lambda_e$ the corresponding spin-splitting for the $n$-doped monolayer, with values taken from Table \[table:TMDs\]. $\theta[E^-_t(k)]=1-\mathit{f}[E^-_t(k),0]$ is a step function associated with the zero temperature Fermi-Dirac distribution. $F^{\beta\beta}_{kk'}=|\Braket{\beta k|\beta k'}|^2$ is the form factor that accounts for the overlap of single-particle states in $k$ and $k'$ for subbands $\beta$ in opposite monolayers [@Lozovik2009] (see Appendix). $V^{eh}_{kk'}$ in Eqs. (\[eq:gapbb\]-\[eq:gaptt\]) is the screened electron-hole interaction. We use the linear-response random phase approximation for static screening in the superfluid state [@Conti2019], $$V^{eh}_{k\, k'} = \frac{V^D_{k\, k'} + \Pi_a(q)[(V^S_{k\, k'} )^2-(V^D_{k\, k'} )^2]} {1- 2[V^S_{k\, k'} \Pi_n(q) + V^D_{k\, k'} \Pi_a(q)] + [\Pi_n^2(q) - \Pi_a^2(q)][(V^S_{k\, k'} )^2 - (V^D_{k\, k'} )^2]} \ , \label{eq:VeffSF}$$ where $q=|\textbf{k}-\textbf{k}'|$. $\Pi_n(q)$ is the normal polarizability in the superfluid state and $\Pi_a(q)$ is the anomalous polarizability [@Lozovik2012; @Perali2013], which is only non-zero in the superfluid state. $\Pi_n(q)$ depends on the population of free carriers (see Appendix).
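Eq. (\[eq:VeffSF\]) transcribes directly into code. The sketch below treats the polarizabilities $\Pi_n$ and $\Pi_a$ as externally supplied numbers (in the paper they are computed in the superfluid state); the values $d=1$ nm and $\epsilon=2$ are the ones used later in the text, and $e^2=1.44$ eV nm is Gaussian-unit shorthand. In the limit $\Pi_n=\Pi_a=0$, the bare interlayer attraction $V^D$ is recovered:

```python
import math

def V_S(q, eps=2.0, e2=1.4399764):
    """Bare intralayer Coulomb repulsion V^S(q) = 2*pi*e^2/(eps*q).
    Units: q in nm^-1, e^2 in eV*nm (Gaussian units), result in eV*nm^2."""
    return 2.0 * math.pi * e2 / (eps * q)

def V_D(q, d=1.0, **kw):
    """Bare interlayer attraction V^D(q) = -V^S(q) * exp(-d*q), barrier d in nm."""
    return -V_S(q, **kw) * math.exp(-d * q)

def V_eh(q, Pi_n, Pi_a, d=1.0, **kw):
    """Statically screened electron-hole interaction, Eq. (VeffSF)."""
    vs, vd = V_S(q, **kw), V_D(q, d=d, **kw)
    num = vd + Pi_a * (vs**2 - vd**2)
    den = (1.0 - 2.0 * (vs * Pi_n + vd * Pi_a)
           + (Pi_n**2 - Pi_a**2) * (vs**2 - vd**2))
    return num / den

# Sanity check: no screening (Pi_n = Pi_a = 0) returns the bare V^D.
q = 1.0  # nm^-1
assert abs(V_eh(q, 0.0, 0.0) - V_D(q)) < 1e-12
```

A nonzero anomalous $\Pi_a$ enters the numerator and denominator with opposite roles, which is how a large condensate fraction suppresses the net screening, as discussed next.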
$\Pi_a(q)$, with opposite sign, depends on the population of electron-hole pairs. The combined effect of $\Pi_n(q)$ and $\Pi_a(q)$ is that a large superfluid condensate fraction of strongly coupled and approximately neutral pairs is associated with very weak screening [@Neilson2014]. This is because of the small remaining population of charged free carriers available for screening. Equation (\[eq:gapbb\]) has the same form as for a decoupled one-band system, because the two $b$ bands are aligned [@Kochorbe1993]. In contrast, Eq. (\[eq:gaptt\]) shows explicitly the effect of the misalignment of the $t$ bands (Fig. \[fig:systems\]) through the term $\theta[E^-_t(k')]\equiv\theta[\sqrt{\xi_t(k)^2 + \Delta^2_{tt}(k)}-\delta \lambda]$. This can only drop below unity at higher densities, where the pair coupling strength is weak compared with the misalignment. For a given chemical potential $\mu$, the carrier density $n$ of one monolayer is determined as a sum of the subband carrier densities $n_{b}$ and $n_{t}$ by $$\begin{aligned} n&= g_s g_v \sum_{\beta=b,t} n_{\beta} \label{eq:density}\\ n_{b}&=\frac{1}{L^2}\sum_k v^2_{b}(k)\label{eq:densityb} \\ n_{t}&=\frac{1}{L^2}\sum_k v^2_{t}(k)\theta[E^+_t(k)] + u^2_{t}(k)(1-\theta[E^-_t(k)]) \label{eq:densityt}\end{aligned}$$ where $v^2_{\beta}$ and $u^2_{\beta}$ are the Bogoliubov amplitudes for the subbands $\beta$ (see Appendix). Because of the spin polarisation in the valleys, the spin degeneracy is $g_s=1$. The regimes of the superfluid crossover are characterized by the superfluid condensate fraction $C$ [@Salasnich2005; @LopezRios2018]. $C$ is defined as the fraction of carriers bound in pairs relative to the total number of carriers. For $C>0.8$ the condensate is in the strongly coupled BEC regime, for $0.2\leq C \leq 0.8$ in the crossover regime, and for $C<0.2$ in the BCS regime. In our system, the two condensate fractions are given by $$C_{\beta\beta}=\frac{\sum_{k} u_{\beta}^2(k)\; v_{\beta}^2(k)}{\sum_{k} v_{\beta}^2(k)}.
\label{eq:CF}$$ ![(Color online) (a) Chemical potential as a function of density $n$ of WSe$_2$. Positive density corresponds to system A, negative density to system B. For reference, the energy bands are shown as a function of $k$ with the same energy scale. The bound state energies $E^b_B/2$, $E^t_B/2$ are also indicated with respect to the bands. (b) The maximum of the superfluid gaps $\Delta_{bb}$ and $\Delta_{tt}$ as a function of $n$. (c) Corresponding condensate fractions $C_{bb}$ and $C_{tt}$. The blue shaded area is the BEC regime. []{data-label="fig:result"}](Fig3.eps){width="1\columnwidth"} Figure \[fig:result\](b) shows the dependence on WSe$_2$ electron density of the maximum of the superfluid gaps $\Delta_{\beta\beta}=\max_k\Delta_{\beta\beta}(k)$ for the $b$ and $t$ bands (Eqs. (\[eq:gapbb\]-\[eq:gaptt\])) in systems A and B. We took equal effective masses $m^*_e=m^*_h=0.44 m_e$, a barrier thickness $d=1$ nm, and dielectric constant $\epsilon=2$, for monolayers encapsulated in few layers of hBN [@Kumar2016]. Figure \[fig:result\](c) shows the evolution of the condensate fractions (Eq. (\[eq:CF\])) as a function of density, and Fig. \[fig:result\](a) the evolution of the chemical potential. We see in Fig. \[fig:result\](b) that the form of $\Delta_{bb}$ is similar for systems A and B. At low densities the system is in the strongly coupled BEC regime, with condensate fraction $C_{bb} >0.8$. At these densities the $\{bb\}$ pairing is to a deep bound state with binding energy $E_B^{b}\sim 400$ meV below the bottom of the $b$ band [@Randeria1990; @Pistolesi1994]. The chemical potential is $\mu\sim -E_B^{b}/2\,$ (Fig. \[fig:result\](a)). With increasing density, $\Delta_{bb}$ increases and then passes through a maximum. $\mu$ also increases and approaches zero. Eventually, $\Delta_{bb}$ drops sharply to zero at a superfluid threshold density $n_0$. For $n>n_0$, the screening of the pairing interaction is so strong that it kills superfluidity [@Perali2013].
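The qualitative behavior of Eq. (\[eq:CF\]) across the crossover can be illustrated with a toy calculation for a single 2D parabolic band with a constant gap. This is a minimal sketch: the dimensionless values of $\mu$ and $\Delta$ below are illustrative assumptions, not the self-consistent solutions of the gap equations.

```python
import numpy as np

def condensate_fraction(mu, delta, kmax=10.0, nk=20001):
    """C = sum_k u^2 v^2 / sum_k v^2 for a toy 2D parabolic band.

    Energies are in units of the gap scale; xi(k) = k^2 - mu and
    E(k) = sqrt(xi^2 + delta^2).  The 2D k-sum carries a measure
    proportional to k dk, kept explicitly in both sums.
    """
    k = np.linspace(0.0, kmax, nk)
    xi = k**2 - mu
    E = np.sqrt(xi**2 + delta**2)
    v2 = 0.5 * (1.0 - xi / E)   # Bogoliubov pair-occupation amplitude
    u2 = 1.0 - v2
    return float(np.sum(u2 * v2 * k) / np.sum(v2 * k))

# Chemical potential deep below the band (deep bound state): BEC regime
C_bec = condensate_fraction(mu=-5.0, delta=1.0)
# Chemical potential far inside the band: weak-coupling BCS regime
C_bcs = condensate_fraction(mu=25.0, delta=1.0)
```

A chemical potential well below the band mimics the tightly bound BEC limit ($C \to 1$), while $\mu \gg \Delta$ gives the BCS limit with a small condensate fraction, matching the $C>0.8$ and $C<0.2$ classification used above.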
In contrast, $\Delta_{tt}$ is only non-zero in system B. At low density, $\Delta_{tt}=0$ also in system B, since the pairing population is zero. This is because the chemical potential $\mu$ at these densities lies below the isolated bound state associated with the $t$ bands, located at energy $E_B^{t}=E_B^{b}-(\lambda_e+ \lambda_h)$. It is only when $\mu$ passes above $-E_B^{t}/2$ that this state can be populated, so $\Delta_{tt}$ can become non-zero. Further increasing the density increases the $\{tt\}$ pair population: $\Delta_{tt}$ increases and then passes through a maximum. When $\mu$ becomes positive, the build-up of free carriers, as evidenced by $C_{bb} < 0.8$ in Fig. \[fig:result\](c), combined with the misalignment of the $t$ bands, starts to significantly weaken the effective electron-hole screened interaction. Eventually screening kills the superfluidity in both $\{bb\}$ and $\{tt\}$ channels at the same threshold density. We see in Fig. \[fig:result\](b) that the behavior of $\Delta_{tt}$ in systems A and B is completely different. In system A the chemical potential remains below the isolated bound state $E_B^{t}$ associated with the $t$ bands over the full range of densities up to $n_0$. With $\mu$ lying below $E_B^{t}$, the population of pairs in the $\{tt\}$ channel remains zero. The only difference between systems A and B is the choice of doping, which results in the markedly different misalignment of the $t$ bands, leading to one- or two-component superfluidity. In Fig. \[fig:result\](c), we note that the threshold densities $n_0$ for superfluidity are much larger than the threshold densities $n_0\sim 8\times 10^{11}$ cm$^{-2}$ in double bilayer graphene [@Burg2018; @Conti2019], and the $n_0\sim 4\times 10^{12}$ cm$^{-2}$ predicted for double-layer phosphorene [@Saberi2018].
$n_0$ is large for the double TMDC monolayers for two main reasons: (i) the large effective masses of the electrons and holes mean a large effective Rydberg energy scale, and thus large superfluid gaps $\Delta$ that strongly suppress the screening; (ii) the large TMDC monolayer bandgaps $E_g$ eliminate valence band screening, making the electron-hole pairing interaction very strong [@Conti2019]. These large threshold densities in the double TMDC monolayers lead to high Berezinskii-Kosterlitz-Thouless transition temperatures $T_{KT}$ [@Kosterlitz1973]. The monolayers have near-parabolic bands, so we can approximate [@Benfatto2008; @Botelho2006], $$T_{KT} = \frac{\pi}{2} \rho_s(T_{KT}) \simeq n\, \frac{\pi\hbar^2}{8 g_s g_v m^*} \ . \label{eq:T_KT1}$$ $\rho_s(T)$ is the superfluid stiffness. Equation (\[eq:T\_KT1\]) gives transition temperatures for systems A and B at their threshold densities of $T_{KT}^A= 110$ K and $T_{KT}^B= 120$ K. The strikingly different behavior of $\Delta_{tt}$ in the two systems is a new and remarkable effect that can be probed using angle-resolved photoemission spectroscopy (ARPES) [@Rist2013]. ARPES measures the spectral function, which in a one-component superfluid state like system A will have a single peak centred at a negative frequency corresponding to $\Delta_{bb}$. However, in system B, when it switches from one-component to two-component superfluidity, two peaks associated with the gaps $\Delta_{bb}$ and $\Delta_{tt}$ will appear in the spectral function at negative frequencies [@Miao2012]. Other experimental techniques that can be used to detect the presence or absence of the second gap $\Delta_{tt}$ are Andreev reflection spectroscopy [@Daghero2014; @Kuzmicheva2016] and scanning tunneling microscopy (STM) [@Yin2015].
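Equation (\[eq:T\_KT1\]) is straightforward to evaluate numerically. The sketch below assumes $g_s = 1$ (as above) and a valley degeneracy $g_v = 2$, and the density used in the example is illustrative rather than one of the computed threshold densities.

```python
import math

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J / K
ME = 9.1093837e-31      # kg (electron mass)

def t_kt(n_cm2, m_eff=0.44, g_s=1, g_v=2):
    """T_KT (kelvin) from the linear-in-density stiffness estimate
    T_KT ~= n pi hbar^2 / (8 g_s g_v m*), with n given in cm^-2.
    g_v = 2 and the example density are assumptions for illustration."""
    n_m2 = n_cm2 * 1e4  # cm^-2 -> m^-2
    return n_m2 * math.pi * HBAR**2 / (8 * g_s * g_v * m_eff * ME * KB)

# With these assumed degeneracies, a density of ~2.8e13 cm^-2 gives
# T_KT ~ 110 K, the order of the values quoted for systems A and B.
t_example = t_kt(2.8e13)
```

Since $T_{KT}$ scales linearly with density in this approximation, the high threshold densities of the double TMDC monolayers translate directly into the high transition temperatures quoted above.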
The large gaps at zero temperature and in the BCS-BEC crossover regime should lead to pseudogaps in the single-particle excitation spectra [@Perali2002] above $T_{KT}$ that persist up to temperatures of the order of the zero-temperature gaps. These could also be detected by ARPES and STM. System B at densities where both the superfluid components are close to their maximum gaps would favour large pseudogaps, while configurations with one large gap and one small or zero gap would lead to screening of superfluid fluctuations and suppression of the pseudogap [@Salasnich2019]. In summary, we have investigated multicomponent effects for electron-hole multiband superfluidity in $n$-$p$ and $p$-$n$ doped MoSe$_2$-hBN-WSe$_2$ heterostructures (systems A and B, respectively). Both systems are multiband and can stabilize electron-hole superfluidity at temperatures above $100$ K. Surprisingly, we find that only in system B can superfluidity have two components. For both systems we would have expected to be able to tune from one- to two-component superfluidity by increasing the density, as recently observed in multiband superconductors [@Singh2019], and this is indeed the case for system B. However, for system A the very large misalignment of the electron and hole top bands means that there are no carriers available for pairing in the topmost band before screening has become so strong that it completely suppresses superfluidity. Therefore only one-component superfluidity is possible in system A. This is a remarkable result: activation of the second component of the superfluidity in this heterostructure depends crucially on the choice of which TMDC monolayer is $n$-doped and which $p$-doped.
*After completion of this paper we became aware of a recent experiment on MoSe$_2$-WSe$_2$ where exciton condensation with high transition temperatures above $100$ K, consistent with our predictions, was reported.*\ This work was partially supported by the Fonds Wetenschappelijk Onderzoek (FWO-Vl), the Methusalem Foundation and the FLAG-ERA project TRANS-2D-TMD. We thank A. R. Hamilton and A. Vargas-Paredes for useful discussions. Appendix: Mean field equations {#appendix} ============================== To describe our system we introduce the temperature-dependent normal and anomalous multiband Matsubara Green functions, with subband indices $\alpha$ and $\beta$, $$\begin{cases} \mathcal{G}^{\alpha\beta}(k,\tau)&=-<T c^{\alpha}_{k}(\tau) c^{\beta \dagger}_{k}(0)> \\ \mathcal{F}^{\alpha\beta}(k,\tau)&=-<T c^{\alpha}_{k}(\tau) d^{\beta}_{k}(0)>. \end{cases} \label{eq:GreenFun}$$ The mean field equations for the gaps and the densities are [@Shanenko2015]: $$\Delta_{\alpha\beta}(k)=-\frac{T}{L^2}\sum_{\substack{\alpha',\beta',\\ k', i \omega_n} } F^{\alpha\beta\alpha'\beta'}_{kk'} \, V^{eh}_{k\, k'} \,\mathcal{F}^{\alpha'\beta'}(k', i \omega_n) \label{eq:gap_sum}$$ $$n_{\alpha\beta}=\frac{T}{L^2}\sum_{k, i \omega_n } \mathcal{G}^{\alpha\beta}(k, i \omega_n) \label{eq:density_sum}$$ where $F^{\alpha\beta\alpha'\beta'}_{kk'}= \Braket{\alpha' k'|\alpha k}\Braket{\beta k|\beta' k'}$ is the form factor representing the overlap of the single-particle wave functions. On the right-hand side of Eq. (\[eq:gap\_sum\]), the gaps $\Delta_{\alpha\beta}(k)$ appear implicitly in the $\mathcal{F}^{\alpha\beta}$. Since we are neglecting the cross-pairing contributions, we retain the Green functions and the form factors only for $\alpha=\beta$ ($\alpha'=\beta'$).
The screened Coulomb interaction $V^{eh}_{k\, k'}$ conserves the spin of the electron-hole pair, and there are no spin-flip scattering processes, implying $F^{\beta\beta\beta'\beta'}_{kk'}= 0$ for $\beta\neq\beta'$, so Josephson-like pair transfers are forbidden. The resulting gap equations are thus decoupled. For brevity, we adopt the notation $F^{\beta\beta\beta'\beta'}_{kk'}\equiv F^{\beta\beta'}_{kk'}$. In terms of the Bogoliubov amplitudes: $$\!\!\!\! v^2_{\beta}(k) = \frac{1}{2}\left(1 -\frac{\xi_\beta(k)}{E_\beta(k)} \right);\: u^2_{\beta}(k) = \frac{1}{2}\left(1 +\frac{\xi_\beta(k)}{E_\beta(k)}\right), \label{eq:B_a}$$ Eqs. (\[eq:GreenFun\]) become $$\begin{aligned} \mathcal{G}^{\beta\beta}(k, i\omega_n)&=\frac{u_\beta^2}{i\omega_n-E_\beta^-} + \frac{v_\beta^2}{i\omega_n+E_\beta^+} \\ \mathcal{F}^{\beta\beta}(k, i \omega_n)&=\frac{u_\beta v_\beta}{i\omega_n-E_\beta^-} + \frac{u_\beta v_\beta}{i\omega_n+E_\beta^+}, \label{eq:GreenFun_uv}\end{aligned}$$ with $E_\beta^\pm$ defined in the main manuscript. Performing the summation over the Matsubara frequencies $\omega_n= \pi T (2n + 1)$ in the limit of zero temperature, we obtain the gap equations (Eqs. (3-4)) and the density equations (Eqs. (7-8)) in the main manuscript. The polarizabilities in the presence of the superfluid are [@Lozovik2012]: $$\Pi_n(q,\Omega_l)= T\,\frac{g_s g_v}{L^2}\hspace{-0.2cm}\sum_{\beta,k', i \omega_n }\hspace{-0.2cm} F^{\beta\beta}_{kk'} \mathcal{G}^{\beta\beta}(k', i\omega_n+i\Omega_l) \mathcal{G}^{\beta\beta}(k, i\omega_n) \label{Pi_n}$$ $$\Pi_a(q,\Omega_l)= T\,\frac{g_s g_v}{L^2}\hspace{-0.2cm}\sum_{\beta,k', i \omega_n }\hspace{-0.2cm} F^{\beta\beta}_{kk'}\mathcal{F}^{\beta\beta}(k', i\omega_n+i\Omega_l)\mathcal{F}^{\beta\beta}(k, i\omega_n) \label{Pi_a}$$ where $q=|\textbf{k}-\textbf{k}'|$. The polarizabilities in the effective electron-hole interaction (Eq. (5) in the main manuscript) are obtained by evaluating Eqs.
(\[Pi\_n\]) and (\[Pi\_a\]) at zero temperature in the static limit, $\Omega_l\rightarrow0$.
--- abstract: 'If ultra-high-energy cosmic rays originate from extragalactic sources, the offsets of their arrival directions from these sources imply an upper limit on the strength of the extragalactic magnetic field. The Pierre Auger Collaboration has recently reported that anisotropy in the arrival directions of cosmic rays is correlated with several types of extragalactic objects. If these cosmic rays originate from these objects, they imply a limit on the extragalactic magnetic field strength of $B < 0.7$–$2.2 \times 10^{-9} \left( \lambda_B / {\rm 1~Mpc} \right)^{-1/2}$ G for coherence lengths $\lambda_B < 100$ Mpc and $B < 0.7$–$2.2 \times 10^{-10}$ G at larger scales. This is comparable to existing upper limits at $\lambda_B = 1$ Mpc, and improves on them by a factor  at larger scales. The principal source of uncertainty in our results is the unknown cosmic-ray composition.' author: - 'J. D. Bray' - 'A. M. M. Scaife' bibliography: - 'all.bib' --- Introduction ============ Magnetic fields are a pervasive ingredient of astrophysical structure, from the small-scale inhomogeneities associated with star formation to the large-scale over-densities associated with galaxy clusters, filaments and the cosmic web. Across this wide range of scales, a common paradigm is accepted of amplification via dynamo and compression processes; however, in each case the amplification requires the presence of a pre-existing seed field. The origin of this seed field remains an open question in astrophysics. One of the key issues with tracing modern-day magnetic fields back to their origin is the problem of saturation effects, which result in amplified field strengths largely independent of their initial values. Since amplification is linked to local density, it is therefore the least-dense environments that retain the most information about their seed magnetic fields, and are of the greatest value in determining their origin. 
The low-density intergalactic medium, which incorporates the voids in the web of large-scale structure, is of particular interest for the study of cosmic magnetism. There are a variety of mechanisms for placing observational constraints on the strength of the extragalactic magnetic field (EGMF) in voids. A limit of $B < 9 \times 10^{-10}$ G has been found using power-spectrum analyses of the cosmic microwave background [CMB; @ade2014; @ade2016]. An observed absence of correlation between diffuse synchrotron emission and large-scale structure has been used to place a limit on the field strength in filaments which implies a similar limit in voids, $B < 10^{-9}$ G [@brown2017]. These upper limits complement the lower limit of $B \geq 3 \times 10^{-16}$ G set by the non-detection of gamma-ray cascades [@neronov2010]. For a comprehensive review of observational constraints on the EGMF we refer the reader to @durrer2013. Another method for probing the EGMF is through observations of ultra-high-energy cosmic rays (UHECRs). The trajectories of these charged particles are deflected as they pass through magnetic fields, and the magnitude of this deflection acts in principle as a measure of the field strength. If a UHECR source can be identified and shown to be extragalactic, then the displacement between the source and the corresponding UHECRs depends on the strength of the EGMF. To date, no individual UHECR source has been conclusively identified, but recent results from the Pierre Auger Observatory show collective correlations of UHECR arrival directions with several types of extragalactic objects, indicating an extragalactic origin for these particles [@aab2017; @aab2018a]. In the absence of intervening magnetic fields, we would expect the observed arrival directions of UHECRs to be aligned with their sources within the instrumental resolution. 
In practice, there will be an offset due to magnetic deflection of UHECRs by a combination of the EGMF and the Galactic magnetic field (GMF). The uncertain Galactic component may be neglected, and the entire deflection attributed to the EGMF, in order to place a conservative upper limit on the EGMF contribution. The concept of constraining the EGMF with this approach has been discussed by @lee1995, prior to the recent detection of UHECR anisotropy. More recently there have been detailed analyses of UHECR diffusion in theoretically-motivated models of the EGMF [@vazza2017; @hackstein2017], which are able to reproduce the observed large-scale anisotropy, though not yet to discriminate between these models. There is a clear need for the refinement and application of the approach of @lee1995, with data from recent UHECR observations, to constrain the EGMF in a simple parameterized model. In the following, we consider the scenario in which the UHECR anisotropy observed by the Pierre Auger Observatory is associated with one or more of the types of extragalactic objects with which they report correlations, and derive a conditional limit on the strength of the EGMF in the nearby Universe. In [Section \[sec:uhecr\]]{} we describe the propagation of UHECRs in the presence of a magnetic field, in [Section \[sec:derivation\]]{} we derive a new limit on the strength of the EGMF, in [Section \[sec:discussion\]]{} we discuss this limit in the context of existing constraints, and in [Section \[sec:conclusions\]]{} we draw our conclusions. Propagation of ultra-high-energy cosmic rays {#sec:uhecr} ============================================ UHECRs are charged particles, consisting of fully-ionized atomic nuclei, and consequently are deflected by magnetic fields. The effect of these deflections on the propagation of UHECRs depends on the field strength. 
For strong magnetic fields, the propagation is fully diffusive, and the arrival direction of a UHECR bears no relation to the direction of its source. This scenario predicts that the UHECR sky should be primarily isotropic, though with a small degree of anisotropy from the Compton-Getting effect [@compton1935]. For weak magnetic fields, the arrival directions of UHECRs will be offset from their sources by an angle depending on the magnitude of the deflection, which in the small-angle limit is proportional to the field strength. Any observed correlations of UHECRs with the directions of sources, if such are identified, imply that we are in the latter regime, with the offset angles providing a measure of the magnetic field strength. In the weak-field scenario, the offset angles between the arrival directions of UHECRs and their sources due to deflections in the EGMF will depend both on the strength of the EGMF and on the scale of its coherent structure. In general, we expect the EGMF to have a turbulence spectrum that spans a range of scales. We will consider here two special cases: one in which the coherence length $\lambda_B$ of the EGMF is longer than the distance $D$ to a UHECR source ([Section \[sec:uniform\]]{}), making it uniform on this scale; and one in which the EGMF consists of independent cells of size ${\lambda_B \ll D}$ ([Section \[sec:turbulent\]]{}). In a uniform magnetic field {#sec:uniform} --------------------------- If the EGMF has a coherence length $\lambda_B$ longer than the distance $D$ to a source of UHECRs, a UHECR propagating from this source to Earth will experience a near-uniform magnetic field. Assuming this field to have a strength $B_\perp$ perpendicular to the motion of the UHECR, it will follow a curved path with a gyroradius $$r_{\rm g} = \frac{E}{Z e c B_\perp} \label{eqn:r_g}$$ where $E$ is the energy of the UHECR, $Z$ its atomic number, $e$ the electron charge, and $c$ the speed of light. 
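For orientation, Equation (\[eqn:r\_g\]) can be evaluated in SI units (a sketch; 1 G $= 10^{-4}$ T): a proton with $E = 10^{20}$ eV in a $10^{-9}$ G field has a gyroradius of order $10^{2}$ Mpc, far larger than the distances to nearby extragalactic sources, placing such fields firmly in the weak-deflection regime.

```python
# Constants (SI)
E_CHARGE = 1.602176634e-19  # C
C_LIGHT = 2.99792458e8      # m / s
MPC = 3.0857e22             # m

E_ev = 1e20     # UHECR energy (eV)
Z = 1           # atomic number (proton)
B_gauss = 1e-9  # perpendicular field strength (gauss)

# r_g = E / (Z e c B), converting the energy eV -> J and field gauss -> tesla
E_joule = E_ev * E_CHARGE
B_tesla = B_gauss * 1e-4
r_g_mpc = E_joule / (Z * E_CHARGE * C_LIGHT * B_tesla) / MPC  # ~108 Mpc
```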
As illustrated in [Figure \[fig:coherent\]]{}, this leads to an offset $\theta$ between the observed arrival direction of the UHECR and the position of its source. From [Equation \[eqn:r\_g\]]{} and geometrical considerations, this offset angle can be found as $$\begin{aligned} \sin\theta &= \frac{D}{2}\frac{ Z e c B_\perp }{E} \label{eqn:theta} \\ &= 2.65^\circ Z {\ensuremath{\left( \dfrac{D}{\rm 10~Mpc} \right)}} {\ensuremath{\left( \dfrac{B_\perp}{10^{-9}~{\rm G}} \right)}} {\ensuremath{\left( \dfrac{E}{10^{20}~{\rm eV}} \right)}}^{\!\!-1} \label{eqn:thetavals} . \end{aligned}$$ Note that this offset angle differs from the deflection of the path of the UHECR as given in equation (5) of @lee1995 and equation (135) of @durrer2013, which is $2\theta$ in our notation. ![Motion of a UHECR in a uniform magnetic field. The magnetic deflection of the UHECR causes its arrival direction at Earth to be offset by an angle $\theta$ from the position of its source. The distance $D$ to the source and the gyroradius $r_{\rm g}$ of the UHECR obey the relation ${D = 2 r_{\rm g} \sin\theta}$.[]{data-label="fig:coherent"}](coherent){width="\linewidth"} Given a constraint ${\theta < \theta_{\rm max}}$ on the offset angle due to magnetic deflection by the EGMF, it is possible to place an upper limit on the EGMF strength $B$. As $B_\perp$ represents the strength of the magnetic field in only two spatial dimensions, and assuming no preferred orientation of the field relative to Earth, we can estimate ${B = B_\perp \sqrt{3/2}}$. Consequently we obtain the limit $$\begin{aligned} B &< \sin(\theta_{\rm max}) \frac{\sqrt{6}}{D} \frac{E}{Z e c} \label{eqn:B1} \\ &< 2.65 \times 10^{-8}~{\rm G} \, \frac{\sin(\theta_{\rm max})}{Z} {\ensuremath{\left( \dfrac{D}{\rm 10~Mpc} \right)}}^{\!\!-1} {\ensuremath{\left( \dfrac{E}{10^{20}~{\rm eV}} \right)}} \label{eqn:B1vals} . 
\end{aligned}$$ For a uniform magnetic field, UHECRs from different points on the sky will experience a similar deflection, expressed as a rotation around an axis aligned with the local orientation of the EGMF. The angle $\theta$ can therefore be interpreted as the offset of UHECR arrival directions for a single source, as described above, or the collective offset for a population of sources at a common distance $D$. In a turbulent magnetic field {#sec:turbulent} ----------------------------- If the EGMF is turbulent on small scales — that is, its coherence length $\lambda_B$ is smaller than the distance $D$ to a source of UHECRs — then a UHECR from this source will not follow a simple path as shown in [Figure \[fig:coherent\]]{}. In the limit ${\lambda_B \ll D}$, it will stochastically accumulate a series of small deflections as shown in [Figure \[fig:incoherent\]]{}. UHECRs from a single source will undergo different deflections, and the source will appear to be smeared out, with a root-mean-square scale $$\begin{aligned} \theta_{\rm rms} &\approx \frac{\sqrt{D \, \lambda_B}}{2} \, \frac{Z e c B_\perp}{E} \label{eqn:thetarms} \\ & \begin{aligned} \approx 0.84^\circ Z {\ensuremath{\left( \dfrac{D}{\rm 10~Mpc} \right)}}^{\!\frac{1}{2}} {\ensuremath{\left( \dfrac{\lambda_B}{\rm 1~Mpc} \right)}}^{\!\frac{1}{2}} \\ \times {\ensuremath{\left( \dfrac{B_\perp}{10^{-9}~{\rm G}} \right)}} {\ensuremath{\left( \dfrac{E}{10^{20}~{\rm eV}} \right)}}^{\!\!-1} . 
\label{eqn:thetarmsvals} \end{aligned} \end{aligned}$$ If we can place a constraint ${\theta_{\rm rms} < \theta_{\rm max}}$ on this angle then, similarly to [Equation \[eqn:B1\]]{}, we can constrain the strength of the EGMF to be $$\begin{aligned} B &\lesssim \theta_{\rm max} \frac{\sqrt{6}}{\sqrt{D \, \lambda_B}} \, \frac{E}{Z e c} \label{eqn:B2} \\ & \begin{aligned} \lesssim 8.37 \times 10^{-8}~{\rm G} \, \frac{\theta_{\rm max}}{Z} {\ensuremath{\left( \dfrac{D}{\rm 10~Mpc} \right)}}^{\!\!-\frac{1}{2}} \\ \times {\ensuremath{\left( \dfrac{\lambda_B}{\rm 1~Mpc} \right)}}^{\!\!-\frac{1}{2}} {\ensuremath{\left( \dfrac{E}{10^{20}~{\rm eV}} \right)}} . \label{eqn:B2vals} \end{aligned} \end{aligned}$$ Such a constraint may be obtained by observing a smeared-out UHECR source, or the angular scale of a statistical correlation between such sources and UHECR arrival directions. More generally, observing any structure in the all-sky distribution of UHECRs would imply ${\theta_{\rm rms} \lesssim 1}$ rad. This limit might be slightly exceeded, at the cost of reducing the amplitude of the observed structure, but in this case the small-angle approximation inherent to [Equation \[eqn:thetarms\]]{} breaks down and a more general simulation is required [e.g. @vazza2017; @hackstein2017]. ![Motion of a UHECR in a turbulent magnetic field with coherence length $\lambda_B$. A series of small deflections in individual turbulence cells, each approximated as having a uniform magnetic field, leads to an accumulated offset in the UHECR arrival direction ${\theta \propto \sqrt{D/\lambda_B}}$.[]{data-label="fig:incoherent"}](incoherent){width="\linewidth"} Other propagation effects {#sec:propagation} ------------------------- In the preceding discussion we have assumed that the energy and charge of a deflected UHECR remain unchanged as they propagate through the EGMF. 
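As a cross-check, the numerical coefficients in Equations (\[eqn:thetavals\]) and (\[eqn:thetarmsvals\]) from the preceding two subsections can be reproduced directly (a sketch in SI units; 1 G $= 10^{-4}$ T):

```python
import math

E_CHARGE = 1.602176634e-19  # C
C_LIGHT = 2.99792458e8      # m / s
MPC = 3.0857e22             # m

def offset_deg(d_mpc, b_gauss, e_ev, z=1):
    """Uniform-field offset (deg), Eq. (thetavals): sin(theta) = (D/2) Z e c B / E."""
    sin_theta = (d_mpc * MPC / 2) * z * E_CHARGE * C_LIGHT \
        * (b_gauss * 1e-4) / (e_ev * E_CHARGE)
    return math.degrees(math.asin(sin_theta))

def offset_rms_deg(d_mpc, lam_mpc, b_gauss, e_ev, z=1):
    """Turbulent-field rms offset (deg), Eq. (thetarmsvals):
    theta_rms ~ sqrt(D lambda_B)/2 * Z e c B / E (small-angle regime)."""
    theta = (math.sqrt(d_mpc * lam_mpc) * MPC / 2) * z * E_CHARGE * C_LIGHT \
        * (b_gauss * 1e-4) / (e_ev * E_CHARGE)
    return math.degrees(theta)

theta_uniform = offset_deg(10, 1e-9, 1e20)      # ~2.65 deg
theta_turb = offset_rms_deg(10, 1, 1e-9, 1e20)  # ~0.84 deg
```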
In practice, UHECRs suffer energy losses or attenuation through interactions with background photon fields, so their observed energy on arrival at Earth does not accurately reflect their gyroradius during propagation. The principal energy-loss mechanisms affecting UHECR protons are pair production [the Bethe-Heitler process; @bethe1934] and the photopion interactions responsible for the GZK limit [@greisen1966; @zatsepin1966]. The relative impacts of these processes vary depending on energy. The GZK limit imposes a strong cut-off for UHECR protons with energies exceeding ${E_{\rm GZK} \sim 5 \times 10^{19}}$ eV, which will have a mean free path that decreases to $\lambda_{\rm GZK} \sim 6$ Mpc at $\gtrsim 10^{20}$ eV, but at energies $\lesssim 10^{19}$ eV the GZK limit is effectively equivalent to the cosmological horizon [@ruffini2016]. In contrast to GZK photopion interactions, the Bethe-Heitler pair-production process only removes a small fraction of the energy of a UHECR. Therefore, although the process has a much shorter mean free path of $\lambda_{\rm BH} \sim 437$ kpc [@ruffini2016], UHECRs will scatter many times before losing a significant portion of their energy. The horizon imposed by the Bethe-Heitler process is instead defined by the mean energy-loss distance, corresponding to the distance at which the energy of the UHECR has fallen to $1/e$ of its original value. For the Bethe-Heitler process this distance is $\gtrsim 1$ Gpc [@ruffini2016], which sets the effective horizon for UHECR protons with energies less than $E_{\rm GZK}$. Heavier UHECRs such as iron nuclei can additionally interact with background photons through photo-disintegration, splitting them into lighter nuclei. This process also imposes a GZK limit, at a similar threshold as photopion interactions do for UHECR protons. 
At energies over $E_{\rm GZK}$ the mean free path is ${\lesssim 1}$ Mpc, but at lower energies it is much longer, $\gtrsim 100$ Mpc at $10^{19}$ eV and $\gtrsim 1$ Gpc at $10^{18}$ eV [@allard2008]. Furthermore, to first order photo-disintegration does not change the charge-to-mass ratio (or charge-to-energy ratio), which determines the gyroradius, so the assumption that the gyroradius is constant will approximately hold over distances that exceed this length by a small factor. Derivation of a limit on the extragalactic magnetic field {#sec:derivation} ========================================================= The Pierre Auger Collaboration has recently reported correlations between UHECR arrival directions and several types of extragalactic objects [@aab2017; @aab2018a]. Each of these correlations, if it represents a true association between UHECRs and their sources, implies a limit on the strength of the EGMF. Per [Section \[sec:uniform\]]{} and [Equation \[eqn:B1vals\]]{}, the offset angles between the UHECR arrival directions and the associated sources imply a limit on any component of the EGMF with a scale larger than the distance to these sources. Per [Section \[sec:turbulent\]]{} and [Equation \[eqn:B2vals\]]{}, these offset angles also imply a scale-dependent limit on any turbulent component of the EGMF with a coherence length shorter than this distance. In practice, deflections of UHECRs from extragalactic sources will result from both the GMF and the EGMF. In general, the GMF component of the deflection will add to that applied by the EGMF, and so attributing the entire deflection to the latter, as we do here, will result in a conservative upper limit on its strength. 
Dipolar anisotropy of the ultra-high-energy cosmic-ray background {#sec:dipole} ----------------------------------------------------------------- The first element of anisotropy recently detected by the Pierre Auger Observatory in the arrival directions of UHECRs corresponds to a dipole with amplitude 6.5% and significance ${5.2\sigma}$, in a sample of ${3 \times 10^4}$ events with energies above a threshold of 8 EeV [@aab2017]. The dipole is centered on ($l$,$b$) = ($233^\circ$,$-13^\circ$), with an uncertainty around $\pm 10^\circ$. This position is separated by $125^\circ$ from the Galactic Center, strongly suggesting an extragalactic origin for these particles, in which case they will have experienced deflections in the EGMF. @aab2017 compare this result with anisotropy in the 2 Micron All-Sky Redshift Survey [2MRS; @erdogdu2006]. The 2MRS recorded the redshifts of 23,000 galaxies selected by their near-infrared flux, which is a good tracer of mass. Any extragalactic source of UHECRs is likely to have a distribution close to that of matter in the nearby Universe, so this represents a general prediction for the distribution of such sources. The simplest possible comparison is between the dipole anisotropy in UHECR arrival directions and the dipole moment in the all-sky distribution of 2MRS galaxies. The flux-weighted dipole in the 2MRS, excluding objects in the Local Group, is centered on Galactic coordinates ($l$,$b$) = ($251^\circ$,$+38^\circ$), with a magnitude defined by the peculiar velocity 1577 km s$^{-1}$ [@erdogdu2006]. For Local Group objects only, the dipole is in the direction ($l$,$b$) = ($121^\circ$,$-22^\circ$) with peculiar velocity 220 km s$^{-1}$. Combining these, we find the total dipole to be in the direction ($l$,$b$) = ($243^\circ$,$+38^\circ$), with an uncertainty that will be dominated by that of the first component (${\pm 10^\circ}$). The offset angle between this 2MRS dipole and the UHECR dipole reported by @aab2017 is $52 \pm 14^\circ$.
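The combined dipole direction quoted above, and its offset from the UHECR dipole, can be reproduced by adding the two 2MRS components as Cartesian vectors weighted by their peculiar velocities (a sketch; all angles in Galactic coordinates):

```python
import math

def to_vec(l_deg, b_deg, mag=1.0):
    """Cartesian vector of length mag from Galactic longitude/latitude."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    return [mag * math.cos(b) * math.cos(l),
            mag * math.cos(b) * math.sin(l),
            mag * math.sin(b)]

# Flux-weighted 2MRS dipole (excluding the Local Group) and the
# Local Group dipole, weighted by their peculiar velocities (km/s)
v1 = to_vec(251, 38, 1577)
v2 = to_vec(121, -22, 220)
total = [a + b for a, b in zip(v1, v2)]

mag = math.sqrt(sum(c * c for c in total))
l_tot = math.degrees(math.atan2(total[1], total[0])) % 360  # ~243 deg
b_tot = math.degrees(math.asin(total[2] / mag))             # ~+38 deg

# Angular separation from the UHECR dipole at (l, b) = (233, -13)
u = to_vec(233, -13)
cos_sep = sum(a * b for a, b in zip(total, u)) / mag
sep = math.degrees(math.acos(cos_sep))                      # ~52 deg
```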
This is sufficiently large to permit a chance coincidence, so it does not, on its own, constitute strong evidence that these anisotropies are associated with one another. It is possible that the UHECRs responsible for the anisotropy originate from a population of extragalactic objects that is not associated with the distribution of matter in the nearby Universe as measured by the 2MRS. Alternatively, the UHECRs responsible for the anisotropy may indeed originate from extragalactic objects associated with the 2MRS dipole, and the offset angle may result from their deflection in the GMF or in the EGMF. The direction of the offset matches that expected from deflections in the GMF [@aab2017; @jansson2012a], consistent with this picture, although uncertainties in the composition of UHECRs make it difficult to predict its magnitude. ### Resulting limit on the extragalactic magnetic field {#sec:dipole_lim} If there is a real association between the nearby-galaxy dipole measured by the 2MRS and the UHECR dipole measured by the Pierre Auger Observatory, it implies a limit on the strength of the EGMF. The strength of any component of the EGMF with a coherence length larger than the typical distance to the 2MRS sources is constrained by [Equation \[eqn:B1vals\]]{}, with ${\theta_{\rm max} = 52 \pm 14^\circ}$ the offset angle between the two dipoles. The strength of smaller-scale turbulence on the EGMF is constrained by [Equation \[eqn:B2vals\]]{}, with ${\theta_{\rm max} = 1~{\rm rad} = 57^\circ}$ required to allow the dipole structure to persist. Other parameters are required by [Equations \[eqn:B1vals\] and \[eqn:B2vals\]]{}: - the mean atomic number of the UHECRs associated with the dipole. The composition of UHECRs, in terms of the relative fractions of different elements, is poorly understood, leading to a substantial uncertainty in this value. 
Current results in this energy range exclude a composition solely of hydrogen, of heavy nuclei such as iron, or of a mixture of the two, suggesting instead a mixed composition of intermediate elements with likely values in the range $1.7 < Z < 5$ [@aab2014; @aab2017]. - the typical distance to 2MRS sources responsible for the dipole. Given the median redshift of sources in the 2MRS of ${z \approx 0.02}$ [@erdogdu2006] and a Hubble constant of $H_0 = 67.6$ km s$^{-1}$ Mpc$^{-1}$ [@grieb2017], the median distance is ${D = z c / H_0 \approx 90}$ Mpc. However, we instead take the value ${D = 70}$ Mpc, incorporating the moderate attenuation of UHECRs from the more distant sources [@aab2018a]. - the typical energy of UHECRs in the sample above a threshold of 8 EeV. As the dipole position measures the mean deflection of UHECRs, and these deflections are inversely proportional to energy, we calculate the harmonic mean as the typical value. Due to the steep spectrum of UHECRs, this value is very close to the threshold: from the modeled spectrum [@abraham2010] we calculate it to be ${E = 12}$ EeV with a systematic uncertainty of $\pm 14$% [@aab2015b]. The parameters for this correlation are listed in [Table \[tab:params\]]{}. Note that the typical energy $E$ is well below the GZK threshold, and $D$ is well within the UHECR horizon at this energy, so the assumption that the UHECR charge/mass ratio remains constant, discussed in [Section \[sec:propagation\]]{}, approximately holds.
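As a sketch of the harmonic-mean estimate, for a pure power law ${\rm d}N/{\rm d}E \propto E^{-\gamma}$ above a threshold $E_{\rm th}$ the harmonic mean is $E_{\rm th}\,\gamma/(\gamma-1)$. The index $\gamma = 3$ below is an illustrative assumption (the value quoted in the text comes from the modeled spectrum), but it reproduces ${E = 12}$ EeV for an 8 EeV threshold:

```python
import math

def harmonic_mean_energy(e_th, gamma, ratio=1e6, n=20000):
    """Harmonic mean <1/E>^-1 of dN/dE ~ E^-gamma above e_th,
    by log-spaced midpoint integration up to e_th * ratio."""
    num = 0.0  # integral of E^-gamma dE      (total counts)
    den = 0.0  # integral of E^(-gamma-1) dE  (counts weighted by 1/E)
    lr = math.log(ratio)
    for i in range(n):
        x0 = e_th * math.exp(lr * i / n)
        x1 = e_th * math.exp(lr * (i + 1) / n)
        xm, dx = 0.5 * (x0 + x1), x1 - x0
        num += xm ** -gamma * dx
        den += xm ** (-gamma - 1.0) * dx
    return num / den

# Analytic result: gamma/(gamma-1) * E_th = 1.5 * 8 EeV = 12 EeV
e_mean = harmonic_mean_energy(8.0, 3.0)
```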
[lccc]{} Correlation & $\overline{E}$ (EeV) & $\theta_{\rm max}$ ($^\circ$) & $D$ (Mpc)\ 2MRS dipole & 12 & $52_{-14}^{+14}$ & 70\ *Fermi*-LAT SBGs & 50 & $13_{-3}^{+4}$ & 10\ *Fermi*-LAT $\gamma$AGN & 75 & $7_{-2}^{+4}$ & 150\ *Swift*-BAT & 50 & $12_{-4}^{+6}$ & 70\ 2MRS & 49 & $13_{-4}^{+7}$ & 70\ From the parameters in [Table \[tab:params\]]{} and [Equations \[eqn:B1vals\] and \[eqn:B2vals\]]{}, we derive a scale-dependent limit on the EGMF, under the assumption that the correlation with the 2MRS dipole represents a true association between UHECRs and their sources: $$\begin{aligned} \frac{B}{\rm G} &< \begin{cases} 1.3\times 10^{-9} \, {\ensuremath{\left( \dfrac{Z}{2.9} \right)}}^{\!\!-1} {\ensuremath{\left( \dfrac{\lambda_B}{\rm 1~Mpc} \right)}}^{\!\!-\frac{1}{2}} & \lambda_B < 100~{\rm Mpc}\\ 1.3 \times 10^{-10} \, {\ensuremath{\left( \dfrac{Z}{2.9} \right)}}^{\!\!-1} & \lambda_B > 100~{\rm Mpc} . \end{cases} \end{aligned}$$ The 100 Mpc scale for the transition between these regimes differs from the scale ${D = 70}$ Mpc because of the introduction of a small-angle approximation from [Equation \[eqn:B1vals\]]{} to [Equation \[eqn:B2vals\]]{}. As the GMF may be responsible for some of the observed deflection [@aab2017], a limit incorporating the effect of the GMF may be more stringent than this conservative result. The principal uncertainty in this result is associated with the range $1.7 < Z < 5$ for the mean atomic number, dominating over smaller uncertainties in the offset angle and energy scale; the nominal values above represent the geometric mean of this range (${Z = 2.9}$). The possible range for the limit, depending on $Z$, is $B < 0.7$–$2.2 \times 10^{-9} \left( \lambda_B / {\rm 1~Mpc} \right)^{-1/2}$ G for coherence lengths $\lambda_B < 100$ Mpc and $B < 0.7$–$2.2 \times 10^{-10}$ G at larger scales.
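Since the limit scales as $Z^{-1}$, the quoted band $B < 0.7$–$2.2 \times 10^{-9}$ G follows directly from the composition range; a minimal sketch of that scaling, with the normalization taken from the equation above:

```python
import math

# The limit scales as 1/Z; the normalization 1.3e-9 G (at Z = 2.9 and
# lambda_B = 1 Mpc) is taken from the equation above, valid for
# lambda_B < 100 Mpc.
B_AT_GEOMEAN = 1.3e-9           # G
Z_LO, Z_HI = 1.7, 5.0           # composition range from Auger data
Z_GEO = math.sqrt(Z_LO * Z_HI)  # geometric mean, ~2.9

def b_limit(z, lambda_b_mpc=1.0):
    """Scale-dependent EGMF limit below the 100 Mpc transition."""
    return B_AT_GEOMEAN * (z / Z_GEO) ** -1 * lambda_b_mpc ** -0.5

print(f"Z_geo = {Z_GEO:.1f}")
# Heavier composition (larger Z) means smaller rigidity, so a tighter limit.
print(f"B range: {b_limit(Z_HI):.1e} - {b_limit(Z_LO):.1e} G")  # ~0.8e-9 - 2.2e-9
```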
Intermediate-scale anisotropy of ultra-high-energy cosmic rays {#sec:correlations} -------------------------------------------------------------- The remaining elements of anisotropy recently detected by the Pierre Auger Observatory are correlations between UHECR arrival directions and extragalactic objects from several catalogs [@aab2018a]. These correlations are on intermediate angular scales (7–13$^\circ$), smaller than the all-sky dipole described in [Section \[sec:dipole\]]{}, but larger than the resolution of the instrument. Each correlation represents an excess of UHECRs (above some energy threshold) with arrival directions aligned (within some search radius) with objects in a given catalog, against a null hypothesis of an isotropic distribution of UHECRs. After imposing a statistical penalty for the *a posteriori* parameter search, @aab2018a report correlations (with corresponding statistical significances) with sources detected in gamma rays by *Fermi*-LAT and classified as starburst galaxies (SBGs; $4.0\sigma$) or active galactic nuclei ($\gamma$AGN; $2.7\sigma$), X-ray sources detected by *Swift*-BAT ($3.2\sigma$), and infrared sources detected by the 2MRS ($2.7\sigma$). In each case, the best fit corresponds to a small fraction (7–16%) of UHECRs originating from objects of the specified type, and the remainder constituting an isotropic background. The best-fit energy thresholds are in the range  EeV, and the best-fit search radii of 7–13$^\circ$ define the angular scale of the correlations. Unlike the result described in [Section \[sec:dipole\]]{}, these correlations directly associate UHECRs with extragalactic sources. Barring an unlikely chance coincidence, each correlation implies that some fraction of UHECRs originate from the corresponding type of extragalactic object, or from another source class with a correlated extragalactic distribution.
Note that these results are not fully independent, as the extragalactic objects in each result are correlated with one another; @aab2018a also consider joint fits to multiple source classes, which we neglect here. In this scenario, the offsets between the arrival directions of UHECRs and their corresponding sources result from the combination of deflections in the GMF and EGMF. Deflection in a component of the EGMF with a large coherence length would result in a systematic offset of UHECRs from multiple sources in a common direction, but @aab2018a do not report whether such an effect is observed. The use of a fixed search radius for UHECRs around a prospective source, irrespective of its distance, corresponds to an expectation of Galactic deflections only: for deflections in the EGMF, UHECRs from more distant sources would have a larger offset angle, meriting a larger search radius. ### Resulting limit on the extragalactic magnetic field {#sec:correlations_lim} For each of the correlations reported by @aab2018a we represent the typical energy with the harmonic mean $\overline{E}$ above the corresponding best-fit energy threshold, as in [Section \[sec:dipole\_lim\]]{}. For the typical source distance $D$ we use the radii calculated by @aab2018a within which 90% of the UHECR flux from the corresponding source population is expected to originate, allowing for attenuation. These values are listed in [Table \[tab:params\]]{}, along with the angular scale of each correlation, which we take as an upper limit $\theta_{\rm max}$ on the offset due to the EGMF, conservatively neglecting any deflection in the GMF. As in [Section \[sec:dipole\_lim\]]{}, we take the mean atomic number to be in the range ${1.7 < Z < 5}$. The energies of the UHECRs exhibiting these correlations are substantially higher than those responsible for the dipole anisotropy described in [Section \[sec:dipole\]]{}, and hence more susceptible to attenuation (see [Section \[sec:propagation\]]{}).
For the correlations with *Fermi*-LAT SBGs, and with *Swift*-BAT and 2MRS sources, the typical energies do not exceed the GZK threshold, and so propagation with minimal attenuation is likely over the ${D \leq 70}$ Mpc distances involved. However, the correlation with *Fermi*-LAT $\gamma$AGN involves more energetic UHECRs, above the GZK threshold, propagating over longer (${D = 150}$ Mpc) distances, and so this population of UHECRs is likely to have undergone substantial attenuation through the processes described in [Section \[sec:propagation\]]{}, violating the assumptions behind the calculations in [Sections \[sec:uniform\] and \[sec:turbulent\]]{}. This is consistent with the results of @aab2018a [Figure 1], which show attenuation to have a substantial effect only on the correlation with $\gamma$AGN. We therefore exclude this specific correlation from further analysis. It is notable from [Table \[tab:params\]]{} that the typical offset angle $\theta_{\rm max}$ has an approximately inverse relation with UHECR energy, as expected from [Equation \[eqn:theta\]]{}, but does not increase with the typical distance $D$ to the class of correlated sources. This is the outcome that would result if the deflection were entirely due to the GMF, and thus irrespective of the distance to the source, whereas deflections due to the EGMF will be greater for more distant sources. Further examination of this trend may allow discrimination between the GMF and EGMF contributions to the deflection of UHECRs, but the uncertainty in $\theta_{\rm max}$ is too large to permit this with the current data. For the present, it is safe to say that the limit obtained by attributing the entire deflection to the EGMF, as we do here, is likely to be quite conservative. For the remaining correlations we calculate limits (summarized in [Table \[tab:lims\]]{}) on the strength of the EGMF as in [Section \[sec:dipole\_lim\]]{}, under the assumption that each correlation represents a true association between UHECRs and their sources.
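The near-inverse relation between $\theta_{\rm max}$ and $\overline{E}$ noted above can be seen directly from the nominal values in [Table \[tab:params\]]{}; the row labels below are shorthand for the catalog rows:

```python
# Nominal (E_bar [EeV], theta_max [deg]) pairs from Table [tab:params].
# If the deflection were purely magnetic at a fixed effective source
# distance, the product E * theta would be roughly constant.
samples = {
    "2MRS dipole": (12, 52),
    "SBGs":        (50, 13),
    "gamma-AGN":   (75, 7),
    "X-ray":       (50, 12),
    "2MRS":        (49, 13),
}
for name, (e_eev, theta_deg) in samples.items():
    print(f"{name:12s} E*theta = {e_eev * theta_deg:4d} EeV deg")
# The products cluster near ~600 EeV deg (525-650) even though D varies
# from 10 to 150 Mpc, consistent with a deflection that does not grow
# with source distance.
```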
For the correlation with *Fermi*-LAT SBGs, the resulting limit is $$\begin{aligned} \frac{B}{\rm G} &< \begin{cases} 3.3\times 10^{-9} \, {\ensuremath{\left( \dfrac{Z}{2.9} \right)}}^{\!\!-1} {\ensuremath{\left( \dfrac{\lambda_B}{\rm 1~Mpc} \right)}}^{\!\!-\frac{1}{2}} & \lambda_B < 10~{\rm Mpc}\\ 1.0 \times 10^{-9} \, {\ensuremath{\left( \dfrac{Z}{2.9} \right)}}^{\!\!-1} & \lambda_B > 10~{\rm Mpc} , \end{cases} \end{aligned}$$ which is substantially less constraining than the limit in [Section \[sec:dipole\_lim\]]{}, due to the shorter typical distances to the correlated sources. The limit resulting from the correlation with *Swift*-BAT sources is $$\begin{aligned} \frac{B}{\rm G} &< \begin{cases} 1.1\times 10^{-9} \, {\ensuremath{\left( \dfrac{Z}{2.9} \right)}}^{\!\!-1} {\ensuremath{\left( \dfrac{\lambda_B}{\rm 1~Mpc} \right)}}^{\!\!-\frac{1}{2}} & \lambda_B < 70~{\rm Mpc}\\ 1.3 \times 10^{-10} \, {\ensuremath{\left( \dfrac{Z}{2.9} \right)}}^{\!\!-1} & \lambda_B > 70~{\rm Mpc} \end{cases} \end{aligned}$$ and the equivalent limit from the correlation with 2MRS sources is almost identical to this, being only 10% higher (less constraining). These two are both similarly close to the limit based on the dipole anisotropy described in [Section \[sec:dipole\_lim\]]{}: with no significant loss of precision compared to the uncertainties in the data, we can regard these three correlations as establishing a single limit. [lcc]{} Correlation & $B$ (G) at large $\lambda_B$ & $B$ (G) at $\lambda_B = 1$ Mpc\ 2MRS dipole & $1.3 \times 10^{-10}$ & $1.3 \times 10^{-9}$\ *Fermi*-LAT SBGs & $10.2 \times 10^{-10}$ & $3.3 \times 10^{-9}$\ *Swift*-BAT & $1.3 \times 10^{-10}$ & $1.1 \times 10^{-9}$\ 2MRS & $1.4 \times 10^{-10}$ & $1.2 \times 10^{-9}$\ Discussion {#sec:discussion} ========== At present there are no direct measurements of the strength of the EGMF in voids, but various limits have been established in terms of its strength $B$ and coherence length $\lambda_B$.
The limits derived in this work, based on the correlations of UHECR arrival directions with the dipolar distribution of 2MRS sources and with *Swift*-BAT and 2MRS sources on smaller angular scales, are shown in [Figure \[fig:limplot\]]{}. We also show previous limits, discussed below, confining ourselves for brevity to the most constraining measurements only. For a more comprehensive review of the observational and theoretical limits on the strength of the EGMF in voids we refer the reader to @durrer2013. ![Parameter space for the strength $B$ and coherence length $\lambda_B$ of the EGMF in voids, showing regions excluded by past limits (light shaded) and this work (dark shaded). The near-identical limits placed in this work, based on UHECR observations, have a substantial uncertainty associated with the mean UHECR atomic number $Z$; solid and dotted lines show respectively the cases $Z=1.7$ and $Z=5$, which represent the range permitted by current composition measurements. Theoretical constraints are set by MHD turbulence, which causes the decay of short-scale modes in magnetic fields [@durrer2013], and by the Hubble radius, which places an upper limit to the size of any observable structure. The lower limit is set by the non-detection of gamma-ray cascades [@neronov2010]. The upper limit shown from CMB observations is a projection from @paoletti2011, as represented by @durrer2013, and compatible with the limit $B < 9 \times 10^{-10}$ G established with Planck data [@ade2016].[]{data-label="fig:limplot"}](limplot){width="\linewidth"} Using the non-detection of the secondary photon-photon cascade, lower limits on the strength of the EGMF in voids have been set observationally using gamma-ray measurements from *Fermi*-LAT [@neronov2010; @tavecchio2011]. The general limit given by @neronov2010 is $B \geq 3 \times 10^{-16}$ G, but towards individual blazars the limits span the range $\sim 10^{-17}$–$10^{-14}$ G when considering various emission and suppression scenarios.
On the smallest scales a theoretical limit is set by the termination of evolutionary tracks in $(B, \lambda_B)$ space for various magnetohydrodynamic (MHD) turbulence scenarios. On the largest scale a limit is set by the Hubble radius, $\ell_{\rm H}$; fields coherent on scales larger than the Hubble radius are possible if due to seed fields that were generated during inflation. Fields with coherence lengths longer than the Hubble radius cannot be probed observationally, and the upper-limit constraint of $B \lesssim 10^{-9}$ G on this scale currently comes from CMB power-spectrum analysis [@ade2014; @ade2016]. This upper limit extends uniformly to smaller scales (1 Mpc $\lesssim \lambda_B < \ell_{\rm H}$) and can vary somewhat (within a factor of $\sim 5$) when considering different primordial field scenarios. The strongest constraint of $B < 9\times 10^{-10}$ G is given by a scenario in which scale-invariant primordial magnetic fields are considered. These power-spectrum limits are more constraining than those from Faraday rotation of the CMB by several orders of magnitude [@ade2016]. On smaller scales ($\lambda_B \lesssim 1$ Mpc) the behavior of the CMB upper limit becomes more complex as spectral distortions need to be taken into account. The results presented here further constrain the upper limits on the magnetic field strength in voids on scales ${\lambda_B > 100}$ Mpc by around an order of magnitude (factor of $\sim 4$–12), depending on the composition of UHECRs. Composition is the largest source of uncertainty in this limit, as shown in [Figure \[fig:limplot\]]{} and discussed in [Sections \[sec:dipole\_lim\] and \[sec:correlations\_lim\]]{}. Conclusion {#sec:conclusions} ========== We have derived an upper limit on the strength of the EGMF, conditional on the distribution of UHECR arrival directions being associated with one or more of the types of extragalactic objects with which correlations have been observed [@aab2017; @aab2018a].
Three correlations of UHECR arrival directions — with an all-sky dipole in the distribution of 2MRS sources, and on smaller angular scales with both *Swift*-BAT and 2MRS sources — each imply a similar limit (within ${\sim 10}$%). This implied limit is similar to existing constraints from CMB observations for fields with a coherence length around 1 Mpc, and a factor of $\sim 4$–12 more constraining for fields with a coherence length ${>100}$ Mpc. The UHECR dipole has a statistical significance of 5.2$\sigma$, but its correlation with the 2MRS dipole may be a chance alignment, if UHECRs do not originate from a class of object correlated with the extragalactic distribution of mass in the nearby Universe. The smaller-scale correlations with *Swift*-BAT and 2MRS have significances of 3.2$\sigma$ and 2.7$\sigma$ respectively, and are not susceptible to such an alternate explanation. Our derived limit on the EGMF holds if any of these three correlations represents a true association between UHECRs and their sources; but note that the last two correlations are not completely statistically independent. These results suggest that techniques for probing cosmic magnetic fields on large scales or amplified fields derived from them, such as observations of diffuse synchrotron or radio polarization, will not achieve a detection until they improve substantially in sensitivity. Conversely, if such techniques were to achieve a detection, it would cast doubt on the current evidence for an extragalactic origin of UHECRs. There remains a parameter space spanning five orders of magnitude in EGMF strength between our upper limit and the lower limit established by gamma-ray observations [@neronov2010]. The major source of uncertainty in our limit is the unknown UHECR composition, which is the subject of continued investigation.
The Pierre Auger Observatory is undergoing an upgrade to enable it to discriminate between the muonic and electromagnetic components of particle cascades initiated by UHECRs [@aab2016b], which will improve its ability to discriminate between UHECRs of different elements. Competitive precision in cosmic-ray composition measurements has also been demonstrated at lower energies by radio measurements with LOFAR [@buitink2016], which may be extended to UHECRs with the upcoming SKA [@huege2014]. In principle, if the strength of the EGMF lies close to the limit established here, it may be possible to detect and measure it using UHECR observations. This will require improved models of the Galactic magnetic field, so the Galactic contribution to the deflection of UHECRs can be simulated and subtracted [@farrar2017]. It will also benefit from greater signal statistics, which may be accomplished through future instruments with larger collecting areas such as JEM-EUSO [@takahashi2009]. The potential to measure the EGMF through this technique will also depend on a precise knowledge of UHECR composition. The authors thank R.E. Spencer for helpful comments, and gratefully acknowledge support from ERC-StG307215 (LODESTONE).
--- abstract: 'Let $G$ be a finite non-cyclic $p$-group of order at least $p^3$. If $G$ has an abelian maximal subgroup, or if $G$ has an elementary abelian centre with $C_G(Z(\Phi(G))) \ne \Phi(G)$, then $|G|$ divides $|\text{Aut}(G)|$.' address: - 'Gustavo A. Fernández-Alcober: Department of Mathematics, University of the Basque Country UPV/EHU, 48080 Bilbao, Spain' - 'Anitha Thillaisundaram: Mathematisches Institut, Heinrich-Heine-Universität, 40225 Düsseldorf, Germany' author: - 'Gustavo A. Fernández-Alcober' - Anitha Thillaisundaram date: 26th October 2015 title: 'A note on automorphisms of finite $p$-groups' --- Introduction ============ From the 1970s, the question ‘Does every finite non-cyclic $p$-group $G$ of order $|G|\ge p^3$ have $|G|$ dividing $|\text{Aut}(G)|$?’ began to take form. Observe that, if $\text{Out}(G)=\text{Aut}(G)/\text{Inn}(G)$ denotes the group of outer automorphisms of $G$, this is equivalent to asking whether $|Z(G)|$ divides $|\text{Out}(G)|$. Over the past fifty years, this question was partially answered in the affirmative for specific families of $p$-groups, for instance $p$-abelian $p$-groups, $p$-groups of class 2, $p$-groups of maximal class, etc. (see [@me] for a fairly up-to-date list). This led many to believe that the complete answer might be yes, which is why the question was reformulated as a conjecture: “If $G$ is a finite non-cyclic $p$-group with $|G|\ge p^3$, then $|G|$ divides $|\text{Aut}(G)|$". What is more, Eick [@Eick] proved that all but finitely many 2-groups of a fixed coclass satisfy the conjecture. Couson [@Couson] generalized this to $p$-groups for odd primes, but only to infinitely many $p$-groups of a fixed coclass. The coclass theory shed new light on the conjecture, and provided more evidence as to why it could be true. Looking at past efforts, it could also be said that an underlying theme was cohomology, which hinted that the full conjecture might be settled using such means.
However, it came as a surprise that the conjecture is false. Very recently, González-Sánchez and Jaikin-Zapirain [@Jon] disproved the conjecture using Lie methods, where the question was first translated into one for Lie algebras. The main idea was to use the examples of Lie algebras with derivation algebra of smaller dimension, from which they constructed a family of examples of $p$-groups with small automorphism group. We remark that these counter-examples are powerful and $p$-central, which means that $G' \le G^p$ and $\Omega_1(G)\le Z(G)$ respectively, if $p$ is odd, and that $G'\le G^4$ and $\Omega_2(G)\le Z(G)$ respectively, if $p=2$. Now a new question may be formulated: which other finite non-cyclic $p$-groups $G$ with $|G|\ge p^3$ have $|G|$ dividing $|\text{Aut}(G)|$? In this short note, we prove that for $G$ a finite non-cyclic $p$-group with $|G|\ge p^3$, if $G$ has an abelian maximal subgroup, or if $G$ has elementary abelian centre and $C_G(Z(\Phi(G)))\ne \Phi(G)$, then $|G|$ divides $|\text{Aut}(G)|$. The latter is a partial generalization of Gaschütz’ result [@Gaschuetz] that $|G|$ divides $|\text{Aut}(G)|$ when the centre has order $p$. *Notation.* We use standard notation in group theory. All groups are assumed to be finite and $p$ always stands for a prime number. For $M,N$ normal subgroups in $G$, we set $\text{Aut}_N^M(G)$ to be the subgroup of automorphisms of $G$ that centralize $G/M$ and $N$, and let $\text{Out}^M_N(G)$ be its corresponding image in $\text{Out}(G)$. When $M$ or $N$ is $Z(G)$, we write just $Z$ for conciseness. On the other hand, if $G$ is a finite $p$-group then $\Omega_1(G)$ denotes the subgroup generated by all elements of $G$ of order $p$. An abelian maximal subgroup =========================== Let $G$ be a finite $p$-group with an abelian maximal subgroup $A$. We collect here a few well-known results (see [@Isaacs Lemma 4.6 and its proof]). 
\[Theorem63\] Let $G$ be a group having an abelian normal subgroup $A$ such that the quotient $G/A=\langle gA \rangle$, with $g\in G$, is cyclic. Then (i) $G' = \{ [a,g] \mid a\in A \}$ and (ii) $|G'|=|A:A\cap Z(G)|$. \[Corollary64\] Let $G$ be a finite non-abelian $p$-group having an abelian maximal subgroup. Then $|G:Z(G)|=p|G'|$. In [@key-Webb], Webb uses the following approach to find non-inner automorphisms of $p$-power order, which we will use in the forthcoming theorem. For a maximal subgroup $M$ of $G$, we first define two homomorphisms on $Z(M)$. Let $g\in G$ be such that $G/M=\langle gM\rangle$; then $$\tau \colon m \mapsto g^{-p}(gm)^{p}=m^{g^{p-1} + \ldots + g + 1}, \qquad \gamma \colon m \mapsto [m,g]=m^{g-1},$$ for all $m\in Z(M)$. We have $\text{im }\gamma\subseteq\ker\tau$ and $\text{im }\tau\subseteq\ker\gamma=Z(G)\cap M$. \[Webb\] [@key-Webb] Let $G$ be a finite non-abelian $p$-group and $M$ a maximal subgroup of $G$ containing $Z(G)$. Then $G$ has a non-inner automorphism of $p$-power order inducing the identity on $G/M$ and $M$ if and only if $\textup{im }\tau\neq\ker\gamma$. We remark that the proof of the above also tells us that if $\text{im }\tau\neq\ker\gamma$, then $|\textup{Out}_{M}^{M}(G)|=|\ker\gamma|/|\text{im }\tau|$. \[AbelianMaximal\] Let $G$ be a finite non-cyclic $p$-group with $|G|\ge p^3$ and with an abelian maximal subgroup $A$. Then $|G|$ divides $|\textup{Aut}(G)|$. We work with the subgroup $\text{Aut}^Z(G)$ of central automorphisms in $\text{Aut}(G)$. Now $$|\text{Aut}^Z(G) \text{Inn}(G)|=\frac{|\text{Aut}^Z(G)|\cdot |\text{Inn}(G)|}{|\text{Aut}^Z(G)\cap \text{Inn}(G)|} = \frac{|\text{Aut}^Z(G)|\cdot |G/Z(G)|}{|Z_2(G)/Z(G)|}$$ and hence it suffices to show that $|\text{Aut}^Z(G)|\ge |Z_2(G)|$.
According to Otto [@Otto], when $G$ is the direct product of an abelian $p$-group $H$ and a $p$-group $K$ having no abelian direct factor, then one has $|H|\cdot |\text{Aut}(K)|_p$ divides $|\text{Aut}(G)|$. Hence, we may assume that $G$ has no abelian direct factor. It then follows by Adney and Yen [@key-PNgroup] that $|\text{Aut}^Z(G)|=|\text{Hom}(G/G',Z(G))|$. By Corollary \[Corollary64\], we have $|G:Z(G)|=p|G'|$ and equally, $$\label{Cor2.2} |G:G'|=p|Z(G)|.$$ Let $H=G/Z(G)$. Then $A/Z(G)$ is an abelian maximal subgroup of $H$. Applying Corollary \[Corollary64\] to $H$ yields $$|H:H'|=p|Z(H)|,$$ so $$|G:G'Z(G)|=p|Z_2(G):Z(G)|.$$ Hence $$\label{**} |Z_2(G)|=\frac{1}{p}\cdot |G'\cap Z(G)|\cdot |G:G'|.$$ Combining this with (\[Cor2.2\]) gives $$\label{3} |Z_2(G)|=|G' \cap Z(G)|\cdot |Z(G)|.$$ Next we have $G'=\{[a,g] \mid a\in A \}$ by Theorem \[Theorem63\]. By Webb’s construction, we know that $\text{im}\, \gamma \subseteq \ker \tau$ and here $\text{im}\, \gamma =[A,g] = G'$. Now for $a\in A$ and $g$ as in Theorem \[Theorem63\], we have $$[a,g]^{g^{p-1}+\ldots +g+1} =1.$$ We claim that $\exp(G' \cap Z(G))=p$. For, if $[a,g]\in Z(G)$, then $[a,g]^g=[a,g]$. Consequently, $$1=[a,g]^{g^{p-1}+\ldots +g+1} = [a,g]^p$$ and thus $o([a,g])\le p$. Clearly the minimal number $d:=d(G)$ of generators of $G$ is at least 2. In order to proceed, we divide into the following two cases: (a) $\exp (G/G')\ge \exp Z(G)$, and (b) $\exp (G/G') \le \exp Z(G)$. **Case (a):** Suppose $\exp(G/G')\ge \exp Z(G)$. We express $G/G'$ as $$G/G' = \langle g_1 G' \rangle \times \langle g_2 G' \rangle \times \ldots \times \langle g_d G' \rangle$$ where $o(g_1 G')= \exp (G/G')$ and by assumption $d\ge 2$. We consider homomorphisms from $G/G'$ to $Z(G)$. The element $g_1 G'$ may be mapped to any element of $Z(G)$, and $g_2 G'$ may be mapped to any element of $G' \cap Z(G)$, which is of exponent $p$. 
Thus, with the aid of (\[3\]), $$|\text{Hom}(G/G',Z(G))|\ge |Z(G)|\cdot |G' \cap Z(G)| =|Z_2(G)|.$$ **Case (b):** Suppose $\exp(G/G') \le \exp Z(G)$. Similarly we express $Z(G)$ as $$Z(G) = \langle z_1 \rangle \times \langle z_2 \rangle \times \ldots \times \langle z_r \rangle$$ where $r=d(Z(G))$ and $o(z_1)=\exp Z(G)$. We consider two families of homomorphisms from $G/G'$ to $Z(G)$. First, $$\quad G/G' \rightarrow Z(G)$$ $$g_i G' \mapsto z_1^{b_i}$$ for $1\le i\le d$, where $b_i$ is such that $o(z_1^{b_i})$ divides $o(g_i G')$. This gives rise to $|G/G'|$ homomorphisms. Next, we consider all homomorphisms from $G/G'$ to $Z(G)$ where each element $g_i G'$ is mapped to any element of order $p$ in $\langle z_2 \rangle \times \ldots \times \langle z_r \rangle$. This gives $$(p^{r-1})^d\ge p^{r-1}=\frac{|\Omega_1(Z(G))|}{p} \ge \frac{|G' \cap Z(G)|}{p}$$ different homomorphisms. Multiplying both together and then using (\[\*\*\]), we obtain $$|\text{Hom}(G/G',Z(G))|\ge \frac{|G/G'|\cdot |G' \cap Z(G)|}{p} = |Z_2(G)|.$$ Elementary abelian centre ========================= Let $G$ be a finite $p$-group with elementary abelian centre. In order to prove that $|G|$ divides $|\text{Aut}(G)|$, we may assume, upon consultation of [@Faudree] and the final remarks in [@Hummel], that $Z(G)<\Phi(G)$. One of the following three cases exclusively occurs. **Case 1.** $Z(M)=Z(G)$ for some maximal subgroup $M$ of $G$. **Case 2.** $Z(M)\supset Z(G)$ for all maximal subgroups $M$ of $G$. Then either **(A)** $C_{G}(Z(\Phi(G)))\neq\Phi(G)$; or **(B)** $C_{G}(Z(\Phi(G)))=\Phi(G)$. $\ $ The main result of this section is to show that if $G$ is a finite $p$-group with elementary abelian centre and not in Case 2B, then $|G|$ divides $|\text{Aut}(G)|$. With regard to Case 2B, we would also like to mention another long-standing conjecture for finite $p$-groups: does there always exist a non-inner automorphism of order $p$? The case 2B is the only remaining case for this conjecture (see [@DS]). 
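The homomorphism counts used in Cases (a) and (b) of the preceding proof reduce to the standard fact $|\text{Hom}(\mathbb{Z}/a,\mathbb{Z}/b)| = \gcd(a,b)$, extended multiplicatively over cyclic decompositions. A brute-force sanity check of this fact (illustrative only; the example groups are generic, not taken from the paper):

```python
from itertools import product
from math import gcd

def hom_count(src, dst):
    """|Hom(A, B)| for finite abelian groups given as lists of cyclic orders."""
    n = 1
    for a in src:
        for b in dst:
            n *= gcd(a, b)
    return n

def hom_count_brute(src, dst):
    """Count homomorphisms directly: a generator of order a may map to any
    element x of B = prod Z/b_j with a*x = 0, independently per generator."""
    n = 1
    for a in src:
        ok = [x for x in product(*[range(b) for b in dst])
              if all((a * xi) % b == 0 for xi, b in zip(x, dst))]
        n *= len(ok)
    return n

# Hom(Z/4 x Z/2, Z/8 x Z/2): gcd products 4*2*2*2 = 32.
print(hom_count([4, 2], [8, 2]), hom_count_brute([4, 2], [8, 2]))  # -> 32 32
```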
First we deal with Case 1. \[lem:1\][@key-Muller Lemma 2.1(b)] Suppose $M$ is a maximal subgroup of $G$. If $Z(M)\subseteq Z(G)$ then $\textup{Aut}_{M}^{Z}(G)$ is a non-trivial elementary abelian $p$-group such that $\textup{Aut}_{M}^{Z}(G)\cap\textup{Inn}(G)=1$. We comment that the proof of this result in [@key-Muller] tells us that $$|\textup{Aut}_{M}^{Z}(G)|=|\textup{Hom}(G/M,Z(M))|=|\Omega_{1}(Z(M))|.$$ \[lem:Two\]Let $G$ be a finite $p$-group with elementary abelian centre. Suppose that $Z(M)=Z(G)$ for some maximal subgroup $M$ of $G$. Then $|G|$ divides $|\textup{Aut}(G)|$. Using Lemma \[lem:1\] and the above comment, it follows that $$|\text{Out}(G)|_{p}\ge|\text{Aut}_{M}^{Z}(G)|=|Z(G)|.$$ Hence $|G|$ divides $|\text{Aut}(G)|$ as required. Next, suppose $G$ is as in Case 2(A). We will need the following. [@key-Webb] \[thm:Webb\]Let $G$ be a finite non-abelian $p$-group. Then $p$ divides the order of $\textup{Out}_Z(G)$. Now we present our result. \[pro:One\]Let $G$ be a finite $p$-group with elementary abelian centre, such that $C_{G}(Z(\Phi(G)))\ne\Phi(G)$ and $Z(M)\supset Z(G)$ for all maximal subgroups $M$ of $G$. Then $|G|$ divides $|\textup{Aut}(G)|$. By Müller [@key-Muller proof of Lemma 2.2], there exist maximal subgroups $M$ and $N$ such that $G=Z(M)N$ and $Z(G)=Z(M)\cap N$. Write $G/M=\langle gM\rangle$ for some $g\in G$, and consider the homomorphism $$\begin{matrix} \tau_{M} & \colon & Z(M) & \longrightarrow & Z(M)\hfill \\ & & m & \longmapsto & g^{-p}(gm)^{p}=m^{g^{p-1}+ \cdots+g+1}. \end{matrix}$$ Since $$\frac{Z(M)}{Z(G)} = \frac{Z(M)}{Z(M)\cap N} \cong \frac{Z(M)N}{N} = \frac{G}{N}$$ and $Z(G)\subseteq \ker\tau_M$, it follows that $|\text{im }\tau_M|\le p$. If $\text{im }\tau_{M}=1$, the remark after Corollary \[Webb\] implies that $|\text{Out}_{M}^{M}(G)|=|Z(G)|$ and we are done. Hence we assume that $\text{im }\tau_{M}\cong C_{p}$. If $Z(G)\cong C_{p}$, then $|G|$ divides $|\text{Aut}(G)|$ by Gaschütz [@Gaschuetz]. 
So we may assume that $|Z(G)|>p$. Again by the same remark, we have $|\text{Out}_{M}^{M}(G)|=|Z(G)|/p$. By [@key-Muller Lemma 2.2], it follows that $G$ is a central product of subgroups $R$ and $S$, where $R/Z(R)\cong C_p \times C_p$ and $Z(R)=Z(G)=Z(S)=R\cap S$. Furthermore, $R=Z(M)Z(N)$ and $S=M\cap N=C_{G}(R)$. By Theorem \[thm:Webb\], there exists a non-inner automorphism $\beta\in\text{Aut}_{Z}(S)$ of $p$-power order. As observed by Müller [@key-Muller Section 3], the automorphism $\beta$ extends uniquely to some non-inner $\gamma\in\text{Aut}_{Z}(G)$ with trivial action on $R$. Certainly $\gamma$ does not act trivially on $M\cap N=S$, so $\gamma\not\in\text{Aut}_{M}(G)$. Let $\overline{\gamma}$ be the image of $\gamma$ in $\text{Out}(G)$. We now show that $\overline{\gamma}\not \in \text{Out}^M_M(G)$. On the contrary, suppose that $\gamma = \rho\iota$ where $\rho \in \text{Aut}^M_M(G)$ and $\iota \in \text{Inn}(G)$. As $S\le M$, we have $\beta(s)=\gamma(s)=\rho (s)^x=s^x$ for all $s\in S$ and a fixed $x\in G$. Writing $x=rs'$ for some $r\in R$, $s'\in S$ and recalling that $S=C_G(R)$, we have $\beta(s)=s^{s'}$. This implies that $\beta \in \text{Inn}(S)$, a contradiction. Thus $$|\text{Out}(G)|_{p}\ge|\langle\overline{\gamma},\text{Out}_{M}^{M}(G)\rangle|\ge|Z(G)|.$$ It follows that $|G|$ divides $|\text{Aut}(G)|$. [^1] [99]{} J. E. Adney and T. Yen, Automorphisms of a $p$-group, *Ill. J. Math.* **9** (1965), 137–143. M. Couson, *On the character degrees and automorphism groups of finite $p$-groups by coclass*, PhD Thesis, Technische Universität Braunschweig, Germany, 2013. M. Deaconescu and G. Silberberg, Noninner automorphisms of order $p$ of finite $p$-groups, *J. Algebra* **250** (2002), 283–287. B. Eick, Automorphism groups of 2-groups, *J. Algebra* **300** (1) (2006), 91–101. R. Faudree, A note on the automorphism group of a $p$-group, *Proc. Amer. Math. Soc.* **19** (1968), 1379–1382. W.
Gaschütz, Nichtabelsche $p$-Gruppen besitzen äussere $p$-Automorphismen (German), *J. Algebra* **4** (1966), 1–2. J. González-Sánchez and A. Jaikin-Zapirain, Finite $p$-groups with small automorphism group, *Forum Math. Sigma* **3** (2015), e7. K. G. Hummel, The order of the automorphism group of a central product, *Proc. Amer. Math. Soc.* **47** (1975), 37–40. I. M. Isaacs, *Finite group theory*, Graduate Studies in Mathematics, Vol. 92, Amer. Math. Soc., Providence, 2008. O. Müller, On $p$-automorphisms of finite $p$-groups, *Arch. Math.* **32** (1979), 533–538. A. D. Otto, Central automorphisms of a finite $p$-group, *Trans. Amer. Math. Soc.* **125** (1966), 280–287. A. Thillaisundaram, The automorphism group for $p$-central $p$-groups, *Internat. J. Group Theory* **1** (2) (2012), 59–71. U. H. M. Webb, An elementary proof of Gaschütz’ theorem, *Arch. Math.* **35** (1980), 23–26. M. K. Yadav, On automorphisms of finite $p$-groups, *J. Group Theory* **10** (6) (2007), 859–866. [^1]: The authors are grateful to the various people who helped to read and improve this manuscript.
--- abstract: 'In this presentation, we report our recent studies on the $K^*\Lambda(1116)$ photoproduction off the proton target, using the tree-level Born approximation, via the effective Lagrangian approach. In addition, we include the nine (three- or four-star confirmed) nucleon resonances below the threshold $\sqrt{s}_\mathrm{th}\approx2008$ MeV, to interpret the discrepancy between the experiment and previous theoretical studies, in the vicinity of the threshold region. From the numerical studies, we observe that the $S_{11}(1535)$ and $S_{11}(1650)$ play an important role for the cross-section enhancement near the $\sqrt{s}_\mathrm{th}$. It also turns out that, in order to reproduce the data, we have the vector coupling constants $g_{K^*S_{11}(1535)\Lambda}=(7.0\sim9.0)$ and $g_{K^*S_{11}(1650)\Lambda}=(5.0\sim6.0)$.' author: - 'Sang-Ho Kim' - 'Seung-il Nam' - Yongseok Oh - 'Hyun-Chul Kim' title: '$K^*\Lambda(1116)$ photoproduction and nucleon resonances' --- Introduction ============ The strangeness meson photoproduction off the nucleon target is one of the most well-studied experimental and theoretical subjects to reveal the hadron production mechanisms and its internal structures, in terms of the strange degrees of freedom, breaking the flavor SU(3) symmetry explicitly. Together with the recent high-energy photon beam developments in the experimental facilities, such as LEPS2 at SPring-8 and CLAS12 at Jefferson laboratory [@EXP], higher-mass strange meson-baryon photoproductions must be an important subject to be addressed theoretically for future studies on those reaction processes. In the previous works [@Oh:2006in; @Oh:2006hm], the $K^*\Lambda(1116)$ photoproduction was investigated using the Born approximation supplemented with Regge contributions.
In comparison with the preliminary experimental data [@Guo:2006kt], the theory reproduced the data qualitatively well, but the theoretical cross-section strength was underestimated in the vicinity of $\sqrt{s}_\mathrm{th}$. In the present talk, we report our recent study aimed at explaining the discrepancy observed in the previous work. Based on the theoretical framework employed in Ref. [@Oh:2006hm], we include the nucleon resonances in the $s$-channel baryon-pole contribution. As for the nucleon resonances $N^*$, we take into account nine of them, i.e. $P_{11}(1440,1/2^+)$, $D_{13}(1520,3/2^-)$, $S_{11}(1535,1/2^-)$, $S_{11}(1650,1/2^-)$, $D_{13}(1700,3/2^-)$, $P_{11}(1710,1/2^+)$, $P_{13}(1720,3/2^+)$, $D_{15}(1675,5/2^-)$, and $F_{15}(1680,5/2^+)$, in a fully relativistic manner. Theoretical framework and numerical results =========================================== We start by briefly explaining the theoretical framework and then present the main numerical results of our study. We note that the nucleon resonances are carefully taken into account in a fully relativistic manner, in addition to the Born diagrams, the $K$ and $\kappa(800)$ meson exchanges in the $t$ channel, and the $\Sigma(1192)$ and $\Sigma^*(1385)$ hyperon exchanges in the $u$ channel, which were already employed in Ref. [@Oh:2006hm]. All the effective interaction vertices for the nucleon resonances are taken from Ref. [@Oh:2007jd].
For instance, the invariant amplitudes for the spin-$1/2$ and spin-$3/2$ resonance contributions in the $s$ channel can be written as follows: $$\begin{aligned} \label{eq:AMP} \mathcal {M}_{N^*}\left( \frac{1}{2}^\pm\right)&=&\pm\frac{g_{{K^*}N^*\Lambda}}{s-M_{N^*}^2} {\varepsilon}_{\nu}^*(k_2){\bar u}_{\Lambda}(p_2) \left[\gamma^{\nu}-{\frac{i\kappa_{K^{*}N^*\Lambda}}{2M_N}} {\sigma^{\nu\beta}}k_{2\beta} \right] \cr &&\frac{ie_Q\mu_{N^*}}{2M_N}\Gamma^{\mp}(\not{k_1}+\not{p_1}+M_{N^*}) \Gamma^{\mp}\sigma^{\mu{i}}k_{1i}u_N(p_1){\varepsilon}_{\mu}(k_1), \cr \mathcal {M}_{N^*}\left(\frac{3}{2}^\pm \right)&=& \frac{g_{{K^*}N^*\Lambda}}{s-M_{N^*}^2} {\varepsilon}_{\nu}^*(k_2){\bar u}_{\Lambda}(p_2)({k_2^\beta}g^{{\nu}i}-k_2^ig^{\nu\beta}) \frac{e_Q}{2M_{K^*}}\Gamma_i^{\pm}\Delta_{\beta\alpha} \cr &&\left[\frac{\mu_{N^*}}{2M_N}\gamma_j\,\mp\,\frac{\bar \mu_{N^*}} {4M_N^2}p_{1j} \right]\Gamma^{\pm}({k_1^\alpha}g^{{\mu}j}-{k_1^j} g^{\mu\alpha})u_N(p_1){\varepsilon}_{\mu}(k_1),\end{aligned}$$ where $(k_1,p_1,k_2,p_2)$ stand for the $(\gamma,p,K^*,\Lambda)$ momenta and $\mu_{N^*}$ for the helicity amplitude. The $g_{K^*N^*\Lambda}$ and $\kappa_{K^*N^*\Lambda}$ denote the strong vector and tensor coupling strengths, respectively. The polarization vectors for the photon and $K^*$ are assigned as $\varepsilon_\mu(k_1)$ and $\varepsilon_\mu(k_2)$. The $\Gamma$ controls the parity of the relevant resonances in the following way: $$\label{eq:} \Gamma_\mu^{\pm} = \left( \begin{array}{c} \gamma_\mu\gamma_5 \\ \gamma_\mu \end{array} \right),\,\,\,\, \Gamma^{\pm} = \left( \begin{array}{c} \gamma_5 \\ \bold 1_{4\times4} \end{array} \right).$$ Relevant parameters for the resonances are estimated using the experimental and theoretical information [@Nakamura:2010zzi; @Capstick:1998uh; @Capstick:2000qj]. In Figure \[FIG1\], we show the total cross section for the present reaction process, i.e. $\gamma p \to K^{*+}\Lambda(1116)$. 
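The two-component objects $\Gamma^\pm$ above insert either a $\gamma_5$ or the identity, according to the parity of the resonance. As a hedged illustration (our own construction, not code from the paper), one can build the Dirac-basis gamma matrices numerically and verify the two algebraic facts this relies on, namely $\gamma_5^2=\mathbf{1}_{4\times4}$ and $\{\gamma_5,\gamma^0\}=0$:

```python
import numpy as np

# Dirac (standard) representation of the gamma matrices.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
gi = [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in (sx, sy, sz)]

# gamma5 = i g^0 g^1 g^2 g^3; the two entries of Gamma^{+-} in the text.
g5 = 1j * g0 @ gi[0] @ gi[1] @ gi[2]
gamma_plus, gamma_minus = g5, np.eye(4)  # (gamma_5, 1_{4x4})

print(np.allclose(g5 @ g5, np.eye(4)))      # gamma5^2 = 1
print(np.allclose(g5 @ g0 + g0 @ g5, 0))    # {gamma5, gamma0} = 0
```

The anticommutation with $\gamma^0$ is what makes the sandwiched combinations $\Gamma^{\mp}(\cdots)\Gamma^{\mp}$ project onto opposite-parity structures.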
The numerical results are drawn separately for the cases including the $S_{11}(1535)$ (left) and $S_{11}(1650)$ (right), varying the coupling constants $g_{K^*N^*\Lambda}$. As shown in the figure, the cross-section enhancement is observed in the vicinity of the threshold region if we choose the strong coupling strengths as $g_{K^*N^*\Lambda}\approx(4.0\sim9.0)$. We verified that the other nucleon resonances are not effective in resolving the discrepancy. ![Total cross section for the $\gamma p \to K^{*+}\Lambda(1116)$ reaction process. We show the numerical results for the cases including the $S_{11}(1535)$ (left) and $S_{11}(1650)$ (right), varying the coupling constants $g_{K^*N^*\Lambda}$.[]{data-label="FIG1"}](fff1.eps "fig:"){width="6cm"} ![Total cross section for the $\gamma p \to K^{*+}\Lambda(1116)$ reaction process. We show the numerical results for the cases including the $S_{11}(1535)$ (left) and $S_{11}(1650)$ (right), varying the coupling constants $g_{K^*N^*\Lambda}$.[]{data-label="FIG1"}](fff2.eps "fig:"){width="6cm"} Summary and outlook =================== In the present work, we have studied $K^*\Lambda$ photoproduction theoretically, employing the tree-level Born approximation together with the contributions of nucleon resonances below $\sqrt{s}_\mathrm{th}$.
Among the nucleon resonances, the $S_{11}(1535)$ and $S_{11}(1650)$ play a dominant role in reproducing the experimental data. It also turns out that the other $N^*$ contributions do not significantly improve the theoretical results. We note that nucleon resonances beyond $\sqrt{s}_\mathrm{th}$ may also contribute to the threshold enhancement, especially the $D_{13}(2080)$, since $\sqrt{s}_\mathrm{th}$ for the present reaction process is about $2008$ MeV. Related works are in progress and will appear elsewhere. Acknowledgments {#acknowledgments .unnumbered} =============== The authors are grateful to A. Hosaka for fruitful discussions. The present work is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (grant number: 2009-0089525). The work of S.i.N. was supported by the grant NRF-2010-0013279 from the National Research Foundation (NRF) of Korea. [99]{} LEPS2 (http://www.hadron.jp) and CLAS12 (http://www.jlab.org/Hall-B/clas12) Y. Oh and H. Kim, Phys. Rev. C [**74**]{}, 015208 (2006). Y. Oh and H. Kim, Phys. Rev. C [**73**]{}, 065202 (2006). L. Guo and D. P. Weygand \[CLAS Collaboration\], arXiv:hep-ex/0601010. Y. Oh, C. M. Ko and K. Nakayama, Phys. Rev. C [**77**]{}, 045204 (2008). K. Nakamura \[Particle Data Group\], J. Phys. G [**37**]{}, 075021 (2010). S. Capstick and W. Roberts, Phys. Rev. D [**58**]{}, 074011 (1998). S. Capstick and W. Roberts, Prog. Part. Nucl. Phys. [**45**]{}, S241 (2000).
--- abstract: 'Let $M$ be a projective toric manifold. We prove two results concerning respectively [Kähler]{}-Einstein submanifolds of $M$ and symplectic embeddings of the standard euclidean ball in $M$. Both results use the well-known fact that $M$ contains an open dense subset biholomorphic to ${\Bbb{C}}^n$.' address: - | Abdus Salam International Center for Theoretical Physics\ Strada Costiera 11\ Trieste (Italy) and Dipartimento di Matematica\ Università di Parma\ Parco Area delle Scienze 53/A\ Parma (Italy) - | (Andrea Loi) Dipartimento di Matematica\ Università di Cagliari (Italy) - | (Fabio Zuddas) Dipartimento di Matematica e Informatica\ Via delle Scienze 206\ Udine (Italy) author: - Claudio Arezzo - Andrea Loi - Fabio Zuddas title: Some remarks on the symplectic and Kähler geometry of toric varieties --- [^1] Introduction and statements of the main results {#Introduction} =============================================== In this paper we use the well-known fact that toric manifolds are compactifications of ${\Bbb{C}}^n$ in order to prove two results, of Riemannian and symplectic nature, given by the following two theorems. Let $N$ be a projective toric manifold equipped with a toric [Kähler]{} metric $G$ and let $(M, g) \xhookrightarrow{\phi} (N, G)$ be an isometric embedding of a [Kähler]{}-Einstein manifold such that $\phi(M)$ contains a point of $N$ fixed by the torus action. Then $(M, g)$ has positive scalar curvature. Let $(M, \omega)$ be a toric manifold endowed with an integral toric [Kähler]{} form and let $\Delta \subseteq {\Bbb{R}}^n$ be the image of the moment map for the torus action. Then, there exists a number $c(\Delta)$ (explicitly computable from the polytope, see Corollary \[corollariodivisore\]) such that any ball of radius $r > c(\Delta)$, symplectically embedded into $(M, \omega)$, must intersect the divisor $M \setminus {\Bbb{C}}^n$.
These two results are proved and discussed respectively in Section 2 (Theorem \[theotoriche\]) and Section 3 (Corollary \[corollariodivisore\]). The paper ends with an Appendix where, for the reader’s convenience, we give an exposition (as self-contained as possible) of the classical facts about toric manifolds we need in Sections 2 and 3. [Kähler]{}–Einstein submanifolds of Toric manifolds =================================================== Let us briefly recall Calabi’s work on [Kähler]{} immersions and the diastasis function [@ca]. Given a complex manifold $N$ endowed with a real analytic [Kähler]{} metric $G$, the ingenious idea of Calabi was the introduction, in a neighborhood of a point $p\in N$, of a very special [Kähler]{} potential $D_p$ for the metric $G$, which he christened [*diastasis*]{}. Recall that a [Kähler]{} potential is an analytic function $\Phi$ defined in a neighborhood of a point $p$ such that $\Omega =\frac{i}{2}\partial \bar\partial\Phi$, where $\Omega$ is the [Kähler]{} form associated to $G$. In a complex coordinate system $(Z)$ around $p$ $$G_{\alpha\beta}= 2G\left(\frac{\partial}{\partial Z_{\alpha}}, \frac{\partial}{\partial \bar Z_{\beta}}\right) =\frac{{\partial}^2\Phi} {\partial Z_{\alpha}\partial\bar Z_{\beta}}.$$ A [Kähler]{} potential is not unique: it is defined up to the sum with the real part of a holomorphic function. By duplicating the variables $Z$ and $\bar Z$, a potential $\Phi$ can be complex analytically continued to a function $\tilde\Phi$ defined in a neighborhood $U$ of the diagonal containing $(p, \bar p)\in N\times\bar N$ (here $\bar N$ denotes the conjugate manifold of $N$).
The [*diastasis function*]{} is the [Kähler]{} potential $D_p$ around $p$ defined by $$D_p(q)=\tilde\Phi (q, \bar q)+ \tilde\Phi (p, \bar p)-\tilde\Phi (p, \bar q)- \tilde\Phi (q, \bar p).$$ Among all the potentials the diastasis is characterized by the fact that in every coordinate system $(Z)$ centered in $p$ $$D_p(Z, \bar Z)=\sum _{|j|, |k|\geq 0} a_{jk}Z^j\bar Z^k,$$ with $a_{j 0}=a_{0 j}=0$ for all multi-indices $j$. The following proposition shows the importance of the diastasis in the context of holomorphic maps between [Kähler]{} manifolds. [**(Calabi)**]{}\[calabidiastasis\] Let $\varphi :(M, g)\rightarrow (N, G)$ be a holomorphic and isometric embedding between [Kähler]{} manifolds and suppose that $G$ is real analytic. Then $g$ is real analytic and for every point $p\in M$ $$\varphi (D_p)=D_{\varphi (p)},$$ where $D_p$ (resp. $D_{\varphi (p)}$) is the diastasis of $g$ relative to $p$ (resp. of $G$ relative to $\varphi (p)$). In Proposition \[teormain\] below, we are going to require that $N$ is a compactification of ${\Bbb{C}}^n$, or more precisely that $N$ contains an analytic subvariety $Y$ such that $X = N \setminus Y$ is biholomorphic to ${\Bbb{C}}^n$; as far as the [Kähler]{} metric $G$ on $N$ is concerned, in addition to the requirement that $G$ is real analytic, we impose two other conditions. The first one is [*Condition $(A)$: there exists a point $p_*\in X=N\setminus Y$ such that the diastasis $D_{{p_*}}$ is globally defined and non-negative on $X$.*]{} In order to describe the second condition we need to introduce the concept of Bochner’s coordinates (cf. [@boc], [@ca], [@hu], [@hu1]). Given a real analytic [Kähler]{} metric $G$ on $N$ and a point $p\in N$, one can always find local (complex) coordinates in a neighborhood of $p$ such that $$D_p(Z, \bar Z)=|Z|^2+ \sum _{|j|, |k|\geq 2} b_{jk}Z^j\bar Z^k,$$ where $D_p$ is the diastasis relative to $p$.
These coordinates, uniquely defined up to a unitary transformation, are called [*Bochner’s coordinates*]{} with respect to the point $p$. One important feature of these coordinates, which we are going to use in the proof of our main theorem, is the following: ([**Calabi**]{})\[calabibochner\] Let $\varphi :(M, g)\rightarrow (N, G)$ be a holomorphic and isometric embedding between [Kähler]{} manifolds and suppose that $G$ is real analytic. If $(z_1,\dots ,z_m)$ is a system of Bochner’s coordinates in a neighborhood $U$ of $p\in M$ then there exists a system of Bochner’s coordinates $(Z_1,\dots ,Z_n)$ with respect to $\varphi (p)$ such that $$\label{zandZ} Z_1|_{\varphi(U)}=z_1,\dots ,Z_m|_{\varphi (U)}=z_m.$$ We can then state the following [*Condition $(B)$: the Bochner’s coordinates with respect to the point $p_*\in X$, given by the previous condition $(A)$, are globally defined on $X$.*]{} Our first result is then the following: \[teormain\] Let $N$ be a smooth projective compactification of $X$ such that $X$ is algebraically biholomorphic to ${{\Bbb{C}}}^n$ and let $G$ be a real analytic [Kähler]{} metric on $N$ such that the following two conditions are satisfied: - there exists a point $p_*\in X$ such that the diastasis $D_{{p_*}}$ is globally defined and non-negative on $X$; - the Bochner’s coordinates with respect to $p_*$ are globally defined on $X$. Then any K–E submanifold $(M, g) \xhookrightarrow{\phi} (N, G)$ such that $p_* \in \phi(M)$ has positive scalar curvature. \[condA\] The easiest example of a compactification of ${{\Bbb{C}}}^n$ which satisfies condition $(A)$ is given by ${{\Bbb{C}}}P^n={{\Bbb{C}}}^n\cup Y$ endowed with the Fubini–Study metric $g_{FS}$, namely the metric whose associated [Kähler]{} form is given by $$\omega_{FS}=\frac{i}{2} \partial\bar{\partial}\log\displaystyle \sum _{j=0}^{n}|Z_{j}|^{2},$$ and $Y={{\Bbb{C}}}P^{n-1}$ is the hyperplane $Z_0=0$.
Indeed the diastasis with respect to $p_*=[1, 0,\dots ,0]$ is given by $$D_{p_*}(u, \bar{u})= \log \Big(1+\sum _{j=1}^{n}|u_j|^2\Big),$$ where $(u_1,\dots ,u_n)$ are the affine coordinates, namely $u_j=\frac{Z_j}{Z_0}, j=1,\dots, n$. Proposition \[teormain\] can then be considered as an extension of a theorem of Hulin [@hu1] which asserts that a compact [Kähler]{}–Einstein submanifold of ${{\Bbb{C}}}P^n$ is Fano (see also [@LZ09]). Other examples of compactifications of ${{\Bbb{C}}}^n$ satisfying conditions $(A)$ and $(B)$ are given by the compact homogeneous Hodge manifolds. These are not interesting since all compact homogeneous Hodge manifolds can be [Kähler]{} embedded into a complex projective space ([@DLH]) and so we are reduced to the study of Hulin’s problem. We also remark that, by Proposition \[calabidiastasis\], condition $(A)$ is satisfied also by all the [Kähler]{} submanifolds of the previous examples. Let $p$ be a point in $M$ such that $\varphi (p)=p_*$, where $p_*$ is the point in $N$ given by condition $(A)$. Take Bochner’s coordinates $(z_1,\dots ,z_m)$ in a neighborhood $U$ of $p$, which we take small enough to be contractible. Since the [Kähler]{} metric $g$ is Einstein with (constant) scalar curvature $s$, we have $\rho_{\omega}=\lambda \omega$, where $\lambda$ is the Einstein constant, i.e. $\lambda=\frac{s}{2m}$, and $\rho_{\omega}$ is the *Ricci form*. If $\omega =\frac{i}{2} \sum _{j=1}^{m}g_{j\bar{k}} dz_{j}\wedge d\bar{z}_{\bar{k}}$ then $\label{rholocal} \rho _{\omega}=-i\partial \bar{\partial}\log \det g_{j\bar{k}}$ is the local expression of the Ricci form. Thus the volume form of $(M, g)$ reads on $U$ as $$\label{voleucl} \frac{\omega^m}{m!}=\frac{i^m}{2^m} e^{-\frac{\lambda}{2}D_p+F+\bar F} dz_1\wedge d\bar z_1\wedge\dots\wedge dz_m\wedge d\bar z_m\, \, ,$$ where $F$ is a holomorphic function on $U$ and $D_p=\varphi^{-1}(D_{p_*})$ is the diastasis at $p$ (cf. Proposition \[calabidiastasis\]). We claim that $F+\bar F=0$.
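The defining properties of the Fubini–Study diastasis can be checked symbolically in the simplest case $n=1$; a small sketch of ours (treating $u$ and $\bar u$ as independent variables, in the spirit of Calabi's duplication trick):

```python
import sympy as sp

u, ubar = sp.symbols('u ubar')

# Fubini-Study diastasis at p_* for n = 1: D = log(1 + u*ubar).
D = sp.log(1 + u * ubar)

# The expansion contains only mixed terms: a_{j0} = a_{0j} = 0.
series = sp.series(D, u, 0, 4).removeO().expand()
print(series)  # every monomial contains both u and ubar

# The potential reproduces the metric coefficient at the origin.
print(sp.simplify(sp.diff(D, u, ubar).subs({u: 0, ubar: 0})))  # 1
```

Setting $\bar u = 0$ kills the whole expansion, which is exactly the characterization $a_{j0}=a_{0j}=0$ of the diastasis among all potentials.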
Indeed, observe that $$\frac{\omega ^m}{m!}=\frac{i^m}{2^m} \det \left(\frac{\partial ^2 D_p} {\partial z_{\alpha} \partial \bar z_{\beta}}\right) dz_1\wedge d\bar z_1\wedge\dots\wedge dz_m\wedge d\bar z_m .$$ By the very definition of Bochner’s coordinates it is easy to check that the expansion of $\log\det \left(\frac{\partial ^2 D_p} {\partial z_{\alpha} \partial \bar z_{\beta}}\right)$ in the $(z,\bar z)$-coordinates contains only mixed terms (i.e. of the form $z^j\bar z^k, j\neq 0, k\neq 0$). On the other hand, by formula (\[voleucl\]), $$-\frac{\lambda}{2} D_p + F + \bar F= \log\det \left(\frac{\partial ^2 D_p} {\partial z_{\alpha} \partial \bar z_{\beta}}\right).$$ Again by the definition of Bochner’s coordinates this forces $F + \bar F$ to be zero, proving our claim. By Theorem \[calabibochner\] there exist Bochner’s coordinates $(Z_1, \dots , Z_n)$ in a neighborhood of $p_*$ satisfying (\[zandZ\]). Moreover, by condition $(B)$ these coordinates are globally defined on $X$. Hence, by formula (\[voleucl\]) (with $F+\bar F=0$), the $m$-forms $\frac{\Omega^m}{m!}$ and $\frac{i^m}{2^m} e^{-\frac{\lambda}{2}D_{p_*}} dZ_1\wedge d\bar Z_1\wedge\dots\wedge dZ_m\wedge d\bar Z_m$, globally defined on $X$, agree on the open set $\varphi (U)$. Since they are real analytic they must agree on the connected open set $\hat M=\varphi (M)\cap X$, i.e. $$\label{eqforms} \frac{\Omega^m}{m!}=\frac{i^m}{2^m} e^{-\frac{\lambda}{2} D_{p_*}} dZ_1\wedge d\bar Z_1\wedge\dots\wedge dZ_m\wedge d\bar Z_m.$$ Since $\frac{\Omega^m}{m!}$ is a volume form on $\hat M$, we deduce that the restriction of the projection map $$\pi :X\cong{{\Bbb{C}}}^n \rightarrow {{\Bbb{C}}}^m: (Z_1,\dots ,Z_n)\mapsto (Z_1,\dots ,Z_m)$$ to $\hat M$ is open. Since it is also algebraic (because the biholomorphism between $X$ and ${{\Bbb{C}}}^n$ is algebraic), its image contains a Zariski open subset of ${{\Bbb{C}}}^m$ (see Theorem $13.2$ in [@bo]), hence its euclidean volume, $\vol_{eucl}(\pi (\hat M))$, has to be infinite.
Suppose now that the scalar curvature of $g$ is non-positive. By formula (\[eqforms\]) and by the fact that $D_{p_*}$ is non-negative, we get $\vol (\hat M, g)\geq \vol_{eucl}(\pi (\hat M))$, which is the desired contradiction, since the volume of $M$ (and hence that of $\hat M$) is finite. Now, we are going to apply Proposition \[teormain\] to [*toric manifolds*]{} endowed with [*toric Kähler metrics*]{}. Recall that a toric manifold $M$ is a complex manifold which contains an open dense subset biholomorphic to $({\Bbb{C}}^*)^n$ and such that the canonical action of $({\Bbb{C}}^*)^n$ on itself by $(\alpha_1, \dots, \alpha_n)(\beta_1, \dots, \beta_n) = (\alpha_1 \beta_1, \dots, \alpha_n \beta_n)$ extends to a holomorphic action on the whole $M$ (see the Appendix for more details). A toric Kähler metric $\omega$ on $M$ is a Kähler metric which is invariant under the action of the real torus $T^n = \{ (e^{i \theta_1}, \dots , e^{i \theta_n}) \ | \ \theta_i \in {\Bbb{R}}\}$ contained in the dense complex torus $({{\Bbb{C}}}^*)^n$, that is, for every fixed $\theta \in T^n$ the diffeomorphism $f_{\theta}: M \rightarrow M$ given by the action of $(e^{i \theta_1}, \dots , e^{i \theta_n})$ is an isometry. We have the following well-known fact (compare, for example, with Section 2.2.1 in [@donaldson] or Proposition 2.18 in [@batyrev]). \[apertointoriche\] If $M$ is a projective, compact toric manifold then there exists an open dense subset $X \subseteq M$ which is algebraically biholomorphic to ${\Bbb{C}}^n$. More precisely, for every point $p \in M$ fixed by the torus action there are an open dense neighbourhood $X_p$ of $p$ and a biholomorphism $\phi_p: X_p \rightarrow {\Bbb{C}}^n$ such that $p$ is sent to the origin and the restriction of the torus action to $X_p$ corresponds via $\phi_p$ to the canonical action of $({\Bbb{C}}^*)^n$ on ${\Bbb{C}}^n$. A self-contained proof of this proposition is given in Section \[supapp1\] of the Appendix.
Now we are ready to prove the main result of this section. \[theotoriche\] Let $N$ be a projective toric manifold equipped with a toric [Kähler]{} metric $G$. Then any K-E submanifold $(M, g) \xhookrightarrow{\phi} (N, G)$ such that $\phi(M)$ contains a point of $N$ fixed by the torus action has positive scalar curvature. As we have just recalled, $N$ is a smooth projective compactification of an open subset algebraically biholomorphic to ${\Bbb{C}}^n$. So, the Theorem will follow from Proposition \[teormain\] once we have shown that, for $p_*$ equal to a point of $N$ fixed by the torus action, conditions (A) and (B) of the statement of Proposition \[teormain\] are satisfied. Let then $p_* \in N$ be such a point, and let $\xi = (\xi_1, \dots, \xi_n)$ be the system of coordinates provided by the biholomorphism $\phi_{p_*}: X_{p_*} \rightarrow {\Bbb{C}}^n$ of Proposition \[apertointoriche\] above. Let $\Omega$ be the [Kähler]{} form associated to $G$ and let $\Phi$ be a local potential for $\Omega$ around the origin in the coordinates $\xi = (\xi_1, \dots, \xi_n)$. Since $X = X_{p_*}$ is contractible, $\Phi$ can be extended to all of $X$ (see, for example, Remark 2.6.2 in [@GMS]) and $$D(\xi, \bar \xi) = \Phi(\xi, \bar \xi) + \Phi(0,0) - \Phi(0, \bar \xi) - \Phi(\xi, 0)$$ is a diastasis function on all of $X$ in the coordinates $\xi_1, \dots, \xi_n$.
For every $\theta \in T^n$ and $\xi \in {\Bbb{C}}^n$, let us denote $$e^{i \theta} \xi := (e^{i \theta_1}, \dots , e^{i \theta_n})(\xi_1, \dots, \xi_n) = (e^{i \theta_1} \xi_1, \dots , e^{i \theta_n} \xi_n)$$ and $D_{\theta}(\xi, \bar \xi) := D(e^{i \theta} \xi, e^{-i \theta} \bar \xi).$ Then $$i \partial \bar \partial D_{\theta} = i \frac{\partial^2 D_{\theta}}{\partial \xi_k \partial \bar \xi_l}(\xi, \bar \xi) d\xi_k \wedge d \bar \xi_l = i e^{i(\theta_k - \theta_l)}\frac{\partial^2 D}{\partial \xi_k \partial \bar \xi_l}(e^{i \theta} \xi, e^{-i \theta} \bar \xi) d\xi_k \wedge d \bar \xi_l =$$ $$= i \frac{\partial^2 D}{\partial \xi_k \partial \bar \xi_l}(e^{i \theta} \xi, \overline{e^{i \theta}\xi}) d(e^{i\theta_k} \xi_k) \wedge d \overline{(e^{i\theta_l} \xi_l)} = (e^{i \theta})^*((\xi)^*(\Omega|_X))=$$ $$= (\xi)^*(f_{\theta}^*(\Omega|_X)) = (\xi)^*(\Omega|_X),$$ where the last equality follows by the invariance of $\Omega$ for the action of $T^n$. Then, for every $\theta \in T^n$, the function $D_{\theta}$ is a potential for $\Omega$ on $X$; moreover, it clearly satisfies the characterization for the diastasis. By the uniqueness of the diastasis around the origin, we then have $D = D_{\theta}$, that is $$D(\xi, \bar \xi) = D(e^{i \theta_1} \xi_1, \dots, e^{i \theta_n} \xi_n, e^{-i \theta_1} \bar \xi_1, \dots, e^{-i \theta_n} \bar \xi_n) .$$ This last equality means that $D$ depends on the norms $|\xi_1|^2, \dots, |\xi_n|^2$ (i.e. $D$ is a [*rotation invariant*]{} function), and in particular it is immediately seen to satisfy the condition for $\xi_1, \dots, \xi_n$ to be Bochner coordinates. In order to show that $D$ is non-negative, recall that, since $i \partial \bar \partial D$ is a Kähler form, $D$ must be a plurisubharmonic function, which means that, for any $a = (a_1, \dots, a_n), b = (b_1, \dots, b_n) \in {\Bbb{C}}^n$, the function of one complex variable $f(\xi) = D(a\xi+b) = D(a_1\xi+b_1, \dots, a_n \xi + b_n)$ is a subharmonic function, i.e. 
$\frac{\partial^2 f}{\partial \xi \partial \bar \xi} \geq 0$. To prove the claim it will be enough to show that, for any $a \in {\Bbb{C}}^n$, the rotation invariant subharmonic function $f_a(\xi) = D(a\xi)$ is non-negative. Writing $f_a$ as a function of $t = |\xi|^2$, we have $$0 \leq \frac{\partial^2 f_a}{\partial \xi \partial \bar \xi} = t \cdot \frac{d^2 f_a}{d t^2} + \frac{d f_a}{d t} = \frac{d}{d t}\left(t \, \frac{d f_a}{d t}\right).$$ It follows that $g(t) = t \, \frac{d f_a}{d t}$ is a non-decreasing function, and since $g(0)=0$ we have $t \, \frac{d f_a}{d t} \geq 0$, so that $f_a$ is non-decreasing in $t$; since $f_a(0) = D(0) = 0$, we conclude that $f_a(t) \geq 0$, as required. If $\phi(M)$ does not contain any point of $N$ fixed by the torus action, then for any $f \in Aut(N) \cap Isom(N,G)$ one could be tempted to replace $\phi$ by $f \circ \phi$ (which is clearly again a [Kähler]{} embedding) so as to have that $f(\phi(M))$ contains a fixed point. However, while the automorphism group of a toric manifold can be explicitly described, in general we do not have control on $Isom(N, G)$, and this group can be too small. For example, for the [Kähler]{}-Einstein metric $G$ on $N = {\Bbb{C}}{\Bbb{P}}^2 \sharp 3 \overline{{\Bbb{C}}{\Bbb{P}}^2}$ ([@Siu], [@Tian]), one has that $Isom(N,G)$ is the real part of $Aut(N)$, whose component of the identity $Aut^{\circ}(N)$ contains only the automorphisms given by the action of the complex torus $({\Bbb{C}}^*)^n$ (indeed, one easily sees that the set of the [*Demazure roots*]{} is empty in this case, see for example Section 3.4 in [@Oda]), so $Isom(N, G) \simeq T^n$ and the isometries do not move the fixed points. By contrast, if $N$ is the complex projective space endowed with the Fubini-Study metric $G$, then $Isom(N, G)$ acts transitively and we can always guarantee the validity of the assumption of Theorem \[theotoriche\], so that we recover Hulin’s theorem (Remark \[condA\]).
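The monotonicity argument used above to show $D \geq 0$ is easy to test numerically; a minimal sketch of ours on the model potential $f(t)=\log(1+t)$ (the Fubini–Study-type case, our choice of example):

```python
import numpy as np

# Rotation-invariant potential f(t) = log(1 + t), t = |xi|^2 (FS-type model).
t = np.linspace(0.0, 10.0, 2001)
f = np.log1p(t)
fp = 1.0 / (1.0 + t)          # f'(t)
g = t * fp                    # g(t) = t f'(t): non-decreasing with g(0) = 0

assert g[0] == 0.0
assert np.all(np.diff(g) >= 0)   # g non-decreasing, reflecting subharmonicity
assert np.all(f >= 0)            # hence f itself is non-negative
print("monotonicity and non-negativity verified on the sample grid")
```

The chain $g$ non-decreasing, $g(0)=0$, hence $f' \geq 0$ and $f \geq f(0) = 0$ is exactly the one used in the proof.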
Notice that if $f \in Aut(N) \setminus Isom(N, G)$ then, in order to guarantee that $f \circ \phi$ is a [Kähler]{} embedding, one has to replace $G$ by $(f^{-1})^*(G)$, and consequently the torus action, say $\rho$, by $\tilde \rho = f \circ \rho \circ f^{-1}$. Then any new fixed point is of the form $f(p)$, where $p$ is a point fixed by the action $\rho$. This implies that if $\phi(M)$ does not contain any point fixed by $\rho$, then $f(\phi(M))$ does not contain any point fixed by $\tilde \rho$. Gromov width of toric varieties =============================== Let us recall that the Gromov width (introduced in [@GROMOV85]) of a $2n$-dimensional symplectic manifold $(M, \omega)$ is defined as $$c_G(M, \omega) = \sup \{ \pi r^2 \ | \ (B^{2n}(r), \omega_{can}) \ \text{symplectically embeds into} \ (M, \omega) \}$$ where $\omega_{can} = \frac{i}{2} \sum_{j=1}^n d z_j \wedge d \bar z_j$ is the canonical symplectic form in ${\Bbb{C}}^n$. By Darboux’s theorem $c_G(M, \omega)$ is a positive number. Computations and estimates of the Gromov width for various examples can be found in [@BIRAN99], [@castro], [@GWgrass] and, in particular for toric manifolds, in [@Lu]. In what follows, we are going to make some remarks about the Gromov width of toric manifolds. More precisely, let $(M, \omega)$ be a toric manifold endowed with an integral toric [Kähler]{} form $\omega$. As is well known ([@delzant], [@Gu]), the image of the moment map $\mu: M \rightarrow {\Bbb{R}}^n$ for the isometric action of the real torus $T^n$ on $M$ is a convex [*Delzant*]{} polytope $\Delta = \{ x \in {\Bbb{R}}^n \ | \ \langle x, u_i \rangle \geq \lambda_i, \ i=1, \dots, d \} \subseteq {\Bbb{R}}^n$, i.e. such that the normal vectors $u_i$ to the faces meeting in a given vertex form a ${\Bbb{Z}}$-basis of ${\Bbb{Z}}^n$. The vertices of $\Delta$ (which, by the integrality of $\omega$, belong to ${\Bbb{Z}}^n$) are the images under $\mu$ of the fixed points for the action of $T^n$ on $M$.
As recalled in Section \[appendix2\] of the Appendix, such a polytope $\Delta$ represents a very ample line bundle on the toric manifold $X_{\Sigma}$ associated to the fan $\Sigma$ which has the $u_i$’s as generators. Then, by the Kodaira embedding $i_{\Delta}$ we can embed $X_{\Sigma}$ into a complex projective space ${\Bbb{C}}{\Bbb{P}}^{N-1}$ and endow $X_{\Sigma}$ with the pull-back $i_{\Delta}^*( \omega_{FS})$ of the Fubini-Study form $\omega_{FS}= i \partial \bar \partial \log (\sum_{j=1}^N |z_j|^2)$ of ${\Bbb{C}}{\Bbb{P}}^{N-1}$. We have the following important result. \[equivsymp\] (see, for example, [@abreu], page 3 or [@Gu], Section A2.1) The manifolds $(X_{\Sigma}, i_{\Delta}^*( \omega_{FS}))$ and $(M, \omega)$ are equivariantly symplectomorphic. Now, by the following well-known result we can write the Kodaira embedding explicitly. \[prop2prima\] Let $p \in M$ be a fixed point for the torus action and let $X_p$, $\phi_p: X_p \rightarrow {\Bbb{C}}^n$ be as in Proposition \[apertointoriche\]. The restriction to $X_p$ of the Kodaira embedding $i_{\Delta}: M \rightarrow {\Bbb{C}}{\Bbb{P}}^{N-1}$ can be written, in the coordinates given by $\phi_p$, as $$i_{\Delta}|_{X_{p}} \circ\phi_p^{-1}: {\Bbb{C}}^n \rightarrow {\Bbb{C}}{\Bbb{P}}^{N-1}, \ \ \xi \mapsto [\dots, \xi_1^{x_1} \cdots \xi_n^{x_n}, \dots]$$ where $(x_1, \dots, x_n)$ runs over all the points with integral coordinates in the polytope $\Delta'$ of ${\Bbb{R}}^n$ obtained from $\Delta$ via the transformation in $SL_n({\Bbb{Z}})$ and the translation which send the vertex of $\Delta$ corresponding to $p$ to the origin and the corresponding edges to the edges generated by the vectors $e_1, \dots, e_n$ of the canonical basis of ${\Bbb{R}}^n$. Notice that the existence of the transformation in $SL_n({\Bbb{Z}})$ invoked in the statement follows from the fact that the normal vectors to the faces meeting in any vertex of the polytope form a ${\Bbb{Z}}$-basis of ${\Bbb{Z}}^n$.
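The monomial description of the embedding can be made concrete in a toy case; a short sketch of ours (the dilated standard simplex below is our choice of example, whose lattice points give the degree-$d$ Veronese-type monomials on ${\Bbb{C}}{\Bbb{P}}^n$):

```python
from itertools import product

def lattice_points_dilated_simplex(n, d):
    """Lattice points of d * (standard simplex) in R^n: x_i >= 0, sum x_i <= d."""
    return [J for J in product(range(d + 1), repeat=n) if sum(J) <= d]

def monomial_map(J_list):
    """Exponent strings of the Kodaira-type embedding xi -> [..., xi^J, ...]."""
    return ["*".join(f"xi{i+1}^{j}" for i, j in enumerate(J)) for J in J_list]

pts = lattice_points_dilated_simplex(2, 2)   # example: degree-2 Veronese of CP^2
print(len(pts))          # 6 monomials, embedding CP^2 into CP^5
print(monomial_map(pts))
```

For $n=2$, $d=2$ one recovers the six quadratic monomials of the Veronese embedding, matching the count $\binom{n+d}{n}$ of lattice points.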
A detailed proof of Proposition \[prop2prima\] is given in the Appendix (Proposition \[prop2\]). It follows from Proposition \[prop2prima\] that the restriction $\omega_{\Delta}$ of the pull-back metric $i_{\Delta}^*( \omega_{FS})$ to the open subset $X_p$ is given in the coordinates $\xi_1, \dots, \xi_n$ by $i \partial \bar \partial \log (\sum_{j=1}^N |\xi|^{2 J_j})$, where $\{ J_k \}_{k=1, \dots, N} = \Delta' \cap {\Bbb{Z}}^n$ and for any $J = (J_1, \dots, J_n) \in {\Bbb{Z}}^n$ we are denoting $|\xi|^{2J} := |\xi_1|^{2J_1} \cdots |\xi_n|^{2J_n}$. Then, by combining Theorem \[equivsymp\] and Proposition \[prop2prima\], we conclude that the manifold $(M, \omega)$ has an open dense subset, say $A$, symplectomorphic to $({\Bbb{C}}^n, \omega_{\Delta} := i \partial \bar \partial \log (\sum_{j=1}^N |\xi|^{2 J_j}))$. We now estimate from above the Gromov width of $({\Bbb{C}}^n, \omega_{\Delta})$. We are going to use the following \[theoremembedding\] Let $A$ be an open, connected subset of ${\Bbb{C}}^n$ such that $A \cap \{ z_j = 0\} \neq \emptyset$, $j =1, \dots,n$, endowed with a Kähler form $\omega = \frac{i}{2} \partial \bar \partial \Phi$, where $\Phi(\xi_1, \dots, \xi_n) = \tilde \Phi(|\xi_1|^2, \dots, |\xi_n|^2)$ for some smooth function $\tilde \Phi: \tilde A \rightarrow {\Bbb{R}}$, $\tilde A = \{(x_1, \dots, x_n) \in {\Bbb{R}}^n \ | \ x_i = |\xi_i|^2, \ (\xi_1, \dots, \xi_n) \in A \ \}$ (we say that $\omega$ is a rotation invariant form). Assume $\frac{\partial \tilde \Phi}{\partial x_k} > 0$ for every $k = 1, \dots, n$. Then the map $$\label{embeddingtheorem} \Psi: (A, \omega) \rightarrow ({\Bbb{C}}^n, \omega_{0}), \ \ (\xi_1, \dots, \xi_n) \mapsto \left( \sqrt{\frac{\partial \tilde \Phi}{\partial x_1}} \xi_1, \dots, \sqrt{\frac{\partial \tilde \Phi}{\partial x_n}} \xi_n \right)$$ is a symplectic embedding (where $\omega_0 = \frac{i}{2} \partial \bar \partial \sum_{k=1}^n |z_k|^2$). For a proof of this lemma, see Theorem 1.1 in [@LZ].
Our result is \[GWCn\] Let $\omega_{\Delta} = i \partial \bar \partial \log (\sum_{j=1}^N |\xi|^{2 J_j})$. Then $$\label{stimaGWCn} c_G({\Bbb{C}}^n, \omega_{\Delta}) \leq 2 \pi \min_{j=1, \dots, n} \left( \max_{k}\{ (J_k)_j \} \right)$$ Let us apply Lemma \[theoremembedding\] to $A = {\Bbb{C}}^n$ endowed with the rotation invariant Kähler form $i \partial \bar \partial \log (\sum_{j=1}^N |\xi|^{2 J_j}))$. In the notation of the statement of the lemma, we have then $\tilde \Phi = 2 \log \sum_{k=0}^N x^{J_k}$, where $x = (x_1, \dots, x_n) \in {\Bbb{R}}^n$ and we are denoting $x^J = x_1^{j_1} \cdots x_n^{j_n}$, for $J = (j_1, \dots, j_n)$. Since for every $k = 0, \dots, N$ we have $J_k = ({(J_k)}_1, \dots, {(J_k)}_n) \in ({\Bbb{Z}}_{\geq 0})^n$, we have $$\frac{\partial \tilde \Phi}{\partial x_j} = \frac{\frac{2}{x_j} \sum_{k=0}^N {(J_k)}_j x^{J_k} }{\sum_{k=0}^N x^{J_k}} > 0$$ and then we can embed symplectically ${\Bbb{C}}^n$ (endowed with the toric form) into ${\Bbb{C}}^n$ (endowed with the standard symplectic form) by $(\xi_1, \dots, \xi_n) \mapsto \left( \sqrt{\frac{\partial \tilde \Phi}{\partial x_1}} \xi_1, \dots, \sqrt{\frac{\partial \tilde \Phi}{\partial x_n}} \xi_n \right)$ so that $({\Bbb{C}}^n, i \partial \bar \partial \log (\sum_{j=1}^N |\xi|^{2 J_j}))$ is symplectomorphic to the domain $D = \Psi({\Bbb{C}}^n) \subseteq {\Bbb{C}}^n$ endowed with the canonical symplectic form $\omega_{can}$. Now, let $\pi_k: {\Bbb{C}}^n \rightarrow {\Bbb{C}}$, $\pi_k(z_1, \dots, z_n) = z_k$ denote the projection onto the $k$-th coordinate. Then $D$ is clearly contained in the cylinder $\pi_k(D) \times {\Bbb{C}}^{n-1} = \{ (z_1, \dots, z_n) \in {\Bbb{C}}^n \ | \ \textrm{there exists} \ p \in D \ \textrm{with} \ p_k = z_k \}$ over $\pi_k(D)$, and then in the cylinder $$C_R = \{ (z_1, \dots, z_n) \in {\Bbb{C}}^n \ | \ |z_k|^2 < R^2 \},$$ where $R$ is the radius of any ball in the $z_k$-plane containing $\pi_k(D)$. 
By Gromov’s celebrated non-squeezing theorem, which states that the Gromov width of $C_R$ endowed with the canonical symplectic form $\omega_{can}$ is $\pi R^2$, we conclude that the Gromov width of $D$ is less than or equal to $\pi R^2$, where $R$ is the radius of any Euclidean ball of the $z_k$-plane containing $\pi_k(D)$. In order to calculate the best value of $R$, fix $j = 1, \dots, n$ and notice that $$\pi_j(D) = \{ \sqrt{\frac{\partial \tilde \Phi}{\partial x_j}} \xi_j \ | \ (\xi_1, \dots, \xi_n) \in {\Bbb{C}}^n \} =$$ $$= \{ \sqrt{\frac{\partial \tilde \Phi}{\partial x_j} x_j} \ e^{i \theta_j } \ | \ (x_1, \dots, x_n) \in ({\Bbb{R}}_{\geq 0})^n, \theta_j \in [0, 2 \pi] \}$$ (since $x_j = |\xi_j|^2$ and $\xi_j = \sqrt{x_j} e^{i \theta_j }$), which is contained in the disc in ${\Bbb{R}}^2$ centered at the origin of radius $$\sup \{ \sqrt{\frac{\partial \tilde \Phi}{\partial x_j} x_j} \ | \ (x_1, \dots, x_n) \in ({\Bbb{R}}_{\geq 0})^n \},$$ and in no disc of smaller radius. Now, $$\label{raggio} \sqrt{\frac{\partial \tilde \Phi}{\partial x_j} x_j}= \sqrt{ \frac{ 2 \sum_{k=0}^N (J_k)_j x^{J_k} }{\sum_{k=0}^N x^{J_k}}},$$ where we are denoting $J_k = ({(J_k)}_1, \dots, {(J_k)}_n)$. On the one hand, we clearly have $$\sum_{k=0}^N (J_k)_j x^{J_k} \leq \sum_{k=0}^N \max_{k}\{ (J_k)_j \} x^{J_k} = \max_{k}\{ (J_k)_j \} \sum_{k=0}^N x^{J_k},$$ so that $$\sup \sqrt{ \frac{ 2\sum_{k=0}^N (J_k)_j x^{J_k} }{\sum_{k=0}^N x^{J_k}}} \leq \sqrt{2 \max_{k}\{ (J_k)_j \}}.$$ On the other hand, we can show that the sup is actually equal to $\sqrt{2 \max_{k}\{ (J_k)_j \}}$ by setting $x_i = t$ for $i \neq j$ and $x_j = t^s$, for an integer $s$ large enough, and letting $t \rightarrow +\infty$.
Indeed, after substituting $x_i = t$ for $i \neq j$ and $x_j = t^s$ we get the one variable function $$\sqrt{ \frac{ 2 \sum_{k=0}^N (J_k)_j t^{(J_k)_j s + \sum_{i \neq j} (J_k)_i} }{\sum_{k=0}^N t^{(J_k)_j s + \sum_{i \neq j} (J_k)_i}}}$$ and, if we set $f_k(s) = (J_k)_j s + \sum_{i \neq j} (J_k)_i$, it is clear that, for $s$ large enough, the largest $f_k(s)$ is attained at the value of $k$ for which $(J_k)_j$ (i.e. the slope of the affine function $f_k(s)$) is maximal. This concludes the proof. As an immediate corollary of Theorem \[GWCn\], we get the following \[corollariodivisore\] Let $(M, \omega)$ be a toric manifold endowed with an integral toric form, let $\Delta \subseteq {\Bbb{R}}^n$ be the image of the moment map for the torus action (which, up to a translation and a transformation in $SL_n({\Bbb{Z}})$, can be assumed to have the origin as vertex and the edge at the origin generated by the canonical basis of ${\Bbb{R}}^n$) and let $\{ J_k \}_{k=0, \dots, N} = \Delta \cap {\Bbb{Z}}^n$. Let $p$ be the point fixed by the torus action corresponding to the origin of $\Delta$ and $X_p \simeq {\Bbb{C}}^n$ be the open subset given by Proposition \[apertointoriche\]. Then, any ball of radius $r > \sqrt{2 \min_{j=1, \dots, n} \left( \max_{k}\{ (J_k)_j \} \right)}$, symplectically embedded into $(M, \omega)$, must intersect the divisor $M \setminus X_p$.
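The two steps in the proof of Theorem \[GWCn\] — the uniform bound on (\[raggio\]) and the substitution $x_i = t$, $x_j = t^s$ realizing the sup — can be tested numerically; the exponent set $\{J_k\}$ below is a hypothetical example chosen only for illustration:

```python
import math, random

# Hypothetical exponent set {J_k} in (Z_{>=0})^2, i.e. n = 2:
J = [(0, 0), (1, 0), (0, 1), (2, 1), (1, 3)]
j = 0                                    # the coordinate under consideration
maxJ = max(Jk[j] for Jk in J)            # max_k (J_k)_j = 2 here

def ratio(x):
    # sqrt(2 * sum_k (J_k)_j x^{J_k} / sum_k x^{J_k}), the quantity in (raggio)
    num = sum(Jk[j] * x[0] ** Jk[0] * x[1] ** Jk[1] for Jk in J)
    den = sum(x[0] ** Jk[0] * x[1] ** Jk[1] for Jk in J)
    return math.sqrt(2.0 * num / den)

# Step 1: the ratio never exceeds sqrt(2 * max_k (J_k)_j) ...
for _ in range(1000):
    x = (random.uniform(0.01, 50.0), random.uniform(0.01, 50.0))
    assert ratio(x) <= math.sqrt(2.0 * maxJ) + 1e-12

# Step 2: ... and x_j = t^s, x_i = t with s large drives it to the bound.
t, s = 1e3, 10
print(ratio((t ** s, t)))                # very close to sqrt(2*maxJ) = 2.0
```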
Let $$\Delta = \{ x \in {\Bbb{R}}^n \ | \ \langle x, u_k \rangle \geq \lambda_k, \ k = 1, \dots, d \}.$$ Then Lu proves in Corollary 1.4 of [@Lu] that the Gromov width of the corresponding toric manifold is bounded from above by $$\Lambda(\Delta) := 2 \pi \max \{ - \sum_{i=1}^d \lambda_i a_i \ | \ a_i \in {\Bbb{Z}}_{\geq 0}, \ \sum_{i=1}^d a_i u_i = 0, \ 1 \leq \sum_{i=1}^d a_i \leq n+1 \}$$ in general, and by $$\gamma(\Delta) := 2 \pi \inf \{ - \sum_{i=1}^d \lambda_i a_i > 0 \ | \ a_i \in {\Bbb{Z}}_{\geq 0}, \sum_{i=1}^d a_i u_i = 0 \}$$ if the polytope $\Delta$ is Fano[^2], that is if there exist $m \in {\Bbb{R}}^n$ and $r > 0$ such that $$\label{Fanopolytope} r(\lambda_i + \langle m, u_i \rangle) = \pm 1, \ i=1, \dots, d, \ \ \ Int(r \cdot (m + \Delta)) \cap {\Bbb{Z}}^n = \{ 0 \}.$$ \[esempi\] Take the polytope $$\begin{aligned} \Delta = \{ (x_1, x_2) \in {\Bbb{R}}^2 \ & | & \ x_1 \geq 0, x_2 \geq 0, x_1 - x_2 \geq -1, x_2 - x_1 \geq -1, \nonumber \\ & & x_1 - 2 x_2 \geq -3, x_2 \leq 3 \} \nonumber\end{aligned}$$ which represents a [Kähler]{} class $\omega_{\Delta}$ on the Hirzebruch surface $S_2$ blown up at two points, denoted in the following by $\tilde S_2$. Notice that $\Delta$ is of the kind $\{ x \in {\Bbb{R}}^n \ | \ \langle x, u_k \rangle \geq \lambda_k, \ k = 1, \dots, d \}$, where $u_1 = (1,0), u_2 = (0,1), u_3 = (1,-1), u_4 = (-1,1), u_5 = (1,-2), u_6 = (0, -1)$ and $\lambda_1 = 0, \lambda_2 = 0, \lambda_3 = -1, \lambda_4 = -1, \lambda_5 = -3, \lambda_6 = -3$. We first show that $\Delta$ does not satisfy the above Fano condition (\[Fanopolytope\]). Indeed, these conditions read, for $m=(m_1, m_2)$ and $i = 1,2,3,6$, $$rm_1 = \pm 1, \ \ rm_2 = \pm 1, \ \ r(-1 + m_1 - m_2) = \pm 1, \ \ r(-3-m_2) = \pm 1 .$$ Combining the second and the last condition we get the four possibilities (the signs have to be taken independently) $-3r-1 = +1, \ \ -3r-1 = -1, \ \ -3r+1 = +1, \ \ -3r+1 = -1$, that is, $r = - \frac{2}{3}$, $r = 0$, or $r = \frac{2}{3}$.
Since $r >0$ the only possibility is $r = \frac{2}{3}$, and $m_2 = - \frac{3}{2}$. Replacing this in the third condition, and taking into account the first one, we have $$r(-1+m_1-m_2) = \frac{2}{3}(-1 + \frac{3}{2} \pm \frac{3}{2}),$$ which is either $\frac{4}{3}$ or $-\frac{2}{3}$, so different from $\pm 1$ for any choice of the signs. This proves the claim. \[paginafootnote\] Then Lu’s estimate by $\gamma(\Delta)$ does not apply[^3]. Since $\sum_i a_i u_i = 0$ reads $a_1 = a_4-a_3-a_5$, $a_2 = a_3 - a_4 +2a_5+a_6$, we have $$\Lambda(\Delta) = \max \{ 2 \pi (a_3 + a_4 + 3 a_5 + 3 a_6) \ | \ a_i \in {\Bbb{Z}}_{\geq 0}, 1 \leq 2 a_5 + 2 a_6 + a_3 + a_4 \leq 3 \}.$$ It is easy to see that $\Lambda(\Delta) = 8 \pi$ (attained for $a_2 = a_4 = a_5 =1$ and $a_1=a_3=a_6=0$). We then get $c_G(\tilde S_2, \omega_{\Delta}) \leq 8 \pi$, while it is easy to see that our estimate (\[stimaGWCn\]) yields $c_G({\Bbb{C}}^n, \omega_{\Delta}) \leq 6 \pi$. Then, Corollary \[corollariodivisore\] in this case states that any ball of radius strictly larger than $\sqrt 6$, symplectically embedded into $(\tilde S_2, \omega_{\Delta})$, must intersect the divisor. \[esempi2\] Consider the family of polytopes $$\begin{aligned} \Delta(m)= \{ (x_1, x_2) \in {\Bbb{R}}^2 \ & | & \ 0 \leq x_1 \leq 4, 0 \leq x_2 \leq 4, -2 \leq x_1 - x_2 \leq 2, \nonumber \\ & & 2 x_1 - x_2 \geq -\frac{2m}{m+1} \} \nonumber\end{aligned}$$ which, for every natural number $m \geq 1$, represents a [Kähler]{} class $\omega_{\Delta(m)}$ on the projective plane blown up at three points and blown up again (at one of the new fixed points of the toric action), which we denote from now on by $M$.
Notice that $\Delta(m)$ is of the kind $\{ x \in {\Bbb{R}}^n \ | \ \langle x, u_k \rangle \geq \lambda_k, \ k = 1, \dots, d \}$, where $u_1 = (1,0), u_2 = (0,1), u_3 = (-1,1), u_4 = (-1,0), u_5 = (0,-1), u_6 = (1, -1), u_7 = (2, -1)$ and $\lambda_1 = 0, \lambda_2 = 0, \lambda_3 = -2, \lambda_4 = -4, \lambda_5 = -4, \lambda_6 = -2, \lambda_7= - \frac{2m}{m+1}$. One easily sees by a direct calculation as in the previous example that $\Delta(m)$ does not satisfy the above Fano condition (\[Fanopolytope\]). In fact, it is known (see for example Proposition 2.21 in [@Oda]) that $M$ is not Fano, so Lu’s estimate by $\gamma(\Delta)$ does not apply (see also the footnote at page ). Since $\sum_i a_i u_i = 0$ reads $a_1 = a_3+a_4-a_6-2a_7$, $a_2 = a_5 + a_6 + a_7 - a_3$, we have $$\Lambda(\Delta) = \max \{ 2 \pi (2a_3 + 4 a_4 + 4 a_5 + 2 a_6 + \frac{2m}{m+1} a_7) \}$$ over all the $a_i$’s in ${\Bbb{Z}}_{\geq 0}$ such that $1 \leq a_3 + 2 a_4 + 2 a_5 + a_6 \leq 3$. It is easy to see that $\Lambda(\Delta) = 2 \pi (6 + \frac{2m}{m+1})$ (attained for $a_3 = a_4 = a_7 =1$ and $a_1=a_2=a_5=a_6=0$). We then have $c_G(M, \omega_{\Delta(m)}) \leq 2 \pi (6 + \frac{2m}{m+1})$, while it is easy to see that our estimate (\[stimaGWCn\]) yields $c_G({\Bbb{C}}^n, \omega_{\Delta(m)}) \leq 8 \pi$ for every $m \geq 1$ (in fact, we first need to multiply $\Delta(m)$ by $m+1$ in order to get an integral polytope, for which $\min_{j=1, \dots, n} \left( \max_{k}\{ (J_k)_j \} \right) = 4(m+1)$, then we rescale by $\frac{1}{m+1}$ and use the fact that $c_G(M, \lambda \omega) = \lambda c_G(M, \omega)$). Then, Corollary \[corollariodivisore\] in this case states that any ball of radius strictly larger than $2 \sqrt 2$, symplectically embedded into $(M, \omega_{\Delta(m)})$, must intersect the divisor.
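The values of $\Lambda(\Delta)$ claimed in the two examples can be double-checked by brute force: eliminate $a_1, a_2$ through $\sum_i a_i u_i = 0$ and enumerate the remaining $a_i$'s (the constraints bound all variables, so the small search ranges below suffice):

```python
from itertools import product
from fractions import Fraction

# Example 1 (tilde S_2): free variables a_3,...,a_6; a_1, a_2 are determined
# by sum_i a_i u_i = 0 and must be >= 0.
best1 = 0
for a3, a4, a5, a6 in product(range(4), repeat=4):
    a1, a2 = a4 - a3 - a5, a3 - a4 + 2 * a5 + a6
    if a1 >= 0 and a2 >= 0 and 1 <= a1 + a2 + a3 + a4 + a5 + a6 <= 3:
        best1 = max(best1, a3 + a4 + 3 * a5 + 3 * a6)   # -sum_i lambda_i a_i
print(best1)   # 4, i.e. Lambda(Delta) = 8*pi

# Example 2 with m = 1, so -lambda_7 = 2m/(m+1) = 1:
lam7 = Fraction(2 * 1, 1 + 1)
best2 = Fraction(0)
for a3, a4, a5, a6, a7 in product(range(4), repeat=5):
    a1, a2 = a3 + a4 - a6 - 2 * a7, a5 + a6 + a7 - a3
    if a1 >= 0 and a2 >= 0 and 1 <= a1 + a2 + a3 + a4 + a5 + a6 + a7 <= 3:
        best2 = max(best2, 2 * a3 + 4 * a4 + 4 * a5 + 2 * a6 + lam7 * a7)
print(best2)   # 7 = 6 + 2m/(m+1) for m = 1, i.e. Lambda = 2*pi*(6 + 2m/(m+1))
```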
It is worth noticing that, for the complex projective space ${\Bbb{C}}{\Bbb{P}}^n$ endowed with the Fubini-Study form $\omega_{FS} = i \partial \bar \partial \log(\sum_i |Z_i|^2)$, the Gromov width is known to be equal to $2\pi$ and in fact it is attained by symplectically embedding an open ball of radius $\sqrt 2$ without intersecting the divisor (more precisely, one can see that the image of the symplectic embedding $({\Bbb{C}}^n, \omega_{FS}) \rightarrow ({\Bbb{C}}^n, \omega_0)$ given by (\[embeddingtheorem\]) is exactly a ball of radius $\sqrt 2$). Toric manifolds =============== Toric manifolds as compactifications of ${\Bbb{C}}^n$ {#supapp1} ----------------------------------------------------- Let us recall the following \[toric\] A [*toric variety*]{} is a complex variety $M$ containing an open dense subset biholomorphic to $({\Bbb{C}}^*)^n$ and such that the canonical action of $({\Bbb{C}}^*)^n$ on itself by $(\alpha_1, \dots, \alpha_n) (\beta_1, \dots, \beta_n) = (\alpha_1 \beta_1, \dots, \alpha_n \beta_n)$ extends to a holomorphic action on the whole $M$. A toric variety can be described combinatorially by means of [*fans of cones*]{}. In detail, by the [*cone $\sigma = \sigma(u_1, \dots, u_m)$ in ${\Bbb{R}}^n$ generated by the vectors $u_1, \dots, u_m \in {\Bbb{Z}}^n$*]{} we mean the set $$\{ x \in {\Bbb{R}}^n \ | \ x = \sum_{i=1}^m c_i u_i, \ c_i \geq 0 \}$$ of linear combinations of $u_1, \dots, u_m$ with non-negative coefficients. The $u_i$’s are called the [*generators*]{} of the cone. The [*dimension*]{} of a cone $\sigma = \sigma(u_1, \dots, u_m)$ is the dimension of the linear subspace of ${\Bbb{R}}^n$ spanned by $\{ u_1, \dots, u_m \}$. We will always assume that our cones are [*convex*]{}, i.e. that they do not contain any straight line passing through the origin, and that the generators of a cone are linearly independent. The [*faces*]{} of a cone $\sigma = \sigma(u_1, \dots, u_m)$ are defined as the cones generated by the subsets of $\{ u_1, \dots, u_m \}$.
By definition, the cone generated by the empty set is the origin $\{ 0 \}$. A [*fan $\Sigma$ of cones*]{} in ${\Bbb{R}}^n$ is a set of cones such that 1. for any $\sigma \in \Sigma$ and any face $\tau$ of $\sigma$, we have $\tau \in \Sigma$; 2. any two cones in $\Sigma$ intersect along a common face. Let us now recall how one can construct from a fan $\Sigma$ a toric variety. Let $\{u_1, \dots, u_d \}$, $u_k = (u_{k1}, \dots, u_{kn}) \in {{\Bbb{Z}}}^n$, be the union of all the generators of the cones in $\Sigma$. For any cone $\sigma = \sigma( \{ u_i \}_{i \in I}) \in \Sigma$, $I \subseteq \{1, \dots, d \}$, let us denote $${{\Bbb{C}}}^d_{\sigma} = \{ (z_1, \dots, z_d) \in {{\Bbb{C}}}^d \ | \ z_i = 0 \Leftrightarrow i \in I \} .$$ Notice that if $\sigma = \sigma(\emptyset)$ is the cone consisting of the origin alone, then ${{\Bbb{C}}}^d_{\sigma} = ({{\Bbb{C}}}^*)^d$. Now, let ${{\Bbb{C}}}^d_{\Sigma} = \bigcup_{\sigma \in \Sigma} {{\Bbb{C}}}^d_{\sigma}$ and $K_{\Sigma}$ be the kernel of the surjective homomorphism $$\pi: ({\Bbb{C}}^*)^d \rightarrow ({\Bbb{C}}^*)^n, \ \ \pi(\alpha_1, \dots, \alpha_d) = (\alpha_1^{u_{11}} \cdots \alpha_d^{u_{d1}}, \dots, \alpha_1^{u_{1n}} \cdots \alpha_d^{u_{dn}}) .$$ \[deftoricquotient\] The [*toric variety $X_{\Sigma}$ associated to $\Sigma$*]{} is defined to be the quotient $X_{\Sigma} = \frac{{{\Bbb{C}}}^d_{\Sigma}}{K_{\Sigma}}$ of ${{\Bbb{C}}}^d_{\Sigma}$ by the action of $K_{\Sigma}$ given by the restriction of the canonical action $(\alpha_1, \dots, \alpha_d) (z_1, \dots, z_d) = (\alpha_1 z_1, \dots, \alpha_d z_d)$ of $({\Bbb{C}}^*)^d$ on ${\Bbb{C}}^d$. The importance of this construction lies in the fact that any toric variety $M$ of complex dimension $n$ can be realized as $M = X_{\Sigma}$ for some fan $\Sigma$ in ${\Bbb{R}}^n$ (see Section 1.4 in [@Ful]). Notice that, by definition of $K_{\Sigma}$, we have $\frac{({{\Bbb{C}}}^*)^d}{K_{\Sigma}} \simeq ({{\Bbb{C}}}^*)^n$.
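As a minimal instance of Definition \[deftoricquotient\] (taken here only as an illustration), consider the fan of ${\Bbb{C}}{\Bbb{P}}^2$: $d = 3$, $n = 2$, generators $u_1 = (1,0)$, $u_2 = (0,1)$, $u_3 = (-1,-1)$. Then $\pi(\alpha_1, \alpha_2, \alpha_3) = (\alpha_1 \alpha_3^{-1}, \alpha_2 \alpha_3^{-1})$, $K_{\Sigma}$ is the diagonal copy of ${\Bbb{C}}^*$, ${\Bbb{C}}^3_{\Sigma} = {\Bbb{C}}^3 \setminus \{0\}$, and the quotient is the standard presentation of ${\Bbb{C}}{\Bbb{P}}^2$. A quick numerical check that the diagonal lies in $\ker \pi$:

```python
# Fan of CP^2 (illustrative example): d = 3 generators in Z^2.
U = [(1, 0), (0, 1), (-1, -1)]

def pi(alpha):
    # pi : (C*)^3 -> (C*)^2, alpha -> (prod_k alpha_k^{u_{k1}}, prod_k alpha_k^{u_{k2}})
    out = []
    for i in range(2):
        w = 1 + 0j
        for a, u in zip(alpha, U):
            w *= a ** u[i]
        out.append(w)
    return out

# Since 1*u1 + 1*u2 + 1*u3 = 0, the diagonal one-parameter subgroup
# t -> (t, t, t) lies in K_Sigma = ker(pi); for this fan it is the whole kernel.
for t in (2 + 1j, -0.3 + 4j, 5 + 0j):
    assert all(abs(w - 1) < 1e-12 for w in pi((t, t, t)))
print("diagonal C* subgroup lies in ker(pi)")
```

Here $({\Bbb{C}}^*)^3/K_{\Sigma} \simeq ({\Bbb{C}}^*)^2$ is the complex torus acting on ${\Bbb{C}}{\Bbb{P}}^2$ in the usual way.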
So we have a natural action of this complex torus on $X_{\Sigma}$ given by $$\label{action} [(\alpha_1, \dots, \alpha_d)][(z_1, \dots, z_d)] = [(\alpha_1 z_1, \dots, \alpha_d z_d)].$$ From now on, and throughout this section, we will assume that $X_{\Sigma}$ is a compact, smooth manifold. From a combinatorial point of view, it is known ([@Ful], Chapter 2) that: 1. $X_{\Sigma}$ is compact if and only if the [*support $|\Sigma| = \cup_{\sigma \in \Sigma} \sigma$*]{} of $\Sigma$ equals ${\Bbb{R}}^n$. 2. $X_{\Sigma}$ is a smooth complex manifold if and only if for each $n$-dimensional cone $\sigma$ in $\Sigma$ its generators form a ${\Bbb{Z}}$-basis of ${\Bbb{Z}}^n$. \[paginasmooth\] Under these assumptions, we have the following well-known result. \[prop1\] Let $X_{\Sigma}$ be a compact, smooth toric manifold of complex dimension $n$. Then, for each $p \in X_{\Sigma}$ fixed by the torus action (\[action\]) there exists an open neighbourhood $X_p$ of $p$, dense in $X_{\Sigma}$, containing the complex torus $\frac{({{\Bbb{C}}}^*)^d}{K_{\Sigma}} \simeq ({{\Bbb{C}}}^*)^n$ and a biholomorphism $\phi_p: X_p \rightarrow {\Bbb{C}}^n$ such that $\phi_p(p)=0$ and that the restriction of the torus action (\[action\]) to $X_p$ coincides, via $\phi_p$, with the canonical action of $({\Bbb{C}}^*)^n$ on ${\Bbb{C}}^n$ by componentwise multiplication. In particular, any compact, smooth toric manifold of complex dimension $n$ is a compactification of ${\Bbb{C}}^n$. Let $\sigma = \sigma(u_{j_1}, \dots, u_{j_n})$ be an $n$-dimensional cone in $\Sigma$, and let $\{ j_{n+1}, \dots, j_{d} \} = \{ 1, \dots, d\} \setminus \{ j_{1}, \dots, j_{n} \}$. Let us consider the open dense subset $$X_{\sigma} = \frac{\bigcup_{\tau \subseteq \sigma} {\Bbb{C}}^d_{\tau}}{K_{\Sigma}} = \{ [(z_1, \dots, z_d)] \in X_{\Sigma} \ | \ z_{j_{n+1}}, \dots, z_{j_{d}} \neq 0 \}.$$ We are going to define a biholomorphism $\phi_{\sigma}: X_{\sigma} \rightarrow {\Bbb{C}}^n$. 
Recall that, by the assumption of smoothness, $u_{j_1}, \dots, u_{j_n}$ form a ${\Bbb{Z}}$-basis of ${\Bbb{Z}}^n$, or equivalently (with a further permutation if necessary) the matrix $$U = \left( \begin{array}{ccc} u_{{j_1}1} & \dots & u_{{j_n}1} \\ \dots & \dots & \dots \\ u_{{j_1}n} & \dots & u_{{j_n}n} \end{array} \right)$$ belongs to $SL(n, {\Bbb{Z}})$. Let $U^{-1} = \left( \begin{array}{ccc} w_{1 1} & \dots & w_{n1} \\ \dots & \dots & \dots \\ w_{1 n} & \dots & w_{nn} \end{array} \right)$ and let $\left( \begin{array}{ccc} v_{j_{n+1} 1} & \dots & v_{j_{d}1} \\ \dots & \dots & \dots \\ v_{j_{n+1} n} & \dots & v_{j_{d}n} \end{array} \right)$ be the matrix in $M_{n \ d-n}({\Bbb{Z}})$ obtained by deleting from $$\left( \begin{array}{ccc} w_{1 1} & \dots & w_{n1} \\ \dots & \dots & \dots \\ w_{1 n} & \dots & w_{nn} \end{array} \right)\left( \begin{array}{ccc} u_{1 1} & \dots & u_{d1} \\ \dots & \dots & \dots \\ u_{1 n} & \dots & u_{dn} \end{array} \right)$$ the $j$-th column, for $j = j_1, \dots, j_n$. We claim that $$\label{defphi} \phi_{\sigma}([(z_1, \dots, z_d)]) = (z_{j_{1}} z_{j_{n+1}}^{v_{j_{n+1} 1}} \cdots z_{j_{d}}^{v_{j_{d} 1}}, \dots, z_{j_{n}} z_{j_{n+1}}^{v_{j_{n+1} n}} \cdots z_{j_{d}}^{v_{j_{d} n}})$$ defines the required biholomorphism. In order to verify this, notice first that if $(\alpha_1, \dots, \alpha_d) \in K_{\Sigma}$ then, by definition, for every $k=1, \dots, n$, we have $$1 = (\alpha_1^{u_{11}} \cdots \alpha_d^{u_{d1}})^{w_{1k}} \cdots (\alpha_1^{u_{1n}} \cdots \alpha_d^{u_{dn}})^{w_{nk}} = \alpha_{j_k} \alpha_{j_{n+1}}^{v_{j_{n+1} k}} \cdots \alpha_{j_d}^{v_{j_{d}k}}$$ so that $$\label{paramK} \alpha_{j_k} = \alpha_{j_{n+1}}^{-v_{j_{n+1} k}} \cdots \alpha_{j_d}^{-v_{j_{d}k}}, \ \ k=1, \dots, n.$$ For any $\alpha_{j_{n+1}}, \dots, \alpha_{j_d} \in {\Bbb{C}}^*$, these equations give a parametric representation of $K_{\Sigma}$, using which it is easy to see that (\[defphi\]) is well defined. 
In more detail, if $[(z_1, \dots, z_d)] = [(w_1, \dots, w_d)] $ then there exist $\alpha_{j_{n+1}}, \dots, \alpha_{j_d} \in {\Bbb{C}}^*$ such that $w_{j_{n+1}} = \alpha_{j_{n+1}} z_{j_{n+1}}, \dots, w_{j_d} = \alpha_{j_d} z_{j_d}$ and $$w_{j_1} = \alpha_{j_{n+1}}^{- v_{j_{n+1} 1}} \cdots \alpha_{j_d}^{- v_{j_d 1}} z_{j_1}, \ \dots \ , w_{j_n} = \alpha_{j_{n+1}}^{- v_{j_{n+1} n}} \cdots \alpha_{j_d}^{- v_{j_d n}} z_{j_n},$$ from which it is immediate to see that $\phi_{\sigma}[(z_1, \dots, z_d)] = \phi_{\sigma}[(w_1, \dots, w_d)]$. Moreover, one sees that the map $$\label{invphi} \psi_{\sigma}: {\Bbb{C}}^n \rightarrow X_{\sigma}, \ \ \ \psi_{\sigma}(\xi_1, \dots, \xi_n) = [(\psi_1, \dots, \psi_d)],$$ where $$\psi_{j_1} = \xi_1, \dots, \psi_{j_n} = \xi_n, \ \ \psi_{j_{n+1}} = \cdots = \psi_{j_d} = 1,$$ is the inverse of $\phi_{\sigma}$. Indeed, on the one hand it is clear that $\phi_{\sigma} \circ \psi_{\sigma} = id_{{\Bbb{C}}^n}$. On the other hand, for every $[(z_1, \dots, z_d)] \in X_{\sigma}$ we have $(\psi_{\sigma} \circ \phi_{\sigma})([z_1, \dots, z_d]) = [(\psi_1, \dots, \psi_d)]$ where $$\psi_{j_k} = z_{j_{k}} z_{j_{n+1}}^{v_{j_{n+1} k}} \cdots z_{j_{d}}^{v_{j_{d} k}}, \ \ k = 1, \dots, n$$ and $\psi_{j_{n+1}} = \cdots = \psi_{j_d} = 1$. But $[(\psi_1, \dots, \psi_d)] = [(z_1, \dots, z_d)]$ since $(z_1, \dots, z_d) = (\alpha_1, \dots, \alpha_d)(\psi_1, \dots, \psi_d)$ for the element $(\alpha_1, \dots, \alpha_d) \in K_{\Sigma}$ given by $\alpha_{j_{n+1}} = z_{j_{n+1}}, \dots, \alpha_{j_{d}} = z_{j_{d}}$ (recall that, by definition of $X_{\sigma}$, we have $z_{j_{n+1}}, \dots, z_{j_d} \neq 0$) and $$\alpha_{j_k} = z_{j_{n+1}}^{-v_{j_{n+1} k}} \cdots z_{j_{d}}^{-v_{j_{d} k}}, \ \ k = 1, \dots, n.$$ This proves the claim. Now, by the very definition of $X_{\sigma}$ it is clear that it contains the complex torus $\frac{({{\Bbb{C}}}^*)^d}{K_{\Sigma}}$ and that $X_{\sigma}$ is invariant by the action (\[action\]).
In fact, one has $\phi_{\sigma}\left( \frac{({{\Bbb{C}}}^*)^d}{K_{\Sigma}} \right) = ({\Bbb{C}}^*)^n$ and, if $\phi_{\sigma}[\alpha_1, \dots, \alpha_d] = (a_1, \dots, a_n)$, $\phi_{\sigma}[z_1, \dots, z_d] = (\xi_1, \dots, \xi_n)$, then $\phi_{\sigma}[\alpha_1 z_1, \dots, \alpha_d z_d] = (a_1 \xi_1, \dots, a_n \xi_n)$, which means that the action of $\frac{({{\Bbb{C}}}^*)^d}{K_{\Sigma}}$ on $X_{\sigma}$ corresponds, via $\phi_{\sigma}$, to the canonical action of $({\Bbb{C}}^*)^n$ on ${\Bbb{C}}^n$. As a consequence, since the only fixed point for this canonical action is the origin, we have that the only point of $X_{\sigma}$ fixed by the action of $\frac{({{\Bbb{C}}}^*)^d}{K_{\Sigma}}$ is the point $p = [z_1, \dots, z_d]$ having $z_{j_1} = \cdots = z_{j_n} = 0$. So $X_{\sigma}$ turns out to be a neighbourhood $X_p$ of the fixed point $p$ which satisfies all the requirements of the statement of the Proposition. Since the $X_{\sigma}$’s, when $\sigma$ runs over all the $n$-dimensional cones of $\Sigma$, cover $X_{\Sigma}$, we get in this way all the fixed points of the torus action, and this concludes the proof of the Proposition. Toric bundles and Kodaira embeddings {#appendix2} ------------------------------------ Let us recall how one constructs combinatorially the line bundles on a toric manifold $X_{\Sigma}$. \[support\] Let $\Sigma$ be a fan of cones in ${\Bbb{R}}^n$. A [*$\Sigma$-linear support function*]{} (or simply a support function when the context is clear) is a continuous function $g: {\Bbb{R}}^n \rightarrow {\Bbb{R}}$ such that 1. on every $n$-dimensional cone $\sigma \in \Sigma$, $g$ is the restriction of a linear function $g_{\sigma}: {\Bbb{R}}^n \rightarrow {\Bbb{R}}$; 2. $g$ has integer values on ${\Bbb{Z}}^n$. A support function is clearly determined by the values it has on the generators of the cones.
One associates to any such function $g$ a line bundle, denoted ${X_{\Sigma}}_g$, on the manifold $X_{\Sigma}$ and defined as ${X_{\Sigma}}_g = \frac{{\Bbb{C}}^d_{\Sigma} \times {\Bbb{C}}}{K_{\Sigma}}$ where ${\Bbb{C}}^d_{\Sigma}$, $K_{\Sigma}$ are as in Definition \[deftoricquotient\] and the quotient comes from the action of $K_{\Sigma}$ on ${\Bbb{C}}^d_{\Sigma} \times {\Bbb{C}}$ given by $$(\alpha_1, \dots, \alpha_d) \cdot (z_1, \dots, z_d, z_{d+1}) = (\alpha_1 z_1, \dots, \alpha_d z_d, \alpha_1^{-g(u_1)} \cdots \alpha_d^{-g(u_d)} z_{d+1}).$$ The projection $p:{X_{\Sigma}}_g \rightarrow X_{\Sigma}$ is just given by $p([z_1, \dots, z_{d+1}]) = [z_1, \dots, z_d]$, which is clearly well-defined by the very definition of the equivalence relations involved. It is known that ${X_{\Sigma}}_g$ is very ample if and only if $g$ is [*strictly convex*]{}, i.e. it fulfills the following requirements: 1. for every $v_1, v_2 \in {\Bbb{R}}^n$, $t \in [0,1]$, one has $g(tv_1 + (1-t)v_2) \geq t g(v_1) + (1-t) g(v_2)$ (i.e. $-g$ is convex); 2. distinct $n$-dimensional cones $\sigma$ give distinct functions $g_{\sigma}$. A nice representation of the very ample line bundle $p:{X_{\Sigma}}_g \rightarrow X_{\Sigma}$, encoding combinatorially both the structure of $X_{\Sigma}$ and the function $g$, is given by the convex polytope $$\label{polytope} \Delta_g = \{ x \in {\Bbb{R}}^n \ | \ \langle x, u_i \rangle \geq g(u_i), \ \ i=1, \dots, d \}$$ where $u_1, \dots, u_d$ are the generators of $\Sigma$. Every $k$-dimensional face of $\Delta_g$ is given by the intersection of $n-k$ hyperplanes $\langle x, u_i \rangle = g(u_i)$, for $i \in I \subseteq \{1, \dots, d \}$ such that $\{ u_i \}_{i \in I}$ generates an $(n-k)$-dimensional cone of $\Sigma$. In particular, the vertices of $\Delta_g$ correspond to the $n$-dimensional cones of $\Sigma$ and then (see the proof of Proposition \[prop1\]) to the fixed points of the torus action. 
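To make (\[polytope\]) concrete, take again (as an assumed example) the fan of ${\Bbb{C}}{\Bbb{P}}^2$ with generators $u_1 = (1,0)$, $u_2 = (0,1)$, $u_3 = (-1,-1)$, and the strictly convex support function with $g(u_1) = g(u_2) = 0$, $g(u_3) = -d$, which corresponds to the line bundle $\mathcal{O}(d)$. The polytope $\Delta_g$ is then the $d$-fold standard simplex, and its lattice points index the monomial sections:

```python
from itertools import product

d = 2                                  # illustrative degree: O(2) on CP^2
U = [(1, 0), (0, 1), (-1, -1)]
g = [0, 0, -d]

# Delta_g = { x in R^2 : <x, u_i> >= g(u_i), i = 1, 2, 3 }; enumerate its
# lattice points (they all lie in the box [0, d]^2).
pts = sorted(x for x in product(range(d + 1), repeat=2)
             if all(x[0] * u[0] + x[1] * u[1] >= gi for u, gi in zip(U, g)))
print(pts)       # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)]

# Each lattice point (x1, x2) corresponds to the monomial section
# xi_1^{x1} * xi_2^{x2} in the coordinates of X_p.
print(len(pts))  # 6
```

The $6$ sections for $d = 2$ give the quadratic Veronese embedding of ${\Bbb{C}}{\Bbb{P}}^2$ into ${\Bbb{C}}{\Bbb{P}}^5$.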
Conversely, every convex polytope $\Delta = \{ x \in {\Bbb{R}}^n \ | \ \langle x, u_i \rangle \geq \lambda_i, \ \ i=1, \dots, d \}$ with the property that the normal vectors $u_i$ to the faces meeting in a given vertex form a ${\Bbb{Z}}$-basis of ${\Bbb{Z}}^n$ determines a toric manifold together with a very ample line bundle. We are now ready to prove the following \[prop2\] Let $p \in X_{\Sigma}$ be a fixed point for the torus action and $X_p$, $\phi_p: X_p \rightarrow {\Bbb{C}}^n$ be as in Proposition \[prop1\]. The restriction to $X_p$ of the Kodaira embedding $i_g: X_{\Sigma} \rightarrow {\Bbb{C}}{\Bbb{P}}^{N}$ associated to ${X_{\Sigma}}_g$ can be written, in the coordinates given by $\phi_p$, as $$i_g|_{X_{p}} \circ\phi_p^{-1}: {\Bbb{C}}^n \rightarrow {\Bbb{C}}{\Bbb{P}}^{N}, \ \ \xi \mapsto [\dots, \xi_1^{x_1} \cdots \xi_n^{x_n}, \dots],$$ where $(x_1, \dots, x_n)$ runs over all the points with integral coordinates in $\Delta$, and where $\Delta$ is the polytope in ${\Bbb{R}}^n$ obtained from $\Delta_g$ via the transformation in $SL_n({\Bbb{Z}})$ and the translation which send the vertex of $\Delta_g$ corresponding to $p$ to the origin and the corresponding edges to the edges generated by the vectors $e_1, \dots, e_n$ of the canonical basis of ${\Bbb{R}}^n$. For the sake of simplicity and without loss of generality, we can assume that the fixed point $p$ corresponds (in the sense of the proof of Proposition \[prop1\]) to the $n$-dimensional cone of $\Sigma$ generated by $u_1, \dots, u_n$, so that $X_p = \{ [(z_1, \dots, z_d)] \in X_{\Sigma} \ | \ z_{n+1}, \dots, z_d \neq 0 \}$.
Given the line bundle $p: {X_{\Sigma}}_g \rightarrow X_{\Sigma}$, we clearly have $$p^{-1}(X_p) = \{ [(z_1, \dots, z_d, z_{d+1})] \in {X_{\Sigma}}_g \ | \ z_{n+1}, \dots, z_d \neq 0 \}.$$ An explicit trivialization $f: p^{-1}(X_p) \rightarrow X_p \times {\Bbb{C}}$ of ${X_{\Sigma}}_g$ on $X_p$ is given by $$f([(z_1, \dots, z_d, z_{d+1})]) = ([z_1, \dots, z_d], z_{d+1} z_{n+1}^{c_{n+1}} \cdots z_{d}^{c_{d}})$$ where, for every $j = n+1, \dots, d$, $$c_j = g(u_{j}) - \sum_{k=1}^n v_{j k} g(u_k)$$ and the $v_{jk}$’s are defined in the proof of Proposition \[prop1\]. Indeed, $f$ is well defined because $z_{n+1}, \dots, z_d \neq 0$ and because, if $[(z_1, \dots, z_d, z_{d+1})] = [(w_1, \dots, w_d, w_{d+1})]$ then, for some $(\alpha_1, \dots, \alpha_d) \in K_{\Sigma}$, $$w_{d+1} w_{n+1}^{c_{n+1}} \cdots w_{d}^{c_{d}} = (z_{d+1} \alpha_1^{-g(u_1)} \cdots \alpha_d^{-g(u_d)}) z_{n+1}^{c_{n+1}} \cdots z_{d}^{c_{d}} \alpha_{n+1}^{c_{n+1}} \cdots \alpha_{d}^{c_{d}} =$$ $$= z_{d+1} z_{n+1}^{c_{n+1}} \cdots z_{d}^{c_{d}} \cdot \alpha_1^{-g(u_1)} \cdots \alpha_n^{-g(u_n)} \alpha_{n+1}^{- \sum_{k=1}^n v_{n+1 k} g(u_k)} \cdots \alpha_d^{- \sum_{k=1}^n v_{d k} g(u_k)}$$ $$= z_{d+1} z_{n+1}^{c_{n+1}} \cdots z_{d}^{c_{d}}$$ by (\[paramK\]) in the proof of Proposition \[prop1\]. The inverse of $f$ is clearly given by $f^{-1}: X_p \times {\Bbb{C}}\rightarrow p^{-1}(X_p)$, $$f^{-1}([z_1, \dots, z_d], z) = [(z_1, \dots, z_d, z z_{n+1}^{-c_{n+1}} \cdots z_{d}^{-c_{d}})],$$ which is well defined by the same arguments as above. A section of $p: {X_{\Sigma}}_g \rightarrow X_{\Sigma}$ is determined by a function $F = F(z_1, \dots, z_d)$ which satisfies $$F(\alpha_1 z_1, \dots, \alpha_d z_d) = \alpha_1^{-g(u_1)} \cdots \alpha_d^{-g(u_d)} F(z_1, \dots, z_d)$$ for every $(\alpha_1, \dots, \alpha_d) \in K_{\Sigma}$. Indeed, this is exactly the condition which assures that $s: X_{\Sigma} \rightarrow {X_{\Sigma}}_g$, $s([z_1, \dots, z_d]) = [(z_1, \dots, z_d, F(z_1, \dots, z_d))]$ is well-defined.
By a direct calculation and by (\[paramK\]), a basis for the space of global sections is given by the polynomials $F(z_1, \dots, z_d) = z_1^{x_1} \cdots z_d^{x_d}$, $x_i \geq 0$, which satisfy $$\label{conditionsection} x_j + g(u_j) = v_{j1} (x_1 + g(u_1)) + \cdots + v_{jn} (x_n + g(u_n)), \ \ \ j = n+1, \dots, d$$ where the $v_{jk}$’s are defined in the proof of Proposition \[prop1\]. We will refer to this basis as the [*monomial basis*]{}. Let $\{F_0, \dots, F_N \}$ be the monomial basis. Then, by the celebrated theorem of Kodaira, the map $$\label{ig} X_{\Sigma} \stackrel{i_g}{\longrightarrow} {\Bbb{C}}P^{N}, \ \ \ [(z_1, \dots, z_d)] \mapsto [F_0(z_1, \dots, z_d), \dots, F_N(z_1, \dots, z_d)]$$ yields an embedding of $X_{\Sigma}$ in the complex projective space. Restricting $i_g$ to $X_p$ and composing with $\phi_p^{-1}: {\Bbb{C}}^n \rightarrow X_p$ we get $$\label{embCn} (\xi_1, \dots, \xi_n) \mapsto [F_0(\xi_1, \dots, \xi_n, 1, \dots, 1), \dots, F_N(\xi_1, \dots, \xi_n, 1, \dots, 1)].$$ Now, since the $x_i$’s are all non-negative integers, conditions (\[conditionsection\]) are equivalent to $$\label{newconditionsection} \langle x + g_u, v_j \rangle \geq g(u_j), \ \ j=1, \dots, d,$$ where $x = (x_1, \dots, x_n)$, $g_u = (g(u_1), \dots, g(u_n))$ and for $j=1, \dots, n$ we are setting $v_j = e_j$ (the canonical basis of ${\Bbb{R}}^n$). Since $e_1, \dots, e_n, v_{n+1}, \dots, v_d$ are the images of $u_1, \dots, u_n, u_{n+1}, \dots, u_d$ via the map $A = \left( \begin{array}{ccc} u_{{1}1} & \dots & u_{{n}1} \\ \dots & \dots & \dots \\ u_{{1}n} & \dots & u_{{n}n} \end{array} \right)^{-1} \in SL_n({\Bbb{Z}})$, one easily sees that (\[newconditionsection\]) are the defining equations of the polytope $\Delta = {}^T A^{-1} (\Delta_g) - g_u$, obtained from $\Delta_g = \{x \in {\Bbb{R}}^n \ | \ \langle x, u_i \rangle \geq g(u_i) \}$ by the map in $SL_n({\Bbb{Z}})$ and the translation which send the edge given by the faces having $u_1, \dots, u_n$ as normals (i.e.
the edge at the vertex corresponding to $p$) to the edge at the origin having the vectors of the canonical basis as generators. Then the embedding (\[embCn\]) turns out to be the map $${\Bbb{C}}^n \rightarrow {\Bbb{C}}P^{N}, \ \ \ (\xi_1, \dots, \xi_n) \mapsto [\dots, \xi_1^{x_1} \cdots \xi_n^{x_n}, \dots]$$ where $(x_1, \dots, x_n)$ runs over all the points with integral coordinates in $\Delta$, as required. \[puntipolitopo\] Notice that the transformed polytope $\Delta$ represents, up to isomorphism, the same line bundle and the same toric manifold as $\Delta_g$, because we are always free to apply to the fan $\Sigma$ a transformation in $SL_n({\Bbb{Z}})$ (see, for example, Proposition VII.1.16 in [@Au]) and we can always add to $g$ an integral linear function ${\Bbb{Z}}^n \rightarrow {\Bbb{Z}}$, which corresponds to a translation of the polytope (this comes from the fact that two bundles ${X_{\Sigma}}_g$, ${X_{\Sigma}}_{g'}$ associated to $\Sigma$-linear support functions $g$, $g'$ on $\Sigma$ are isomorphic if and only if $g-g': {\Bbb{R}}^n \rightarrow {\Bbb{R}}$ is a linear function). In fact, it is well-known that if a toric manifold endowed with a very ample line bundle is represented by a polytope $\Delta$, then the monomials $a_1^{y_1} \cdots a_n^{y_n}$, for $(y_1, \dots, y_n) \in \Delta \cap {\Bbb{Z}}^n$ and $(a_1, \dots, a_n) \in ({\Bbb{C}}^*)^n$, give the restriction of a Kodaira embedding (associated to the given line bundle) to the complex torus $({\Bbb{C}}^*)^n$ contained in $X_{\Sigma}$. What we have seen here in detail is exactly that this embedding can be extended to ${\Bbb{C}}^n$ if the polytope has the origin as vertex and the edge at the origin is generated by the canonical basis of ${\Bbb{R}}^n$, and coincides with (\[embCn\]) in this case. [99]{} M. Abreu, *[Kähler]{} geometry of toric manifolds in symplectic coordinates*, Symplectic and contact topology: interactions and perspectives (Toronto, ON/Montreal, QC, 2001), 1–24, Fields Inst.
Commun., 35, Amer. Math. Soc., Providence, RI, 2003. M. Audin, [*Torus actions on symplectic manifolds*]{}, Progress in Mathematics, Vol. 93 (2004). V. Batyrev, [*Quantum cohomology rings of toric manifolds*]{}, Astérisque No. 218 (1993), 9-34. P. Biran, *A stability property of symplectic packing*, Invent. Math. 136 (1999), 123-155. S. Bochner, [*Curvature in Hermitian metric*]{}, Bull. Amer. Math. Soc. 53 (1947), 179-195. A. Borel, [*Linear algebraic groups*]{}, second ed., GTM n. 126, Springer–Verlag, New York (1991). E. Calabi, [*Isometric Imbeddings of Complex Manifolds*]{}, Ann. of Math. 58 (1953), 1-23. A. C. Castro, *Upper bound for the Gromov width of coadjoint orbits of type A*, arXiv:1301.0158v1. T. Delzant, *Hamiltoniens périodiques et image convexe de l’application moment*, Bull. Soc. Math. France, 116 (1988), 315-339. S. K. Donaldson, [*[Kähler]{} geometry on toric manifolds, and some other manifolds with large symmetry*]{}, Handbook of geometric analysis. No. 1, 29–75, Adv. Lect. Math. (ALM), 7, Int. Press, Somerville, MA, 2008. W. Fulton, [*Introduction to toric varieties*]{}, Princeton University Press (1993). G. Giachetta, L. Mangiarotti, G. A. Sardanashvili, [*Geometric and algebraic topological methods in quantum mechanics*]{}, World Scientific (2005). M. Gromov, *Pseudoholomorphic curves in symplectic manifolds*, Invent. Math. 82 (1985), no. 2, 307-347. V. Guillemin, [*Moment maps and combinatorial invariants of hamiltonian $T^n$-spaces*]{}, Birkhäuser 1994. D. Hulin, [*Sous-variétés complexes d’Einstein de l’espace projectif*]{}, Bull. Soc. math. France 124 (1996), 277-298. D. Hulin, [*[Kähler]{}-Einstein metrics and projective embeddings*]{}, J. Geom. Anal. 10 (2000), 525-528. Y. Karshon, S. Tolman, *The Gromov width of complex Grassmannians*, Algebr. Geom. Topol. 5 (2005), 911-922. A. Loi, M. Zedda, [*Kähler-Einstein submanifolds of the infinite dimensional projective space*]{}, Math. Ann. 350 (2011), 145-154. A. J. Di Scala, A. Loi and H.
Ishi, [*[Kähler]{} immersions of homogeneous [Kähler]{} manifolds into complex space forms*]{}, Asian J. Math. 16 (2012), no. 3, 479-488. A. Loi, F. Zuddas, [*Symplectic maps of complex domains into complex space forms*]{}, Journal of Geometry and Physics 58 (2008), 888-899. G. Lu, [*Symplectic capacities of toric manifolds and related results*]{}, Nagoya Math. J. Vol. 181 (2006), 149-184. T. Oda, *Convex bodies and algebraic geometry*, Springer Verlag (1985). D. Salamon, *Uniqueness of symplectic structures*, Acta Math. Vietnam. 38 (2013), no. 1, 123-144. Y. T. Siu, *The existence of [Kähler]{}-Einstein metrics on manifolds with positive anticanonical line bundle and a suitable finite symmetry group*, Ann. of Math. 127 (1988), 585-627. G. Tian, *[Kähler]{}-Einstein metrics on complex surfaces with $c_1(M)$ positive*, Math. Ann. 274 (1986), 503-516. [^1]: The second author was partially supported by Prin 2010/11 – Varietà reali e complesse: geometria, topologia e analisi armonica – Italy; the first and third authors were partially supported by the FIRB Project “Geometria Differenziale Complessa e Dinamica Olomorfa”. [^2]: It is easy to see that this condition is equivalent to the fact that the [Kähler]{} form on the manifold represents the first Chern class of a multiple of the anticanonical bundle. [^3]: At page 169 of [@Lu], studying the case of the projective space blown up at one point, the author applies the same estimate valid for Fano polytopes also in the case when the polytope is not Fano: by looking at the proof, it turns out that this is possible because the projective space blown up at one point is Fano and any two [Kähler]{} forms on it are deformation equivalent (see [@Salamon], Example 3.7). This argument cannot be used here because $\tilde S_2$ is not Fano. As told to the second author by D.
Salamon in a private communication, it is not known if the same result on the equivalence by deformation holds on the higher blowups of the projective spaces ${\Bbb{C}}{\Bbb{P}}^n$, with $n > 2$.
--- abstract: 'InfiniBand is widely used for low-latency, high-throughput cluster computing. Saving the state of the InfiniBand network as part of distributed checkpointing has been a long-standing challenge for researchers. For lack of a solution, typical MPI implementations have included custom checkpoint-restart services that “tear down” the network, checkpoint each node as if the node were a standalone computer, and then re-connect the network. We present the first example of transparent, system-initiated checkpoint-restart that directly supports InfiniBand. The new approach is independent of any particular Linux kernel, thus simplifying the current practice of using a kernel-based module, such as BLCR. This direct approach results in checkpoints that are found to be faster than those produced with a checkpoint-restart service. The generality of this approach is shown not only by checkpointing an MPI computation, but also a native UPC computation (Berkeley Unified Parallel C), which does not use MPI. Scalability is shown by checkpointing 2,048 MPI processes across 128 nodes (with 16 cores per node). In addition, a cost-effective debugging approach is enabled, in which a checkpoint image from an InfiniBand-based production cluster is copied to a local Ethernet-based cluster, where it can be restarted and an interactive debugger can be attached to it. This work is based on a plugin that extends the DMTCP (Distributed MultiThreaded CheckPointing) checkpoint-restart package.' author: - | Jiajun Cao,[^1] Gregory Kerr, Kapil Arya, Gene Cooperman\ College of Computer and Information Science\ Northeastern University\ Boston, MA 02115 / USA\ Email: jiajun@ccs.neu.edu, kerrgi@gmail.com, {kapil,gene}@ccs.neu.edu title: 'Transparent Checkpoint-Restart over InfiniBand' --- Introduction {#sec:Introduction} ============ InfiniBand is the preferred network for most high-performance computing and for certain Cloud applications, due to its low latency.
We present a new approach to checkpointing over InfiniBand. This is the first efficient and transparent solution for [*direct*]{} checkpoint-restart over the InfiniBand network (without the intermediary of an MPI implementation-specific checkpoint-restart service). This also extends to other language implementations over InfiniBand, such as Unified Parallel C (UPC). This work involves several subtleties, such as whether to drain the “in-flight data” in an InfiniBand network prior to checkpoint, or whether to force a re-send of data after resume and restart. A particularly efficient solution was found, which combines the two approaches. The InfiniBand completion queue is drained and refilled on resume or restart, and yet data is never re-sent, [*except*]{} in the case of restart. Historically, transparent (system-initiated) checkpoint-restart has been the first technology that one examines in order to provide fault tolerance during long-running computations. For example, transparency implies that a checkpointing scheme works independently of the programming language being used (e.g., Fortran, C, or C++). In the case of distributed computations over Ethernet, several distributed checkpointing approaches have been proposed [@CoopermanAnselMa06; @Cruz05; @LaadanEtAl05; @LaadanEtAl07; @Chpox]. Unfortunately, those solutions do not extend to supporting the InfiniBand network. Other solutions for distributed checkpointing are specific to a particular MPI implementation [@Bouteiler06; @GaoEtAl06; @HurseyEtAl09; @OpenMPICheckpoint07; @KerrEtAl11; @LemarinierEtAl04; @CheckpointLAMMPI05b; @SankaranEtAl05]. These MPI-based checkpoint-restart services “tear down” the InfiniBand connection, after which a single-process checkpoint-restart package can be applied. Finally, checkpoint-restart is the process of saving to stable storage (such as disk or SSD) the state of the processes in a running computation, and later restarting from stable storage.
[*Checkpoint*]{} refers to saving the state, [*resume*]{} refers to the original process resuming computation, and [*restart*]{} refers to launching a new process that will restart from stable storage. The checkpoint-restart is [*transparent*]{} if no modification of the application is required. This is sometimes called [*system-initiated*]{} checkpointing. Prior schemes specific to each MPI implementation had: (i) blocked the sending of new messages and waited until pending messages have been received; (ii) “torn down” the network connection; (iii) checkpointed each process in isolation (without the network), typically using the BLCR package [@BLCR03; @BLCR06] based on a kernel module; (iv) and then re-built the network connection. Those methods have three drawbacks: 1. Checkpoint-resume can be slower due to the need to tear down and re-connect the network. 2. PGAS languages such as UPC [@UPC02] must be re-factored to run over MPI instead of directly over InfiniBand in order to gain support for transparent checkpoint-restart. This can produce additional overhead. 3. The use of a BLCR kernel module implied that the restart cluster must use the same Linux kernel as the checkpoint cluster. The current work is implemented as a plugin on top of DMTCP (Distributed MultiThreaded CheckPointing) [@AnselEtAl09]. The experimental evaluation demonstrates DMTCP-based checkpointing of Open MPI for the NAS LU benchmark and others. For 512 processes, checkpointing to a local disk drive occurs in 232 seconds, whereas it requires 36 seconds when checkpointing to back-end Lustre-based storage. Checkpointing of up to 2,048 MPI processes (128 nodes with 16 cores per node) is shown to have a run-time overhead between 0.8% and 1.7%. This overhead is shown to be a little less than the overhead when using the checkpoint-restart service of Open MPI based on BLCR.
Tests were also carried out on Berkeley UPC [@UPC99] over GASNet’s ibv conduit [@GASNet02], with similar results for checkpoint times and run-time overhead. The new approach can also be extended to provide capabilities similar to the existing interconnection-agnostic use of the Open MPI checkpoint-restart service [@HurseyEtAl09]. Specifically, we demonstrate an additional IB2TCP plugin that supports cost-effective use of a symbolic debugger (e.g., GDB) on a large production computation. The IB2TCP plugin enables checkpointing over InfiniBand and restarting over Ethernet. Thus, when a computation fails on a production cluster (perhaps after hours or days), the checkpoint image can be copied to an inexpensive, smaller, Ethernet-based cluster for interactive debugging. An important contribution of the IB2TCP plugin is that, unlike the BLCR kernel-based approach, the DMTCP/IB2TCP approach supports using an Ethernet-based cluster that uses a different Linux kernel, something that occurs frequently in practice. Finally, the approach of using DMTCP plugins is easily maintainable, as measured by lines of code. The primary plugin, supporting checkpointing over InfiniBand, consists of 2,700 lines of code. The additional IB2TCP plugin consists of 1,000 lines of code. #### Organization of Paper. The rest of this paper includes the background of DMTCP (Section \[sec:dmtcp\]) and the algorithm for checkpointing over InfiniBand (Section \[sec:algorithm\]). Limitations of this approach are discussed in Section \[sec:limitations\]. An experimental evaluation is presented in Section \[sec:experiments\]. Section \[sec:ib2tcp\] is of particular note, reporting on the IB2TCP plugin for migrating a computation from a production cluster to a debug cluster. Finally, the related work (Section \[sec:relatedWork\]) and conclusions (Section \[sec:conclusion\]) are presented.
Background: InfiniBand and DMTCP {#sec:dmtcp} ================================ Section \[sec:infiniband\] reviews some concepts of InfiniBand, necessary for understanding the checkpointing approach described in Section \[sec:algorithm\]. Section \[sec:plugins\] describes the use of plugins in DMTCP. Review of InfiniBand ibverbs API {#sec:infiniband} -------------------------------- ![\[fig:ibConcepts\] InfiniBand Concepts](infiniband){width="0.9\columnwidth"} In order to understand the algorithm, we review some concepts from the Verbs API of InfiniBand. While there are several references that describe InfiniBand, we recommend one of [@KerrIB11; @BedeirIB10] as a gentle introduction for a general audience. Recall that the InfiniBand network uses [*RDMA*]{} (remote DMA to the RAM of a remote computer). Each computer node must have a Host Channel Adapter (HCA) board with access to the system bus (memory bus). With only two computer nodes, the HCA adapter boards may be connected directly to each other. With three or more nodes, communication must go through an InfiniBand switch in the middle. Note also that the bytes of an InfiniBand message may be delivered out of order. Figure \[fig:ibConcepts\] reviews the basic elements of an InfiniBand network. A hardware host channel adapter (HCA) and the software library and driver together maintain at least one [*queue pair*]{} and a [*completion queue*]{} on each node. The queue pair consists of a send queue on one node and a receive queue on a second node. In a bidirectional connection, each node will contain both a send queue and a receive queue. Sending a message across a queue pair causes an entry to be added to the completion queue on each node. However, it is possible to set a flag when posting a work request to the send queue, such that no entry is added to the completion queue on the “send” side of the connection. 
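The queue-pair and completion-queue behavior just described can be captured in a small toy model. All names below (`cq_t`, `qp_t`, `post_send`, `cq_poll`) are hypothetical stand-ins, not the libibverbs API (which uses `ibv_cq`, `ibv_qp`, `ibv_post_send`, and `ibv_poll_cq`); the sketch only illustrates that a send consumes a previously posted receive buffer, that completions land on both completion queues unless the sender suppresses its own, and that polling removes an entry from the queue:

```c
/* Toy model of queue pairs and completion queues; hypothetical names,
 * not the real libibverbs API. */
#define QDEPTH 16

typedef struct { int entries[QDEPTH]; int head, tail; } cq_t; /* completion queue */
typedef struct { cq_t *send_cq, *recv_cq; int recv_posted; } qp_t;

static void cq_push(cq_t *cq, int wr_id) { cq->entries[cq->tail++ % QDEPTH] = wr_id; }

/* Polling removes the completion from the queue; returns 0 if the queue is empty. */
static int cq_poll(cq_t *cq, int *wr_id) {
    if (cq->head == cq->tail) return 0;
    *wr_id = cq->entries[cq->head++ % QDEPTH];
    return 1;
}

/* A send needs a receive buffer already posted on the remote side.
 * Completions are appended to both completion queues, unless the work
 * request is unsignaled, in which case the sender-side entry is skipped. */
static int post_send(qp_t *qp, int wr_id, int signaled) {
    if (qp->recv_posted <= 0) return -1;  /* application error: no receive posted */
    qp->recv_posted--;
    if (signaled) cq_push(qp->send_cq, wr_id);
    cq_push(qp->recv_cq, wr_id);
    return 0;
}
```

The destructive nature of `cq_poll` is the property that matters for checkpointing: once a completion has been polled, it is gone from the hardware queue, so a plugin that drains the queue at checkpoint time must buffer unseen completions on behalf of the application.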
Although not explicitly introduced as a standard, libibverbs (provided by the Linux OFED distribution) is the most commonly used InfiniBand interface library. We will describe the model in terms of the functions prefixed by [ibv\_]{} for the [*verbs library*]{} (libibverbs). Many programs also use OFED’s convenience functions, prefixed by [rdma\_]{}. OFED also provides an optional library, librdmacm (RDMA connection manager) for ease of connection set-up and tear-down in conjunction with the verbs interface. Since this applies only to set-up and tear-down, this library does not affect the ability to perform transparent checkpoint-restart. We assume the reliable connection model (end-to-end context), which is by far the most commonly used model for InfiniBand. There are two models for the communication: - Send-receive model - RDMA (remote DMA) model (often employed for efficiency, and serving as the inspiration for the one-sided communication of the MPI-2 standard) Our InfiniBand plugin supports both models, and a typical MPI implementation can be configured to use either model. ### Send-Receive model of InfiniBand communication {#sec:sendReceiveModel} We first describe the steps in processing the send-receive model for InfiniBand connection. It may be useful to examine Figure \[fig:ibConcepts\] while reading the steps below. 1. Initialize a hardware context, which causes a buffer in RAM to be allocated. All further operations are with respect to this hardware context. 2. Create a protection domain that sets the permissions to determine which computers may connect to it. 3. Register a memory region, which causes the virtual memory to be pinned to a physical address (so that the operating system will not page that memory in or out). 4. A completion queue is created for each of the sender and the receiver. This completion queue will be used later. 5. Create a queue pair (a send queue and a receive queue) associated with the completion queue. 6. 
An end-to-end connection is created between two queue pairs, with each queue pair associated with a port on an HCA adapter. The sender and receiver queue pair information (several ids) is exchanged, typically using either TCP (through a side channel that is not InfiniBand), or an rdmacm library whose API is transport-neutral. 7. The receiver creates a work request and posts it to the receive queue. (One can post multiple receive buffers in advance.) 8. The sender creates one or more work requests and posts them to the send queue. 9. \[step:postReceive\] [*NOTE:*]{} The application must ensure that a receive buffer has been posted before it posts a work request to the send queue. It is an application error if this is not the case. 10. The transfer of data now takes place between a posted buffer on the send queue and a posted buffer on the receive queue. The posted send and receive buffers have now been used up, and further posts are required for further messages. 11. \[step:completion\] Upon completion, work completions are generated by the hardware and appended to each of the completion queues, one queue on the sender’s node and one queue on the receiver’s node. 12. \[step:polling\] The sender and receiver each poll the completion queue until a work completion is encountered. (A blocking request for work completion also exists as an alternative. A blocking request must be acknowledged on success.) 13. Polling causes the work completion to be removed from the completion queue. Hence, further polling will eventually see further completion events. Both blocking and non-blocking versions of the polling calls exist. We also remark that a work request (a WQE or Work Queue Entry) points to a list of scatter/gather elements, so that the data of the message need not be contiguous. ### RDMA model of InfiniBand communication The RDMA model is similar to the send-receive model. However, in this case, one does not post receive buffers.
The data is received directly in a memory region. An efficient implementation of MPI’s one-sided communication (MPI\_Put, MPI\_Get, MPI\_Accumulate), when implemented over InfiniBand, will typically employ the RDMA model [@JiangEtAl04]. As a consequence, Step \[step:postReceive\] of Section \[sec:sendReceiveModel\] does not appear in the RDMA model. Similarly, Steps \[step:completion\] and \[step:polling\] are modified in the RDMA model to refer to completion and polling solely for the send end of the end-to-end connection. Other variations exist, which are supported in our work, but not explicitly discussed here. In one example, an InfiniBand application may choose to send several messages without requesting a work completion in the completion queue. In these cases, an application-specific algorithm will follow this sequence with a message that includes a work completion. In a second example, an RDMA-based work request may request an immediate mode, in which the work completion is placed only in the remote completion queue and not in the local completion queue. DMTCP and Plugins {#sec:plugins} ----------------- DMTCP is a transparent, checkpoint-restart package that supports third-party plugins. The current work on InfiniBand support was implemented as a DMTCP plugin [@dmtcpPlugin]. The plugin is used here to virtualize the InfiniBand resources exposed to the end user, such as the queue pair struct (ibv\_qp) (see Figure \[fig:ibConcepts\]). This is needed since upon restart from a checkpoint image, the plugin will need to create a new queue pair for communication. As a result, the InfiniBand driver will create a new queue pair struct at a new address in user space, with new ids. Plugins provide three core features used here to support virtualization: 1. 
wrapper functions around functions of the InfiniBand library: these wrappers translate between virtual resources (seen by the target application) and real resources (seen within the InfiniBand library, driver and hardware). The wrapper function also records changes to the queue pair and other resources for later replay during restart. 2. event hooks: these hooks are functions within the plugin that DMTCP will call at the time of checkpoint and restart. Hence, the plugin is notified at the time of checkpoint and restart, so as to update the virtual-to-real translations, to recreate the network connection upon restarting from a checkpoint image, and to replay some information from the logs. 3. \[step:publishSubscribe\] a publish/subscribe facility: to exchange ids among plugins running on the different computer nodes whenever new device connections are created. Examples of such ids are local and remote queue pair numbers and remote keys of memory regions. Algorithm {#sec:algorithm} ========= Figure \[fig:qp\] presents an overview of the virtualization of a queue pair. This is the most complex of the subsystems being checkpointed. In overview, observe that the DMTCP plugin library interposes between most calls from the target application to the InfiniBand ibverbs library. This allows the DMTCP InfiniBand plugin to intercept the creation of a queue pair by the InfiniBand kernel driver, and to create a shadow queue pair. The target application is passed a pointer only to the virtual queue pair created by the plugin. Thus, any further ibverbs calls to manipulate the queue pair will be intercepted by the plugin, and appropriate fields in the queue pair structure can be virtualized before the real ibverbs call. Similarly, any ibverbs calls to post to the send or receive queue or to modify the queue pair are intercepted and saved in a log.
This log is used for internal bookkeeping by the plugin, to model work requests as they evolve into entries on the completion queue. In particular, note that a call to ibv\_post\_send may request that no work completion be entered on the completion queue. ![\[fig:qp\] Queue pair resources and their virtualization. (The plugin keeps a log of calls to post to or to modify the queue pair.)](ibv-dev){width="0.9\columnwidth"} The Checkpoint-Restart Algorithm -------------------------------- As the user code makes calls to the verbs library, we will use DMTCP plugin wrapper functions around these library functions to interpose. Hence the user call goes first to our DMTCP plugin library. We then extract parameters describing how the resources were created, before passing on the call to the verbs library, and later passing back the return value. This allows us to recreate semantically equivalent copies of those same resources on restart [*even if we restart on a new computer*]{}. In particular, we record any calls to [modify\_qp]{} and to [modify\_srq]{}. On restart, those calls are replayed in order to place the corresponding data structures in a semantically equivalent state to pre-checkpoint. While the description above appears simple, several subtleties arise, encapsulated in the following principles. #### Principle 1: Never let the user see a pointer to the actual InfiniBand resource. A verbs call that creates a new InfiniBand resource will typically create a struct, and return a pointer to that struct. We will call this struct created by the verbs library a [*real struct*]{}. If the end user code creates an InfiniBand resource, we interpose to copy that struct to a new [*shadow struct*]{}, and then pass back to the end user the pointer to this shadow struct. Some examples of InfiniBand resources for which this is done are: a context, a protection domain, a memory region, and a queue pair.
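A minimal sketch of the shadow-struct idea in Principle 1, with hypothetical names (`real_qp_t`, `shadow_qp_t`, `create_qp_wrapper`, `restart_qp`): the real plugin wraps libibverbs calls such as `ibv_create_qp`, and the `hidden_device_state` field stands in for the undocumented device-dependent state that only the verbs library may touch.

```c
#include <stdlib.h>

/* Illustrative model only; not the real plugin code. */
typedef struct { int qp_num; int hidden_device_state; } real_qp_t; /* owned by the library */
typedef struct { int qp_num; real_qp_t *real; } shadow_qp_t;       /* seen by the user */

static int next_qp_num = 100;  /* stands in for ids assigned by the hardware */

/* Wrapper around queue-pair creation: copy the documented fields into a
 * shadow struct and hand only the shadow back to the user. */
static shadow_qp_t *create_qp_wrapper(void) {
    real_qp_t *real = malloc(sizeof *real);
    real->qp_num = next_qp_num++;
    real->hidden_device_state = 42;
    shadow_qp_t *shadow = malloc(sizeof *shadow);
    shadow->qp_num = real->qp_num;   /* virtual id equals real id initially */
    shadow->real = real;
    return shadow;
}

/* On restart, the plugin recreates the real resource; the shadow struct,
 * and hence the user's pointer and virtual qp_num, stay unchanged. */
static void restart_qp(shadow_qp_t *shadow) {
    free(shadow->real);
    real_qp_t *real = malloc(sizeof *real);
    real->qp_num = next_qp_num++;    /* hardware picks a new id after restart */
    real->hidden_device_state = 42;
    shadow->real = real;
}
```

Because the user holds only the shadow pointer, every post-restart call can be translated from the stable virtual id to the freshly assigned real id inside the wrapper.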
The reason for this is that many implementations of InfiniBand libraries contain additional undocumented fields in these structs, in addition to those documented by the corresponding “man page”. When we restart after checkpoint, we cannot pass the original pre-checkpoint struct to the verbs library. The undocumented (hidden) fields would not match the current state of the InfiniBand hardware on restart. (New device-dependent ids will be in use after restart.) So, on restart, we create an entirely new InfiniBand resource (using the same parameters as the original). This new struct should be semantically equivalent to the pre-checkpoint original, and the hidden fields will correspond to the post-restart state of the hardware. One can think of this as a form of virtualization. The user is passed a pointer to a [*virtual struct*]{}, the shadow struct. The verbs library knows only about the [*real struct*]{}. So, we will guarantee that the verbs library only sees real structs, and that the end user code will only see virtual structs. To do this, we interpose our DMTCP plugin library function if a verbs library function refers to one of these structs representing InfiniBand resources. If the end user calls a verbs library function that returns a pointer to a real struct, then our interposition will replace this and return a pointer to a corresponding virtual struct. If the user code passes an argument pointing to a virtual struct, we will replace it by a pointer to a real struct before calling the verbs library function. #### Principle 2: Inline functions almost always operate through pointers to possibly device-dependent functions. Add wrappers around the pointers, and not the inline functions. The previous principle depends on wrapper functions that interpose in order to translate between real and virtual structs. But in the OFED ibverbs implementation, some of the apparent library calls to the verbs library are in fact inline functions. 
A DMTCP plugin cannot easily interpose on inline functions. Luckily, these inline functions are often associated with possibly device-dependent functions. So, the OFED software design expands the inline functions into calls to pointers to other functions. Those pointers can be found through the [ops]{} field of a [struct ibv\_context]{}. The [ops]{} field is of type [struct ibv\_context\_ops]{} and contains the function pointers. This gives us access, and we modify the function pointers to point to our own functions, which can then interpose and finally call the original function pointers created by the verbs library. #### Principle 3: Carry out bookkeeping on posts of work queue entries to the send and receive queue. As work requests are entered onto a send queue or receive queue, the wrapper functions of the DMTCP plugin record those work requests (which have now become work queue entries). When the completion queue is polled, if a completion event corresponding to that work queue entry is found, then the DMTCP plugin records that the entry has been destroyed. At the time of checkpoint, there is a log of those work queue entries that have been posted and not yet destroyed. At the time of restart, the send and receive queues will initially be empty. So, those work queue entries are re-posted to their respective queues. (In the case of resume, the send and receive queues continue to hold their work queue entries, and so no special action is necessary.) #### Principle 4: At the time of checkpoint, “drain” the completion queue of its completion events. At the time of checkpoint, and after all user threads have been quiesced, the checkpoint thread polls the completion queue for remaining completion events not yet seen by the end user code. A copy of each completion event seen is saved by the DMTCP plugin. Note that we must drain the completion queue for each of the sender and the receiver. 
Recall also that the verbs library function for polling the completion queue will also remove the polled completion event from the completion queue as it passes that event to the caller. #### Principle 5: At the time of restart or resume, “refill” a virtual completion queue. At the time of restart or resume and before any user threads have been re-activated, we must somehow refill the completion queue, since the end user has not yet seen the completion events from the previous principle. To do this, the DMTCP plugin stores the completion events of the previous principle in its own private queue. The DMTCP plugin library then interposes between any end user calls to a completion queue and the corresponding verbs library function. If the end user polls the completion queue, the DMTCP wrapper function passes back to the end user the plugin’s private copy of the completion events, and the verbs library function for polling is never called. Only after the private completion queue becomes empty are further polling calls passed on to the verbs library function. Hence, the plugin’s private queue becomes part of a [*virtual completion queue*]{}. #### Principle 6: Any InfiniBand messages still “in flight” can be ignored. If data from an InfiniBand message is still in flight (has not yet arrived in the receive buffer), then InfiniBand will not generate a completion event. Note that the InfiniBand hardware may continue to transport the data of a message, and even generate a completion event [*after all user threads have been quiesced for checkpoint*]{}. Nevertheless, a simple rule operates. If our checkpoint thread has not seen a completion event that arrived late, then we will not have polled for that completion event. Therefore, our bookkeeping in Principle 3 will not have removed the send or receive post from our log. Further, this implies that the memory buffers will continue to have the complete data, since it was saved on checkpoint and restored on restart. 
Therefore, upon restart (which implies a fresh, empty completion queue), the checkpoint thread will issue another send or receive post (again following the logic of Principle 3). #### Remark. Blocking requests for a completion event ([ibv\_get\_cq\_event]{}) and for shared receive queues create further issues. While those details add some complication, their solution is straightforward and is not covered here. Virtualization of InfiniBand Ids on Restart {#sec:virtualization} ------------------------------------------- A number of InfiniBand objects and associated ids will change on restart. All of these must be virtualized. Among these objects and ids are ibv contexts, protection domains, memory regions (the local and remote keys (lkey/rkey) of the memory regions), completion queues, queue pairs (the queue pair number, qp\_num), and the local id of the HCA port being used (lid). Note that the lid of an HCA port will not change if restarting on the same host, but it may change when restarting on a new host, which may have been configured to use a different port. In all of the above cases, the plugin assigns a virtual id and maintains a translation table between virtual and real ids. The application sees only the virtual id. Any InfiniBand calls are processed through the plugin, where virtual ids are translated back to real ids. On restart, the InfiniBand hardware may assign new real ids for a given InfiniBand object. In this case, the real ids are updated within the translation tables maintained by the plugin. ### Virtualization of remote ids: rkey, qp\_num and lid However, a more difficult issue occurs in the case of remote memory keys (rkey), queue pair numbers (qp\_num) and local ids (lid). In all three cases, an InfiniBand application must pass these ids to a remote node for communication with the local node. The remote node will need the qp\_num and lid when calling [ibv\_modify\_qp]{} to initialize a queue pair that connects to the local node.
The remote node will need the rkey when calling [ibv\_post\_send]{} to send a message to the local node. Since the plugin allows the application to see only virtual ids, the application will employ a virtual id when calling [ibv\_modify\_qp]{} and [ibv\_post\_send]{}. The plugin will first replace the virtual id by the real id, which is known to the InfiniBand hardware. To do this, the plugin within each remote node must contain a virtualization table to translate all virtual ids into real ids. Next, we recall how a remote node received a virtual id in the first place. The InfiniBand specification solves this bootstrapping problem by requiring the application to pass these three ids to the remote node through some out-of-band mechanism. When the application employs this out-of-band mechanism, the remote node will “see” the virtual ids that the plugin passed back to the application upon completion of an InfiniBand call. The solution chosen for the InfiniBand plugin is to assign a virtual id which is the same as the real id at the time of the initial creation of the InfiniBand object. After restart, the InfiniBand hardware may change the real id. At the time of restart, the plugin uses the DMTCP coordinator and the publish-subscribe feature to exchange the new real ids, associated with a given virtual id. Since the application continues to see only the virtual ids, the plugin can continue to translate between virtual and real ids through any wrapper by which the application communicates to the InfiniBand hardware (see Figure \[fig:qp\]). (A subtle issue can arise if a queue pair or memory region is created after restart. This is a rare case. Although we have not seen this in the current work, Section \[sec:limitations\] discusses two possible solutions.) ### Virtualization of rkeys {#sec:rkey} Next, the case of rkeys (remote memory region keys) poses a particular problem that does not occur for queue pair numbers or local ids.
This is because an rkey is guaranteed unique by InfiniBand only with respect to the protection domain within which it was created. Thus, if a single InfiniBand node has received rkeys from many remote nodes, then the rkeys for two different remote nodes may conflict. Normally, InfiniBand can resolve this conflict because a queue pair must be specified in order to send or receive a message. The local queue pair number determines a unique queue pair number on the remote node. The remote queue pair number then uniquely determines an associated protection domain $pd$. With the remote $pd$, all rkeys are unique. Hence, the InfiniBand driver on the remote node uses the ($pd$, rkey) pair to determine a unique memory address on the remote node. In the case of the InfiniBand plugin, the vrkey and rkey are identical if no restart has taken place. (It is only after restart that the rkey may change for a given vrkey.) Hence, prior to the first checkpoint, translation from vrkey to rkey is trivial. After a restart, the InfiniBand plugin must employ a strategy motivated by that of the InfiniBand driver. In a call to [ibv\_post\_send]{}, the target application will provide both a virtual queue pair number and a virtual rkey (vrkey). Unlike InfiniBand, the plugin must translate the vrkey into the real rkey on the local node. However, during a restart, each node has published its locally generated rkey, the corresponding $pd$ (as a globally unique id; see above), and the corresponding vrkey. Similarly, each node has published the virtual queue pair number and corresponding $pd$ for any queue pair generated on that node. Each node has also subscribed to the above information published by all other nodes. Hence, the local node is aware of the following through publish-subscribe during restart: $$\begin{aligned} &{\rm (qp\_num, pd)} \\ &{\rm (vrkey, pd, rkey)}\end{aligned}$$ The call to [ibv\_post\_send]{} provided the local virtual qp\_num and the vrkey.
This allows us to translate into the remote virtual qp\_num, and hence the remote real qp\_num. From these we derive the globally unique $pd$ using the first tuple above. The $pd$ and $vrkey$ together then allow us to use the second tuple to derive the necessary rkey, to be used when calling the InfiniBand hardware. Limitations {#sec:limitations} =========== Recall that DMTCP copies and restores all of user-space memory. In reviewing Figure \[fig:qp\], one notes that the user-space memory includes a low-level device-dependent driver library. If, for example, one checkpoints on a cluster partition using Intel/QLogic, and if one restarts on a Mellanox partition, then the Mellanox low-level driver will be missing. This presents a restriction for heterogeneous computing centers in the current implementation. In future work, we will consider one of two alternative implementations. First, it is possible to implement a generic “stub” driver library, which can then dispatch to the appropriate device-dependent library. Second, it is possible to force the InfiniBand library to re-initialize itself by restoring the pre-initialization version of the InfiniBand library data segment, instead of the data segment as it existed just prior to checkpoint. This will cause the InfiniBand library to appear to be uninitialized, and it will re-load the appropriate device-dependent library. Another issue is that the InfiniBand hardware may post completions to the sender and receiver at slightly different times. Thus, after draining the completion queue, the plugin waits for a fraction of a second, and then drains the completion queue one more time. This is repeated until no completions are seen in the latest period. Thus, correctness is affected only if the InfiniBand hardware posts corresponding completions relatively far apart in time, which is highly unlikely.
(Note that this situation occurs in two cases: InfiniBand send-receive mode; and InfiniBand RDMA mode in the special case of [ibv\_post\_send]{} with the immediate data flag set.)

In a related issue, when using the immediate data flag or the inline flag in the RDMA model, a completion is posted only on the receiving node. These flags are intended primarily for applications that send small messages. Hence, the current implementation sleeps for a small amount of time to ensure that such messages complete. A future implementation will use the DMTCP coordinator to complete the bookkeeping concerning messages sent and received, and will continue to wait if needed.

Next, the current implementation does not support unreliable connections (the analog of UDP for TCP/IP). Most target applications do not use this mode, and so this is not considered a priority.

A small memory leak on the order of hundreds of bytes per restart can occur because the memory for the queue pair struct and other data structures generated by the InfiniBand driver is not recovered. While techniques exist to correct this issue, it is not considered important, given the small amount of memory. (Note that this is not an issue for registered or pinned memory, since on restart, a fresh process does not have any pinned memory.)

Related Work {#sec:relatedWork}
============

The implementation described here can be viewed as interposing a shadow device driver between the end user’s code and the true device driver. This provides an opportunity to virtualize the fields of the queue pair struct seen by the end-user code. Thus, the InfiniBand driver is modelled without the need to understand its internals. This is analogous to the idea of using a shadow kernel device driver, due to Swift et al. [@SwiftEtAl04; @SwiftEtAl06]. In that work, after a catastrophic failure by the kernel device driver, the shadow device driver was able to take over and place the true device driver back in a sane state.
In a similar manner, restarting on a new host with a new HCA can be viewed as a catastrophic failure of the InfiniBand user-space library. Our virtual queue pair, along with the log of pending posts and modifications to the queue pair, serves as a type of shadow device driver. This allows us to restore the HCA hardware, the kernel driver, and the device-dependent user-space driver to a sane state.

This work is based on DMTCP (Distributed MultiThreaded CheckPointing) [@AnselEtAl09]. The DMTCP project began in 2004 [@CoopermanAnselMa05; @CoopermanAnselMa06]. With the development of DMTCP versions 2.x, it has emphasized the use of plugins [@dmtcpPlugin] for more modular, maintainable code.

Currently, BLCR [@BLCR06] is widely used as one component of an MPI dialect-specific checkpoint-restart service. This design is fundamentally different, since an MPI-specific checkpoint-restart service calls BLCR, whereas DMTCP transparently invokes an arbitrary MPI implementation. Since BLCR is kernel-based, it provides direct support only on one computer node. Most MPI dialects overcome this in their checkpoint-restart service by disconnecting any network connections, delegating to BLCR the task of a single-node checkpoint, and then reconnecting the network connections. Among the MPI implementations using BLCR are Open MPI [@OpenMPICheckpoint07] (CRCP coordination protocol), LAM/MPI [@CheckpointLAMMPI05b], MPICH-V [@Bouteiler06], and MVAPICH2 [@GaoEtAl06]. Other MPI implementations provide their own analogs [@GaoEtAl06; @LemarinierEtAl04; @CheckpointLAMMPI05b; @SankaranEtAl05]. In some cases, an MPI implementation may support an [*application-initiated*]{} protocol in combination with BLCR (such as SELF [@OpenMPICheckpoint07; @CheckpointLAMMPI05b]). For application-initiated checkpointing, the application writer guarantees that there are no active messages at the time of calling for a checkpoint.
Some recommended technical reports for further reading on the design of InfiniBand are [@BedeirIB10; @KerrIB11], along with the earlier introduction to the C API [@WoodruffEtAl05]. The report [@KerrIB11] was a direct result of the original search for a clean design for checkpointing over InfiniBand, and [@KerrEtAl11] represents a talk on interim progress.

In addition to DMTCP, there have been several packages for transparent, distributed checkpoint-restart of applications running over TCP sockets [@Cruz05; @LaadanEtAl07; @LaadanEtAl05; @Chpox]. The first two packages ([@Cruz05] and [@LaadanEtAl05; @LaadanEtAl07]) are based on the Zap package [@zap02].

The Berkeley language Unified Parallel C (UPC) [@UPC02] is an example of a PGAS (Partitioned Global Address Space) language. It runs over GASNet [@GASNet02] and evolved from experience with earlier languages for DSM (Distributed Shared Memory).

Experimental Evaluation {#sec:experiments}
=======================

The experiments are divided into four parts: scalability with more nodes in the case of Open MPI (Section \[sec:scalability\]); comparison between BLCR and DMTCP for MPI-based computations (Section \[sec:comparison\]); tests on Unified Parallel C (UPC) (Section \[sec:upc\]); and demonstration of migration from InfiniBand to TCP (Section \[sec:ib2tcp\]). The LU benchmark from the NAS Parallel Benchmarks was used throughout, except for the tests on UPC, for which no port of the LU benchmark was available.

Note that the default behavior of DMTCP is to compress checkpoint images using gzip. All of the experiments used the default gzip invocation, unless otherwise noted. As shown in Section \[sec:scalability\], gzip produced almost no additional compression, but the additional CPU time of running gzip was also less than 5% of the checkpoint and restart times. For DMTCP, all checkpoints are to a local disk (local to the given computer node), except as noted.
Open MPI/BLCR checkpointing invokes an additional step to copy all checkpoint images to a single, central node.

#### Experimental Configuration.

Two clusters were employed for the experiments described here. Sections \[sec:comparison\] and \[sec:upc\] refer to a cluster at the Center for Computational Research at the University at Buffalo. It uses SLURM as its resource manager, and a common NFS-mounted filesystem. Each node is equipped with either a Mellanox or QLogic (now Intel) HCA, although a given partition under which an experiment was run was always homogeneous (either all Mellanox or all QLogic). The operating system is RedHat Enterprise Linux 6.1 with Linux kernel version 2.6.32. Experiments were run using one core per computer. Hence, the number of MPI ranks was equal to the number of computers, and all MPI processes were on separate computer nodes. Since the memory per node was not fixed, we set the memory limit per CPU to 3 GB. Each CPU has a clock rate ranging from 2.13 GHz to 2.40 GHz, and the specific CPUs can vary.

In terms of software, we used Open MPI 1.6, DMTCP 2.1 (or a pre-release version in some cases), Berkeley UPC 2.16.2, and BLCR 0.8.3. Open MPI was run in its default mode, which typically used the RDMA model for InfiniBand, rather than the send-receive model. Although DMTCP version 2.1 was used, the plugin included some additional bug fixes appearing after that DMTCP release. For the applications, we used Berkeley UPC (Unified Parallel C) version 2.18.0. The NAS Parallel Benchmarks were version 3.1.

Tests of BLCR under Open MPI were run by using the Open MPI checkpoint-restart service [@HurseyEtAl09]. Tests of DMTCP for Open MPI did not use that checkpoint-restart service. Under both DMTCP and BLCR, each node produced a checkpoint image file in a local, per-node scratch partition. However, the time reported for BLCR includes the time to copy all checkpoint images to a single node, preventing a direct comparison.
In the case of BLCR, the current Open MPI checkpoint-restart service copies each local checkpoint image to a central coordinator process. Unfortunately, this serializes part of the parallel checkpoint. Hence, checkpoint times for BLCR are not directly comparable to those for DMTCP. In the case of DMTCP, all tests were run using the DMTCP default parameters, which include dynamic gzip compression. Gzip compression yields little compression for numerical data, while adding CPU time to the checkpoint. Further, DMTCP checkpoints to a local “tmp” directory on each node. BLCR checkpoints to a local “tmp” directory, and then the Open MPI service copies the images to a central node that saves them in a different “tmp” directory.

Scalability of InfiniBand Plugin {#sec:scalability}
--------------------------------

Scalability tests for the DMTCP plugin are presented, using Open MPI as the vehicle for these tests. All tests in this section were run at the Massachusetts Green High-Performance Computing Center (MGHPCC), on Intel Xeon E5-2650 CPUs running at 2 GHz. Each node is dual-CPU, for a total of 16 cores per node. It employs Mellanox HCAs. In addition to the front-end InfiniBand network, there is a Lustre back-end network. All checkpoints are written to a local disk, except for a comparison with Lustre, described later in Table \[tbl:lustre\].

Table \[tbl:scalability\] presents a study of scalability for the InfiniBand plugin. The NAS MPI test for LU is employed. For a given number of processes, each of classes C, D, and E is tested, provided that the running time for the test is of reasonable length. The overhead when using DMTCP is analyzed further in Table \[tbl:overhead\].
----------- ----------- ------------- -------------
NAS         Number of   Runtime (s)   Runtime (s)
benchmark   processes   (natively)    (w/ DMTCP)
----------- ----------- ------------- -------------
LU.C        64          18.5          21.7
LU.C        128         11.5          16.1
LU.C        256         7.7           12.8
LU.C        512         6.6           11.9
LU.C        1024        6.2           13.0
LU.D        64          292.6         298.0
LU.D        128         154.9         161.6
LU.D        256         89.0          94.8
LU.D        512         53.2          61.3
LU.D        1024        30.5          39.6
LU.D        2048        26.9          40.3
LU.E        512         677.2         691.6
LU.E        1024        351.6         364.9
LU.E        2048        239.3         256.4
----------- ----------- ------------- -------------

: \[tbl:scalability\] Demonstration of scalability: running times without DMTCP (natively) and with DMTCP

In Table \[tbl:overhead\], the overhead derived from Table \[tbl:scalability\] is decomposed into two components: startup overhead and runtime overhead. Given a NAS parallel benchmark, the total overhead is the difference between the runtime with DMTCP and the native runtime (without DMTCP). Consider a fixed number of processes on which two different classes of the same benchmark are run. For example, given the native runtimes for two different classes of the LU benchmark (e.g., $t_1$ for LU.C and $t_2$ for LU.D), and the total overhead in each case ($o_1$ and $o_2$), one can derive an assumed startup overhead $s$ in seconds and a runtime overhead ratio $r$, based on the formulas: $$\begin{aligned} &o_1 = s + r t_1 \\ &o_2 = s + r t_2.\end{aligned}$$

--------------- --------- -------------- ----------------
\# processes    NAS       Startup        Slope (runtime
(running LU)    classes   overhead (s)   overhead in %)
--------------- --------- -------------- ----------------
64              C, D      3.1            0.8
128             C, D      4.4            1.5
256             C, D      5.0            0.9
512             D, E      7.6            1.0
1024            D, E      8.7            1.3
2048            D, E      12.9           1.7
--------------- --------- -------------- ----------------

: \[tbl:overhead\] Analysis of Table \[tbl:scalability\] showing the derived breakdown of DMTCP overhead into startup overhead and runtime overhead. (See analysis in text.)

Table \[tbl:overhead\] reports the derived startup overhead and runtime overhead using the formulas above.
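As a concrete check, the two-equation system above can be solved directly. The following sketch (the function name is ours, not part of DMTCP) applies it to the 64-process LU.C and LU.D rows of Table \[tbl:scalability\]:

```python
def decompose_overhead(t1, o1, t2, o2):
    """Solve o1 = s + r*t1 and o2 = s + r*t2 for the startup overhead
    s (seconds) and the runtime overhead ratio r."""
    r = (o2 - o1) / (t2 - t1)   # slope: overhead per second of native runtime
    s = o1 - r * t1             # intercept: fixed startup overhead
    return s, r

# 64-process rows of the scalability table:
# LU.C: 18.5 s natively vs. 21.7 s with DMTCP; LU.D: 292.6 s vs. 298.0 s
s, r = decompose_overhead(18.5, 21.7 - 18.5, 292.6, 298.0 - 292.6)
```

This reproduces the 64-process row of Table \[tbl:overhead\]: roughly 3.1 s of startup overhead and a 0.8% runtime overhead.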
In cases where three classes of the NAS LU benchmark were run for the same number of nodes, the largest two classes were chosen for the analysis. This decision was made to ensure that any timing perturbations in the experiment would be a small percentage of the native runtimes. The runtime overhead shown in Table \[tbl:overhead\] remains in a narrow range of 0.8% to 1.7%. The startup overhead grows as the cube root of the number of MPI processes.

Table \[tbl:ckpt-analysis\] below shows the effects on checkpoint time and checkpoint image size under several configurations. Note that the first three tests hold constant the number of MPI processes at 512. In this situation, the checkpoint size remains constant (to within the natural variability of repeated runs). Further, in all cases, the checkpoint time is roughly proportional to the total size of the checkpoint images on a single node. A checkpoint rate of between 20 MB/s and 27 MB/s was achieved when writing to local disk, with the faster rates occurring for 16 processes per node (on 16-core nodes).

----------- --------------- ---------- -----------
NAS         Number of       Ckpt       Ckpt
benchmark   processes       time (s)   size (MB)
----------- --------------- ---------- -----------
LU.E        128$\times$4    70.8       350
LU.E        64$\times$8     136.6      356
LU.E        32$\times$16    222.6      355
LU.E        128$\times$16   70.2       117
----------- --------------- ---------- -----------

: \[tbl:ckpt-analysis\] Checkpoint times and image sizes for the same NAS benchmark, under different configurations. The checkpoint image size is for a single MPI process.

Next, a test was run to compare checkpoint times when using the Lustre back-end storage versus the default checkpoint to a local disk. As expected, Lustre was faster. Specifically, Table \[tbl:lustre\] shows that checkpoint times were 6.5 times faster with Lustre, although restart times were essentially unchanged. Small differences in checkpoint image sizes and checkpoint times are part of normal variation between runs, and were always limited to less than 5%.
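The 20–27 MB/s figure can be recomputed from Table \[tbl:ckpt-analysis\]. A sketch (the function name is ours): multiply the per-process image size by the number of processes per node, and divide by the checkpoint time.

```python
def per_node_ckpt_rate(procs_per_node, image_mb, ckpt_time_s):
    """Aggregate write rate (MB/s) to the local disk of one node,
    where each process writes its own checkpoint image concurrently."""
    return procs_per_node * image_mb / ckpt_time_s

# Rows of the checkpoint-analysis table: (procs per node, image MB, seconds)
rows = [(4, 350, 70.8), (8, 356, 136.6), (16, 355, 222.6), (16, 117, 70.2)]
rates = [per_node_ckpt_rate(*row) for row in rows]
```

Each computed rate falls within the reported band (about 19.8, 20.8, 25.5, and 26.7 MB/s), with the fastest rates indeed occurring at 16 processes per node.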
------------ ------------ ---------- ----------
Disk type    Ckpt         Ckpt       Restart
             size (MB)    time (s)   time (s)
------------ ------------ ---------- ----------
local disk   356          232.3      11.1
Lustre       365          35.7       10.9
------------ ------------ ---------- ----------

: \[tbl:lustre\] Comparison of checkpoints to local disk versus the Lustre back-end. Each case was run for NAS LU (class E), and 512 processes (32 nodes $\times$ 16 cores per node). Each node was writing approximately $16\times 360 = 5.76$ GB of checkpoint images.

Finally, a test was run in which DMTCP was configured not to use its default gzip compression. Table \[tbl:gzip\], below, shows that this makes little difference both for the checkpoint image size and the checkpoint time. The checkpoint time is about 4% faster when gzip is not invoked.

------------- ----------- ---------- ----------
Program and   Ckpt        Ckpt       Restart
processes     size (MB)   time (s)   time (s)
------------- ----------- ---------- ----------
with gzip     117         70.2       23.5
w/o gzip      116         67.3       23.2
------------- ----------- ---------- ----------

: \[tbl:gzip\] Comparison of checkpointing with and without the use of gzip for on-the-fly compression by DMTCP.
----------- ------------ ------------- ------------- ------------- ---------- ---------- -------------
NAS         Number of    Runtime (s)   Runtime (s)   Runtime (s)   DMTCP      BLCR       DMTCP
benchmark   processes    (natively)    (w/ DMTCP)    (w/ BLCR)     Ckpt (s)   Ckpt (s)   Restart (s)
----------- ------------ ------------- ------------- ------------- ---------- ---------- -------------
LU C        8            224.7         229.0         240.9         7.6        16.8       2.3
            16           116.0         117.5         118.7         5.2        16.8       2.3
            32           61.0          64.2          64.8          3.8        16.2       2.1
            64           32.3          35.4          34.0          2.6        20.6       2.1
EP D        8            885.3         886.2         887.9         1.2        3.1        0.8
            16           442.3         447.2         448.3         1.3        3.4        1.2
            32           223.2         225.4         227.6         1.4        4.7        3.3
            64           115.9         118.2         122.0         1.6        8.2        1.8
BT C        9            224.3         227.9         227.4         13.3       26.9       3.9
            16           137.8         138.4         137.8         9.1        24.2       4.0
            25           79.3          79.7          81.2          6.4        25.5       3.6
            36           57.3          58.7          59.1          5.4        29.2       2.2
            64           31.3          32.3          33.6          3.9        33.8       2.3
SP C        9            234.5         238.3         238.0         10.3       23.6       4.0
            16           132.5         133.1         133.3         6.8        21.1       3.7
            25           77.8          80.1          79.0          5.8        22.4       1.9
            36           55.7          57.3          58.7          4.8        25.8       2.0
            64           33.4          33.7          31.1          3.1        34.1       2.2
----------- ------------ ------------- ------------- ------------- ---------- ---------- -------------

: \[tbl:comparison\] Runtimes and checkpoint-restart times for DMTCP and BLCR on the NAS Parallel Benchmarks under Open MPI

Comparison between DMTCP and BLCR {#sec:comparison}
---------------------------------

Table \[tbl:comparison\] shows the overall performance of DMTCP and BLCR. We chose Open MPI and the NAS Parallel Benchmarks for a test of performance across a broad test suite. For the sake of comparability with previous tests on the checkpoint-restart service of Open MPI [@HurseyEtAl09], we include the previously used NAS tests: LU C, EP D, BT C, and SP C. An analysis of the performance must consider both runtime overhead and times for checkpoint and restart.

#### Runtime overhead.

The runtime overhead is the overhead for running a program under a checkpoint-restart package as compared to running natively. No checkpoints are taken when measuring runtime overhead. Table \[tbl:comparison\] shows that neither DMTCP nor BLCR has significant runtime overhead. For longer-running programs, the runtime overhead is in the range of 1% to 2%.
A typical runtime overhead is in the range of 1 to 5 seconds, with 3 seconds being common. Since these overhead times do not correlate with the length of time during which the program was run, we posit that they reflect a constant overhead that is incurred primarily at program startup.

#### Checkpoint/Restart times.

Table \[tbl:comparison\] also shows checkpoint and restart times. These times are as reported by a central DMTCP coordinator, or by an Open MPI coordinator in the case of BLCR. Note that the restart time for DMTCP was typically under 4 seconds, even for larger computations using 64 computer nodes. The BLCR package did not report restart times.

Checkpoint times are particularly important for issues of fault tolerance, since checkpoints are by far the more common operation. Surprisingly, the checkpoint time for DMTCP falls with an increasing number of nodes (except in the case of EP: [*Embarrassingly Parallel*]{}), while the checkpoint time for BLCR increases slowly with an increasing number of nodes. For DMTCP, checkpoint times are smaller with an increasing number of nodes because the NAS benchmarks leave fixed the total amount of work. Thus, twice as many nodes implies half as much work per node. In the case of most of the NAS benchmarks, the reduced work per node seems to imply a reduced checkpoint image size (again, except for EP). This accounts for the faster checkpoint times as the number of nodes increases. In contrast, under the checkpoint-restart service of Open MPI, the times for checkpointing with Open MPI/BLCR are often roughly constant. We estimate that this time is dominated by the last phase, in which Open MPI copies the local checkpoint images to a single, central node.

Checkpointing under UPC: a non-MPI Case Study {#sec:upc}
---------------------------------------------

Our study of Berkeley UPC is based on a port of the NAS parallel benchmarks at George Washington University [@upc-nas].
Since that port does not support the more communication-intensive LU benchmark, we instead test on the FT B NAS benchmark, as ported to run on UPC. We compiled the Berkeley UPC package to run natively over InfiniBand, so that it did not use MPI. Table \[tbl:upc\] shows the native runtimes for FT B under UPC; these may be compared to the times for MPI in Table \[tbl:comparison\]. DMTCP total run-time overhead ranges from 4% down to less than 1%. We posit that the higher overhead of 4% is due to the extremely short running time in the case of 16 processes, and is explained by significant startup overhead, consistent with Table \[tbl:overhead\]. Note that BLCR could not be tested in this regime, since BLCR depends on the Open MPI checkpoint-restart service for its use in distributed computations.

------------ -------------- ------------- -------- ---------
Number of    Runtime        Runtime w/    Ckpt     Restart
processes    natively (s)   DMTCP (s)     (s)      (s)
------------ -------------- ------------- -------- ---------
4            123.5          124.2         27.6     9.7
8            64.2           65.1          21.9     8.9
16           34.2           35.5          16.3     7.0
------------ -------------- ------------- -------- ---------

: \[tbl:upc\] Runtime overhead and checkpoint-restart times for UPC FT B running under DMTCP

Migrating from InfiniBand to TCP sockets {#sec:ib2tcp}
----------------------------------------

Some traditional checkpoint-restart services, such as that for Open MPI [@HurseyEtAl09], offer the ability to checkpoint over one network, and restart on a second network. This is especially useful for interactive debugging. A set of checkpoint images from an InfiniBand-based production cluster can be copied to an Ethernet/TCP-based debug cluster. Thus, if a bug is encountered after running for hours on the production cluster, the most recent checkpoints can be used to restart on the debug cluster under a symbolic debugger, such as GDB.
The approach of this paper offers the added feature that the Linux kernel on the debug cluster does not have to be the same as on the production cluster.

### IB2TCP: Ping-pong

We tested the IB2TCP plugin with a communication-intensive ping-pong example InfiniBand program from the OFED distribution. In this case, a smaller development cluster was used, with 6-core Xeon X5650 CPUs and a Mellanox HCA for InfiniBand. Gigabit Ethernet was used for the Ethernet portion. Parameters were set to run over 100,000 iterations.

----------------------- ---------- ---------------
Environment             Transfer   Transfer rate
                        time (s)   (Gigabits/s)
----------------------- ---------- ---------------
IB (w/o DMTCP)          0.9        7.2
DMTCP/IB (w/o IB2TCP)   1.2        5.7
DMTCP/IB2TCP/IB         1.4        4.6
DMTCP/IB2TCP/Ethernet   65.7       0.1
----------------------- ---------- ---------------

: \[tbl:ib2tcp\] Transfer time variations when using two nodes on InfiniBand versus Gigabit Ethernet hardware, as affected by the DMTCP InfiniBand plugin and the IB2TCP plugin; 100,000 iterations of ping-pong, for a total transfer size of 819 MB

Table \[tbl:ib2tcp\] presents the results. These results represent a worst case, since a typical MPI program is not as communication-intensive as the ping-pong test program. The times with IB2TCP were further degraded due to the current implementation’s use of an in-memory copy. We hypothesize that the transfer rate under Gigabit Ethernet was also limited by the speed of the kernel implementation.

### IB2TCP: NAS LU.A.2 Benchmark

Next, a preliminary test of IB2TCP for Open MPI, on only two nodes, is presented. This test was conducted on the MGHPCC cluster, as described in Section \[sec:scalability\]. Table \[tbl:lu-ib2tcp\] shows various times for the NAS LU.A.2 benchmark when migrating from InfiniBand to Ethernet using the IB2TCP plugin. As can be seen, the InfiniBand and IB2TCP plugins do not add any considerable overhead to the run time of the application.
However, when the process is migrated from InfiniBand to Ethernet, the runtime increases drastically. A runtime overhead of 67% is seen when the computation is restarted on two nodes. The runtime overhead further increases to 142% when the entire computation is restarted on a single node.

---------------------------- -------------
Environment                  Runtime (s)
---------------------------- -------------
IB (w/o DMTCP)               26.61
DMTCP/IB (w/o IB2TCP)        27.81
DMTCP/IB2TCP/IB              27.38
DMTCP/IB2TCP/Ethernet        45.75
(restart on two nodes)
DMTCP/IB2TCP/Ethernet        66.34
(restart on a single node)
---------------------------- -------------

: \[tbl:lu-ib2tcp\] Runtime variations for the LU.A.2 benchmark when using two nodes on InfiniBand versus Gigabit Ethernet hardware, as affected by the DMTCP InfiniBand plugin and the IB2TCP plugin. The runtimes do not include the checkpoint and restart times.

Future Work {#sec:futureWork}
===========

The current work supports only a homogeneous InfiniBand architecture. For example, one cannot checkpoint on a node using an Intel/QLogic HCA and restart on a different node that uses a Mellanox HCA. This is because the checkpoint image already includes a low-level library to support a particular HCA. Future work will extend this implementation to support heterogeneous InfiniBand architectures by re-loading the low-level library, as described in Section \[sec:limitations\].

Further, the experimental timings reported here did not employ any particular tuning techniques. There are opportunities to reduce the run-time overhead by reducing the copying of buffers.

#### Avoiding conflict of virtual ids after restart:

In typical MPI implementations, memory region keys (rkeys), queue pair numbers (qp\_nums), and local ids (lids) are all exchanged out-of-band. Since virtualized ids are passed to the target application, it is the virtualized ids that are passed out-of-band. The remote plugin is then responsible for translating the virtual ids to the real ids known to the InfiniBand hardware on succeeding InfiniBand calls.
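The translation layer described above amounts to one small table per id type. The following sketch models it; the class and method names are illustrative, not the plugin's actual API.

```python
class IdVirtualizer:
    """Model of the plugin's virtual-to-real id translation.  At
    creation time the virtual id equals the real id; after restart the
    hardware may assign a new real id, but the virtual id seen by the
    target application never changes."""

    def __init__(self):
        self._real = {}                 # virtual id -> current real id

    def on_create(self, real_id):
        """Record a newly created object; return its virtual id."""
        self._real[real_id] = real_id   # virtual id := real id initially
        return real_id

    def on_restart(self, virtual_id, new_real_id):
        """The driver re-created the object with a new real id."""
        self._real[virtual_id] = new_real_id

    def translate(self, virtual_id):
        """Look up the real id to pass to the InfiniBand hardware."""
        return self._real[virtual_id]
```

Since the application only ever holds virtual ids, a changed real id after restart is invisible to it; each succeeding InfiniBand call goes through `translate`.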
The current implementation ensures that this is possible, and that there are no conflicts prior to the first checkpoint, as described in Section \[sec:virtualization\]. In typical InfiniBand applications, queue pairs are created only during startup, and so all rkeys, qp\_nums, and lids will be assigned prior to the first checkpoint. However, it is theoretically possible for an application to create a new queue pair or memory region, or to query its local id, after the first restart. The current implementation assigns the virtual id to be the same as the real id at the time of the initial creation of the InfiniBand object. (After restart, the InfiniBand hardware may assign a different real id, but the virtual id for that object will remain the same.) If an object is created after restart, the real id assigned by InfiniBand may be the same as for an object created prior to checkpoint. This would create a conflict between the corresponding virtual ids.

Two solutions to this problem are possible. The simplest is to use DMTCP’s publish-subscribe feature to generate globally unique virtual rkeys, and to update a global table of virtual-to-real rkeys. In particular, one could use the existing implementation before the first checkpoint, and then switch to a publish-subscribe implementation after restart. A second solution is to choose the virtual rkeys in a globally unique manner, similar to the globally unique protection domain ids of the current plugin.

Conclusion {#sec:conclusion}
==========

A new approach to distributed transparent checkpoint-restart over InfiniBand has been demonstrated. This direct approach accommodates computations both for MPI and for UPC (Unified Parallel C). The approach uses a mechanism similar to that of a shadow device driver [@SwiftEtAl04; @SwiftEtAl06]. In tests on the NAS LU parallel benchmark, a run-time overhead of between 0.7% and 1.9% is found on a computation with up to 2,048 MPI processes.
Startup overhead is up to 13 seconds, and grows as the cube root of the number of MPI processes. Checkpoint times are roughly proportional to the total size of all checkpoint images on a single computer node. In one example, checkpoint times varied by a factor of 6.5 (from 232 seconds to 36 seconds), depending on whether checkpoint images were written to a local disk or to a faster, Lustre-based back-end. In each case, 16 MPI processes per node (512 processes in all) wrote a total of approximately 5.8 GB per node.

Further, the new approach provides a viable checkpoint-restart mechanism for running UPC natively over InfiniBand — something that previously did not exist. Finally, an IB2TCP plugin was demonstrated that allows users to checkpoint a computation on a production cluster under InfiniBand, and to restart on a smaller debug cluster under Ethernet — in a manner suitable for attaching an interactive debugger. This mode is analogous to the interconnection-agnostic feature of the Open MPI checkpoint-restart service [@HurseyEtAl09], but it has an added benefit in that the production cluster and the debug cluster do not have to have the same Linux kernel image.

Acknowledgment {#acknowledgment .unnumbered}
==============

We are grateful for the facilities provided at several institutions, which allowed testing over a variety of configurations. We would like to thank: Shawn Matott (University at Buffalo, development and benchmarking facilities); Henry Neeman (Oklahoma University, development facilities); Larry Owen and Anthony Skjellum (the University of Alabama at Birmingham, facilities based on NSF grant CNS-1337747); and the Massachusetts Green High Performance Computing Center (facilities for scalability testing). Dotan Barak provided helpful advice on the implementation of OpenFabrics InfiniBand. Jeffrey Squyres and Joshua Hursey provided helpful advice on the interaction of Open MPI and InfiniBand.
Artem Polyakov provided advice on using the DMTCP batch-queue (resource manager) plugin.

[10]{} J. Ansel, G. Cooperman, and K. Arya. [DMTCP]{}: Scalable user-level transparent checkpointing for cluster computations and the desktop. In [*Proc. of IEEE International Parallel and Distributed Processing Symposium (IPDPS-09, systems track)*]{}. IEEE Press, 2009. Published on CD; version also available at <http://arxiv.org/abs/cs.DC/0701037>; software available at <http://dmtcp.sourceforge.net>.

T. Bedeir. Building an [RDMA]{}-capable application with [IB]{} [V]{}erbs. Technical report, <http://www.hpcadvisorycouncil.com/>, August 2010. <http://www.hpcadvisorycouncil.com/pdf/building-an-rdma-capable-application-with-ib-verbs.pdf>.

D. Bonachea. [GASN]{}et specification, v1.1. Technical report [UCB]{}/[CSD]{}-02-1207, U. of California, Berkeley, October 2002. <http://digitalassets.lib.berkeley.edu/techreports/ucb/text/CSD-02-1207.pdf>.

A. Bouteiller, T. Herault, G. Krawezik, P. Lemarinier, and F. Cappello. [MPICH-V]{} project: a multiprotocol automatic fault tolerant [MPI]{}. , 20:319–333, 2006.

W. W. Carlson, J. M. Draper, D. E. Culler, K. Yelick, E. Brooks, and K. Warren. Introduction to [UPC]{} and language specification. Technical report [CCS]{}-tr-99-157, IDA Center for Computing Sciences, 1999. <http://upc.lbl.gov/publications/upctr.pdf>.

G. Cooperman, J. Ansel, and X. Ma. Adaptive checkpointing for master-worker style parallelism (extended abstract). In [*Proc. of 2005 IEEE Computer Society International Conference on Cluster Computing*]{}. IEEE Press, 2005. Conference proceedings on CD.

G. Cooperman, J. Ansel, and X. Ma. Transparent adaptive library-based checkpointing for master-worker style parallelism. In [*Proceedings of the 6$^{th}$ IEEE International Symposium on Cluster Computing and the Grid (CCGrid06)*]{}, pages 283–291, Singapore, 2006. IEEE Press.
Tutorial for [DMTCP]{} plugins, accessed Dec. 10, 2013. <http://sourceforge.net/p/dmtcp/code/HEAD/tree/trunk/doc/plugin-tutorial.pdf>.

J. Duell, P. Hargrove, and E. Roman. The design and implementation of [B]{}erkeley [L]{}ab’s [L]{}inux checkpoint/restart ([BLCR]{}). Technical Report LBNL-54941, Lawrence Berkeley National Laboratory, 2003.

T. El-Ghazawi and F. Cantonnet. [UPC]{} performance and potential: A [NPB]{} experimental study. In [*Proceedings of the 2002 ACM/IEEE Conference on Supercomputing*]{}, Supercomputing ’02, pages 1–26, Los Alamitos, CA, USA, 2002. IEEE Computer Society Press.

Q. Gao, W. Yu, W. Huang, and D. K. Panda. Application-transparent checkpoint/restart for [MPI]{} programs over [I]{}nfini[B]{}and. In [*ICPP ’06: Proceedings of the 2006 International Conference on Parallel Processing*]{}, pages 471–478, Washington, DC, USA, 2006. IEEE Computer Society.

[UPC]{} [NAS]{} parallel benchmarks. <http://threads.hpcl.gwu.edu/sites/npb-upc>, accessed Jan. 2014.

P. Hargrove and J. Duell. Berkeley [L]{}ab [C]{}heckpoint/[R]{}estart ([BLCR]{}) for [L]{}inux clusters. , 46:494–499, Sept. 2006.

J. Hursey, T. I. Mattox, and A. Lumsdaine. Interconnect agnostic checkpoint/restart in [O]{}pen [MPI]{}. In [*HPDC ’09: Proceedings of the 18th ACM international symposium on High performance distributed computing*]{}, pages 49–58, New York, NY, USA, 2009. ACM.

J. Hursey, J. M. Squyres, T. I. Mattox, and A. Lumsdaine. The design and implementation of checkpoint/restart process fault tolerance for [O]{}pen [MPI]{}. In [*Proceedings of the 21$^{st}$ IEEE International Parallel and Distributed Processing Symposium (IPDPS) / 12$^{th}$ IEEE Workshop on Dependable Parallel, Distributed and Network-Centric Systems*]{}. IEEE Computer Society, March 2007.

G. Janakiraman, J. Santos, D. Subhraveti, and Y. Turner. Cruz: Application-transparent distributed checkpoint-restart on standard operating systems.
In [*Dependable Systems and Networks (DSN-05)*]{}, pages 260–269, 2005. W. Jiang, J. Liu, H.-W. Jin, D. K. Panda, W. Gropp, and R. Thakur. High performance [MPI]{}-2 one-sided communication over [I]{}nfini[B]{}and. In [*CCGRID*]{}, pages 531–538, 2004. G. Kerr. Dissecting a small [I]{}nfini[B]{}and application using the [V]{}erbs [API]{}. arxiv:1105.1827v2 \[cs.dc\] technical report, arXiv.org, May 2011. G. Kerr, A. Brick, G. Cooperman, and S. Bratus. Checkpoint-restart: Proprietary hardware and the ‘spiderweb [API]{}’, July 8–10 2011. talk: abstract at <http://recon.cx/2011/schedule/events/112.en.html>; video at <https://archive.org/details/Recon_2011_Checkpoint_Restart>. O. Laadan and J. Nieh. Transparent checkpoint-restart of multiple processes for commodity clusters. In [*2007 USENIX Annual Technical Conference*]{}, pages 323–336, 2007. O. Laadan, D. Phung, and J. Nieh. Transparent networked checkpoint-restart for commodity clusters. In [*2005 IEEE International Conference on Cluster Computing*]{}. IEEE Press, 2005. P. Lemarinier, A. Bouteillerand, T. Herault, G. Krawezik, and F. Cappello. Improved message logging versus improved coordinated checkpointing for fault tolerant [MPI]{}. In [*CLUSTER ’04: Proceedings of the 2004 IEEE International Conference on Cluster Computing*]{}, pages 115–124, Washington, DC, USA, 2004. IEEE Computer Society. S. Osman, D. Subhraveti, G. Su, and J. Nieh. The design and implementation of [Z]{}ap: A system for migrating computing environments. In [*Prof. of 5$^{th}$ Symposium on Operating Systems Design and Implementation (OSDI-2002)*]{}, 2002. S. Sankaran, J. M. Squyres, B. Barrett, V. Sahay, A. Lumsdaine, J. Duell, P. Hargrove, and E. Roman. The [LAM/MPI]{} checkpoint/restart framework: System-initiated checkpointing. , 19(4):479–493, 2005. S. Sankaran, J. M. Squyres, B. Barrett, V. Sahay, A. Lumsdaine, J. Duell, P. Hargrove, and E. Roman. The [LAM/MPI]{} checkpoint/restart framework: System-initiated checkpointing. 
[^1]: This work was partially supported by the National Science Foundation under Grants OCI-0960978 and OCI-1229059.
--- abstract: 'The structural composition and the properties of the first quantum spin-orientation–dependent correction to the synchrotron radiation power are discussed. On the basis of spin mass renormalization it is shown that, in the conventional sense, the Thomas precession is not a source of relativistic radiation. This conclusion is in agreement with well-known statements on the spin dependence of mass and the purely kinematic origin of the Thomas precession.' author: - | V.A. Bordovitsyn[^1], and A.N. Myagkii [^2]\ *Tomsk State University, Tomsk 634050, Russia* title: 'Is the Thomas precession a source of SR power?' --- PACS: 03.65.Sq, 41.60.Ap, 61.80.Az Keywords: Thomas precession, classical and quantum synchrotron radiation theory, spin light, mixed synchrotron radiation, spin mass renormalization. Introduction ============ At present, an analysis of the first quantum corrections to the synchrotron radiation (SR) power is especially topical: at the ultrahigh electron energies reached in modern accelerators and storage rings, radiation effects begin to significantly influence the dynamics and stability of electron beams. We consider here the polarization and spectral-angular properties of the first quantum spin-orientation–dependent correction to the synchrotron radiation power $$W=W_{SR}(1-\zeta\xi+\ldots), \label{1}$$ where $W_{SR}=\dfrac{2}{3}\dfrac{e_0^2c}{\rho^2}\gamma^4= \dfrac{2}{3}\dfrac{e_0^2\omega_0^2}{c}\gamma^4$ is the SR power, $\rho=\dfrac{m_0c^2}{e_0H}\gamma$ is the orbit radius, $\omega_0=\dfrac{e_0H}{m_0c\gamma}$ is the frequency of electron rotation, and $\zeta=\pm 1$.
The dimensionless parameter $\xi$ can be represented in several equivalent ways: $$\xi=\frac{3}{2}\frac{\hbar\gamma^2}{m_0c\rho}= \frac{3}{2}\frac{\hbar\omega_0}{m_0c^2}\gamma^2= \frac{3}{2}\frac{H}{H^\ast}\gamma=3\frac{\mu_0}{e_0\rho}\gamma^2= \frac{3}{2}\frac{\hbar}{m_0c^3}\sqrt{w_{\mu}w^{\mu}}=inv,$$ where $H^\ast=\dfrac{m_0^2c^2}{e_0\hbar}$ is Schwinger’s magnetic field, $\mu_0=\dfrac{e_0\hbar}{2m_0c}$ is the Bohr magneton, $w^{\mu}=\dfrac{d^2r^{\mu}}{d\tau^2}$ is the four-dimensional electron acceleration, and $e=-e_0$ ($e_0>0$) is the electron charge. The first quantum correction was calculated theoretically by I.M. Ternov, V.G. Bagrov, and R.A. Rzaev (1964) [@1]. The procedure for experimental observation of the spin dependence of the SR power was proposed by V.N. Korchuganov, G.N. Kulipanov et al. in 1977, and the experiment itself was described in detail in [@2]. In 1983, the first quantum spin-orientation–dependent correction to the SR power was experimentally detected at the Institute of Nuclear Physics of the Siberian Branch of the USSR Academy of Sciences (Novosibirsk) [@3]. Later it was found [@4] that the correction is not simple in its structural composition. In the semiclassical theory, it consists of two significantly different components $$W_{\rm em}=-\zeta\xi W_{SR}=W_{\rm emL}+W_{\rm emTh},$$ where $W_{\rm emL}$ or $W_{\rm emTh}$ is the spontaneous radiation power determined by the Larmor or Thomas precession of the electron spin, respectively. However, the standard classical radiation theory of the relativistic magnetic moment confirms this result only for the Larmor precession and does not include the contribution of the Thomas precession. At the same time, all the properties of mixed $\rm emL$-radiation in the classical and semiclassical theories completely coincide [@5]. Here we try to answer two questions: what is the $\rm emTh$-radiation, and why is it absent in the classical radiation theory of the magnetic moment?
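To get a sense of scale, $\xi$ can be evaluated numerically. The sketch below is purely illustrative: the beam energy (5 GeV) and orbit radius (10 m) are assumptions of ours, not parameters taken from the experiments cited above.

```python
# Order-of-magnitude estimate of xi = (3/2) * hbar * gamma^2 / (m0 * c * rho).
# Beam energy and orbit radius are illustrative assumptions, not data from the text.
hbar = 1.054571817e-34       # J s
m0 = 9.1093837015e-31        # kg, electron rest mass
c = 2.99792458e8             # m / s
eV = 1.602176634e-19         # J
E = 5e9 * eV                 # assumed beam energy: 5 GeV
rho = 10.0                   # assumed orbit radius in meters

gamma = E / (m0 * c**2)
xi = 1.5 * hbar * gamma**2 / (m0 * c * rho)
print(f"gamma = {gamma:.3e}, xi = {xi:.3e}")  # xi is of order 10^-6, so xi << 1
```

The smallness of $\xi$ at realistic energies is what makes the first-order correction in (\[1\]) the leading spin effect.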
Semiclassical analysis of mixed radiation ========================================= In this section, we use the relativistic semiclassical radiation theory (Jackson’s method [@6], see also [@7]). In comparison with the conventional quantum theory of radiation, the method is simpler and more transparent, yet it reproduces all the results of the quantum theory. In the semiclassical theory, the total interaction Hamiltonian has the form $$\hat{U}^{int}=\hat{U}^{int}_{\rm e}+\hat{U}^{int}_{\rm mL}+ \hat{U}^{int}_{\rm mTh}, \label{2}$$ where $\hat{U}^{int}_{\rm e}=e_0({\mbox{\boldmath$\beta$}},\tilde{{\mbox{\boldmath$A$}}})$ describes the interaction of the electron charge with the radiation field via the vector potential $\tilde{{\mbox{\boldmath$A$}}}$ (ignoring the recoil effects), while the other terms correspond to the interaction of the electron magnetic moment with the radiation fields $$\hat{U}^{int}_{mL}=\mu\left({\mbox{\boldmath$\sigma$}},\left\{\tilde{{\mbox{\boldmath$H$}}}- [{\mbox{\boldmath$\beta$}},\tilde{{\mbox{\boldmath$E$}}}]-\frac{\gamma}{\gamma+1}{\mbox{\boldmath$\beta$}}\left( {\mbox{\boldmath$\beta$}},\tilde{{\mbox{\boldmath$H$}}}\right)\right\}\right)= -\frac{\mu}{\gamma}\left\lgroup{\mbox{\boldmath$\sigma$}},\tilde{{\mbox{\boldmath$H$}}}_0\right\rgroup, \label{2a}$$ $$\hat{U}^{int}_{mTh}=\mu_0\frac{\gamma}{\gamma+1}\left({\mbox{\boldmath$\sigma$}}, \left[{\mbox{\boldmath$\beta$}},\left\{\tilde{{\mbox{\boldmath$E$}}}+ [{\mbox{\boldmath$\beta$}},\tilde{{\mbox{\boldmath$H$}}}]\right\}\right]\right)= \frac{\mu_0}{\gamma+1}\left\lgroup[{\mbox{\boldmath$\sigma$}},{\mbox{\boldmath$\beta$}}],\tilde{{\mbox{\boldmath$E$}}}_0 \right\rgroup.
\label{2b}$$ Here $\mu=({g}/{2})\mu_0$ is the total magnetic moment of the electron including the anomalous part, ${\mbox{\boldmath$\sigma$}}$ are the Pauli matrices, ${\mbox{\boldmath$\beta$}}={{\mbox{\boldmath$u$}}}/{c}$, ${\mbox{\boldmath$u$}}$ is the electron velocity, and $\tilde{{\mbox{\boldmath$E$}}}_0$ and $\tilde{{\mbox{\boldmath$H$}}}_0$ are the radiation fields in the rest frame of the electron. It follows from (\[2a\]) that $\hat{U}^{int}_{\rm mL}$ describes the interaction of the total magnetic moment with the magnetic field, whereas $\hat{U}^{int}_{\rm mTh}$ in (\[2b\]) corresponds to the interaction induced by the motion of the Bohr magneton. One can show that in the former case the Larmor precession of the spin occurs, whereas in the latter case the Thomas precession of the spin takes place. Physically, this situation is quite natural: the Dirac equation involves both interactions, whereas the anomalous part of the Dirac-Pauli equation involves only the $\rm mL$-interaction; in other words, the anomalous magnetic moment does not undergo the Thomas precession. Calculation of the matrix elements in the semiclassical theory shows that all the mixed radiation is related to transitions without a spin flip (see [@8]). Omitting the details of the calculations, we write out the spectral and angular distributions of the $\sigma$- and $\pi$-components of the mixed radiation: $$\frac{dW^{\sigma}_{\rm em}}{dy}=W_{SR}\zeta\xi\cos\nu\frac{9\sqrt{3}}{16\pi} \left\{\frac{g}{2}\frac{2}{3}y\int^{\infty}_{y}\!\!K_{1/3}(x)dx-2y \left(\frac{1}{3}\int^{\infty}_{y}\!\!K_{1/3}(x)dx+yK_{1/3}(y)\right) \right\},$$ $$\frac{dW^{\pi}_{\rm em}}{dy}=W_{SR}\zeta\xi\cos\nu\frac{3\sqrt{3}}{8\pi} \left(\frac{g}{2}-1\right)y\int^{\infty}_{y}\!\!K_{1/3}(x)dx \label{3a}$$ and $$\frac{dW^{\sigma}_{\rm em}}{d\chi}=W_{SR}\frac{35}{32}\left\{\frac{g}{2}\chi^2- (1+\chi^2)\right\}(1+\chi^2)^{-5/2},$$ $$\frac{dW^{\pi}_{\rm em}}{d\chi}=W_{SR}\frac{35}{32}\left(\frac{g}{2}-1\right) \chi^2(1+\chi^2)^{-9/2}.
\label{3b}$$ Here $x=\dfrac{1}{2}(1+\chi^2)^{3/2}y$, $y=\dfrac{2}{3}\dfrac{\rho\tilde{\omega}}{c\gamma^3}$, $\chi=\gamma\psi$, $\psi$ is the angle between the direction of radiation and the electron velocity, and $\tilde{\omega}$ is the radiation frequency. One can obtain the total radiation power by integrating (\[3a\]) over $y$ or (\[3b\]) over $\chi$. Taking into account the main term, which corresponds to the charge radiation without the recoil effects, we have $$W^{\sigma}=W_{SR}\left(\frac{7}{8}+ \zeta\xi\cos\nu\left(\frac{g}{2}-7\right)\frac{1}{6}\right), \quad W^{\pi}=W_{SR}\left(\frac{1}{8}+ \zeta\xi\cos\nu\left(\frac{g}{2}-1\right)\frac{1}{6}\right),$$ $$W=W^{\sigma}+W^{\pi}=W_{SR}\left(1+\zeta\xi\cos\nu\left( \frac{g}{2}-4\right)\frac{1}{3}\right){\left.\right\arrowvert}_{g=2}=W_{SR}\left(1 -\zeta\xi\cos\nu\right). \label{5}$$ In the first quantum correction to the SR power, we separate out the terms with the factor ${g}/{2}$, which correspond to the $\rm emL$-radiation; the remaining terms correspond to the $\rm emTh$-radiation. At $g=2$, we obtain the well-known result (\[1\]) for the Dirac electron [@1]. It should be noted that at $\nu=\pi/2$ (the spin oriented in the orbital plane), the mixed radiation is absent. Classical theory of mixed radiation =================================== In the classical theory, the mixed radiation is calculated on the basis of the general radiation theory of the relativistic magnetic moment (see [@8]). In this section, we use a somewhat different approach.
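Before turning to the classical computation, one can check the coefficient bookkeeping in (\[5\]) by exact symbolic arithmetic; the snippet below is an independent verification of ours, not part of the original derivation.

```python
from fractions import Fraction

# Each correction coefficient is stored as (a, b), meaning a*(g/2) + b.
sigma_corr = (Fraction(1, 6), Fraction(-7, 6))   # (g/2 - 7)/6 from W^sigma
pi_corr    = (Fraction(1, 6), Fraction(-1, 6))   # (g/2 - 1)/6 from W^pi

# The main terms 7/8 and 1/8 must add up to the full SR power.
assert Fraction(7, 8) + Fraction(1, 8) == 1

# The corrections must combine to (g/2 - 4)/3, i.e. coefficients (1/3, -4/3).
total = (sigma_corr[0] + pi_corr[0], sigma_corr[1] + pi_corr[1])
assert total == (Fraction(1, 3), Fraction(-4, 3))

# At g = 2 the correction reduces to -1, recovering W = W_SR (1 - zeta xi cos nu).
assert total[0] * 1 + total[1] == -1
print("coefficients of (5) are consistent")
```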
The energy-momentum density tensor of the mixed radiation has the form $$P^{\mu\rho}_{\rm em}=-\frac{1}{4\pi}\left(\tilde{H}^{\mu\nu}_{\rm e} \tilde{H}_{\rm m\nu}{}^{\rho}+\tilde{H}^{\mu\nu}_{\rm m} \tilde{H}_{\rm e\nu}{}^{\rho}+\frac{1}{2}g^{\mu\rho} \tilde{H}_{\rm e\alpha\beta}\tilde{H}_{\rm m}^{\alpha\beta}\right).$$ Here the tensors $\tilde{H}^{\mu\nu}_{\rm e}$ and $\tilde{H}^{\mu\nu}_{\rm m}$ are generated by the radiation field of the charge and of the magnetic moment, respectively: $$\tilde{H}^{\mu\nu}_{\rm e}=e\left\{-\frac{\tilde{r}^{[\mu}w^{\nu]}} {{\left(\tilde{r}_{\rho}v^{\rho}\right)}^2}+ \frac{\tilde{r}_{\rho}w^{\rho}\tilde{r}^{[\mu}v^{\nu]}} {{\left(\tilde{r}_{\rho}v^{\rho}\right)}^3}\right\},$$ $$\tilde{H}^{\mu\nu}_{\rm m}=\mu c\left\{\frac{\stackrel{\circ\circ} {\Pi}\!{}^{[\mu\lambda}r_{\lambda}\tilde{r}^{\nu]}}{{\left(\tilde{r}_{\rho} v^{\rho}\right)}^2}-\frac{3\tilde{r}_{\rho}w^{\rho} \stackrel{\circ}{\Pi}\!{}^{[\mu\lambda}r_{\lambda}\tilde{r}^{\nu]}+ \tilde{r}_{\rho}\stackrel{\circ}{w}\!{}^{\rho} \Pi^{[\mu\lambda}r_{\lambda}\tilde{r}^{\nu]}} {{\left(\tilde{r}_{\rho}v^{\rho}\right)}^3}+ 3\frac{{\left(\tilde{r}_{\rho}w^{\rho}\right)}^2 \Pi^{[\mu\lambda}r_{\lambda}\tilde{r}^{\nu]}} {{\left(\tilde{r}_{\rho}v^{\rho}\right)}^4}\right\}.$$ Here $\Pi^{\mu\nu}=\left({\mbox{\boldmath$\Phi$}},{\mbox{\boldmath$\Pi$}}\right)$ is the dimensionless antisymmetric spin tensor, which satisfies the condition $\Pi^{\mu\nu}v_{\nu}=0$ and is related to the Frenkel intrinsic magnetic moment tensor by $M^{\mu\nu}=\mu\Pi^{\mu\nu}$; $\tilde{r}^{\rho}$ is the light-like position four-vector (charge-observer), $v^{\rho}=dr^{\rho}/d\tau$ is the four-dimensional velocity, and the symbol $\circ$ denotes the proper time derivative.
Substituting these expressions into the four-dimensional momentum of radiation per unit of proper time $$\frac{dP^{\mu}_{\rm em}}{d\tau}=\oint P^{\mu\nu}_{\rm em}e_{\nu}d\Omega,$$ where $d\Omega$ is an element of solid angle and $e^{\nu}=-c\dfrac{\tilde{r}^{\nu}}{\tilde{r}_{\rho}v^{\rho}}- \dfrac{1}{c}v^{\nu}$ is the unit spacelike four-vector, and integrating over the angles by means of the well-known method (see [@8]), we obtain $$\frac{dP^{\mu}_{\rm em}}{d\tau}=\frac{2}{3}\frac{e\mu}{c^4}\left( \stackrel{\circ\circ}{\Pi}\!{}^{\mu\nu}w_{\nu}-\frac{2}{c^2}v^{\mu}w_{\alpha} \stackrel{\circ\circ}{\Pi}\!{}^{\alpha\beta}w_{\beta}-\frac{1}{c^2}\Pi^{\mu\nu} w_{\nu}w_{\rho}w^{\rho}\right).$$ The same result was obtained by another method in [@9] (see also [@10] and the works cited in [@8]). Substituting here the solution of the spin equation in a uniform magnetic field, we find the mixed radiation power ($\mu=(g/2)\mu_0$) $$W_{\rm em}=\frac{c}{\gamma}\frac{dP^0}{d\tau}=-\frac{2}{3}\frac{e\mu}{c^2} \omega^3\gamma^5\beta^2_{\perp}\Pi_{z}=-\frac{\zeta}{3}W_{SR}\frac{g}{2}.$$ In the case of the electron ($e=-e_0$, $\mu=-(g/2)\mu_0$, $\omega=-\omega_0$), this result, together with the main term, can be represented in the form $$W=W_{SR}\left(1+\frac{1}{3}\zeta\xi\frac{g}{2}\right)=W_{SR}+W_{\rm emL}.$$ We see that the Thomas precession makes no contribution to the total radiation power. At the same time, all the properties of the $\rm emL$-radiation in the classical and quantum theories completely coincide [@4; @5]. Physical interpretation of the results obtained =============================================== What is the reason for the discrepancy between the expressions for the total radiation power in the classical and quantum (semiclassical) theories? The situation clears up if we introduce an effective external field $H^{\mu\nu}_{\rm eff}$.
In this case, the equation of spin precession in the classical theory has an especially simple and clear meaning: $$\frac{d\Pi^{\mu\nu}}{d\tau}=\frac{e}{m_0c}H^{[\mu\rho}_{\rm eff} \Pi_{\rho}{}^{\nu]}, \label{6}$$ $$H^{\mu\rho}_{\rm eff}=H^{\mu\rho}_{\rm L}+H^{\mu\rho}_{\rm Th},\quad H^{\mu\rho}_{\rm L}=\frac{g}{2}\left(H^{\mu\rho}+\frac{1}{c^2}v^{[\mu}v_{\lambda} H^{\lambda\rho]}\right) ,\quad H^{\mu\rho}_{\rm Th}=\frac{m_0}{ec}v^{[\mu}w^{\rho]}. \label{6a}$$ Equation (\[6\]) may be simplified using the spin vector ${\mbox{\boldmath$\zeta$}}$ specified in the rest frame and related to the components of the tensor $\Pi^{\mu\nu}$ by means of the Lorentz transformation $$\Pi^{\mu\nu}=\left\lgroup\gamma[{\mbox{\boldmath$\beta$}},{\mbox{\boldmath$\zeta$}}],\gamma{\mbox{\boldmath$\zeta$}}- \frac{\gamma^2}{\gamma+1}{\mbox{\boldmath$\beta$}}\left({\mbox{\boldmath$\beta$}},{\mbox{\boldmath$\zeta$}}\right) \right\rgroup.$$ In this representation, the interpretation of both terms in (\[6a\]) becomes obvious: $$\frac{d{\mbox{\boldmath$\zeta$}}}{dt}=[{\mbox{\boldmath$\Omega$}},{\mbox{\boldmath$\zeta$}}],\quad {\mbox{\boldmath$\Omega$}}={\mbox{\boldmath$\Omega$}}_{\rm L}+{\mbox{\boldmath$\Omega$}}_{\rm Th},$$ $${\mbox{\boldmath$\Omega$}}_{\rm L}=-\frac{eg}{2m_0c}\left({\mbox{\boldmath$H$}}-[{\mbox{\boldmath$\beta$}},{\mbox{\boldmath$E$}}]- \frac{\gamma}{\gamma+1}{\mbox{\boldmath$\beta$}}\left({\mbox{\boldmath$\beta$}},{\mbox{\boldmath$H$}}\right)\right)= -\frac{g}{2}\frac{e}{m_0c\gamma}{{\mbox{\boldmath$H$}}}_0,$$ $${\mbox{\boldmath$\Omega$}}_{\rm Th}=-\frac{e}{m_0c}\frac{1}{\gamma+1}[{\mbox{\boldmath$\beta$}},{{\mbox{\boldmath$E$}}}_0]= -\frac{1}{c}\frac{\gamma^2}{\gamma+1}[{\mbox{\boldmath$\beta$}},{\mbox{\boldmath$a$}}].$$ Thus, we have obtained the well-known expression for the Thomas precession frequency ${\mbox{\boldmath$\Omega$}}_{\rm Th}$.
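For circular motion, the last formula reproduces the classic Thomas frequency: with $|[{\mbox{\boldmath$\beta$}},{\mbox{\boldmath$a$}}]|=\beta^3c^2/\rho$ and $\omega=\beta c/\rho$, one gets $|{\mbox{\boldmath$\Omega$}}_{\rm Th}|=\dfrac{\gamma^2}{\gamma+1}\beta^2\omega=(\gamma-1)\omega$. A quick numerical confirmation of this identity (our own illustration, not part of the paper):

```python
import math

def thomas_magnitude(gamma: float, omega: float) -> float:
    """|Omega_Th| = (gamma^2 / (gamma + 1)) * beta^2 * omega for circular motion."""
    beta_sq = 1.0 - 1.0 / gamma**2
    return gamma**2 / (gamma + 1.0) * beta_sq * omega

# The identity (gamma^2 / (gamma + 1)) * beta^2 = gamma - 1 holds for every gamma >= 1.
for gamma in (1.5, 10.0, 1.0e4):
    assert math.isclose(thomas_magnitude(gamma, 1.0), gamma - 1.0, rel_tol=1e-9)
print("|Omega_Th| = (gamma - 1) * omega for circular motion")
```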
It is noteworthy that in the classical theory the interaction of the magnetic moment with the Thomas field $H^{\mu\nu}_{\rm Th}$ is absent, that is, $$U^{int}_{\rm mTh}=-\frac{\mu}{2\gamma}H^{\alpha\beta}_{\rm Th}\Pi_{\alpha\beta}=0,$$ whereas in both the classical and quantum theories the interaction of the magnetic moment with the Larmor field assumes absolutely identical forms; hence, in both theories this interaction has a common origin (compare with (\[2a\])) $$U^{int}_{\rm mL}=-\frac{\mu}{2\gamma}H^{\alpha\beta}_{\rm L} \Pi_{\alpha\beta}= -\mu\left({\mbox{\boldmath$\zeta$}},\left\{{\mbox{\boldmath$H$}}-[{\mbox{\boldmath$\beta$}},{\mbox{\boldmath$E$}}]- \frac{\gamma}{\gamma+1}{\mbox{\boldmath$\beta$}}\left({\mbox{\boldmath$\beta$}},{\mbox{\boldmath$H$}}\right)\right\}\right)= -\frac{\mu}{\gamma}\left({\mbox{\boldmath$\zeta$}},{{\mbox{\boldmath$H$}}}_0\right).$$ The correspondence principle can be completely understood if we represent the total radiation power in the semiclassical theory in a somewhat different manner (compare with formulas (\[5\]) at $\nu=0$): $$W^{\sigma}=W_{SR}\left(\frac{7}{8}\left(1-\frac{4}{3}\zeta\xi\right)+ \frac{g}{2}\zeta\xi\frac{1}{6}\right),\quad W^{\pi}=W_{SR}\left(\frac{1}{8} \left(1-\frac{4}{3}\zeta\xi\right)+\frac{g}{2}\zeta\xi\frac{1}{6}\right),$$ $$W=W_{SR}\left(1-\frac{4}{3}\zeta\xi\right)+W_{\rm emL}.$$ It should be noted that the factor $1-\dfrac{4}{3}\zeta\xi$ cannot be associated with the polarization of the radiation. It can be included in $W_{SR}$ at the expense of a spin renormalization of the particle mass (see [@8], pp. 91-93).
Indeed, according to the renormalization, the mass of a spin particle moving in a uniform magnetic field ${\mbox{\boldmath$H$}}=\left(0,0,H\right)$ has the form $$M=m_0\left.\left(1-\frac{\mu}{2c}H^{\alpha\beta}\Pi_{\alpha\beta} \right)\right|_{\mu=-\mu_0}=m_0\left(1+\frac{1}{3}\zeta\xi\right).$$ If we set $E=Mc^2\gamma$, the SR power (\[1\]) can be represented in the form $$W^{\prime}_{SR}=\frac{2}{3}\frac{e^2\omega^2}{c}{\left(\frac{E}{Mc^2}\right)}^4= W_{SR}\left(1-\frac{4}{3}\zeta\xi\right).$$ Hence we have the relationship $W=W^{\prime}_{SR}+W_{\rm emL},$ from which it follows that the classical and quantum theories of mixed radiation are in full agreement. Moreover, this means that the Thomas precession cannot be considered as a source of the SR power. [99]{} Ternov I.M., Bagrov V.G. and Rzaev R.A., Radiation of the high-energy electron with an oriented spin in the magnetic field. Zhurn. Exp. Teor. Fiz., Vol.46, No.1 (1964), pp.374-382 (in Russian). Bondar A.E. and Saldin E.L., On the possibility of using synchrotron radiation for measuring the electron beam polarization in a storage ring. NIM, Vol.195 (1982), pp.577-580. Belomestnykh S.A., Bondar A.E., Yegorychev M.N., Zhilich V.N., Kornyukhin G.A., Nikitin S.A., Saldin E.L., Skrinsky A.N. and Tumaikin G.M., An observation of the spin dependence of synchrotron radiation intensity. NIM, Vol.227 (1984), pp.173-181. Bordovitsyn V.A., Spin light (L- and Th-radiation of a relativistic electron). Izv. Vyssh. Uchebn. Zaved. Fiz., Vol.40, No.2 (1997), pp.40-47 (in Russian). Kulipanov G.N., Bondar A.E., Bordovitsyn V.A. and Gushchina V.S., Synchrotron radiation and spin light. NIM, Vol.A405, No.2-3 (1998), pp.191-194. Jackson J.D., On understanding spin-flip synchrotron radiation and the transverse polarization of electrons in storage rings. Rev. Mod. Phys., Vol.48, No.3 (1976), pp.417-433. Bordovitsyn V.A., Gushchina V.S. and Ternov I.M., Structural composition of synchrotron radiation.
NIM, Vol.359, No.1-2 (1995), pp.34-37. Bordovitsyn V.A. and Gushchina V.S., Spin light. In: Synchrotron radiation theory and its development. In memory of I.M. Ternov. Ed.: V.A. Bordovitsyn (World Scientific, Singapore, 1999). Bordovitsyn V.A., Razina G.K. and Bysov H.H., Radiation of a relativistic magneton. III. Izv. Vyssh. Uchebn. Zaved. Fiz., Vol.23, No.10 (1980), pp.33-38 (in Russian). Cohn J. and Wiebe H., Asymptotic radiation from spinning charged particles. J. Math. Phys., Vol.17, No.8 (1975), pp.1496-1500. [^1]: E-mail: bord@mail.tomsknet.ru. [^2]: E-mail: myagkii@mail.ru
--- address: | Department of Mathematics\ Boston University\ 111 Cummington\ Boston, MA 02215, USA author: - 'K. Karu' title: 'Semistable reduction in characteristic 0 for families of surfaces and three-folds' --- \[section\] \[th\][Claim]{} \[th\][Lemma]{} \[th\][Situation]{} \[th\][Corollary]{} \[th\][Proposition]{} \[th\][Definition]{} \[th\][Notation]{} \[th\][Example]{} \[th\][Conjecture]{} \[th\][Problem]{} \[th\][Question]{} \[th\][Definition]{} \[th\][Remark]{} Introduction ============ In [@ak] the semistable reduction of a morphism $F:X{\rightarrow}B$ was stated as a problem in the combinatorics of polyhedral complexes. In this paper we solve it in the case when the relative dimension of $F$ is at most three. First we recall the setup of the problem from [@ak]. The ground field $k$ will be algebraically closed of characteristic zero. A flat morphism $F:X{\rightarrow}B$ of nonsingular projective varieties is semistable if in local analytic coordinates $x_1,\ldots,x_n$ at $x\in X$ and $t_1,\ldots,t_m$ at $b\in B$ the morphism $F$ is given by $$t_i = \prod_{j=l_{i-1}+1}^{l_i} x_j$$ where $0=l_0<l_1<\ldots<l_m\leq n$. The conjecture of semistable reduction states that \[conj-ssr\] Let $F: X{\rightarrow}B$ be a surjective morphism with geometrically integral generic fiber. There exist an alteration (proper surjective generically finite morphism) $B'{\rightarrow}B$ and a modification (proper birational morphism) $X'{\rightarrow}X\times_B B'$ such that $X'{\rightarrow}B'$ is semistable. Conjecture \[conj-ssr\] was proved in [@te] (main theorem of Chapter 2) in the case when $B$ is a curve. A weak version of the conjecture was proved in [@ak] for arbitrary $X$ and $B$. In both cases the proof proceeds by reducing $F$ to a morphism of toroidal embeddings, stating the problem in terms of the associated polyhedral complexes, and solving the combinatorial problem.
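A minimal example may help fix the local picture (the example is ours, chosen for illustration): take $n=3$, $m=1$, and $l_1=2$, so that in local analytic coordinates $$t_1 = x_1x_2.$$ The special fiber $\{x_1x_2=0\}$ is a union of two smooth components crossing normally, while the nearby fibers $x_1x_2=t_1\neq 0$ are smooth; this is the standard local model of a semistable family with nodal degeneration.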
Polyhedral complexes -------------------- We consider (rational, conical) polyhedral complexes $\Delta=(|\Delta|,\{\sigma\},\{N_\sigma\})$ consisting of a collection of lattices $N_\sigma\cong{{\Bbb{Z}}}^n$ and rational full cones $\sigma\subset N_\sigma\otimes{{\Bbb{R}}}$ with a vertex. The cones $\sigma$ are glued together to form the space $|\Delta|$ so that the usual axioms of polyhedral complexes hold: 1. If $\sigma\in\Delta$ is a cone, then every face $\sigma'$ of $\sigma$ is also in $\Delta$, and $N_{\sigma'}=N_\sigma|_{{{\operatorname{Span}}}(\sigma')}$. 2. The intersection of two cones $\sigma_1\cap\sigma_2$ is a face of both of them, $N_{\sigma_1\cap\sigma_2} = N_{\sigma_1}|_{{{\operatorname{Span}}}(\sigma_1\cap\sigma_2)} = N_{\sigma_2}|_{{{\operatorname{Span}}}(\sigma_1\cap\sigma_2)}$. A morphism $f:\Delta_X{\rightarrow}\Delta_B$ of polyhedral complexes $\Delta_X=(|\Delta_X|,\{\sigma\},\{N_\sigma\})$ and $\Delta_B=(|\Delta_B|,\{\tau\},\{N_\tau\})$ is a compatible collection of linear maps $f_\sigma: (\sigma,N_\sigma){\rightarrow}(\tau,N_\tau)$; i.e. if $\sigma'$ is a face of $\sigma$ then $f_{\sigma'}$ is the restriction of $f_\sigma$. We will only consider morphisms $f:\Delta_X{\rightarrow}\Delta_B$ such that $f_\sigma^{-1}(0)\cap\sigma=\{0\}$ for all $\sigma\in\Delta_X$. A surjective morphism $f:\Delta_X{\rightarrow}\Delta_B$ such that $f^{-1}(0)=\{0\}$ is semistable if 1. $\Delta_X$ and $\Delta_B$ are nonsingular. 2. For any cone $\sigma\in\Delta_X$, we have $f(\sigma)\in\Delta_B$ and $f(N_\sigma)=N_{f(\sigma)}.$ We say that $f$ is weakly semistable if it satisfies the two properties except that $\Delta_X$ may be singular. The following two operations are allowed on $\Delta_X$ and $\Delta_B$: 1. Projective subdivisions $\Delta_X'$ of $\Delta_X$ and $\Delta_B'$ of $\Delta_B$ such that $f$ induces a morphism $f':\Delta_X'{\rightarrow}\Delta_B'$; 2. 
Lattice alterations: let $\Delta_X'=(|\Delta_X|,\{\sigma\},\{N_\sigma'\}), \Delta_B'=(|\Delta_B|,\{\tau\},\{N_\tau'\})$, for some compatible collection of sublattices $N_\tau'\subset N_\tau$, $N_\sigma'=f^{-1}(N'_\tau)\cap N_\sigma$, and let $f':\Delta_X'{\rightarrow}\Delta_B'$ be the morphism induced by $f$. \[main-conj\] Given a surjective morphism $f:\Delta_X{\rightarrow}\Delta_B$, such that $f^{-1}(0)=\{0\}$, there exists a projective subdivision $f':\Delta_X'{\rightarrow}\Delta_B'$ followed by a lattice alteration $f'':\Delta_X''{\rightarrow}\Delta_B''$ so that $f''$ is semistable. $$\begin{array}{lclcl} \Delta_{X''} & {\rightarrow}& \Delta_{X'} & {\rightarrow}& \Delta_{X} \\ \downarrow f'' & & \downarrow f' & & \downarrow f \\ \Delta_{B''} & {\rightarrow}& \Delta_{B'} & {\rightarrow}& \Delta_{B} \end{array}$$ The importance of Conjecture \[main-conj\] lies in the fact that it implies Conjecture \[conj-ssr\] (Proposition 8.5 in [@ak]). In the case when $\dim(\Delta_B)=1$, Conjecture \[main-conj\] was proved in [@te] (main theorem of Chapter 3). In [@ak] (Theorem 0.3) the conjecture was proved with semistable replaced by weakly semistable. The main result of this paper is \[main-thm\] Conjecture \[main-conj\] is true if $f$ has relative dimension $\leq 3$. Hence, Conjecture \[conj-ssr\] is true if $F$ has relative dimension $\leq 3$. The relative dimension of a linear map $f:\sigma{\rightarrow}\tau$ of cones $\sigma, \tau$ is $\dim(\sigma)-\dim(f(\sigma))$. The relative dimension of $f:\Delta_X{\rightarrow}\Delta_B$ is by definition the maximum of the relative dimensions of $f_\sigma:\sigma{\rightarrow}\tau$ over all $\sigma\in\Delta_X$. 
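The relative dimension of a single cone map is just a rank computation. The following sketch (our own helper, using exact rational arithmetic; the function names are not from the paper) computes $\dim(\sigma)-\dim(f(\sigma))$ from generators of $\sigma$ and a matrix for $f$.

```python
from fractions import Fraction

def rank(mat):
    """Rank over Q, by Gaussian elimination with exact Fractions."""
    rows = [[Fraction(x) for x in r] for r in mat]
    rk, col = 0, 0
    ncols = len(rows[0]) if rows else 0
    while rk < len(rows) and col < ncols:
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(rk + 1, len(rows)):
            factor = rows[i][col] / rows[rk][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

def relative_dimension(generators, f_matrix):
    """dim(sigma) - dim(f(sigma)) for the simplicial cone spanned by `generators`,
    where f is given by the integer matrix `f_matrix` acting on column vectors."""
    images = [[sum(f_matrix[i][j] * v[j] for j in range(len(v)))
               for i in range(len(f_matrix))] for v in generators]
    return rank(generators) - rank(images)

# Example: the standard cone <e_1, e_2, e_3> in Z^3 mapped to Z by
# f(a, b, c) = a + b + c has relative dimension 3 - 1 = 2.
gens = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(relative_dimension(gens, [[1, 1, 1]]))  # -> 2
```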
If $F:X{\rightarrow}B$ is a morphism of toroidal embeddings of relative dimension $d$, then the associated morphism of polyhedral complexes $f:\Delta_X{\rightarrow}\Delta_B$ has relative dimension $\leq d$ because in local models the relative dimension of $F$ is no bigger than the rank of the kernel of $f: N_\sigma{\rightarrow}N_\tau$. Thus, the second statement of the theorem follows from the first. Notation -------- We will use notation from [@te] and [@fu]. For a cone $\sigma\in N\otimes{{\Bbb{R}}}$ we write $\sigma={{\langle}}v_1,\ldots,v_n{{\rangle}}$ if $v_1,\ldots,v_n$ lie on the 1-dimensional edges of $\sigma$ and generate it. If the $v_i$ are the first lattice points along the edges, we call them the primitive points of $\sigma$. For a simplicial cone $\sigma$ with primitive points $v_1,\ldots,v_n$, the multiplicity of $\sigma$ is $$m(\sigma,N_\sigma) = [N_\sigma:{{\Bbb{Z}}}v_1\oplus\ldots\oplus{{\Bbb{Z}}}v_n].$$ A polyhedral complex $\Delta$ is nonsingular if and only if $m(\sigma,N_\sigma)=1$ for all $\sigma\in\Delta$. To compute the multiplicity of $\sigma$ we can count the representatives $w\in N_\sigma$ of classes of $N_\sigma/{{\Bbb{Z}}}v_1\oplus\ldots\oplus{{\Bbb{Z}}}v_n$ of the form $$w=\sum_{i}\alpha_i v_i, \qquad 0\leq\alpha_i<1.$$ Such points $w$ were called Waterman points of $\sigma$ in [@te]. Also notice that since the multiplicity of a face of $\sigma$ is no bigger than the multiplicity of $\sigma$, to compute the multiplicity of $\Delta$ it suffices to consider maximal cones only. If $\Delta_X$ and $\Delta_B$ are simplicial, we say that $f:\Delta_X{\rightarrow}\Delta_B$ is simplicial if $f(\sigma)\in\Delta_B$ for all $\sigma\in\Delta_X$. Assume that $f$ is simplicial. Let $u_1,\ldots,u_n$ be the primitive points of $\Delta_B$, and $m_1,\ldots,m_n$ positive integers.
By taking the $(m_1,\ldots,m_n)$ sublattice at $u_1,\dots,u_n$ we mean the lattice alteration $N_\tau'={{\Bbb{Z}}}[m_{i_1} u_{i_1},\ldots, m_{i_l}u_{i_l}]$ where $\tau\in\Delta_B$ has primitive points $u_{i_1},\dots,u_{i_l}$. For cones $\sigma_1,\sigma_2\in\Delta$ we write $\sigma_1\leq\sigma_2$ if $\sigma_1$ is a face of $\sigma_2$. A subdivision $\Delta'$ of $\Delta$ is called projective if there exists a homogeneous piecewise linear function $\psi:|\Delta|{\rightarrow}{{\Bbb{R}}}$ taking rational values on the lattice points (a good function for short) such that the maximal cones of $\Delta'$ are exactly the maximal pieces in which $\psi$ is linear. Acknowledgment -------------- The suggestion to write up the proof of semistable reduction for low relative dimensions came from Dan Abramovich. Joins ===== For cones $\sigma_1,\sigma_2\in{{\Bbb{R}}}^N$ lying in complementary planes: $\mbox{Span}(\sigma_1)\cap\mbox{Span}(\sigma_2)=\{0\}$, the join of $\sigma_1$ and $\sigma_2$ is $\sigma_1*\sigma_2=\sigma_1+\sigma_2$. Let $\sigma$ be a simplicial cone $\sigma=\sigma_1*\ldots*\sigma_n$. If $\sigma_i'$ is a subdivision of $\sigma_i$ for all $i=1,\ldots,n$, we define the join $$\sigma' = \sigma_1' * \ldots * \sigma_n'$$ as the set of cones $\rho = \rho_1+\ldots+\rho_n$, where $\rho_i\in\sigma_i'$. Let $f:\Delta_X{\rightarrow}\Delta_B$ be a simplicial map of simplicial complexes. For $u_i$ a primitive point of $\Delta_B$, $i=1,\ldots,n$, let $\Delta_{X,i}=f^{-1}({{\Bbb{R}}}_+ u_i)$ be the simplicial subcomplex of $\Delta_X$. If $\Delta_{X,i}'$ is a subdivision of $\Delta_{X,i}$ for $i=1,\ldots,n$, we can define the join $$\Delta_X' = \Delta'_{X,1} *\ldots*\Delta'_{X,n}$$ by taking joins inside all cones $\sigma\in\Delta_X$. This is well defined by the assumption that $f^{-1}(0)=\{0\}$. If $\Delta_{X,i}'$ are projective subdivisions of $\Delta_{X,i}$ then the join $\Delta_{X}'$ is a projective subdivision of $\Delta_{X}$. 
[**Proof.**]{} Let $\psi_i$ be good functions for $|\Delta_{X,i}'|$. Extend $\psi_i$ linearly to the entire $|\Delta_X'|$ by setting $\psi_i(|\Delta_{X,j}'|)=0$ for $j\neq i$. Clearly, $\psi = \sum_i \psi_i$ is a good function defining the subdivision $\Delta_{X}'$. [\ ]{} Consider $f|_{\Delta_{X,i}}: \Delta_{X,i}{\rightarrow}{{\Bbb{R}}}_+ u_i$. By the main theorem of Chapter 3 in [@te] there exist a subdivision $\Delta_{X,i}'$ of $\Delta_{X,i}$ and an $m_i\in{{\Bbb{Z}}}$ such that after taking the $m_i$-sublattice at $u_i$ we have $f'|_{\Delta_{X,i}'}$ semistable. Now let $\Delta_X'$ be the join of the $\Delta_{X,i}'$, and take the $(m_1,\ldots,m_n)$-sublattice at $(u_1,\ldots,u_n)$. Then $f':\Delta_X'{\rightarrow}\Delta_B'$ is a simplicial map and $f'|_{\Delta_{X,i}'}$ is semistable. We can also see that the multiplicity of $\Delta_X'$ is not bigger than the multiplicity of $\Delta_X$. Let $\sigma\in\Delta_X$ have primitive points $v_i$ and let $\sigma'\subset\sigma$ be a maximal cone in the subdivision with primitive points $v_i'$. The multiplicity of $\sigma'$ is the number of Waterman points $w'\in N_\sigma'$ $$w'=\sum_{i} \alpha_{i} v_i', \qquad 0\leq\alpha_{i}<1.$$ We show that the set of Waterman points of $\sigma'$ can be mapped injectively into the set of Waterman points of $\sigma$, hence the multiplicity of $\sigma'$ is not bigger than the multiplicity of $\sigma$. Write $$w'=\sum_{i}(\beta_i+b_i) v_i, \qquad 0\leq\beta_{i}<1, \qquad b_i\in{{\Bbb{Z}}}_+.$$ Then $w=\sum_{i}\beta_i v_i \in N_\sigma$ is a Waterman point of $\sigma$. If different $w_1', w_2'$ give the same $w$, then $w_1'-w_2' \in N_\sigma'\cap{{\Bbb{Z}}}\{v_i\} = {{\Bbb{Z}}}\{v_i'\}$, hence $w_1'-w_2'=0$. Modified barycentric subdivisions ================================= Let $f:\Delta_X{\rightarrow}\Delta_B$ be a simplicial morphism of simplicial complexes. Consider the barycentric subdivision $BS(\Delta_B)$ of $\Delta_B$.
The 1-dimensional cones of $BS(\Delta_B)$ are ${{\Bbb{R}}}_+\hat{\tau}$ where $\hat{\tau}=\sum u_i$ is the barycenter of a cone $\tau\in\Delta_B$ with primitive points $u_1,\ldots,u_m$. A cone $\tau'\in BS(\Delta_B)$ is spanned by $\hat{\tau}_1,\ldots,\hat{\tau}_k$, where $\tau_1\leq\tau_2\leq\ldots\leq\tau_k$ is a chain of cones in $\Delta_B$. In general, $f$ does not induce a morphism $BS(\Delta_X){\rightarrow}BS(\Delta_B)$. For that we need to modify the barycenters $\hat{\sigma}$ of cones $\sigma\in\Delta_X$. The data of [**modified barycenters**]{} consists of 1. A subset of cones $\tilde{\Delta}_X\subset\Delta_X$. 2. For each $\sigma\in\tilde{\Delta}_X$ a point $b_\sigma\in \mbox{int}(\sigma)\cap N_\sigma$ such that $f(b_\sigma)\in{{\Bbb{R}}}_+\hat{\tau}$ for some $\tau\in\Delta_B$. Recall that for any total order $\prec$ on the set of cones in $\Delta_X$ refining the partial order $\leq$, the barycentric subdivision $BS(\Delta_X)$ can be realized as a sequence of star subdivisions at the barycenters $\hat{\sigma}$ of $\sigma\in\Delta_X$ in the descending order $\prec$. Given modified barycenters $(\tilde{\Delta}_X,\{b_\sigma\})$ and a total order $\prec$ on $\Delta_X$ refining the partial order $\leq$, the [**modified barycentric subdivision**]{} $MBS_{\tilde{\Delta}_X,\{b_\sigma\},\prec}(\Delta_X)$ is the sequence of star subdivisions at $b_\sigma$ for $\sigma\in\tilde{\Delta}_X$ in the descending order $\prec$. To simplify notations, we will write $MBS(\Delta_X)$ instead of $MBS_{\tilde{\Delta}_X,\{b_\sigma\},\prec}(\Delta_X)$. By definition, $MBS(\Delta_X)$ is a projective simplicial subdivision of $\Delta_X$. As in the case of the ordinary barycentric subdivision, the cones of $MBS(\Delta_X)$ can be characterized by chains of cones in $\Delta_X$. We may assume that the 1-dimensional cones of $\Delta_X$ are all in $\tilde{\Delta}_X$. For a cone $\sigma\in\Delta_X$ let $\tilde{\sigma}$ be the maximal face of $\sigma$ (w.r.t. $\prec$) in $\tilde{\Delta}_X$. 
Given a chain of cones $\sigma_1\leq\ldots\leq\sigma_k$ in $\Delta_X$, the cone spanned by $b_{\tilde{\sigma}_1}, \ldots, b_{\tilde{\sigma}_k}$ is a subcone of $\sigma_k$. Let $C(\Delta_X)$ be the set of all such cones corresponding to chains $\sigma_1\leq\ldots\leq\sigma_k$ in $\Delta_X$. $C(\Delta_X)=MBS(\Delta_X)$. [**Proof.**]{} Let $BS(\Delta_X)$ be the ordinary barycentric subdivision of $\Delta_X$. Both $C(\Delta_X)$ and $MBS(\Delta_X)$ are obtained from $BS(\Delta_X)$ by moving the barycenters $\hat{\sigma}$ (and everything attached to them) to the new position $b_{\tilde{\sigma}}$ for all $\sigma\in\Delta_X$ in the descending order $\prec$. [\ ]{} \[cor-simpl\] If $f(\tilde{\sigma})=f(\sigma)$ for all $\sigma\in\Delta_X$ then $f$ induces a simplicial map $f':MBS(\Delta_X){\rightarrow}BS(\Delta_B)$. [**Proof.**]{} Let $\sigma'\in MBS(\Delta_X)$ correspond to a chain $\sigma_1\leq\ldots\leq\sigma_k$. Then we have a chain of cones $f(\sigma_1)\leq \ldots\leq f(\sigma_k)$ in $\Delta_B$. The assumption that $f(\tilde{\sigma}_i)=f(\sigma_i)$ implies that $f(b_{\tilde{\sigma}_i})\in{{\Bbb{R}}}_+ \widehat{f}(\sigma_i)$, hence the cone ${{\langle}}b_{\tilde{\sigma}_1},\ldots,b_{\tilde{\sigma}_k}{{\rangle}}$ maps onto the cone ${{\langle}}\widehat{f}(\sigma_1), \ldots, \widehat{f}(\sigma_k){{\rangle}}\in BS(\Delta_B)$. [\ ]{} The hypothesis of the corollary is satisfied, for example, if for any $\sigma\in\Delta_X$ with $f(\sigma)=\tau\in\Delta_B$ and for any face $\sigma_1\leq\sigma$ such that $\sigma_1\in\tilde{\Delta}_X$, $f(\sigma_1)\neq\tau$, there exists $\sigma_2\in\tilde{\Delta}_X$ such that $\sigma_1\leq\sigma_2\leq\sigma$ and $f(\sigma_2)=\tau$: $$\begin{array}{ccccc} \sigma_1 & \leq & \sigma_2 & \leq & \sigma \\ \downarrow & & \downarrow & & \downarrow \\ \tau_1 & \leq & \tau & = & \tau \\ \end{array}$$ Indeed, $\tilde{\sigma}\neq\sigma_1$ because $\sigma_1\prec\sigma_2$. 
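This chain description can be checked combinatorially. Encoding the faces of a simplicial cone by subsets of its vertex set, the cones of the ordinary barycentric subdivision correspond to chains of faces, and the maximal cones to complete flags, of which an $n$-dimensional simplicial cone has $n!$. The following sketch is our own illustration (the helper names are hypothetical):

```python
from itertools import combinations

def faces(vertices):
    """All nonempty faces of a simplicial cone, encoded by vertex subsets."""
    vs = list(vertices)
    return [frozenset(c) for r in range(1, len(vs) + 1)
            for c in combinations(vs, r)]

def chains(vertices):
    """All chains F_1 < F_2 < ... < F_k of faces (proper inclusions);
    these index the cones of the barycentric subdivision."""
    fs = faces(vertices)
    out = []
    def extend(chain):
        out.append(tuple(chain))
        for f in fs:
            if chain[-1] < f:          # proper face inclusion
                extend(chain + [f])
    for f in fs:
        extend([f])
    return out

# maximal cones of the barycentric subdivision correspond to complete flags
maximal = [c for c in chains({1, 2, 3}) if len(c) == 3]
print(len(maximal))                    # 3! = 6
```

The modified subdivision $MBS(\Delta_X)$ moves the barycenters to the points $b_{\tilde{\sigma}}$, but the indexing of its cones by chains is identical.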
Example ------- Assume that $f:\Delta_X{\rightarrow}\Delta_B$ is a simplicial map of simplicial complexes taking primitive points of $\Delta_X$ to primitive points of $\Delta_B$ (e.g. $\Delta_X$ is simplicial and $f$ is weakly semistable). Then for a cone $\sigma\in\Delta_X$ such that $f:\sigma\stackrel{\simeq}{{\rightarrow}}\tau$, we have $f(\hat{\sigma})=\hat{\tau}$. Let $\tilde{\Delta}_X=\bar{\Delta}_X = \{\sigma\in\Delta_X: f|_\sigma \mbox{ is injective}\}$, $b_\sigma=\hat{\sigma}$. In this case $\tilde{\sigma}$ is the maximal face of $\sigma$ (w.r.t. $\prec$) such that $f|_{\tilde{\sigma}}$ is injective. Clearly, the hypothesis of Corollary \[cor-simpl\] is satisfied, and we have a simplicial map $f':MBS(\Delta_X){\rightarrow}BS(\Delta_B)$. Next we compute the multiplicity of $MBS(\Delta_X)$. Let $\sigma\in\Delta_X$ have primitive points $v_1,\ldots,v_n$, and let $\sigma'\subset\sigma$ be a maximal cone in the subdivision, corresponding to the chain $${\langle}v_1{\rangle}\leq {\langle}v_1,v_2{\rangle}\leq\ldots\leq {\langle}v_1,\ldots,v_n{\rangle}.$$ Since $\tilde{\rho}\subset\rho$ for any $\rho$, the primitive points of $\sigma'$ can be written as $$\begin{array}{lllllll} v_1' &=& a_{11} v_1 & & & & \\ v_2' &=& a_{21} v_1 & + & a_{22} v_2 & & \\ & \cdots & & & & & \\ v_n' &=& a_{n 1} v_1 & + & \ldots & + & a_{n n} v_n \end{array}$$ for some $a_{i j}\geq 0$. The multiplicity of $\sigma'$ is $a_{1 1}\cdot a_{2 2} \cdots a_{n n}$ times the multiplicity of $\sigma$. In the case when the $b_\rho$ are barycenters $\hat{\rho}$, all $a_{i j}\leq 1$, hence the multiplicity of $\sigma'$ is not bigger than the multiplicity of $\sigma$. Reducing the multiplicity of $\Delta_X$ ======================================= Let $f:\Delta_X{\rightarrow}\Delta_B$ be weakly semistable and $\Delta_X$ simplicial (i.e. $\Delta_B$ is nonsingular, $\Delta_X$ is simplicial, and $f$ is a simplicial map taking primitive points of $\Delta_X$ to primitive points of $\Delta_B$). 
Notice that if $\bar{\Delta}_X$ is as in Example \[example\], then $\bar{\Delta}_X$ is nonsingular, and $f(\hat{\sigma})=\widehat{f}(\sigma)$ for any $\sigma\in\bar{\Delta}_X$. A singular simplicial cone $\sigma\in\Delta_X$ with primitive points $v_1,\ldots,v_n$ contains a Waterman point $w\in N_\sigma$, $$w=\sum_{i} \alpha_{i} v_i, \qquad 0\leq\alpha_{i}<1, \qquad \sum_i\alpha_i>0.$$ The star subdivision of $\sigma$ at $w$ has multiplicity strictly less than the multiplicity of $\sigma$. We will show in this section that if every singular cone of $\Delta_X$ contains a Waterman point $w$ mapping to a barycenter of $\Delta_B$, then there exists a modified barycentric subdivision $MBS(\Delta_X)$ having multiplicity strictly less than the multiplicity of $\Delta_X$, such that $f$ induces a simplicial map $f':MBS(\Delta_X){\rightarrow}BS(\Delta_B)$. For every singular cone $\sigma\in\Delta_X$ choose a point $w_\sigma$ as follows. By assumption, there exists a Waterman point $w\in\sigma$ mapping to a barycenter of $\Delta_B$: $f(w)=\hat{\tau}$. Write $f(\sigma)=\tau*\tau_0$ and choose a face $\sigma_0\leq\sigma$ such that $f:\sigma_0\stackrel{\simeq}{{\rightarrow}}\tau_0$. Set $w_\sigma=w+\hat{\sigma}_0$; then $$f(w_\sigma) = f(w)+f(\hat{\sigma}_0)=\hat{\tau}+\hat{\tau}_0 = \widehat{f}(\sigma).$$ Having chosen the set $\{w_\sigma\}$, we may remove some of the points $w_\sigma$ if necessary so that every simplex $\rho\in\Delta_X$ contains at most one $w_\sigma$ in its interior. With $\bar{\Delta}_X$ as in Example \[example\], let $\tilde{\Delta}_X =\bar{\Delta}_X \cup \{\rho\in\Delta_X \,|\, w_\sigma\in\mbox{int}(\rho) \mbox{ for some singular } \sigma\}$, $b_\rho = \hat{\rho}$ if $\rho\in\bar{\Delta}_X$, and $b_\rho = w_\sigma$ if $w_\sigma\in\mbox{int}(\rho)$. By construction, $(\tilde{\Delta}_X,\{b_\rho\})$ satisfies the hypothesis of Corollary \[cor-simpl\], hence $f$ induces a simplicial map $f':MBS(\Delta_X){\rightarrow}BS(\Delta_B)$. 
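The multiplicity bookkeeping used here is easy to experiment with: for a simplicial cone with integer generators $v_1,\ldots,v_n$ (taking $N_\sigma={{\Bbb{Z}}}^n$ for simplicity), the points $w=\sum_i\alpha_i v_i$ with $0\leq\alpha_i<1$ — the origin together with the Waterman points — number exactly $|\det(v_1,\ldots,v_n)|$, which is the multiplicity. The brute-force sketch below is our own illustration (the helper name is hypothetical):

```python
import itertools
import numpy as np

def multiplicity(generators):
    """Count lattice points w = sum_i a_i v_i with 0 <= a_i < 1 (the origin
    plus the Waterman points) for a simplicial cone with integer generators."""
    V = np.array(generators, dtype=float).T      # columns are the v_i
    n = V.shape[1]
    Vinv = np.linalg.inv(V)
    # bounding box of the half-open parallelepiped spanned by the v_i
    corners = np.array([V @ np.array(e) for e in itertools.product([0, 1], repeat=n)])
    lo = np.floor(corners.min(axis=0)).astype(int)
    hi = np.ceil(corners.max(axis=0)).astype(int)
    count = 0
    for x in itertools.product(*(range(a, b + 1) for a, b in zip(lo, hi))):
        alpha = Vinv @ np.array(x, dtype=float)
        if np.all(alpha > -1e-9) and np.all(alpha < 1.0 - 1e-9):
            count += 1
    return count

# the count equals |det(v_1, ..., v_n)|, the index of the generated sublattice
v = [(1, 0), (1, 2)]
print(multiplicity(v), abs(round(np.linalg.det(np.array(v, dtype=float).T))))  # 2 2
```

A cone is nonsingular precisely when this count is 1; a singular cone therefore contains a Waterman point with $\sum_i\alpha_i>0$, as used above.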
Before we compute the multiplicity of $MBS(\Delta_X)$, we choose a particular total order $\prec$ on $\Delta_X$. Extend $\leq$ on $\Delta_X$ to a partial order $\prec_0$ by declaring that $\sigma_1\prec_0\sigma_2$ for all (nonsingular) $\sigma_1\in\bar{\Delta}_X$ and singular $\sigma_2\in\Delta_X$. Let $\prec$ be an extension of $\prec_0$ to a total order on $\Delta_X$. With such $\prec$, if $\sigma\in\Delta_X$ is singular, then $b_{\tilde{\sigma}}$ is one of the points $w_\rho$. As in Example \[example\], the multiplicity of $MBS(\Delta_X)$ is not bigger than the multiplicity of $\Delta_X$. If $\sigma\in\Delta_X$ is singular we show by induction on the dimension of $\sigma$ that the multiplicity of $MBS(\sigma)$ is strictly less than the multiplicity of $\sigma$. Let $v_1,\ldots,v_N$ be the primitive points of $\sigma$, and consider the cone $\sigma'={\langle}b_{\tilde{\sigma}},v_1,\ldots,v_{N-1}{\rangle}$ in the star subdivision of $\sigma$ at $b_{\tilde{\sigma}}=\sum_i a_i v_i$. To show that every maximal cone of $MBS(\sigma)$ contained in $\sigma'$ has multiplicity less than the multiplicity of $\sigma$, we have three cases: 1. If $a_N$ = 0, then $\sigma'$ is degenerate. 2. If $0<a_N<1$, then the multiplicity of ${{\langle}}b_{\tilde{\sigma}},v_1,\ldots,v_{N-1}{{\rangle}}$ is less than the multiplicity of $\sigma$, and since all $b_\rho=\sum_i c_i v_i$ have coefficients $0\leq c_i \leq 1$, further subdivisions at $b_\rho$ do not increase the multiplicity of ${{\langle}}b_{\tilde{\sigma}},v_1,\ldots,v_{N-1}{{\rangle}}$. 3. If $a_N=1$, then $b_{\tilde{\sigma}}=w+\hat{\rho}$ for some $\rho\leq\sigma$ and $w\in{{\langle}}v_1,\ldots,v_{N-1}{{\rangle}}$ a Waterman point. Hence ${{\langle}}v_1,\ldots,v_{N-1}{{\rangle}}$ is singular and, by induction, every maximal cone in $MBS({{\langle}}v_1,\ldots,v_{N-1}{{\rangle}})$ has multiplicity less than the multiplicity of ${{\langle}}v_1,\ldots,v_{N-1}{{\rangle}}$. 
Then also every maximal cone in ${{\Bbb{R}}}_+ b_{\tilde{\sigma}}*MBS({{\langle}}v_1,\ldots,v_{N-1}{{\rangle}})$ has multiplicity less than the multiplicity of ${{\langle}}b_{\tilde{\sigma}},v_1,\ldots,v_{N-1}{{\rangle}}$. Families of surfaces and 3-folds. ================================= [**Proof of Theorem \[main-thm\].**]{} It is not difficult to subdivide $\Delta_X$ and $\Delta_B$ so that $\Delta_X$ is simplicial, $\Delta_B$ is nonsingular, and $f:\Delta_X{\rightarrow}\Delta_B$ is a simplicial map (e.g. Proposition 4.4 and the remark following it in [@ak]). Applying the join construction we can make $f|_{\Delta_{X,i}}$ semistable without increasing the multiplicity of $\Delta_X$. We will show below that every singular simplex of $\Delta_X$ contains a Waterman point mapping to a barycenter of $\Delta_B$. By the previous section, there exist a modified barycentric subdivision and a simplicial map $f':MBS(\Delta_X){\rightarrow}BS(\Delta_B)$, with multiplicity of $MBS(\Delta_X)$ strictly less than the multiplicity of $\Delta_X$. Since $f'$ is simplicial and $BS(\Delta_B)$ nonsingular, the proof is completed by induction. Restrict $f$ to a singular simplex $f:\sigma{\rightarrow}\tau$, where $\sigma$ has primitive points $v_{i j}, i=1,\ldots,n, j=1,\ldots,J_i$, $\tau$ has primitive points $u_1\ldots,u_n$, and $f(v_{i j})=u_i$. Since $\sigma$ is singular, it contains a Waterman point $$w=\sum_{i,j} \alpha_{i j} v_{i j}, \qquad 0\leq\alpha_{i j}<1,$$ where not all $\alpha_{i j}=0$. Restricting to a face of $\sigma$ if necessary we may assume that $w$ lies in the interior of $\sigma$, hence $0<\alpha_{i j}$. Since $f(w)\in N_\tau$, it follows that $\sum_j \alpha_{ij}\in{{\Bbb{Z}}}$ for all $i$. In particular, if $J_{i_0}=1$ for some $i_0$ then $\alpha_{i_0 1}=0$, and $w$ lies in a face of $\sigma$. So we may assume that $J_i>1$ for all $i$. 
Since the relative dimension of $f$ is $\sum_i (J_i-1)$, we have to consider all possible decompositions $\sum_i (J_i-1) \leq 3$, where $J_i>1$ for all $i$. The cases when the relative dimension of $f$ is 0 or 1 are trivial and left to the reader. If the relative dimension of $f$ is 2, then either $J_1=3$, or $J_1=J_2=2$. In the first case, we have that ${{\langle}}v_{11},v_{12},v_{13}{{\rangle}}$ is singular, contradicting the semistability of $f|_{\Delta_{X,1}}$. In the second case, $\alpha_{11}+\alpha_{12}, \alpha_{21}+\alpha_{22} \in {{\Bbb{Z}}}$ and $0< \alpha_{i j} < 1$ imply that $\alpha_{11}+\alpha_{12}= \alpha_{21}+\alpha_{22}=1$. Hence $f(w)=u_1+u_2$ is a barycenter. In relative dimension 3, either $J_1=4$, or $J_1=3,J_2=2$, or $J_1=J_2=J_3=2$. In the first case, we get a contradiction with the semistability of $f|_{\Delta_{X,1}}$; the third case gives $\alpha_{11}+\alpha_{12}= \alpha_{21}+\alpha_{22}=\alpha_{31}+\alpha_{32}=1$ as for relative dimension $2$. In the second case, either $\alpha_{11}+\alpha_{12}+\alpha_{13} = \alpha_{21}+\alpha_{22}=1$ and $w$ maps to a barycenter, or $\alpha_{11}+\alpha_{12}+\alpha_{13} = 2$, $\alpha_{21}+\alpha_{22}=1$ and $(\sum v_{i j})-w$ maps to a barycenter. [\ ]{} D. Abramovich and K. Karu, [*Weak semistable reduction in characteristic 0*]{}, preprint, alg-geom/9707012. W. Fulton, [*Introduction to Toric Varieties*]{}, Princeton University Press, 1993. G. Kempf, F. Knudsen, D. Mumford and B. Saint-Donat, [*Toroidal Embeddings I*]{}, Springer, LNM 339, 1973.
--- abstract: 'Theories of $(d,p)$ reactions frequently use a formalism based on a transition amplitude that is dominated by the components of the total three-body scattering wave function where the spatial separation between the incoming neutron and proton is confined by the range of the $n$-$p$ interaction, $V_{np}$. By comparison with calculations based on the continuum discretized coupled channels method we show that the $(d,p)$ transition amplitude is dominated by the first term of the expansion of the three-body wave function in a complete set of Weinberg states. We use the $(d,p)$ reaction at 30 and 100 MeV as examples of contemporary interest. The generality of this observed dominance and its implications for future theoretical developments are discussed.' author: - 'D. Y. Pang' - 'N. K. Timofeyuk' - 'R. C. Johnson' - 'J. A. Tostevin' title: Rapid convergence of the Weinberg expansion of the deuteron stripping amplitude --- Introduction ============ There is growing interest and activity in transfer reaction studies using radioactive beams, driven by increased secondary beam intensities and motivated by the search for new physics at the edge of nuclear stability [@Jonson-PR-2004; @Cat10] and by the need for low-energy reaction rates for astrophysical applications [@Tom07; @Mukhamedzhanov-PRC-2008]. The $(d,p)$ reaction, measured in inverse kinematics, is well suited for these purposes. It can provide spin-parity assignments for nuclear states, allow determination of spectroscopic strengths of single-particle configurations, and give asymptotic normalization coefficients in the tail of overlap functions. The reliability of this deduced nuclear structure information depends on the existence of a reaction theory that describes adequately the mechanism of the $(d,p)$ reaction. 
This paper uses a formulation of the $A(d,p)B$ reaction amplitude that emphasizes the components of the total neutron+proton+target scattering wave function where the spatial separations between the incoming neutron and proton are confined by the range of the $n$-$p$ interaction, $V_{np}$. These components contain both the bound and continuum states of the $n$-$p$ system. Since the $n$-$p$ binding energy in the deuteron is small and the optical potentials that generate the tidal break-up forces are smooth functions of position, the strength of inelastic excitations to the $n$-$p$ continuum is expected to be concentrated at low $n$-$p$ relative energies. This suggests that the coupling effects between different $n$-$p$ states can be treated adiabatically and leads to a simple prescription for calculating the scattering wave function at small $n$-$p$ separations [@Johnson-PRC-1970]. In the adiabatic model the $A(d,p)B$ transition amplitude has exactly the same structure as that of the distorted-wave Born approximation (DWBA), for which many computer codes are available; this has led to its widespread use [@Johnson-PRC-1970; @Harvey-PRC-71; @Wales-NPA-76; @Johnson-NPA-1974; @Jenny-PRL-2010; @Cat10]. The adiabatic model frequently provides significant improvements over the DWBA for $A(d,p)B$ angular distributions and gives consistent results for nuclear structure information [@Jenny-PRL-2010]. There are two key ingredients in the adiabatic model:\ (i) the assumption that only components of the three-body scattering wave function with small $n$-$p$ separation are needed for the $A(d,p)B$ transition amplitude and (ii) the validity of the adiabatic treatment of deuteron break-up at the nuclear surface. The primary purpose of this paper is to show that assumption (i) is justified for a useful range of reaction energies when it is implemented in terms of a precisely defined projection of the three-body scattering wave function. 
This projection will be shown to involve the first Weinberg state component of the full wave function. Investigations of how assumption (ii) influences the predicted $(d,p)$ cross sections were carried out using the quasi-adiabatic model [@Amakawa-PRC-1984; @Stephenson-PRC-1990], the Weinberg states expansion (WSE) method [@Johnson-NPA-1974; @Laid-PRC-1993], the continuum discretized coupled channels (CDCC) method, and also Faddeev equation methods [@Deltuva-PRC-2007; @Filomena-PRC-2012]. The importance of nonadiabatic effects has been found to depend on the target and incident energy and, in the worst cases, these affected both the shapes and the magnitudes of the calculated differential cross sections [@Laid-PRC-1993; @Filomena-PRC-2012]. There is therefore an important need to provide a practical way of introducing corrections to the adiabatic approximation. Our aim here is to provide a suitable definition of the projection of the full scattering wave function implied by assumption (i), which we call the first Weinberg projection. We show that this projection, which is a function of only a single vector coordinate, dominates the calculation of the $A(d,p)B$ transition amplitude. This result implies that to include effects beyond the adiabatic approximation one can focus on improvements to the calculation of this projection only. The CDCC method for solving the three-body problem does not use the adiabatic approximation (ii). From a practical point of view it is well adapted to the study of deuteron breakup effects on $A(d,p)B$ reactions. In principle in the CDCC method one attempts to calculate the three-body scattering wave function in the whole six-dimensional coordinate space of the neutron+proton+target ($n+p+A$) three-body system. Our approach is to compare calculations of the $(d,p)$ transition amplitude made using a complete CDCC wave function with calculations which retain only the first few Weinberg components of the full CDCC wave function. In Sec. 
II we describe how the projection procedure mentioned above is related to the Weinberg state and CDCC expansion methods and we connect these. In Sec. III we construct the Weinberg components using the CDCC wave functions and in Sec. IV we compare calculations of the $(d,p)$ transition amplitudes using the first few Weinberg components. We summarize our results in Sec. V. Three-body wave function and its expansion in the CDCC and Weinberg state bases {#sec-expansion} =============================================================================== In the absence of inelastic excitations of the target and residual nuclei $A$ and $B$ in the incident and outgoing channels, the transition amplitude of the $A(d,p)B$ reaction can be written as [@Johnson-PRC-1970] $$\label{exact} T_{dp} = \langle \chi_p^{(-)}I_{AB}|V_{np}|\Psi^{(+)}\rangle\,.$$ Here $\chi_p^{(-)}$ is the outgoing proton distorted wave (where we neglect certain $1/A$ corrections [@JT]), $I_{AB}$ is the overlap function between the wave functions of $A$ and $B$, $V_{np}$ is the neutron-proton interaction, and $\Psi^{(+)}$ is the projection of the full many-body wave function onto the three-body, $n+p+A$, channel with $A$ in its ground state. The effect of coupling to excited states of $A$ is implicitly taken into account through the use of complex nucleon optical potentials, but contributions from transitions that explicitly excite components of $A$ in the initial state and $B$ in the final state are ignored. We assume that $\Psi^{(+)}$ satisfies the Schrödinger equation $$\begin{aligned} \label{the-Schrodinger-eq} \left[E_d+i\epsilon -H_{np}-T_R-U_n({{\bm r}}_n)-U_p({{\bm r}}_p)\right] \Psi^{(+)}({{\bm r}},{{\bm R}})\nonumber\\ = i\epsilon\phi_d({{\bm r}})e^{i{{\bm K}}_d\cdot{{\bm R}}},&&\end{aligned}$$ where $H_{np}=T_r+V_{np}$ is the $n$-$p$ relative motion Hamiltonian. 
Here $E_d=E_\textrm{c.m.}-\epsilon_d$ where $\epsilon_d$ is the deuteron binding energy and $E_\textrm{c.m.}$ is the three-body energy in the center-of-mass system. $U_n$ and $U_p$ are the optical model potentials for the neutron and the proton with the target nucleus, respectively, and ${{\bm K}}_d$ is the wave number associated with $E_d$. The coordinates ${{\bm r}}_p$ and ${{\bm r}}_n$ are the proton and neutron coordinates with respect to the target $A$ while ${{\bm r}}={{\bm r}}_p- {{\bm r}}_n$ and ${{\bm R}}=\tfrac{1}{2}({{\bm r}}_n+{{\bm r}}_p)$ are the relative and c.m. coordinates of the $n$-$p$ pair. Also, $$T_r=-\frac{\hbar^2}{2\mu_{np}}\nabla_r^2\ \textrm{ and } \ T_R=-\frac{\hbar^2}{2\mu_{dA}}\nabla_R^2$$ are the kinetic energy operators associated with ${{\bm r}}$ and ${{\bm R}}$, with $\mu_{np}$ and $\mu_{dA}$ the reduced masses of the $n$-$p$ pair and the $n+p+A$ system, respectively. The right hand side of Eq. (\[the-Schrodinger-eq\]) specifies the incident boundary condition of a deuteron with initial wave function $\phi_d$ and the physical total wave function is to be calculated in the limit $\epsilon\rightarrow 0+$. The superscripts on $\chi_p^{(-)}$ and $\Psi^{(+)}$ indicate that they obey ingoing and outgoing waves boundary conditions, respectively. For simplicity, these superscripts are omitted in the following text. In the next two sections we describe two expansion schemes for the total wave function $\Psi({{\bm r}},{{\bm R}})$. 
The Weinberg states expansion ----------------------------- For $n$-$p$ separations ${{\bm r}}$ within the range of $V_{np}$, the wave function $\Psi({{\bm r}},{{\bm R}})$ has the expansion [@Johnson-NPA-1974; @Laid-PRC-1993] $$\label{eq-expansion-wse} \Psi({{\bm r}},{{\bm R}}) = \sum_i\phi_i^{W}({{\bm r}})\chi_i^{W}({{\bm R}}),$$ where the Weinberg states, $\phi_i^{W}$, are solutions of the equation $$\label{weinberg-states} [-\epsilon_d-T_r-\alpha_i V_{np}]\phi_i^{W}({{\bm r}})=0, \ \ i=1,2,\ldots$$ with fixed energy $-\epsilon_d$ and eigenvalues $\alpha_i$. For radii $r>r_i$, where $r_i$ is such that $\alpha_i V_{np}(r)$ is negligible, all of the Weinberg states decay exponentially, like the deuteron ground state wave function. For $r<r_i$, they oscillate with a wavelength that varies with $i$, becoming increasingly oscillatory with increasing $i$ (see the examples given in [@Laid-PRC-1993] for the case of a Hulthén form for $V_{np}$). The Weinberg states form a complete set of functions of $r$ for regions of the $r$ axis on which $V_{np}$ is non-vanishing. They are therefore well adapted to expanding $\Psi$ in this region. They do not satisfy the usual orthonormality relation but instead satisfy $$\langle\phi_i^{W}|V_{np}|\phi_j^{W}\rangle=-\delta_{ij}\,,\label{Worthog}$$ where the value $-1$ for $i=j$ has been chosen for convenience. This form of orthonormality, with a weight factor $V_{np}$, means that if one wishes to represent an arbitrary state $\varphi(r)$ as a linear superposition of Weinberg states then the unique choice of coefficients $a_i$ which minimizes the difference $$\Delta = \int d{{\bm r}}\,V_{np}\mid \varphi-\sum_i a_i \phi_i^{W}\mid^2\,, \label{diff}$$ is $$a_i= -\langle\phi_i^{W}|V_{np}|\varphi \rangle \label{ai}\,.$$ Use of a factor $V_{np}$ in Eq. 
(\[diff\]), which weights $r$ values according to $V_{np}$, provides a natural scheme for constructing the expansion coefficients for states of $n$-$p$ relative motion for use in the $(d,p)$ transition amplitude. The CDCC basis method --------------------- The CDCC method involves the expansion of $\Psi({{\bm r}},{{\bm R}})$ in terms of a complete set of $n$-$p$ continuum bin states $\phi_i^{bin}$ (see, e.g., Ref. [@Austern-PR-1987]), written $$\label{eq-expansion-cdcc} \Psi({{\bm r}}, {{\bm R}}) = \phi_d({{\bm r}})\chi_0({{\bm R}})+\sum_{i=1} \phi_i^{bin}({{\bm r}})\chi_i^{bin}({{\bm R}})\,.$$ The bin states are linear superpositions of continuum eigenfunctions of $H_{np}$, on chosen intervals $\Delta k_i$ of $n$-$p$ continuum wave numbers, and are orthogonal in the usual sense. So, the projection of the three-body Schrödinger equation of Eq. (\[the-Schrodinger-eq\]) onto this set of spatially-extended bin states leads to a set of coupled-channel equations for the channel wave functions $\chi_i^{bin}({{\bm R}})$. The coupling potentials, generated from the nucleon optical potentials, are long-ranged and link parts of the wave function from all $n$-$p$, $n$-$A$, and $p$-$A$ separations. These CDCC equations can be solved numerically and their convergence properties have been intensively studied. Connection between the CDCC and Weinberg basis wave functions ------------------------------------------------------------- It is known from experience with CDCC calculations that the energy range of $n$-$p$ continuum states that are coupled to the incident deuteron channel is limited to tens of MeV. Thus, we expect that inside the range of $V_{np}$ the wave function $\Psi({{\bm r}},{{\bm R}})$ will not be a strongly oscillatory function of $r$ and only a few terms of the Weinberg expansion will be needed to evaluate the $(d,p)$ matrix element. 
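The Weinberg basis itself is straightforward to generate numerically, which makes the statements above easy to test. The sketch below is entirely our own illustration (the value of $\hbar^2/2\mu_{np}$, the grid, and the use of the Hulthén parameters of Ref. [@Laid-PRC-1993] are assumptions): it discretizes the $s$-wave form of Eq. (\[weinberg-states\]) as a generalized symmetric eigenvalue problem with weight $-V_{np}$.

```python
import numpy as np

# Finite-difference sketch of the s-wave Weinberg (Sturmian) states for a
# Hulthen V_np. All numerical inputs (hbar^2/2mu, grid, Hulthen parameters)
# are illustrative assumptions, not the paper's actual numerics.
hbar2_2mu = 41.47              # hbar^2 / 2 mu_np in MeV fm^2 (assumed)
V0, beta = -84.86, 1.22        # V_np(r) = V0 / (e^{beta r} - 1)
N, rmax = 800, 20.0
r = np.linspace(rmax / N, rmax, N)
h = r[1] - r[0]
V = V0 / np.expm1(beta * r)    # attractive (negative) everywhere

# reduced radial kinetic energy -hbar^2/2mu d^2/dr^2 with u(0) = u(rmax) = 0
T = (-hbar2_2mu / h**2) * (np.diag(np.ones(N - 1), 1)
                           + np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N))

# deuteron binding energy for this V_np: ground state of T + V
eps_d = -np.linalg.eigh(T + np.diag(V))[0][0]

# Weinberg problem (T + eps_d) u = -alpha V u, symmetrized with B = diag(-V)
s = 1.0 / np.sqrt(-V)
alpha, y = np.linalg.eigh(s[:, None] * (T + eps_d * np.eye(N)) * s[None, :])
u = s[:, None] * y             # then <u_i|(-V)|u_j> = delta_ij on the grid

print(eps_d, alpha[:3])        # alpha_1 = 1 recovers the deuteron itself
```

By construction $\alpha_1=1$ and the first Weinberg state is the deuteron wave function itself; the eigensolver normalization reproduces the weighted orthonormality of Eq. (\[Worthog\]), and the higher states oscillate increasingly rapidly inside the range of $V_{np}$, so only the first few are relevant there.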
Note that this has nothing to do with the strength of the coupling between Weinberg components in $\Psi({{\bm r}},{{\bm R}})$ or how rapidly the Weinberg expansion for $\Psi({{\bm r}},{{\bm R}})$ itself converges, but rather with how rapidly the sequence of contributions to the $(d,p)$ amplitude from the different Weinberg components converges. We do not obtain the latter from a set of coupled equations, as, e.g., was done successfully in Ref. [@Laid-PRC-1993], but rather from the CDCC expansion of $\Psi({{\bm r}},{{\bm R}})$. The quantitative issues arising from a comparison with the approach of Ref. [@Laid-PRC-1993] will be addressed elsewhere. To connect the Weinberg and CDCC components of $\Psi({{\bm r}},{{\bm R}})$ we project $\Psi$, expressed in the CDCC basis, onto individual Weinberg states using the orthogonality property of Eq. (\[Worthog\]). The Weinberg distorted waves, $\chi_i^{W}$, and those of the CDCC basis, $\chi_j^{bin}$, are related using $$\label{projection} \chi_{i}^{W}({{\bm R}})=C_{i0}\chi_0({{\bm R}}) + \sum_{j=1}C_{ij}\chi_j^{bin}({{\bm R}}).$$ The transformation coefficients $C_{ij}$ are given by $$\begin{aligned} C_{i0}&=&-\langle\phi_i^{W}|V_{np}|\phi_d\rangle,\ \ (=0,\,\,i\neq 1)\,,\nonumber \\ C_{ij}&=&-\langle\phi_i^{W}|V_{np}|\phi_j^{bin}\rangle,\ \ (i,j=1,2,\ldots)\,. \label{Cij}\end{aligned}$$ These coefficients also appear in the formulas $$\begin{aligned} \mid \phi_j^{bin}\rangle=\sum_i C_{ij}\mid \phi_i^W\rangle\,,\label{Cij2}\end{aligned}$$ and $$\begin{aligned} -\int d{{\bm r}}\,V_{np} \mid\phi_j^{bin}(r)\mid^2=\sum_i\mid C_{ij}\mid^2 \label{Cij3}\end{aligned}$$ that quantify the contribution of each Weinberg state to a particular CDCC bin state, in the presence of the weight factor $V_{np}$. These $C_{ij}$ are determined entirely by the bound and scattering states of $V_{np}$ in the energy range of the relevant bin states. 
They do not depend on any other details of the reaction, such as the deuteron incident energy, the transferred angular momentum, or the structure of the target nuclei. The values of $C_{ij}$ do depend on how the CDCC bin states were constructed, the bin sizes $\Delta k_i$, etc.; however, we have checked that the changes in the computed $\chi_i^W$ are less than 0.1% with typical choices of bin sizes, such as $\Delta k_i \approx 0.1$-$0.15$ fm$^{-1}$. Throughout this work a Hulthén potential was used for $V_{np}$, namely, $$V_{np}(r)= V_0/(e^{\beta r}-1),$$ with parameters $V_0=-84.86$ MeV and $\beta=1.22$ fm$^{-1}$ [@Laid-PRC-1993]. Only $s$-wave continuum states were included. These give the largest contribution to $\Psi({{\bm r}}, {{\bm R}})$ at small $r$. In Fig. \[fig01\] we show the calculated $C_{ij}$ for $i\leq 5$ for bin states $\phi_j^{bin}$ obtained from CDCC calculations using the computer code <span style="font-variant:small-caps;">fresco</span> [@fresco]. The lower and upper horizontal axes show the $n$-$p$ continuum energies included in the CDCC and the label of the different bins, with $j=1,\ldots,14$, respectively. The point with $j=0$ shows the $C_{10}$ that connects with the deuteron ground state. Each line then corresponds to a different Weinberg state, $\phi^W_i$. For the $(d,p)$ reaction the most relevant continuum energies lie in the range 0 to 40 MeV. From Eq. (\[Cij2\]) and the $i$ dependence of the $C_{ij}$ for the lower energy (and $j$) bins in Fig. \[fig01\], we see that the bin states in the relevant energy range are dominated by the first Weinberg component with only small contributions from Weinberg states $i=2$-$5$. This dominance is particularly marked for the low-energy continuum, which is the most strongly coupled to the deuteron ground state by the break-up mechanism and which has the largest $\chi_i^{bin}({{\bm R}})$ in Eq. (\[eq-expansion-cdcc\]). 
At the higher continuum energies the bin states are mixtures of several Weinberg states, as was expected. In Eq. (\[projection\]), this dominance of the $i=1$ coefficients for low continuum energies will make $\chi_1^W$ the dominant Weinberg distorted wave provided the contributions from continuum bins with energies greater than of order 30 MeV are not large. In the next section we present the details of CDCC calculations and show that these qualitative observations are borne out quantitatively for typical $(d,p)$ reactions and energies. ![(Color online) CDCC bin-state to Weinberg state transformation coefficients $C_{ij}$, of Eq. (\[projection\]), for Weinberg states $i=1,2,\ldots,5$ and CDCC bin states $j=1,2,\ldots,14$. The deuteron ground state is denoted by $j=0$. The CDCC bins were calculated up to $n$-$p$ relative momenta $k_{max}=1.4$ fm$^{-1}$ in steps $\Delta k_i=0.1$ fm$^{-1}$. See Sec. \[sec-numerical-calc\] for full details.[]{data-label="fig01"}](fig01.eps){width="48.00000%"} Construction of the $\chi_i^{W}$ from the CDCC wave function {#sec-numerical-calc} ============================================================ In this section, as relevant topical examples, we construct the Weinberg distorted waves $\chi_i^W$ for the $^{132}$Sn($d,p)^{133}$Sn reaction at deuteron incident energies $E_d =$ 100 and 30 MeV at which the contributions from closed channels are negligible. Neutron-rich target nuclei, for inverse kinematics ($d,p)$ experiments at such energies per nucleon, are available at several modern radioactive ion beam facilities including RIKEN [@Kubo-RIPS], GANIL [@Villari-SPRIL], NSCL [@Morrissey-NPA-1997], FLNR at Dubna [@Rodin-ACCULINNA], and IMP at Lanzhou [@SunZY-RIBLL]. ![(Color online) Convergence of selected partial waves of the Weinberg components $\chi_i^{W}$ with respect to the maximum $n$-$p$ continuum energy included in Eq.(\[projection\]). 
Results are for (a) $\chi_1^{W}$ and $E_d= 100$ MeV, (b) $\chi_2^{W}$ and $E_d=100$ MeV, and (c) $\chi_1^{W}$ and $E_d=30$ MeV. The partial wave values, $L$, associated with each $\chi_i^{W}$ are indicated in each panel.[]{data-label="fig02"}](fig02.eps){width="48.00000%"} We solved the CDCC equations using nucleon optical potentials, $U_n$ and $U_p$, evaluated at half the incident deuteron energy, taken from the KD02 systematics [@kd02]. Only the central parts of these potentials were used. Both the nuclear and Coulomb potentials were used in constructing the coupling potentials. The continuum bin states $\phi^{bin}$ were computed by discretizing the $s$-wave $n$-$p$ continuum using $\Delta k_i$ of 0.1 and 0.05 fm$^{-1}$ up to $k_{max}=1.4$ and 0.75 fm$^{-1}$, corresponding to maximum continuum energies of 81.9 and 23.5 MeV, for $E_d = 100$ and 30 MeV, respectively. The coupled-channels CDCC equations were solved up to $R_{max} =100$ fm because of the long-range nature of the CDCC couplings [@Jeff-PRC-2001]. The CDCC calculations were performed using the computer code <span style="font-variant:small-caps;">fresco</span> [@fresco]. ![(Color online) Calculated Weinberg state distorted waves $\chi_i^{W}$ demonstrating the dominance of $\chi_1^{W}$. Curves compare the moduli of $\chi_1^{W}$, $\chi_2^{W}$ and $\chi_3^{W}$ for the $^{132}$Sn($d,p)^{133}$Sn reaction for (a) $E_d= 100$ MeV and partial wave $L=18$, and (b) $E_d= 30$ MeV and partial wave $L=12$.[]{data-label="fig03"}](fig03.eps){width="48.00000%"} The $\chi^W_i$ were constructed from Eq. (\[eq-expansion-cdcc\]) using the coefficients $C_{ij}$ discussed in the previous section. It was found that bins up to a maximum continuum energy of 25 MeV are sufficient for the convergence of $\chi_1^W$ for both deuteron incident energies. This is illustrated in Figs. 2(a) and 2(c), which show $\chi_1^W$ for partial waves with $L = 18$ and 12. 
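These dominant partial waves are consistent with a rough semiclassical grazing estimate $L \approx k_d R$. Everything in the sketch below (the radius parameter, nonrelativistic point-mass kinematics, and the neglect of the Coulomb deflection) is our own assumption, not part of the authors' analysis:

```python
import numpy as np

hbarc, amu = 197.327, 931.494            # MeV fm, MeV
m_d, m_A = 2.014 * amu, 131.918 * amu    # deuteron and 132Sn masses (approx.)
mu = m_d * m_A / (m_d + m_A)             # entrance-channel reduced mass, MeV

def grazing_L(E_lab, R=1.4 * 132 ** (1.0 / 3.0)):
    """Semiclassical grazing angular momentum k_d * R, Coulomb neglected."""
    E_cm = E_lab * m_A / (m_d + m_A)     # nonrelativistic c.m. energy
    k = np.sqrt(2.0 * mu * E_cm) / hbarc # entrance-channel wave number, fm^-1
    return k * R

print(grazing_L(100.0), grazing_L(30.0))  # roughly 22 and 12
```

The estimate reproduces the scale of the quoted values; including the Coulomb deflection would lower both numbers somewhat.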
Angular momenta $L$ near these values drive the dominant contributions to the $(d,p)$ reaction cross sections for $E_d=100$ and 30 MeV, respectively. Convergence of the $\chi_i^W$ with $i > 1$ was not achieved, as anticipated from the behavior of the coefficients $C_{ij}$ shown in Fig. \[fig01\]. We demonstrate this in Fig. 2(b) for $\chi^W_2$ and $E_d = 100$ MeV. As expected from the dependence of the $C_{ij}$ on $j$ for $i>1$, all $i>1$ Weinberg components are about two orders of magnitude smaller than $\chi_1^W$ in the most important radial region for the transfer amplitude, which is $R \approx 7$ fm in the present case (see Fig. \[fig03\]). ![(Color online) Comparisons of the calculated differential cross sections for the $(d,p)$ reaction at (a) 100 MeV and (b) 30 MeV, using Weinberg distorted wave components $\chi_1^{W}$, $\chi_2^{W}$, and $\chi_3^{W}$, showing the dominance of the first Weinberg component $\chi_1^{W}$; see the text for details.[]{data-label="fig04"}](fig04.eps){width="48.00000%"} The Weinberg distorted wave components $\chi_i^W$ constructed above contain contributions from all CDCC basis components within the range of $V_{np}$. However, since $\chi_1^W$ dominates all other Weinberg components, it is sufficient to perform one-channel transfer reaction calculations with only $\chi_1^W$ included. We call calculations truncated in this way DW$\chi_1$A (distorted wave with $\chi_1^W$ approximation). For this purpose, we read the calculated $\chi_i^W$ ($i=1,2,3$) into the computer code <span style="font-variant:small-caps;">twofnr</span> [@twofnr] and calculate the transfer amplitude within the zero-range approximation. We use the same KD02 optical potential systematics as used in the deuteron channel for the proton distorted waves in the outgoing channel.
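The continuum discretization quoted above is easy to reproduce numerically. The sketch below is our own reconstruction, not the authors' code; the nonrelativistic relation $E=(\hbar c\,k)^2/(2\mu c^2)$ and the constants used are assumptions on our part.

```python
# Hypothetical reconstruction of the CDCC momentum-bin grid (not the authors' code).
# Assumes nonrelativistic n-p relative kinetic energy E = (hbar*c*k)^2 / (2*mu*c^2).
HBARC = 197.327               # MeV fm
MN, MP = 939.565, 938.272     # neutron and proton masses, MeV
MU = MN * MP / (MN + MP)      # n-p reduced mass, MeV

def bin_energy(k):
    """Relative n-p kinetic energy (MeV) at momentum k (fm^-1)."""
    return (HBARC * k) ** 2 / (2.0 * MU)

def bin_upper_edges(k_max, dk):
    """Upper momentum edges of the bins [0, dk], [dk, 2*dk], ..., up to k_max."""
    n = round(k_max / dk)
    return [dk * (i + 1) for i in range(n)]

edges_100 = bin_upper_edges(1.4, 0.10)   # E_d = 100 MeV discretization: 14 bins
edges_30 = bin_upper_edges(0.75, 0.05)   # E_d = 30 MeV discretization
```

With these inputs, `bin_energy(1.4)` is close to 81 MeV and `bin_energy(0.75)` close to 23 MeV, consistent with the maximum continuum energies quoted in the text, and the $E_d=100$ MeV grid contains the 14 bin states of Fig. \[fig01\].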
In the model calculations presented, the neutron overlap function is approximated as a single-particle wave function (with $\ell=3$) calculated using a Woods-Saxon potential with standard radius and diffuseness parameters, $r_0=1.25$ fm and $a_0=0.65$ fm, and a depth fitted to the neutron separation energy of 2.47 MeV [@Audi-NPA-2003]. No spin-orbit potential was used for this wave function. The DW$\chi_i$A differential cross sections are shown in Fig. \[fig04\] for 100 and 30 MeV incident deuteron energies, where the differential cross sections corresponding to each of $\chi_{1,2,3}^W$ and their coherent sum are shown. It is evident from these figures that the addition of channels $\chi_2^W$ and $\chi_3^W$ does not influence the cross sections at the forward angles where the angular distributions are usually measured and are most valuable for spectroscopy. The $\chi_2^W$ and $\chi_3^W$ contributions are noticeable at large angles where the cross sections are small, but even there the changes are small. For comparison, the results of CDCC-ZR calculations, which include the contributions to transfer (in the zero-range approximation) from all of the CDCC continuum bins used to construct $\chi_i^W$, are also shown. As was expected, the cross sections from the CDCC-ZR calculation and from the coherent sums of the DW$\chi_i$A ($i=1,2,3$) amplitudes agree very well at both of the energies studied. Summary ======= Using as an example the $^{132}$Sn$(d,p)^{133}$Sn reaction at energies of 15 and 50 MeV/nucleon, typical of modern radioactive ion beam facilities, we have demonstrated that the dominant effects of deuteron breakup on calculations of $(d,p)$ reaction observables can be accommodated using a one-channel distorted-wave calculation. These calculations go well beyond the DWBA method in that no Born approximation step is involved.
This calculation requires knowledge of an effective deuteron distorted wave, namely the first component of the expansion of the $p+n+A$ scattering wave function $\Psi({{\bm r}}, {{\bm R}})$ in Weinberg states. This component accurately includes the breakup contributions from the small $n$-$p$ separations that dominate the $(d,p)$ reaction amplitude. It is defined as the projection of $\Psi({{\bm r}}, {{\bm R}})$ onto the transfer reaction vertex, i.e., $V_{np}\mid\phi_d\rangle$. Johnson and Tandy [@Johnson-NPA-1974] showed that, by neglecting couplings between components in the Weinberg expansion of the three-body wave function, one obtains a simple prescription for a potential that directly generates (an approximation to) the first Weinberg component. $(d,p)$ reaction calculations based on this approximation, known as the adiabatic distorted wave approximation (ADWA), have had some success in the analysis of data. Successful and more complete calculations, which include the couplings between the Weinberg components, have also been published [@Laid-PRC-1993]. We have shown here that there is a need to develop a simple procedure for correcting the ADWA, focusing specifically on calculating accurately only the first Weinberg component of the three-body scattering wave function $\Psi({{\bm r}},{{\bm R}})$. This would be especially important for incident energies of $3-10$ MeV per nucleon, typical of TRIUMF [@Bricault-TRIUMF], HRIBF at ORNL [@Beene-JPG-2011] (where the $^{132}$Sn($d,p$)$^{133}$Sn reaction has been measured [@Jones-nature-2010; @Jones-PRC-2011]), and ISOLDE [@Kester-ISOLDE], for which the influence of closed channels does not allow us to generate this component reliably using the scheme described above. Acknowledgments {#acknowledgments .unnumbered} =============== DYP appreciates the warm hospitality he received during his visits to the University of Surrey. This work is supported by the National Natural Science Foundation of China under Grants No.
11275018, No. 11021504, and No. 11035001, and a project sponsored by the Scientific Research Foundation for Returned Overseas Chinese Scholars, State Education Ministry. NKT, RCJ, and JAT gratefully acknowledge the support of the United Kingdom Science and Technology Facilities Council (STFC) through research Grant No. ST/J000051. [999]{} Björn Jonson, Phys. Rep. [**389**]{}, 1 (2004). W.N. Catford *et al.*, Phys. Rev. Lett. [**104**]{}, 192501 (2010). J.S. Thomas *et al.*, Phys. Rev. C [**76**]{}, 044302 (2007). A.M. Mukhamedzhanov, F.M. Nunes, and P. Mohr, Phys. Rev. C [**77**]{}, 051601(R) (2008). R.C. Johnson and P.J. Soper, Phys. Rev. C [**1**]{}, 976 (1970). J.D. Harvey and R.C. Johnson, , 636 (1971). G.L. Wales and R.C. Johnson, Nucl. Phys. [**A274**]{}, 168 (1976). R.C. Johnson and P.C. Tandy, Nucl. Phys. [**A235**]{}, 56 (1974). Jenny Lee, M.B. Tsang, D. Bazin, D. Coupland, V. Henzl, *et al.*, Phys. Rev. Lett. [**104**]{}, 112701 (2010). H. Amakawa, N. Austern, and C.M. Vincent, Phys. Rev. C [**29**]{}, 699 (1984). E.J. Stephenson, A.D. Bacher, G.P.A. Berg, V.R. Cupps, *et al.*, Phys. Rev. C [**42**]{}, 2562 (1990). A. Laid, J.A. Tostevin, and R.C. Johnson, Phys. Rev. C [**48**]{}, 1307 (1993). A. Deltuva, A.M. Moro, E. Cravo, F.M. Nunes, and A.C. Fonseca, Phys. Rev. C [**76**]{}, 064602 (2007). N.J. Upadhyay, A. Deltuva, and F.M. Nunes, Phys. Rev. C [**85**]{}, 054621 (2012). N.K. Timofeyuk and R.C. Johnson, Phys. Rev. C [**59**]{}, 1545 (1999). N. Austern, Y. Iseri, M. Kamimura, M. Kawai, G. Rawitscher, and M. Yahiro, Phys. Rep. [**154**]{}, 125 (1987). I.J. Thompson, Comp. Phys. Rep. [**7**]{}, 167 (1988). T. Kubo, M. Ishihara, N. Inabe, H. Kumagai, I. Tanihata, K. Yoshida, T. Nakamura, H. Okuno, S. Shimoura, and K. Asahi, Nucl. Instrum. Methods **B70**, 309 (1992). Antonio C.C. Villari, Nucl. Instrum. Methods **B204**, 31 (2003). D.J. Morrissey, Nucl. Phys. **A616**, 45 (1997).
A.M. Rodin, S.I. Sidorchuk, S.V. Stepantsov, G.M. Ter-Akopian, A.S. Fomichev, *et al.*, Nucl. Instrum. Methods **A391**, 228 (2003). Z. Sun, W.-L. Zhan, Z.-Y. Guo, G. Xiao, and J.-X. Li, Nucl. Instrum. Methods **A503**, 496 (2003). A.J. Koning and J.P. Delaroche, Nucl. Phys. [**A713**]{}, 231 (2003). J.A. Tostevin, F.M. Nunes, and I.J. Thompson, Phys. Rev. C [**63**]{}, 024617 (2001). J.A. Tostevin, University of Surrey version of the code TWOFNR (of M. Toyama, M. Igarashi and N. Kishida), http://www.nucleartheory.net/NPG/code.htm G. Audi, A.H. Wapstra, and C. Thibault, Nucl. Phys. **A729**, 337 (2003). P.G. Bricault, M. Dombsky, P.W. Schmor, and G. Stanford, Nucl. Instrum. Methods **B126**, 231 (1997). J.R. Beene *et al.*, J. Phys. G: Nucl. Part. Phys. [**38**]{}, 024002 (2011). K.L. Jones *et al.*, Nature (London) [**465**]{}, 454 (2010). K.L. Jones *et al.*, Phys. Rev. C [**84**]{}, 034601 (2011). O. Kester, T. Sieber, S. Emhofer, F. Ames, K. Reisinger, *et al.*, Nucl. Instrum. Methods **B204**, 20 (2003).
--- abstract: 'We introduce and study the turnpike property for time-varying shapes, from the viewpoint of optimal control. We focus here on second-order linear parabolic equations where the shape acts as a source term and we seek the optimal time-varying shape that minimizes a quadratic criterion. We first establish existence of optimal solutions under some appropriate sufficient conditions. We then provide necessary conditions for optimality in terms of adjoint equations and, using the concept of strict dissipativity, we prove that state and adjoint satisfy the measure-turnpike property, meaning that the extremal time-varying solution remains essentially close to the optimal solution of an associated static problem. We show that the optimal shape enjoys the exponential turnpike property in terms of the Hausdorff distance for a Mayer quadratic cost. We illustrate the turnpike phenomenon in optimal shape design with several numerical simulations.' address: - 'Sorbonne Université, CNRS, Université de Paris, Inria, Laboratoire Jacques-Louis Lions (LJLL), F-75005 Paris, France.' - 'Chair in Applied Analysis, Alexander von Humboldt-Professorship, Department of Mathematics Friedrich-Alexander-Universität, Erlangen-Nürnberg, 91058 Erlangen, Germany; Chair of Computational Mathematics, Fundación Deusto Av. de las Universidades 24, 48007 Bilbao, Basque Country, Spain; Departamento de Matemáticas, Universidad Autónoma de Madrid, 28049 Madrid, Spain.'
author: - Gontran Lance - Emmanuel Trélat - Enrique Zuazua bibliography: - 'bibliography.bib' title: Turnpike in optimal shape design --- \[thm\][Corollary]{} \[thm\][Lemma]{} \[thm\][Claim]{} \[thm\][Axiom]{} \[thm\][Conjecture]{} \[thm\][Fact]{} \[thm\][Hypothesis]{} \[thm\][Assumption]{} \[thm\][Proposition]{} \[thm\][Criterion]{} \[thm\][Definition]{} \[thm\][Example]{} \[thm\][Remark]{} \[thm\][Problem]{} \[thm\][Principle]{} optimal shape design, turnpike, strict dissipativity, direct methods, parabolic equation Introduction {#sec:intro} ============ We start with an informal presentation of the turnpike phenomenon for general dynamical optimal shape problems. Let $T>0$. We consider the problem of determining a time-varying shape $t \mapsto \omega(t)$ (viewed as a control, as in [@MR3350723]) minimizing the cost functional $$J_T(\omega) = \frac{1}{T}\int_0^T f^0\big(y(t),\omega(t)\big) \, dt + g\big(y(T),\omega(T)\big) \label{shapemin}$$ under the constraints $$\dot{y}(t) = f\big(y(t),\omega(t)\big), \qquad R\big(y(0),y(T)\big) = 0 \label{pde}$$ where (\[pde\]) may be a partial differential equation with various terminal and boundary conditions. We associate to the dynamical problem (\[shapemin\])-(\[pde\]) a *static* problem, not depending on time, $$\displaystyle{\min_{\omega} f^0(y,\omega)}, \quad f(y,\omega) = 0 \label{shape_static}$$ i.e., the problem of minimizing the instantaneous cost under the constraint of being an equilibrium of the control dynamics. According to the well-known turnpike phenomenon, one expects that, for $T$ large enough, optimal solutions of (\[shapemin\])-(\[pde\]) remain most of the time “close” to an optimal (stationary) solution of the static problem (\[shape\_static\]). The turnpike phenomenon was first observed and investigated by economists for discrete-time optimal control problems (see [@turnpikefirst; @10.2307/1910955]). There are several possible notions of turnpike properties, some of them stronger than others (see [@MR3362209]).
*Exponential turnpike* properties have been established in [@GruneSchallerSchiela; @MR3124890; @MR3616131; @TrelatZhangZuazua; @MR3271298] for the optimal triple resulting from the application of Pontryagin’s maximum principle, ensuring that the extremal solution (state, adjoint and control) remains exponentially close to an optimal solution of the corresponding static controlled problem, except at the beginning and at the end of the time interval, as soon as $T$ is large enough. This follows from hyperbolicity properties of the Hamiltonian flow. For discrete-time problems it has been shown in [@MR3217211; @MR3654613; @MR3782393; @MR3470445; @measureturnpikeTZ] that exponential turnpike is closely related to strict dissipativity. *Measure-turnpike* is a weaker notion of turnpike, meaning that any optimal solution, along the time frame, remains close to an optimal static solution except along a subset of times of small Lebesgue measure. It has been proved in [@MR3654613; @measureturnpikeTZ] that measure-turnpike follows from strict dissipativity or from strong duality properties. Applications of the turnpike property in practice are numerous. Indeed, the knowledge of a static optimal solution is a way to reduce significantly the complexity of the dynamical optimal control problem. For instance it has been shown in [@MR3271298] that the turnpike property gives a way to successfully initialize direct or indirect (shooting) methods in numerical optimal control, by initializing them with the optimal solution of the static problem. In shape design, despite technological progress, it is easier to design pieces which do not evolve with time. The turnpike property can legitimate such design choices for large-time evolving systems.
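Before specializing to shapes, the phenomenon can be observed on a toy example. The following sketch is our illustration, not taken from the paper: the scalar integrator dynamics and the weights are hypothetical. It solves the discrete-time tracking problem $\min_u \sum_{k=1}^{T}(y_k-y_d)^2+\varepsilon\sum_j u_j^2$ subject to $y_{k+1}=y_k+u_k$; the associated static problem forces $u=0$ at equilibrium, so its optimal solution is $\bar y=y_d$.

```python
# Toy turnpike illustration (our example, not from the paper): scalar tracking
# problem  min sum_{k=1}^{T} (y_k - yd)^2 + eps * sum_j u_j^2,  y_{k+1} = y_k + u_k.
# The cost is quadratic in u = (u_0, ..., u_{T-1}), so the optimum solves H u = -c.

def solve_tracking(T, y0=0.0, yd=1.0, eps=0.05):
    """Return the optimal state trajectory [y_0, ..., y_T]."""
    d = y0 - yd
    # Expanding the cost: coefficient of u_j*u_l is (T - max(j, l)), plus eps
    # on the diagonal; the linear term has coefficient 2*d*(T - j).
    H = [[2.0 * (T - max(i, j)) + (2.0 * eps if i == j else 0.0) for j in range(T)]
         for i in range(T)]
    c = [2.0 * d * (T - j) for j in range(T)]
    # Solve H u = -c by Gaussian elimination with partial pivoting.
    A = [H[i][:] + [-c[i]] for i in range(T)]
    for col in range(T):
        piv = max(range(col, T), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, T):
            f = A[r][col] / A[col][col]
            for k in range(col, T + 1):
                A[r][k] -= f * A[col][k]
    u = [0.0] * T
    for i in range(T - 1, -1, -1):
        u[i] = (A[i][T] - sum(A[i][j] * u[j] for j in range(i + 1, T))) / A[i][i]
    # Integrate the dynamics with the optimal controls.
    y = [y0]
    for k in range(T):
        y.append(y[-1] + u[k])
    return y

trajectory = solve_tracking(60)
```

For $T=60$ the trajectory jumps to the turnpike $\bar y = 1$ within a few steps and remains there for essentially the whole horizon, which is the behavior that the results below establish for the shape problem.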
Shape turnpike for the heat equation {#sec:Shape Turnpike and heat equation} ==================================== Throughout the paper, we denote by: - $|Q|$ the Lebesgue measure of a measurable subset $Q \subset \mathbf{R}^d$, $d\geq 1$; - $\big( p,q \big)$ the scalar product of $p,q \in L^2(\Omega)$; - $\Vert y \Vert$ the $L^2$-norm of $y\in L^2(\Omega)$; - $\chi_{\omega}$ the indicator (or characteristic) function of $\omega \subset \mathbf{R}^d$; - $d_{\omega}$ the distance function to the set $\omega \subset \mathbf{R}^d$ and $b_{\omega} = d_{\omega} - d_{\omega^c}$ the *oriented distance function*. Let $\Omega \subset \mathbf{R}^d$ ($d \geq 1$) be an open bounded Lipschitz domain. We consider a uniformly elliptic second-order operator $$Au=-\sum_{i,j=1}^d \partial_{x_j}\big(a_{ij}(x)\partial_{x_i}u\big)+\sum_{i=1}^d b_{i}(x)\partial_{x_i}u+c(x)u$$ with $a_{ij},b_i \in C^1(\Omega)$, $c\in L^{\infty}(\Omega)$ with $c\geq 0$, and its adjoint $$A^*v=-\sum_{i,j=1}^d\partial_{x_i}\left(a_{ij}(x)\partial_{x_j}v\right)-\sum_{i=1}^db_{i}(x)\partial_{x_i}v+\left(c-\sum_{i=1}^d\partial_{x_i}b_i\right)v$$ (which is also uniformly elliptic, see [@MR2597943 Definition Chapter 6]), not depending on $t$ and with a constant of ellipticity $\theta>0$ (for $A$ written in *nondivergence form*), i.e.: $$\sum_{i,j=1}^d a_{ij}(x) \xi_i\xi_j \geq \theta \vert \xi \vert^2 \qquad \forall x \in \Omega,\ \forall \xi \in \mathbf{R}^d.$$ Moreover, $\theta$ is such that $$\label{ineq_theta} \theta > \theta_1$$ where $\theta_1$ is the largest root of the polynomial $P = \frac{X^2}{4\min(1,C_p)} - \Vert c \Vert_{L^{\infty}(\Omega)} X - \frac{\sum_{i=1}^d\Vert b_i\Vert_{L^{\infty}(\Omega)}}{2}$ with $C_p$ the Poincaré constant on $\Omega$. This assumption is used to ensure that an energy inequality is satisfied with constants not depending on the final time $T$ (see \[sec\_app\] for details). Moreover, we assume throughout that $A$ satisfies the classical maximum principle (see [@MR2597943 sec.
6.4]) and that $c^*=c-\sum_{i=1}^d\partial_{x_i}b_i \in C^2(\Omega)$. We define $A_D$ as the differential operator $A$ defined on the domain $D(A)$ encoding Dirichlet conditions $y_{\vert\partial\Omega}=0$ (when $\Omega$ is $C^2$ or a convex polytope in $\mathbf{R}^2$, we have $D(A)=H^1_0(\Omega)\cap H^2(\Omega)$). Let $(\lambda_j, \phi_j)_{j \in \mathbf{N}^*}$ be the eigenelements of $A_D$ with $(\phi_j)_{j\in\mathbf{N}^*}$ an orthonormal eigenbasis of $L^2(\Omega)$: - $ \forall j \in \mathbf{N}^{*},\qquad A \phi_{j} = \lambda_{j}\phi_{j}, \qquad \phi_{j_{\vert\partial\Omega}}=0$ - $ \forall j \in \mathbf{N}^{*},\, j>1, \qquad \lambda_{1}< \lambda_{j} \leq \lambda_{j+1}, \qquad \lambda_{j}\rightarrow +\infty$ A typical example satisfying all the assumptions above is the Dirichlet Laplacian, which we will consider in our numerical simulations. We recall that the Hausdorff distance between two compact subsets $K_1, K_2$ of $\mathbf{R}^d$ is defined by $$d_{\mathcal{H}}(K_1,K_2) = \sup\Big(\sup_{x\in K_2} d_{K_1}(x),\sup_{x\in K_1} d_{K_2}(x) \Big) .$$ Setting ------- Let $L \in (0,1)$. We define the set of admissible shapes $$\mathcal{U}_L = \{\omega \subset \Omega \mbox{ measurable } \mid \, \vert \omega \vert \leq L \vert \Omega \vert \}$$ #### Dynamical optimal shape design problem [$(\mathbf{OSD})_{\mathbf{T}}$]{} Let $y_{0} \in L^{2}(\Omega)$ and let $\gamma_1 \geq 0, \gamma_2 \geq 0$ be arbitrary.
We consider the parabolic equation controlled by a (measurable) time-varying map $t\mapsto\omega(t)$ of subdomains $$\partial_t y + A y = \chi_{\omega(\cdot)}, \qquad y_{\vert \partial \Omega}=0, \qquad y(0) = y_{0} \label{heat}$$ Given $T>0$ and $y_d \in L^{2}(\Omega)$, we consider the dynamical optimal shape design problem [$(\mathbf{OSD})_{\mathbf{T}}$]{} of determining a measurable path of shapes $t\mapsto \omega(t)\in \mathcal{U}_L$ that minimizes the cost functional $$J_{T}(\omega(\cdot)) = \frac{\gamma_1}{2T}\int_{0}^{T}\Vert y(t)-y_{d}\Vert^{2}\,dt + \frac{\gamma_2}{2}\,\Vert y(T) - y_d\Vert^2 \label{cost}$$ where $y=y(t,x)$ is the solution of (\[heat\]) corresponding to $\omega(\cdot)$. #### Static optimal shape design problem [$(\mathbf{SSD})$]{} In addition, for the same target function $y_d \in L^2(\Omega)$, we consider the following associated static shape design problem [$(\mathbf{SSD})$]{}: $$\displaystyle{\min_{\omega \in \mathcal{U}_L} \frac{\gamma_1}{2} \Vert y-y_{d}\Vert^{2}}, \quad A y =\chi_{\omega}, \quad y_{\vert \partial \Omega}=0 \label{static}$$ We are going to compare the solutions of [$(\mathbf{OSD})_{\mathbf{T}}$]{} and of [$(\mathbf{SSD})$]{} when $T$ is large. Preliminaries ------------- #### Convexification Given any measurable subset $\omega\subset\Omega$, we identify $\omega$ with its characteristic function $\chi_\omega\in L^\infty(\Omega;\{0,1\})$ and we identify $\mathcal{U}_L$ with a subset of $L^\infty(\Omega)$ (as in [@MR2745777; @MR3325779; @MR3500831]). Then, the convex closure of $\mathcal{U}_L$ in the $L^\infty$ weak star topology is $$\overline{\mathcal{U}}_L = \Big\{ a \in L^{\infty}\big(\Omega;[0,1]\big)\ \mid\ \int_{\Omega}a(x)\,dx \leq L\vert\Omega\vert \Big\}$$ which is also weak star compact.
We define the *convexified* (or *relaxed*) optimal control problem [$(\mathbf{ocp})_{\mathbf{T}}$]{} of determining a control $t\mapsto a(t)\in \overline{\mathcal{U}}_L$ minimizing the cost $$J_T(a)= \frac{\gamma_1}{2T}\int_{0}^{T}\Vert y(t)-y_{d}\Vert^{2}\,dt + \frac{\gamma_2}{2}\,\Vert y(T) - y_d\Vert^2$$ under the constraints $$\qquad \quad \, \, \partial_t y +A y = a, \qquad y_{\vert \partial \Omega}=0, \qquad y(0) = y_{0} \label{heat_convex}$$ The corresponding convexified static optimization problem [$(\mathbf{sop})$]{} is $$\min_{a \in \overline{\mathcal{U}}_L} \frac{\gamma_1}{2} \Vert y-y_{d}\Vert^{2}, \qquad A y = a, \qquad y_{\vert \partial \Omega}=0 \label{static_convex}$$ Note that the control $a$ does not appear in the cost functionals of the above convexified control problems. Therefore the resulting optimal control problems are affine with respect to $a$. Once we have proved that optimal solutions $a$ do exist, we expect that any minimizer will be an extremal point of the compact convex set $\overline{\mathcal{U}}_L$, and the set of extremal points of $\overline{\mathcal{U}}_L$ is exactly $\mathcal{U}_L$: if this is true, then actually $a=\chi_\omega$ with $\omega\in\mathcal{U}_L$. Here, as is usual in shape optimization, the interest of passing to the convexified problem is to allow us to derive optimality conditions, and thus to characterize the optimal solution. It is, however, not always the case that the minimizer $a$ of the convexified problem is an extremal point of $\overline{\mathcal{U}}_L$ (i.e., a characteristic function): in this case, we speak of a *relaxation phenomenon*. Our analysis hereafter follows these guidelines. Taking a minimizing sequence and applying classical arguments of functional analysis (see, e.g., [@MR0271512]), it is straightforward to prove existence of solutions $a_T$ and $\bar{a}$ respectively of [$(\mathbf{ocp})_{\mathbf{T}}$]{} and of [$(\mathbf{sop})$]{} (see details in Section \[sec31\]).
It can be noted that, when $a_T(\cdot) = \chi_{\omega_T}(\cdot)$ and $\bar a=\chi_{\bar\omega}$ with $\omega_T(t)\in\mathcal{U}_L$ (for $a.e.\ t\in[0,T]$) and $\bar\omega\in\mathcal{U}_L$, i.e., when $a_T(t)$ (for $a.e.\ t\in[0,T]$) and $\bar a$ are characteristic functions of some subsets, then actually, $t\mapsto\omega_T(t)$ and $\bar\omega$ are optimal shapes, solutions respectively of [$(\mathbf{OSD})_{\mathbf{T}}$]{} and of [$(\mathbf{SSD})$]{}. Our next task is to apply necessary optimality conditions to optimal solutions of the convexified problems, and infer from these necessary conditions that, under appropriate assumptions, the optimal controls are indeed characteristic functions. #### Necessary optimality conditions for [$(\mathbf{ocp})_{\mathbf{T}}$]{} According to the Pontryagin maximum principle (see [@MR0271512 Chapter 3, Theorem 2.1], see also [@MR1312364]), for any optimal solution $(y_T,a_T)$ of [$(\mathbf{ocp})_{\mathbf{T}}$]{} there exists an adjoint state $p_T \in L^{2}\big(0,T;L^{2}(\Omega)\big)$ such that $$\begin{aligned} \partial_t y_{T} + A y_{T} = a_{T},~ y_{T_{\vert \partial \Omega}}=0,~ y_{T}(0) = y_{0} \label{OCocpstate} \\[0.3cm] \hspace{-0.5cm}\partial_t p_{T} -A^* p_{T} = \gamma_1(y_T\!-\!y_d),~ p_{T_{\vert \partial \Omega}}\!=\!0, ~p_{T}(T) \!=\!
\gamma_2 \big(y_T(T)\!-\!y_d\big) \label{OCocpadjoint} \\[0.3cm] \forall a \in \overline{\mathcal{U}}_L, \textrm{for a.e.}\ t \in [0,T],\quad \big(p_{T}(t),a_{T}(t)-a\big) \geq 0 \label{optim}\end{aligned}$$ #### Necessary optimality conditions for [$(\mathbf{sop})$]{} Similarly, applying [@MR0271512 Chapter 2, Theorem 1.4], for any optimal solution $(\bar y,\bar a)$ of [$(\mathbf{sop})$]{} there exists an adjoint state $\bar{p} \in L^{2}(\Omega)$ such that $$\begin{aligned} \begin{array}{rcl} \hspace{1cm}A \bar{y} = \bar{a},&~& \bar{y}_{\vert \partial \Omega}=0 \\[0.2cm] \hspace{1cm}-A^* \bar{p} = \gamma_1(\bar{y}-y_d),&~& \bar{p}_{\vert \partial \Omega}=0 \end{array} \label{OCsop} \\ \forall a \in \overline{\mathcal{U}}_L \qquad (\bar{p},\bar{a} - a) \geq 0 \label{optimstat}\end{aligned}$$ Using the bathtub principle (see, e.g., [@MR1817225 Theorem 1.14]), (\[optim\]) and (\[optimstat\]) give $$\begin{aligned} \hspace{1cm}a_T(\cdot) &=& \chi_{\{p_T(\cdot) > s_T(\cdot)\}} + c_T(\cdot)\chi_{\{p_T(\cdot) = s_T(\cdot)\}} \label{optimchi} \\ \hspace{1cm}\bar{a} &=& \chi_{\{\bar{p} > \bar{s}\}} + \bar{c}\chi_{\{\bar{p} = \bar{s}\}} \label{optimstatchi}\end{aligned}$$ with $$\begin{aligned} \hspace{-0.5cm}a.e. \, t \in [0,T], \, &c_T(t)& \in L^{\infty}(\Omega;[0,1]) \mbox{ and } \bar{c} \in L^{\infty}(\Omega;[0,1]) \\ &s_T(\cdot)& = \inf\big\{\sigma\in\mathbf{R}\ \mid\ \vert \{p_T(\cdot)>\sigma\} \vert \leq L\vert \Omega \vert \big\} \\ &\bar{s}& = \inf\big\{\sigma\in\mathbf{R}\ \mid\ \vert\{\bar{p}>\sigma\}\vert \leq L\vert \Omega \vert \big\} \end{aligned}$$ Note that, if $\vert \big\{\bar{p} = \bar{s}\big\} \vert = 0$, then it follows from (\[optimstatchi\]) that the static optimal control $\bar a$ is actually the characteristic function of a shape $\bar{\omega} \in \mathcal{U}_L$ and hence in that case we have existence of an optimal shape. 
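On a uniform grid, where Lebesgue measures become cell counts, the bathtub formulas above reduce to a sorting argument. The sketch below is our own discrete illustration of this reduction: it computes the threshold $\bar s$ and the corresponding bang-bang control from grid samples of the adjoint state.

```python
# Discrete analogue of the bathtub principle (our illustration): given grid
# samples p of the adjoint state and a volume fraction L, find the smallest
# level s with #{p > s} <= L * N, and the bang-bang control chi_{p > s}.

def bathtub_threshold(p, L):
    """Smallest s such that the number of cells with p > s is at most L * len(p)."""
    n = len(p)
    budget = int(L * n + 1e-12)            # admissible cell count (guarded floor)
    if budget >= n:
        return -float("inf")               # volume constraint never active
    srt = sorted(p, reverse=True)
    return srt[budget]                     # the (budget+1)-th largest value

def optimal_indicator(p, L):
    """Characteristic function of {p > s}; cells with p == s would carry the
    fractional density c of the continuous statement and are dropped here."""
    s = bathtub_threshold(p, L)
    return [1 if v > s else 0 for v in p]
```

For instance, with `p = [0.9, 0.1, 0.5, 0.7, 0.3, 0.2, 0.8, 0.4, 0.6, 0.0]` and `L = 0.3`, the threshold is `0.6` and the control occupies exactly the three cells where the adjoint exceeds it.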
Main results ------------ #### Existence of optimal shapes Proving existence of optimal shapes, solutions of [$(\mathbf{OSD})_{\mathbf{T}}$]{} and of [$(\mathbf{SSD})$]{}, is not an easy task. Cases in which no optimal shape exists for a variant of [$(\mathbf{SSD})$]{} can be found in [@henrot2005variation Sec. 4.2, Example 2]: this is the relaxation phenomenon. Therefore, some assumptions are required on the target function $y_d$ to establish existence of optimal shapes. We define: - $y^{T,0} \mbox{ and } y^{T,1}$, the solutions of (\[heat\_convex\]) corresponding respectively to $a(\cdot)=0$ and to $a(\cdot)=1$; - $y^{s,0} \mbox{ and } y^{s,1}$, the solutions of (\[static\_convex\]) corresponding respectively to $a=0$ and to $a=1$; - $\displaystyle{y^0 = \min \Big(y^{s,0}, \min_{t \in (0,T)} y^{T,0}(t)\Big)}$ and $ \displaystyle{y^1 = \max \Big(y^{s,1},\max_{t \in (0,T)} y^{T,1}(t)\Big)}$. We distinguish between Lagrange and Mayer cases. 1. $\gamma_1=0, \gamma_2=1$ (Mayer case): If $A$ is analytic hypoelliptic in $\Omega$ then there exists a unique optimal shape $\omega_T$, solution of [$(\mathbf{OSD})_{\mathbf{T}}$]{}. 2. $\gamma_1=1, \gamma_2=0$ (Lagrange case): Assuming that $y_0 \in D(A)$ and that $y_d \in H^2(\Omega)$: (i) If $y_d<y^0$ or $y_d>y^1$ then there exist unique optimal shapes $\bar\omega$ and $\omega_T$, respectively, of [$(\mathbf{SSD})$]{} and of [$(\mathbf{OSD})_{\mathbf{T}}$]{}. (ii) There exists a function $\beta$ such that if $A y_d \leq \beta$, then there exists a unique optimal shape $\bar \omega$, solution of [$(\mathbf{SSD})$]{}. \[existencethm\] We recall that $A$ is said to be analytic hypoelliptic in the open set $\Omega$ if any solution of $Au=v$ with $v$ analytic in $\Omega$ is also analytic in $\Omega$. Analytic hypoellipticity is satisfied for the second-order elliptic operator $A$ as soon as its coefficients are analytic in $\Omega$ (for instance, this is the case for the Dirichlet Laplacian, without any further assumption).
This result implies uniqueness of the optimal shapes. We deduce that the corresponding optimal states and adjoints are unique as well. In what follows, we denote by - $(y_T,p_T,\omega_T)$ the optimal triple of [$(\mathbf{OSD})_{\mathbf{T}}$]{} and $$\displaystyle{J_T = \frac{\gamma_1}{2T}\int_{0}^{T}\Vert y_T(t)-y_{d}\Vert^{2}\,dt}+\frac{\gamma_2}{2}\Vert y_T(T)-y_{d}\Vert^{2};$$ - $(\bar{y},\bar{p},\bar{\omega})$ the optimal triple of [$(\mathbf{SSD})$]{} and $\bar{J} = \frac{\gamma_1}{2}\Vert \bar{y}-y_{d}\Vert^{2}.$ #### Integral turnpike in the Lagrange case For $\gamma_1=1, \gamma_2=0$ (Lagrange case), there exists $M>0$ such that $$\int_0^T \big( \Vert y_T(t)-\bar{y} \Vert^2 + \Vert p_T(t)-\bar{p} \Vert^2 \big) \,dt \leq M\qquad \forall T>0.$$ \[integralturnpikethm\] #### Measure-turnpike in the Lagrange case \[defmeasureturnpike\] We say that $(y_T,p_T)$ satisfies the *state-adjoint measure-turnpike property* if for every $\varepsilon > 0$ there exists $\Lambda(\varepsilon)>0$, independent of $T$, such that $$\vert P_{\varepsilon,T} \vert < \Lambda(\varepsilon) \qquad \forall T >0$$ where $P_{\varepsilon,T} = \big\{t \in [0,T]\ \mid\ \Vert y_T(t)-\bar{y} \Vert+\Vert p_T(t)-\bar{p} \Vert>\varepsilon \big\} $. We refer to [@MR3155340; @MR3654613; @measureturnpikeTZ] (and references therein) for similar definitions. Here, $P_{\varepsilon,T}$ is the set of times along which the time-optimal state-adjoint pair $\big(y_T,p_T\big)$ remains outside of an $\varepsilon$-neighborhood of the static optimal state-adjoint pair $(\bar{y},\bar{p})$ in the $L^2$ topology. We next recall the notion of dissipativity (see [@MR0527462]).
We say that [$(\mathbf{OSD})_{\mathbf{T}}$]{} is *strictly dissipative* at an optimal stationary point $(\bar{y},\bar{\omega})$ of (\[static\]) with respect to the *supply rate function* $$w(y,\omega) = \Vert y-y_d \Vert^2 - \Vert \bar{y}-y_d \Vert^2$$ if there exist a *storage function* $S:E\rightarrow \mathbf{R}$, locally bounded and bounded below, and a *$\mathcal{K}$-class function* $\alpha(\cdot)$ such that, for any $T >0$ and any $0<\tau<T$, the strict dissipation inequality $$S(y(\tau)) + \int_0^\tau \alpha(\Vert y(t) - \bar{y} \Vert )\,dt < S(y(0)) + \int_0^\tau w\big(y(t),\omega(t)\big)\,dt \label{ineqDISSIP}$$ is satisfied for any pair $\big(y(\cdot), \omega(\cdot)\big)$ solution of (\[heat\]). \[definitiondissipativity\] For $\gamma_1=1, \gamma_2=0$ (Lagrange case): (i) [$(\mathbf{OSD})_{\mathbf{T}}$]{} is strictly dissipative in the sense of Definition \[definitiondissipativity\]. (ii) The state-adjoint pair $(y_T,p_T)$ satisfies the measure-turnpike property. \[measureturnpikethm\] #### Exponential turnpike The exponential turnpike property is a stronger property and can be satisfied by the state, by the adjoint, by the control, or even by all three together.
For $\gamma_1=0, \gamma_2=1$ (Mayer case): For $\Omega$ with $C^2$ boundary and $c=0$ there exist $T_0>0$, $M>0$ and $\mu>0$ such that, for every $T\geq T_0$, $$d_{\mathcal{H}}\big(\omega_T(t),\bar{\omega}\big) \leq M e^{-\mu(T-t)} \qquad \forall t \in (0,T).$$ \[turnpikeexpothm\] In the Lagrange case, based on the numerical simulations presented in Section \[sec:Numerical simulations\], we conjecture the exponential turnpike property, i.e., given optimal triples $(y_T,p_T,\chi_{\omega_T})$ and $(\bar{y}, \bar{p}, \bar{\omega})$, there exist $C_1>0$ and $C_2>0$ independent of $T$ such that $$\Vert y_T(t)-\bar{y} \Vert +\Vert p_T(t)-\bar{p} \Vert+\Vert \chi_{\omega_T(t)}-\chi_{\bar{\omega}} \Vert \leq C_1 \Big(e^{-C_2 t} + e^{-C_2 (T-t)} \Big)$$ for a.e. $t \in [0,T]$. Proofs ====== Proof of Theorem \[existencethm\] {#sec31} --------------------------------- We first show existence of an optimal control, solution of [$(\mathbf{ocp})_{\mathbf{T}}$]{} and similarly of [$(\mathbf{sop})$]{}. We first see that the infimum exists. We take a minimizing sequence $(y_{n},a_{n}) \in L^{2}(0,T;H^{1}_{0}(\Omega)) \times L^{\infty}(0,T;\overline{\mathcal{U}}_L)$ such that, for every $n \in \mathbf{N}$, the pair $(y_{n},a_{n})$ satisfies (\[heat\_convex\]) and $J_T(a_n) \rightarrow J_T$. The sequence $(a_{n})$ is bounded in $L^{\infty}(0,T;L^{\infty}(\Omega))$, so, using the energy inequality, the sequence $(y_{n})$ is bounded in $L^{\infty}(0,T;L^{2}(\Omega)) \cap L^{2}(0,T;H^{1}_{0}(\Omega))$. We then show, using (\[heat\_convex\]), that the sequence $(\frac{\partial y_{n}}{\partial t})$ is bounded in $L^{2}(0,T;H^{-1}(\Omega))$.
We extract a subsequence, still denoted by $(y_{n},a_{n})$, such that one can find a pair $(y,a) \in L^{2}(0,T;H^{1}_{0}(\Omega)) \times L^{\infty}(0,T;\overline{\mathcal{U}}_L)$ with $$\begin{aligned} y_{n} &\rightharpoonup & y \,\,\,\qquad \text{weakly in }L^{2}(0,T;H^{1}_{0}(\Omega)) \\ \partial_t y_{n} &\rightharpoonup &\partial_t y \,\,\,\quad \text{weakly in } L^{2}(0,T;H^{-1}(\Omega))\\ a_{n} &\rightharpoonup &a \qquad \text{weakly * in } L^{\infty}(0,T;L^{\infty}(\Omega))\end{aligned}$$ We deduce that $$\begin{array}{rcl} \partial_t y_{n} +A y_{n} - a_{n} &\rightarrow & \partial_t y +A y - a \quad \text{in } \mathcal{D}'(\Omega) \\ y_{n}(0) &\rightharpoonup & y(0) \quad ~\text{weakly in } L^{2}(\Omega) \end{array} \label{admissiblepair}$$ Using (\[admissiblepair\]), we get that $(y,a)$ is a weak solution of (\[heat\_convex\]). The pair $(y,a)$ is then admissible. Since $H^{1}_{0}(\Omega)$ is compactly embedded in $L^{2}(\Omega)$ and by using the Aubin-Lions compactness lemma (see [@aubinlions]), we obtain $$y_{n} \rightarrow y \quad \text{strongly in }L^{2}(0,T;L^{2}(\Omega))$$ We then get, by weak lower semi-continuity of $J_T$ and of the volume constraint, and by the Fatou lemma, that $$J_T(a) \leq \lim \inf J_T(a_n) ~~ \mbox{and}~~\displaystyle{\int_{\Omega} a(t,x)\,dx \leq L\vert\Omega\vert}\quad \forall t \in (0,T)$$ hence $a$ is an optimal control for $(\mathbf{ocp})_{\mathbf{T}}$, which we rather denote by $a_{T}$ (and $\bar{a}$ for [$(\mathbf{sop})$]{}). We next proceed by proving existence of optimal shape designs. *1-* We take $\gamma_1=0, \gamma_2=1$ (Mayer case). We consider an optimal triple $(y_T,p_T,a_T)$ of [$(\mathbf{OSD})_{\mathbf{T}}$]{}. Then it satisfies (\[OCocpstate\]), (\[OCocpadjoint\]) and (\[optim\]). It follows from the properties of the parabolic equation and from the assumption of analytic hypoellipticity that $p_T$ is analytic on $(0,T) \times \Omega$ and that all level sets $\{ p_T(t) = \alpha \}$ have zero Lebesgue measure.
We conclude that the optimal control $a_T$ satisfying - is such that $$\textrm{for a.e.} \ t \in [0,T]\quad \exists s(t) \in \mathbf{R} \ \mid\ a_{T}(t,\cdot) = \chi_{\{p_{T}(t) > s(t)\}} \label{aoptimal}$$ i.e., $a_{T}(t)$ is a characteristic function. Hence, for a Mayer problem $(\mathbf{OSD})_{\mathbf{T}}$, existence of an optimal shape is proved. *2-(i)* In the case $\gamma_1=1, \gamma_2=0$ (Lagrange case), we give the proof for the static problem [$(\mathbf{SSD})$]{}. We suppose $y_d < y^0$ (we proceed similarly for $y_d > y^1$). Having in mind (\[OCsop\]) and (\[optimstatchi\]), we have $A \bar{y} = \bar{c} \mbox{ on } \{ \bar{p} = \bar{s} \}$. By contradiction, if $\bar{c} \leq 1 \mbox{ on } \{ \bar{p} = \bar{s} \}$, let us consider the solution $y^*$ of (\[static\_convex\]) with the control $a^*$, which is the same as $\bar{a}$ satisfying (\[optimstatchi\]) except that $\bar{c} = 0$ ($\bar{c} = 1$ if $y_d > y^1$) on $\{ \bar{p} = \bar{s} \}$. We then have $A(\bar{y}-y^*) \leq 0 $ (or $A(\bar{y}-y^*) \geq 0$ if $y_d > y^1$). By the maximum principle (see [@MR2597943 sec. 6.4]) and the homogeneous Dirichlet condition, the maximum (the minimum if $y_d > y^1$) of $\bar{y}-y^*$ is reached on the boundary, and hence $y_d \geq y^* \geq \bar{y}$ (or $y_d \leq y^* \leq \bar{y}$ if $y_d > y^1$). We deduce $\Vert y^* - y_d \Vert \leq \Vert \bar{y} - y_d \Vert$. This means that $a^*$ is an optimal control, and we conclude by uniqueness. A similar argument, based on the maximum principle for parabolic equations (see [@MR2597943 sec. 7.1.4]), gives existence of an optimal shape solution of [$(\mathbf{OSD})_{\mathbf{T}}$]{}. In view of proving the next part of the theorem, we first state a useful lemma inspired by [@MR3793605 Theorem 3.2] and by [@MR3409135 Theorem 6.3]. Given any $p \in [1,+\infty)$ and any $u \in W^{1,p}(\Omega)$ such that $\vert \{u = 0\} \vert > 0$, we have $\nabla u = 0$ $a.e.$ on $\{u = 0\}$.
\[derivativenullset\] A proof of a more general result can be found in [@MR3793605 Theorem 3.2]. For completeness, we give here a short argument. $Du$ denotes here the weak derivative of $u$. We first need to show that, for $u\in W^{1,p}(\Omega)$ and for a function $S \in C^1{(\mathbf{R})}$ with $S(0)=0$ for which there exists $M>0$ such that $\Vert S'\Vert_{L^{\infty}(\mathbf{R})}<M$, we have $S(u)\in W^{1,p}(\Omega)$ and $DS(u)=S'(u)Du$. By the Meyers-Serrin theorem, we get a sequence $u_n \in C^{\infty}(\Omega)\cap W^{1,p}(\Omega)$ such that $u_n\rightarrow u$ in $ W^{1,p}(\Omega)$ and, up to a subsequence, pointwise a.e. We first get that $\int_{\Omega} \vert S(u)\vert^p\,dx \leq \Vert S'\Vert^p_{{L^{\infty}(\mathbf{R})}}\Vert u\Vert^p_{L^p(\Omega)}$. Then $DS(u_n) = S'(u_n)Du_n$ by the classical chain rule. We write $$\begin{gathered} \int_{\Omega}\vert DS(u_n)-S'(u)Du\vert^p\,dx = \int_{\Omega}\vert S'(u_n)Du_n-S'(u)Du\vert^p\,dx \\ \leq 2^{p-1}\left(\int_{\Omega}\vert S'(u_n)(Du_n-Du)\vert^p\,dx + \int_{\Omega}\vert (S'(u_n)-S'(u))Du\vert^p\,dx\right) \\ \leq 2^{p-1}\left(\Vert S'\Vert^p_{L^{\infty}(\mathbf{R})}\Vert u_n-u\Vert^p_{W^{1,p}(\Omega)}+\int_{\Omega}\vert (S'(u_n)-S'(u))Du\vert^p\,dx\right)\end{gathered}$$ The first term tends to $0$ since $u_n\rightarrow u$ in $ W^{1,p}(\Omega)$. As regards the second term, we use that $\vert S'(u_n)-S'(u)\vert^p \rightarrow 0$ pointwise a.e. and $\vert S'(u_n)-S'(u)\vert^p \leq 2^p\Vert S'\Vert^p_{L^{\infty}(\mathbf{R})}$. By the Lebesgue dominated convergence theorem, $\int_{\Omega}\vert (S'(u_n)-S'(u))Du\vert^p\,dx \rightarrow 0$ and $DS(u) = S'(u)Du$. Then, we consider $u^+ = \max(u,0)$ and $u^- = \max(-u,0) = -\min(u,0)$. We define $$S_{\varepsilon}(s) = \left\{ \begin{array}{ll} (s^2+\varepsilon^2)^{\frac{1}{2}}-\varepsilon & \mbox{ if } s\geq 0 \\ 0 & \mbox{ else } \end{array} \right.$$ Note that $\Vert S_{\varepsilon}'\Vert_{L^{\infty}(\mathbf{R})}<1$. We deduce that $DS_{\varepsilon}(u)=S_{\varepsilon}'(u)Du$ for every $\varepsilon>0$.
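As a quick sanity check on the smoothing function just defined (an illustration, not part of the proof), the following minimal numpy sketch verifies numerically that $S_\varepsilon$ converges uniformly to the positive part $s \mapsto \max(s,0)$ at rate $\varepsilon$, and that its slope stays strictly below $1$:

```python
import numpy as np

def S_eps(s, eps):
    # Smoothed positive part: (s^2 + eps^2)^(1/2) - eps for s >= 0, 0 otherwise
    return np.where(s >= 0.0, np.sqrt(s ** 2 + eps ** 2) - eps, 0.0)

def S_eps_prime(s, eps):
    # Its derivative: s / sqrt(s^2 + eps^2) for s >= 0, 0 otherwise
    return np.where(s >= 0.0, s / np.sqrt(s ** 2 + eps ** 2), 0.0)

s = np.linspace(-2.0, 2.0, 401)
gaps, slopes = [], []
for eps in (1e-1, 1e-2, 1e-3):
    # |S_eps(s) - max(s, 0)| <= eps uniformly, so S_eps -> u^+ as eps -> 0+
    gaps.append(np.max(np.abs(S_eps(s, eps) - np.maximum(s, 0.0))) / eps)
    # The slope stays strictly below 1, as used in the lemma
    slopes.append(np.max(np.abs(S_eps_prime(s, eps))))
```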
For $\phi \in C^{\infty}_c(\Omega)$ we take the limit of $\int_{\Omega}S_{\varepsilon}(u)D\phi\,dx$ as $\varepsilon\rightarrow 0^+$ to get that $$Du^+=\left\{\begin{array}{ll} Du &\mbox{ on } \{u>0\} \\0 &\mbox{ on } \{u\leq 0\} \end{array} \right. \ \textrm{and}\ Du^-=\left\{\begin{array}{ll} 0 &\mbox{ on } \{u\geq 0\} \\-Du &\mbox{ on } \{u<0\} \end{array} \right.$$ Since $u = u^+-u^-$, we get $Du = 0$ on $\{u=0\}$. This lemma can be found in a weaker form in [@MR3409135 Theorem 6.3]. *2-(ii)* We assume that $A y_d \leq \beta$ in $\Omega$ with $\beta=\bar{s}Ac^*$. Having in mind (\[OCsop\]) and (\[optimstatchi\]), we assume by contradiction that $|\{\bar{p}=\bar{s}\}| > 0 $. By Lemma \[derivativenullset\] and since $A$ and $A^*$ are differential operators, we have $A^* \bar{p} = c^*\bar{s}$ on $\{\bar{p}=\bar{s}\}$. We infer that $A y_d-\bar{s}Ac^* = \bar{a} \in (0,1)$ on $\{\bar{p}=\bar{s}\}$, which contradicts $A y_d \leq \beta$. Hence $|\{\bar{p}=\bar{s}\}| = 0 $ and thus $\bar{a}=\chi_{\bar{\omega}}$ for some ${\bar{\omega}} \in \mathcal{U}_L$. Existence of a solution for [$(\mathbf{SSD})$]{} is proved. Uniqueness of $\bar a=\chi_{\bar\omega}$ and of $a_T=\chi_{\omega_T}$ comes from the fact that the cost functionals of [$(\mathbf{ocp})_{\mathbf{T}}$]{} and [$(\mathbf{sop})$]{} are strictly convex for any $(\gamma_1, \gamma_2)\neq(0,0)$. Uniqueness of $(\bar{y},\bar{p})$ follows by application of the Poincaré inequality, and uniqueness of $(y_T,p_T)$ follows from . Proof of Theorem \[integralturnpikethm\] ---------------------------------------- For $\gamma_1=1, \gamma_2=0$ (Lagrange case), the cost is $J_{T}(\omega) = \frac{1}{2T}\int_{0}^{T}\Vert y(t)-y_{d}\Vert^{2}\,dt$. We consider the triples $(y_T,p_T,\chi_{\omega_T})$ and $(\bar{y},\bar{p},\chi_{\bar{\omega}})$ satisfying the optimality conditions (\[OCocpstate\]), (\[OCocpadjoint\]) and (\[OCsop\]).
Since $\chi_{\omega_T}(t)$ is bounded in $L^{\infty}(\Omega)$ uniformly in $t \in [0,T]$, applying (\[gronwall\]) to $y_{T}$ and $p_{T}$ gives a constant $C>0$ depending only on $A, y_0, y_d, \Omega, L$ such that $$\forall T >0 \quad \Vert y_T(T) \Vert^{2} \leq C \quad \mbox{and} \quad \Vert p_T(0) \Vert^{2} \leq C$$ Setting $\tilde{y} = y_T-\bar{y},\tilde{p} =p_T-\bar{p},\tilde{a}=\chi_{\omega_T}-\chi_{\bar{\omega}}$, we have $$\begin{aligned} \partial_t \tilde{y} +A\tilde{y} = \tilde{a}, \quad \tilde{y}_{\vert \partial \Omega}&=&0, \quad \tilde{y}(0) = y_{0}-\bar{y} \label{optimedpy} \\ \partial_t \tilde{p} -A^* \tilde{p} = \tilde{y}, \quad \tilde{p}_{\vert \partial \Omega}&=&0, \quad \tilde{p}(T) = -\bar{p} \label{optimedpp}\end{aligned}$$ First, using (\[OCocpstate\]), (\[OCocpadjoint\]) and (\[OCsop\]), one has $\big(\tilde{p}(t),\tilde{a}(t)\big) \geq 0$ for almost every $t \in [0,T]$. Multiplying (\[optimedpy\]) by $\tilde{p}$ and (\[optimedpp\]) by $\tilde{y}$, then adding and integrating in time, we obtain $$\big(\bar{y}-y_{0},\tilde{p}(0)\big) - \big(\tilde{y}(T),\bar{p}\big) = \int_{0}^{T}\big(\tilde{p}(t),\tilde{a}(t)\big)\,dt + \int_{0}^{T}\Vert \tilde{y}(t) \Vert^{2} \,dt \label{ineq_yp}$$ By the Cauchy-Schwarz inequality we get a new constant $C>0$ such that $$\frac{1}{T}\int_{0}^{T}\Vert \tilde{y}(t) \Vert^{2} \,dt +\frac{1}{T} \int_{0}^{T} \big(\tilde{p}(t),\tilde{a}(t)\big)\,dt \leq \frac{C}{T}$$ The two terms on the left-hand side are nonnegative and, using the inequality (\[energy\]) with $\psi(t) = \tilde{p}(T-t)$, we finally obtain $$\frac{1}{T}\int_{0}^{T} \big(\Vert y_T(t) - \bar{y} \Vert^{2} + \Vert p_T(t) - \bar{p} \Vert^{2}\big) \,dt \leq \frac{M}{T} $$ Proof of Theorem \[measureturnpikethm\] --------------------------------------- \(i) Strict dissipativity is established thanks to the storage function $S(y) = \big(y,\bar{p}\big)$, where $\bar{p}$ is the optimal adjoint.
Since $\Vert y \Vert_{L^{\infty}((0,T)\times\Omega)} < M$, the storage function $S$ is locally bounded and bounded from below. Indeed, we consider an admissible pair $(y(\cdot),\chi_{\omega}(\cdot))$ satisfying (\[heat\]), multiply (\[heat\]) by $\bar{p}$ and integrate over $\Omega$. Then we integrate in time on $(0,T)$, use the optimality conditions (\[OCsop\]) of the static problem, and get the strict dissipation inequality (\[ineqDISSIP\]) with $\alpha(s)=s^2$: $$(\bar{p},y(\tau)) + \int_0^\tau \Vert y(t) - \bar{y} \Vert^2 \,dt < (\bar{p},y(0)) + \int_0^\tau w\big(y(t),\omega(t)\big)\,dt \label{dissipativityinequality}$$ \(ii) Now we prove that strict dissipativity implies measure-turnpike, by following an argument of [@measureturnpikeTZ]. Applying (\[dissipativityinequality\]) to the optimal solution $(y_T,\omega_T)$ at $\tau=T$, we get $$\frac{1}{T} \int_0^T \Vert y_T(t)-\bar{y} \Vert^2 \, dt \leq J_T-\bar{J} + \frac{(y(0)-y(T),\bar{p})}{T} \nonumber$$ Considering then the solution $y_s$ of (\[heat\]) with $\omega(\cdot) = \bar{\omega}$ and $J_s = {\frac{1}{T}\int_{0}^{T}\Vert y_s(t)-y_{d}\Vert^{2}\,dt}$, we have $J_T-J_s \leq 0$ and we show that $J_s-\bar{J} \leq \frac{1-e^{-CT}}{CT}$, whence $$\frac{1}{T} \int_0^T \Vert y_T(t)-\bar{y} \Vert^2 \, dt \leq \frac{M}{T} \label{dissipativityineqn}$$ Applying (\[energy\]) to $\psi(\cdot) = p_T(T-\cdot) - \bar{p}$, we get $$\begin{gathered} \frac{1}{2C} \int_0^T \Vert p_T(t)-\bar{p} \Vert^2\, dt \leq C \int_0^T \Vert y_T(t)-\bar{y} \Vert^2 \, dt \\ + \frac{\Vert p_T(0)-\bar{p} \Vert^2-\Vert p_T(T)-\bar{p} \Vert^2}{2} \end{gathered}$$ Using again the strict dissipativity (\[ineqDISSIP\]) we get $\frac{\varepsilon^2 \vert P_{\varepsilon,T} \vert}{T} \leq \frac{M}{T}$. Hence we can find a constant $M >0$ which does not depend on $T$ such that $\vert P_{\varepsilon,T} \vert \leq \frac{M}{\varepsilon^2}$. Proof of Theorem \[turnpikeexpothm\] ------------------------------------ We take $\gamma_1=0,\gamma_2=1$ (Mayer case).
We want to characterize optimal shapes as level sets of certain functions, as in [@dambrine:hal-02057510]. Let $(y_T,p_T,\chi_{\omega_T})$ be an optimal triple, coming from Theorem \[existencethm\]-(i). Then $\psi(t,x) = p_{T}(T-t,x)$ satisfies $$\partial_t \psi +A^* \psi = 0, \quad \psi_{\vert \partial \Omega}=0, \quad \psi(0) = y_{1}-y_{T}(T) \label{adjointretrograde}$$ We write $y_1-y_T(T)$ in the basis $(\phi_j)_{j \in \mathbf{N}^*}$. There exists $(a_j) \in \mathbf{R}^{\mathbf{N}^*}$ such that $ y_1 - y_T(T) = \sum_j a_j \phi_j $. We can solve and get $ p_T(t,x) = \sum_{j\geq 1} a_j \phi_j(x) e^{-\lambda_j(T-t)}$. By the maximum principle for parabolic equations, there exists $M>0$ such that, for every $T>0$ and every $t \in (0,T)$, the solution of satisfies $\Vert y_T(t) \Vert ^2 \leq M $. Hence $\vert a_j \vert^2 \leq M $. Let us consider the index $j_0 = \inf\{j \in \mathbf{N}^*, a_j \neq 0\}$. Take $\lambda = \lambda_{j_0}$ and $\mu = \lambda_k$, where $k$ is the first index for which $\lambda_k>\lambda$. We define $\displaystyle{\Phi_0 = \sum_{\lambda_j = \lambda_{j_0}} a_j\phi_j}$, which is a finite linear combination of the eigenfunctions associated to the eigenvalue $\lambda_{j_0}$.
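The spectral expansion above can be illustrated numerically. The following sketch (a toy 1D model chosen for illustration, not the proof) takes $\Omega=(0,1)$, $A=-\partial_{xx}$ with Dirichlet conditions, eigenpairs $\phi_j(x)=\sqrt{2}\sin(j\pi x)$, $\lambda_j=(j\pi)^2$, and arbitrary coefficients $a_j$, and checks that the adjoint is approximated by its dominant mode up to an error of size $e^{-\mu(T-t)}$ with $\mu=\lambda_2$:

```python
import numpy as np

# Toy 1D model: Omega = (0,1), A = -d^2/dx^2 with Dirichlet conditions,
# eigenpairs phi_j(x) = sqrt(2) sin(j pi x), lambda_j = (j pi)^2.
x = np.linspace(0.0, 1.0, 501)
dx = x[1] - x[0]
J, T = 30, 5.0
j = np.arange(1, J + 1)
lam = (j * np.pi) ** 2
phi = np.sqrt(2.0) * np.sin(np.outer(j, np.pi * x))
a = np.random.default_rng(0).uniform(-1.0, 1.0, J)  # coefficients of y_1 - y_T(T)

def p(t):
    # p_T(t, x) = sum_j a_j phi_j(x) exp(-lambda_j (T - t))
    return (a * np.exp(-lam * (T - t))) @ phi

def l2(v):
    # Discrete L^2(0,1) norm
    return np.sqrt(np.sum(v ** 2) * dx)

# Dominant-mode approximation: lambda = lambda_1, mu = lambda_2, Phi_0 = a_1 phi_1
ok = True
for t in (1.0, 3.0, 4.0, 4.5):
    err = l2(p(t) - a[0] * np.exp(-lam[0] * (T - t)) * phi[0])
    bound = np.sqrt(np.sum(a[1:] ** 2)) * np.exp(-lam[1] * (T - t))
    ok = ok and (err <= 1.01 * bound)
```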
Using the bathtub principle ([@MR1817225 Theorem 1.16]), we define the stationary shape $\omega_{0}$ and the constant $s_0$ such that $$\omega_{0} = \{\Phi_0 < s_{0}\},~ \chi_{\omega_{0}}\text{ solves: } \max_{u\in \mathcal{U}_L} \int \Phi_0(x)u(x)\,dx \label{solstatic}$$ Since $(\phi_j)_{j\in\mathbb{N}^*}$ is an orthonormal basis of $L^2(\Omega)$, we get $$\Vert p_T(t)-e^{-\lambda(T-t)}\Phi_0 \Vert_{L^{2}(\Omega)} \leq C\,e^{-\mu (T-t)} \quad \forall t \in [0,T]$$ Let us now write, for every $x \in \Omega$ and every $t \in [0,T]$, $$\begin{gathered} \vert p_T(t,x) - e^{-\lambda(T-t)}\Phi_0(x) \vert = \left\vert \sum_{j\geq k} a_j \phi_j(x) e^{-\lambda_j (T-t)} \right\vert \\ \leq \sum_{j\geq k} \left\vert a_j \phi_j(x) e^{-\lambda_j (T-t)} \right\vert\end{gathered}$$ By the Weyl law and sup-norm estimates for the eigenfunctions of $A$ (see [@MR3186367 Chapter 3]), there exists $\alpha \in (0,1)$ such that $\alpha \mu > \lambda$ and thus $$\vert p_T(t,x) - e^{-\lambda(T-t)}\Phi_0(x) \vert \leq e^{-\alpha\mu(T-t)} \sum_{j\geq k} M j^{\frac{N-1}{2N}} e^{-C j^{\frac{1}{N}}(T-t)}$$ where $M,C$ are positive constants not depending on $x$, $t$, $T$. Let $\varepsilon > 0$ be arbitrary. We claim that there exists $C_{\varepsilon}>0$ independent of $x$, $t$, $T$ such that, for every $ x \in \Omega$, $$\begin{aligned} \vert p_T(t,x) - e^{-\lambda(T-t)}\Phi_0(x) \vert &\leq C_{\varepsilon} e^{-\alpha\mu(T-t)}\quad \forall t \in (0,T-\varepsilon) \\[0.2cm] \vert p_T(t,x) - e^{-\lambda(T-t)}\Phi_0(x) \vert &\leq C_{\varepsilon} \qquad \forall t \in (T-\varepsilon,T)\end{aligned}$$ To conclude, we fix a value of $\varepsilon$ and, renaming $\alpha\mu$ as $\mu$ (still with $\mu>\lambda$), we get $$\Vert p_T(t)-e^{-\lambda(T-t)}\Phi_0 \Vert_{L^{\infty}(\Omega)} \leq C\,e^{-\mu (T-t)} \quad \forall t \in [0,T] \label{turnpikeadjoint}$$ with $C$ not depending on the final time $T$. This is an exponential turnpike property for the adjoint state.
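The bathtub principle invoked in (\[solstatic\]) can be illustrated on a grid: among densities $u$ with values in $[0,1]$ and a prescribed volume budget, an extremizer of a linear functional $\int \Phi_0 u$ is the indicator of a level set of $\Phi_0$. The minimal sketch below (written for the maximization form with a superlevel set and a synthetic profile $\Phi_0$; up to sign conventions this is the set appearing in (\[solstatic\])) compares the level-set choice against an arbitrary feasible density:

```python
import numpy as np

rng = np.random.default_rng(1)
n, budget = 1000, 250                  # budget plays the role of L|Omega|, in cells
Phi0 = np.sin(np.linspace(0.0, 3.0 * np.pi, n)) + 0.1 * rng.standard_normal(n)

# Level-set density: keep the 'budget' cells where Phi0 is largest
s0 = np.sort(Phi0)[-budget]            # threshold level
u_level = (Phi0 >= s0).astype(float)

# Any other feasible density with the same volume and values in [0,1]
u_other = rng.uniform(0.0, 1.0, n)
u_other *= budget / u_other.sum()

# The level-set density maximizes the linear functional sum_i Phi0_i u_i
better = Phi0 @ u_level >= Phi0 @ u_other
```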
Moreover we get from that $$\vert s(t) - e^{-\lambda(T-t)} s_0 \vert \leq C\,e^{-\mu (T-t)} \quad \forall t \in [0,T] \label{turnpikelevel}$$ We write $\Phi = \Phi_0-s_0$ and $\psi_0(t) = e^{-\lambda(T-t)}\Phi$ and, using with , we get $$\Vert \psi_T(t,\cdot)-\psi_0(t,\cdot)\Vert_{L^{\infty}(\Omega)} \leq C\,e^{-\mu (T-t)} \quad \forall t \in [0,T] \label{turnpikepsi}$$ We now follow arguments of [@dambrine:hal-02057510] to establish the exponential turnpike property for the control, and then for the state. We first remark that for all $t_1,t_2 \in [0,T]$, $\{\psi_0(t_1,\cdot) < 0\} = \{\psi_0(t_2,\cdot) < 0\} = \{ \Phi < 0 \} $. Then we take $t \in [0,T]$ and we compare the sets $\{\psi_0(t,\cdot) < 0\}, \{\psi_T(t,\cdot) < 0\} \mbox{ and } \{\psi_0(t,\cdot) + C e^{-\mu (T-t)} < 0\}$. Thanks to and we get, for every $ t \in [0,T]$, $$\begin{aligned} \hspace{-0.5cm} \{\Phi\! \leq\! -C e^{-(\mu-\lambda) (T-t)} \} \subset \{\psi_T(t,\cdot)\! \leq \!0\} \subset \{\Phi\! \leq\! C e^{-(\mu-\lambda) (T-t)}\} \\ \hspace{-0.5cm} \{\Phi\! \leq \!-C e^{-(\mu-\lambda) (T-t)} \} \subset \{\psi_0(t,\cdot) \!\leq\! 0\} \subset \{\Phi \!\leq\! C e^{-(\mu-\lambda) (T-t)}\}\end{aligned}$$ We infer from [@dambrine:hal-02057510 Lemma 2.3] that, for every $ t \in [0,T]$, $$\begin{gathered} \label{inegalitehausdorffdistance} d_{\mathcal{H}} \Big( \{\psi_T(t,\!\cdot) \leq 0\}, \{\Phi \leq 0\} \Big) \\ \leq d_{\mathcal{H}} \Big( \{\Phi\leq\!-C e^{-(\mu-\lambda) (T-t)}\}, \{\Phi \leq C e^{-(\mu-\lambda) (T-t)}\}\Big)\end{gathered}$$ Since $d_{\mathcal{H}}$ is a distance, we only have to estimate $d_{\mathcal{H}} \Big( \{\Phi\!\leq\!0\},\{\Phi\! \leq \pm C e^{-(\mu-\lambda) (T-t)}\}\Big)$. Let $f : \Omega \rightarrow \mathbf{R}$ be a continuously differentiable function and set $\Gamma = \{f=0\}$.
Under the assumption **(S)**: there exists $C>0$ such that $$\Vert \nabla f(x) \Vert \geq C \quad \forall x \in \Gamma,$$ there exist $\varepsilon_0>0$ and $C_f>0$ only depending on $f$ such that for any $\varepsilon \leq \varepsilon_0$ $$d_{\mathcal{H}}\big(\{f\leq 0\},\{f \leq \pm \varepsilon\}\big) \leq C_f \varepsilon .$$ \[propdambrine\] We consider $f$ satisfying *(S)*. We assume by contradiction that for every $\varepsilon >0$, there exists $x \in \{\vert f\vert \leq \varepsilon \}$ such that $\Vert \nabla f(x) \Vert < \frac{C}{2}$. We take $\varepsilon = \frac{1}{n}$ and extract a subsequence $(x_n)\rightarrow x \in \{\vert f\vert \leq 1 \}$ (which is compact). By continuity of $f$ and of $\Vert \nabla f\Vert$, we have $x \in \Gamma$ and $\Vert \nabla f(x) \Vert \leq \frac{C}{2}$, which contradicts *(S)*. Hence we find $\varepsilon_0>0$ such that $\Vert \nabla f(x) \Vert \geq \frac{C}{2}$ for every $x \in \{\vert f\vert \leq \varepsilon_0 \}$. We apply [@MR2592958 Corollary 4] (see also [@MR2592958 Theorem 2]) to get $$d_{\mathcal{H}}\big(\{f\leq 0\},\{f \leq \pm \varepsilon\}\big) \leq \frac{2}{C} \varepsilon$$ A more general statement can be found in [@MR2592958; @dambrine:hal-02057510]. We now check that $\Phi$ satisfies *(S)*, noting that $\Vert \nabla_x \psi_0(t,x) \Vert = e^{-\lambda(T-t)}\Vert \nabla_x \Phi(x) \Vert $ for $x \in \Omega$. We first remark that $\Phi_0$ satisfies $A \Phi_0 = \lambda_{j_0} \Phi_0$, $\Phi_{0_{\vert \Gamma}} = s_0$, and that the set $\Gamma=\{\Phi=0\}=\{\Phi_0=s_0\}$ is compact. Since $\Omega$ has a $C^2$ boundary and $c=0$, the Hopf lemma (see [@MR2597943 sec. 6.4]) gives $$x_0 \in \Gamma \implies \Vert \nabla_x \Phi (x_0) \Vert = \Vert \nabla_x \Phi_0 (x_0) \Vert > 0$$ Hence there exists $C_0>0$ not depending on $t$, $T$ such that $\Vert \nabla_x \Phi (x_0) \Vert \geq C_0 >0$ for every $x_0 \in \Gamma$. We take $\nu>0$ such that $e^{-\mu \nu} \leq \varepsilon_0$.
We remark that $e^{-\mu (T-t)} \leq \varepsilon_0$ for every $t \in (0,T-\nu)$, and we use Lemma \[propdambrine\] combined with to get that, for every $t \in (0,T-\nu)$, $$d_{\mathcal{H}} \Big( \{\psi_T(t,\!\cdot) \leq 0\}, \{\Phi \leq 0\} \Big) \leq C_0 e^{-(\mu-\lambda) (T-t)}$$ We enlarge the constant $C_0$ so that the estimate also holds on the remaining compact interval $(T-\nu,T)$, uniformly with respect to $T\geq T_0>0$, and get that, for every $t \in (0,T)$, $$d_{\mathcal{H}} \Big( \{\psi_T(t,\!\cdot) \leq 0\}, \{\Phi \leq 0\} \Big) \leq C_0 e^{-(\mu-\lambda) (T-t)}.$$ We therefore obtain an exponential turnpike property for the control in the sense of the Hausdorff distance: $$d_{\mathcal{H}} ( \omega(t), \omega_0 )\leq C_0 e^{-(\mu-\lambda) (T-t)} \quad \forall t \in [0,T] \label{turnpikeshape}$$ To establish the turnpike property on the state and the adjoint, we could use an argument similar to [@MR1745583 Theorem 1-(i)]: $\Vert \chi_{\omega(t)} - \chi_{\omega_0} \Vert \leq C d_{\mathcal{H}} ( \omega(t), \omega_0 )$. We follow [@MR1855817 Theorem 4.1-(ii)] and [@MR1855817 Theorem 5.1-(iii)(iv)] and we use the inequality $\Vert \chi_{\overline{A}_1} - \chi_{\overline{A}_2}\Vert \leq \Vert d_{A_1} - d_{A_2} \Vert_{W^{1,2}(\Omega)} \leq \Vert b_{A_1} - b_{A_2} \Vert_{W^{1,2}(\Omega)} = \Vert b_{A_1} - b_{A_2} \Vert + \Vert \nabla b_{A_1} - \nabla b_{A_2} \Vert$, where $\Vert \chi_{A_1} - \chi_{A_2} \Vert^2$ is the measure of the symmetric difference of the sets $A_1$ and $A_2$. Therefore, applying the energy inequality , we get $$\Vert y(t) - \bar{y} \Vert_{L^{2}(\Omega)} \leq C_0 e^{-\frac{(\mu+\lambda)}{2} (T-t)} \quad \forall t \in (0,T) \label{turnpikestate}$$ with $\bar{y}$ the solution of $A y = \chi_{\omega_0}, y_{\vert \partial \Omega} = 0$. Taking $\kappa = \frac{\mu+\lambda}{2} > 0$ and applying for the adjoint, we finally get the exponential turnpike property for the state, the adjoint and the control.
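Lemma \[propdambrine\] can also be checked numerically on a simple model. The sketch below (illustrative only) takes $f(x)=|x|-1$ on a grid in $\mathbf{R}^2$, for which $\Vert\nabla f\Vert = 1$ near $\Gamma=\{f=0\}$, and verifies that the Hausdorff distance between $\{f\leq 0\}$ and $\{f\leq\varepsilon\}$ grows at most linearly in $\varepsilon$, up to the grid resolution:

```python
import numpy as np

# Model case: f(x) = |x| - 1 on [-2,2]^2, so {f <= eps} is the disk of radius
# 1 + eps and |grad f| = 1 near Gamma = {f = 0}; the lemma predicts
# d_H({f <= 0}, {f <= eps}) <= (2/C) eps with C = 1.
n = 81
g = np.linspace(-2.0, 2.0, n)
h = g[1] - g[0]
X, Y = np.meshgrid(g, g)
pts = np.stack([X.ravel(), Y.ravel()], axis=1)
f = np.hypot(X, Y).ravel() - 1.0

def hausdorff(A, B):
    # Brute-force Hausdorff distance between two finite point clouds
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return max(np.sqrt(d2.min(axis=1)).max(), np.sqrt(d2.min(axis=0)).max())

S0 = pts[f <= 0.0]
linear = all(
    hausdorff(S0, pts[f <= eps]) <= 2.0 * eps + 2.0 * h
    for eps in (0.4, 0.2, 0.1)
)
```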
Numerical simulations: optimal shape design for the 2D heat equation {#sec:Numerical simulations} ==================================================================== We take $\Omega = [-1,1]^2$, $L = \frac{1}{8}$, $T\in\{1,\ldots,5\}$, $y_{d}=0.1$ (constant) and $y_0=0$. We focus on the heat equation and consider the minimization problem $$\displaystyle{\min_{\omega(.)} \int_{0}^{5}{\int_{[-1,1]^2} |y(t,x)-0.1|^{2}\, dx\, dt}}$$ under the constraints $$\partial_t y - \triangle y = \chi_{\omega}, \qquad y(0,\cdot) = 0,\qquad y_{\vert \partial \Omega}=0 $$ We compute a solution numerically by solving the equivalent convexified problem [$(\mathbf{ocp})_{\mathbf{T}}$]{} thanks to a direct method in optimal control (see [@MR2224013]). We discretize with an implicit Euler method in time and with a finite element mesh of $\Omega$ using `FREEFEM++` (see [@MR3043640]). We express the problem as a quadratic programming problem in finite dimension, and then use the routine `IpOpt` (see [@ipopt]) on a standard desktop machine. ![Time optimal shape’s evolution cylinder - $T=2$[]{data-label="fig:colonne"}](./colonne){width="1\linewidth"} We plot in Figure \[fig:colonne\] the evolution in time of the optimal shape $t\rightarrow\omega(t)$, which appears as a cylinder whose section at time $t$ represents the shape $\omega(t)$. At the beginning ($t=0$) we notice that the shape concentrates at the middle of $\Omega$, in order to warm the state up toward $y_d$ as quickly as possible. Once the state is close to $y_d$, the shape remains almost stationary for a long time. Finally, close to the final time, the shape moves to the boundary of $\Omega$ in order to flatten the state $y_T$, because $y_d$ is taken here as a constant. ![Shape at $t=\frac{T}{2}$[]{data-label="control-2"}](./astat) ![Shape at $t=\frac{T}{2}$[]{data-label="control-2"}](./a16) We plot in Figure \[fig:ex3\] the comparison between the optimal shape at several times (in red) and the optimal static shape (in yellow).
We see the same behavior when $t=\frac{T}{2}$. Now, in order to highlight the turnpike phenomenon, we plot the evolution in time of the distance between the optimal dynamic triple and the optimal static one, $t \mapsto \Vert y_T(t)-\bar{y} \Vert+\Vert p_T(t)-\bar{p} \Vert+\Vert \chi_{\omega_T(t)}-\chi_{\bar{\omega}} \Vert$. ![Error between time optimal triple and static one[]{data-label="expfig"}](./normel2multit.jpg) In Figure \[expfig\] we observe that this function is exponentially close to $0$. This behavior leads us to conjecture that the exponential turnpike property is satisfied. To complete this work, we need to clarify the existence of optimal shapes for [$(\mathbf{OSD})_{\mathbf{T}}$]{} when $y_d$ is convex. Figure \[fig:ex3\] numerically suggests the existence of time optimal shapes for $y_d$ convex on $\Omega$. Otherwise, we can sometimes observe a relaxation phenomenon due to the presence of $\bar{c}$ and $c_T(\cdot)$ in the optimality conditions (\[OCocpstate\]), (\[OCocpadjoint\]), (\[OCsop\]). We consider the same problem [$(\mathbf{ocp})_{\mathbf{T}}$]{} in 2D with $\Omega = [-1,1]^2$, $L = \frac{1}{8}, T = 5$ and the associated static problem [$(\mathbf{sop})$]{}. We take $y_d(x,y) = -\frac{1}{20}(x^2+y^2-2)$. \ ![Error between time optimal triple and static one (Relaxation case)[]{data-label="expfig_relax"}](./normel2multitrelax.jpg) In Figure \[fig:ex4\] we see that the optimal controls $(a_T,\bar{a})$ of [$(\mathbf{ocp})_{\mathbf{T}}$]{} and [$(\mathbf{sop})$]{} take values in $(0,1)$ in the middle of $\Omega$. This illustrates that relaxation occurs for some $y_d$; here, $y_d$ was chosen such that $-\triangle y_d \in (0,1)$. We have tuned the parameter $L$ to observe the relaxation phenomenon, but for the same $y_d$ and smaller $L$, optimal solutions are shapes. Despite the relaxation, we see in Figure \[expfig\_relax\] that the turnpike phenomenon still occurs.
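The direct approach used in these simulations can be sketched in one space dimension. The toy implementation below is illustrative only (arbitrary discretization parameters, finite differences instead of finite elements, a projected gradient method instead of `IpOpt`; the actual computations above use `FREEFEM++` in 2D): it discretizes the convexified problem with implicit Euler in time and optimizes the density $a(t,x)\in[0,1]$ under the volume constraint.

```python
import numpy as np

# 1D convexified problem: minimize (1/2T) int int (y - yd)^2 subject to
# dy/dt - y_xx = a, y(0) = 0, Dirichlet BC, 0 <= a <= 1, int a dx <= L|Omega|.
nx, nt = 49, 60
L_frac, T, yd = 0.25, 2.0, 0.1
x = np.linspace(-1.0, 1.0, nx + 2)[1:-1]          # interior nodes
h, dt = x[1] - x[0], T / nt
A = (np.diag(2.0 * np.ones(nx)) - np.diag(np.ones(nx - 1), 1)
     - np.diag(np.ones(nx - 1), -1)) / h ** 2     # discrete -d^2/dx^2
M = np.linalg.inv(np.eye(nx) + dt * A)            # implicit Euler propagator

def solve_state(a):
    y = np.zeros((nt + 1, nx))
    for k in range(nt):
        y[k + 1] = M @ (y[k] + dt * a[k])
    return y

def solve_adjoint(y):
    p = np.zeros((nt + 1, nx))                    # backward in time, p(T) = 0
    for k in range(nt, 0, -1):
        p[k - 1] = M @ (p[k] + dt * (y[k] - yd))
    return p

def project(a):
    # Clip to [0,1], then shift by a threshold (bisection) to meet the volume budget
    a = np.clip(a, 0.0, 1.0)
    vol_max = L_frac * 2.0                        # L |Omega| with |Omega| = 2
    for k in range(a.shape[0]):
        if np.sum(a[k]) * h > vol_max:
            lo, hi = 0.0, 1.0
            for _ in range(50):
                c = 0.5 * (lo + hi)
                if np.sum(np.clip(a[k] - c, 0.0, 1.0)) * h > vol_max:
                    lo = c
                else:
                    hi = c
            a[k] = np.clip(a[k] - hi, 0.0, 1.0)
    return a

a = np.full((nt, nx), L_frac)
for _ in range(300):                              # projected gradient iterations
    y = solve_state(a)
    p = solve_adjoint(y)
    a = project(a - 2.0 * p[:-1])                 # gradient of T*J w.r.t. a is p
cost = 0.5 * dt * h * np.sum((solve_state(a) - yd) ** 2) / T
```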
Further comments ================ Numerical simulations when $\triangle y_d>0$ lead us to conjecture existence of an optimal shape for [$(\mathbf{OSD})_{\mathbf{T}}$]{}, because we have not observed any relaxation phenomenon in that case. Existence might be proved thanks to arguments like maximal regularity properties and Hölder estimates for solutions of parabolic equations. Moreover, still based on our simulations and particularly on Figure \[expfig\], we conjecture the exponential turnpike property. The work presented here focuses on second-order parabolic equations and particularly on the heat equation. Concerning the Mayer case, we have used in our arguments the Weyl law, sup-norm estimates of eigenelements (see [@MR3186367]) and analyticity of solutions (analytic hypoellipticity). Nevertheless, concerning the Lagrange case and having in mind [@MR3616131; @measureturnpikeTZ], it seems reasonable to extend our results to general local parabolic operators which satisfy an energy inequality and the maximum principle, so as to ensure existence of solutions. However, some results like Theorem \[existencethm\].2-(ii) should be adapted. Moreover, we consider a linear partial differential equation, which gives uniqueness of the solution thanks to the strict convexity of the criterion. On the contrary, if we do not have uniqueness, as in [@measureturnpikeTZ], the notion of measure-turnpike seems to be a flexible way to obtain turnpike results. To go further with the numerical simulations, our objective will be to find optimal shapes evolving in time, solving dynamical shape design problems for more difficult real-life partial differential equations which play a role, for example, in fluid mechanics. The recent literature contains some articles on the optimization of a wavemaker (see [@dalphinwave; @doi:10.1093/imamat/hxu051]). It is natural to wonder what can happen when considering a wavemaker whose shape can evolve in time.
Energy inequality {#sec_app} ================= We recall some useful inequalities used to study existence and turnpike. Since $\theta$ satisfies , we can find $\beta > 0, \gamma \geq 0$ such that $\beta \geq \gamma$ and $$\label{ineqenergyellip} (Au,u)\geq\beta\Vert u\Vert^2_{H^1_0(\Omega)} - \gamma\Vert u\Vert^2_{L^2(\Omega)}$$ From this follows the energy inequality (see [@MR2597943 Chapter 7, Theorem 2]): there exists $C>0$ such that, for any solution $y$ of (\[heat\_convex\]) and almost every $t\in[0,T]$, $$\Vert y(t) \Vert^{2} + \int_{0}^{t} \Vert y(s) \Vert_{H_0^1(\Omega)}^{2} \,ds\leq C\left(\Vert y_{0} \Vert^{2} + \int_{0}^{t}\Vert a(s) \Vert^{2} \, ds\right) \label{energy}$$ We can improve this inequality by using the Poincaré inequality and the Gronwall lemma, obtaining $C_1,C_2>0$ such that, for almost every $t\in[0,T]$, $$\Vert y(t) \Vert^{2} \leq\ C_1\left(\Vert y_{0} \Vert^{2}e^{-\frac{t}{C_2}} + \int_{0}^{t}e^{-\frac{t-s}{C_2}}\Vert a(s) \Vert^{2} \, ds\right) \label{gronwall}$$ The constants $C,C_1,C_2$ depend only on the domain $\Omega$ (Poincaré inequality) and on the operator $A$, and not on the final time $T$, since is satisfied with $\beta \geq \gamma$. Acknowledgment. {#acknowledgment. .unnumbered} =============== This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 694126-DyCon), the Alexander von Humboldt-Professorship program, the Grants Finite4SoS ANR-15-CE23-0007-01 and ICON-ANR-16-ACHN-0014 of the French ANR, the Air Force Office of Scientific Research under Award NO: FA9550-18-1-0242, Grant MTM2017-92996-C2-1-R COSNET of MINECO (Spain), the ELKARTEK project KK-2018/00083 ROAD2DC of the Basque Government, and the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement 765579-ConFlex.
--- abstract: | Head pose estimation, which computes the intrinsic Euler angles (yaw, pitch, roll) of the human head, is crucial for gaze estimation, face alignment and 3D reconstruction. Traditional approaches rely heavily on the accuracy of facial landmarks. This limits their performance, especially when the face is poorly visible. In this paper, to perform the estimation without facial landmarks, we combine the coarse and fine outputs of a deep network. Utilizing more quantization units for the angles, a fine classifier is trained with the help of auxiliary coarse units. Integral regression is adopted to get the final prediction. The proposed approach is evaluated on three challenging benchmarks. It achieves the state of the art on AFLW2000 and BIWI, and performs favorably on AFLW. Code has been released on Github. [^1] address: 'School of Information Science and Technology, Dalian Maritime University$^{1}$, Horizon Robotics$^{2}$' bibliography: - 'r1.bib' - 'r2.bib' - 'r3.bib' - 'r4.bib' - 'r5.bib' - 'r6.bib' - 'r7.bib' - 'r8.bib' - 'r9.bib' - 'r10.bib' - 'r11.bib' - 'r12.bib' - 'r13.bib' - 'r14.bib' - 'r15.bib' - 'r16.bib' - 'r17.bib' - 'r18.bib' - 'r19.bib' - 'r20.bib' - 'r21.bib' - 'r22.bib' - 'r23.bib' - 'r24.bib' - 'r25.bib' - 'r26.bib' title: 'Hybrid coarse-fine classification for head pose estimation' --- coarse-fine classification, head pose estimation, 3D facial understanding, image analysis. Introduction {#sec:intro} ============ Facial expression recognition is one of the most successful applications of convolutional neural networks in the past few years. Recently, more and more attention has been paid to 3D facial understanding. Most existing methods for 3D understanding require extracting 2D facial landmarks.
Establishing a correspondence between 2D landmarks and a standardized 3D head model, 3D pose estimation of the head can be viewed as a by-product of the 3D understanding. While facial landmark detection has been improved by a large margin thanks to deep neural networks, the two-step head pose estimation may suffer from two extra sources of error. For example, poor facial landmark detection in bad visual conditions harms the precision of angle estimation. Also, the precision of the ad-hoc fitting of the 3D head model affects the accuracy of the pose estimation. ![Example pose estimation using our method. The blue axis points towards the front of the face, green pointing downward and red pointing to the side.](example.jpg) Recently, a fine-grained head pose estimation method [@ruiz2017fine] without landmarks has been proposed. It predicts head pose Euler angles directly from the image using a multi-loss network, where the three angles are trained together and each angle loss consists of two parts: a bin classification component and a regression component. The classification and regression components are connected through multi-loss training. However, they do not deal with the quantization errors brought by coarse bin classification. In our proposed method, we impose a stricter restriction on bin classification, in order to get a better regression result. Based on our observation, the classification converges much faster than the regression, which weakens the usefulness of the multi-loss training scheme. But a directly refined bin classification may counteract the benefit of problem reduction. Therefore, we introduce a hybrid coarse-fine classification framework, which proves not only helpful for refined bin classification but also improves the performance of the prediction. The proposed network is shown in Figure 2. The main contributions of our work are summarized as follows: $\bullet$ Use stricter fine bin classification to reduce the error brought by coarse bin classification.
$\bullet$ Propose our hybrid coarse-fine classification scheme to make better refined classification. $\bullet$ State-of-the-art performance for head pose estimation using a CNN-based method on the AFLW2000 and BIWI datasets, and close the gap with the state of the art on AFLW. Related work {#sec:format} ============ Head pose estimation has been widely studied and diverse traditional approaches have been proposed, including Appearance Template Models [@huang1998face], Detector Arrays [@osuna1997training] and Manifold Embedding [@balasubramanian2007biased]. More recently, approaches to head pose estimation have adopted deep neural networks and can be divided into two camps: landmark-based and landmark-free. Landmark-based methods utilize facial landmarks to fit a standard 3D face. 3DDFA [@zhu2016face] directly fits a 3D face model to the RGB image via convolutional neural networks and aligns facial landmarks using a dense 3D model; the 3D head pose is produced in the 3D fitting process. The SolvePnP tool [@gao2003complete] also produces the head pose in an analogous way. However, this kind of method usually uses a mean 3D human face model, which introduces an intrinsic error during the fitting process. Another recent work, by Aryaman et al. [@gupta2018nose], achieves great performance on public datasets. They propose to use a higher-level representation to regress the head pose within deep learning architectures. They use uncertainty maps, in the form of 2D soft localization heatmap images over five selected facial landmarks, and pass them through a convolutional neural network as input channels to regress the head pose. However, this approach still cannot avoid the problem of landmark invisibility, even though it uses coarse locations, especially considering that it involves only five landmarks, which makes it fragile when landmarks are invisible. Landmark-free methods treat head pose estimation as a sub-problem of a multi-task learning process. M.
Patacchiola [@patacchiola2017head] proposes a shallow network to estimate head pose and provides a detailed analysis on the AFLW dataset. KEPLER [@kumar2017kepler] uses a modified GoogleNet and adopts multi-task learning to learn facial landmarks and head pose jointly. Hyperface [@ranjan2016hyperface] also follows the multi-task learning framework, detecting faces and gender and predicting facial landmarks and head pose at once. All-In-One [@ranjan2017all] adds smile prediction and age estimation to the former method. Chang et al. [@chang2017faceposenet] regress the 3D head pose with a simple convolutional neural network; however, they focus on face alignment and do not explicitly evaluate their method on public datasets. Ruiz et al. [@ruiz2017fine] is another recent landmark-free work that performs well: three branches predict the three angles jointly, and each branch combines classification with integral regression. Lathuiliere et al. [@lathuiliere2017deep] propose a CNN-based model with a Gaussian mixture of linear inverse regressions to regress head pose. Drouard et al. [@drouard2017robust] further address issues in [@lathuiliere2017deep], including illumination and variability in face orientation and appearance, by combining the qualities of unsupervised manifold learning and inverse regressions. Although recent state-of-the-art landmark-based methods predict better given ground-truth landmarks, they suffer from landmark invisibility and landmark inaccuracy in real scenes. Robust landmark-free methods introduce extra error, which limits their performance. In our work, we follow the landmark-free scheme and propose a hybrid coarse-fine classification scheme intended to solve the problem of the extra error introduced by coarse classification in [@ruiz2017fine].
![image](fig2.png) proposed method {#sec:majhead} =============== Hybrid coarse-fine classification {#ssec:subhead} --------------------------------- Although [@ruiz2017fine] contributes great work on head pose estimation with a landmark-free method, it still has some issues. It performs coarse bin classification before integral regression. Bin classification relaxes a strict regression problem into a coarse classification problem; meanwhile, it introduces extra error that limits the precision of the prediction. Multi-task learning (MTL) has led to success in many applications of machine learning. [@ruder2017overview] shows that MTL can be viewed as implicit data augmentation, representation bias, regularization, etc. Hard parameter sharing, which is common in MTL, shares the low-level representation while keeping task-specific layers for the high-level representation. Most existing multi-task methods combine several related but different tasks, such as age, gender, and emotion, to learn a more general high-level representation. However, as far as we know, a hybrid classification scheme for one and the same task at different granularities has not received enough attention. Here, we introduce our general hybrid coarse-fine classification scheme into the network; the architecture is shown in Figure 2. The hybrid scheme can be regarded as a new type of hard parameter sharing, but unlike former methods that combine different tasks, each classification branch here is the same task at a specific restriction scale. It shares the advantages of MTL. First, it helps reduce the risk of overfitting: the more tasks we learn simultaneously, the more our model has to find a universal representation that captures all of them, and the smaller the chance of overfitting on a single fine classification task. Besides, coarse classification, with its weaker restriction, converges faster; thus, it can help avoid some flagrant mistakes, e.g.
predict a wrong sign, and make the prediction more stable. We use a more refined classification at the finest level, which in theory can improve the regression accuracy, but this operation may counteract the benefit of problem reduction. Thus, we propose our hybrid coarse-fine classification scheme to offset the influence of the refined classification. The problem is relaxed multiple times on different scales in order to ensure precise prediction under each classification scale. We take both coarse bin classification and relatively fine bin classification into account: each FC layer represents a different classification scale and computes its own cross-entropy loss. In the integral regression component, we only use the result of the most refined bin classification to compute the expectation and the regression loss. One regression loss and multiple classification losses are combined into a total loss. Each angle has such a combined loss, and all angles share the previous convolutional layers of the network. Our proposed hybrid coarse-fine classification scheme can easily be added to the former framework and improves performance without much extra computing cost. The final loss for each angle is the following: $$\label{eq.1} Loss = \alpha \cdot MSE(y,y^{*}) + \sum_{i=1}^{num} \beta_{i} \cdot H(y_{i},y^{*}_{i})$$ where $H$ and $MSE$ denote the cross-entropy and mean squared error loss functions, respectively, and $num$ is the number of classification branches, set to 5 in our case. We have tested different coefficients for the regression component and the hybrid classification component; the results are presented in Table 4 and Table 5. Integral regression {#ssec:subhead} -------------------- Xiao et al. [@sun2018integral] introduce integral regression into human pose estimation to cope with non-differentiable post-processing and quantization error. Their work shows that a simple integral operation relates and unifies the heat map representation and joint regression.
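To make the interaction between the branches concrete, the following is a minimal numpy sketch (a hypothetical illustration, not the authors' implementation) of the per-angle loss of Eq. (1): only the finest branch feeds the expectation-based regression, while every branch contributes a weighted cross-entropy term. The default weights mirror the ablation settings reported later.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def expected_angle(logits, bin_centers):
    """Integral ('taking-expectation') regression over the finest bins."""
    return float(np.dot(softmax(logits), bin_centers))

def hybrid_loss(fine_logits, coarse_logits, bin_centers, true_angle,
                fine_label, coarse_labels, alpha=2.0,
                betas=(7.0, 5.0, 3.0, 1.0, 1.0)):
    """alpha * MSE + sum_i beta_i * cross-entropy, as in Eq. (1).

    Only the finest branch drives the regression term; the fine branch
    plus each coarser branch adds its own cross-entropy term.
    """
    pred = expected_angle(fine_logits, bin_centers)
    mse = (pred - true_angle) ** 2
    logits_per_branch = [fine_logits] + list(coarse_logits)
    labels_per_branch = [fine_label] + list(coarse_labels)
    ce = sum(beta * -np.log(softmax(l)[y])
             for beta, l, y in zip(betas, logits_per_branch,
                                   labels_per_branch))
    return alpha * mse + ce
```

In a deep-learning framework the same structure applies, with the cross-entropy and soft-argmax expectation computed per branch and summed into one differentiable objective.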
Ruiz et al. [@ruiz2017fine] utilize integral regression for head pose estimation. This scheme treats a direct regression problem as a two-step process, a multi-class classification followed by integral regression, by replacing the “taking-maximum” of classification with “taking-expectation”: a fine-grained prediction is obtained by computing the expectation over the output probabilities of the binned output. We follow this setting in our network and use the same backbone network as [@ruiz2017fine] for a fair comparison. Intuitively, such a scheme can be seen as a form of problem reduction: since a bin label is a coarse annotation rather than a precise one, the classification and the output are connected through multi-loss learning, which also makes the classification sensitive to the output. Another explanation is that bin classification uses the very stable softmax layer and cross-entropy loss, so the network learns to predict the neighborhood of the pose in a robust fashion. Experiments {#sec:majhead} =========== Datasets for Pose Estimation {#ssec:subhead} ---------------------------- We argue that datasets captured in real scenes, with precise head pose annotations and ample variation in pose scale and lighting conditions, are essential for making progress in this field. The following datasets are used in our experiments. 300W-LP [@zhu2016face]: a synthetically expanded collection of popular in-the-wild 2D landmark datasets that have been re-annotated. It contains 61,225 samples across large poses, further expanded to 122,450 samples with flipping. AFLW2000 [@koestinger2011annotated]: contains the first 2,000 identities of the in-the-wild AFLW dataset, all re-annotated with 68 3D landmarks. AFLW [@koestinger2011annotated]: contains 21,080 in-the-wild faces with large-pose variations (yaw from -90$^\circ$ to 90$^\circ$).
BIWI [@fanelli2011real]: captured in a well-controlled laboratory environment by recording RGB-D video of different people across different head pose ranges with a Kinect v2 device, and has better pose annotations. It contains about 15,000 images, with $\pm$75$^\circ$ for yaw, $\pm$60$^\circ$ for pitch and $\pm$50$^\circ$ for roll.

  Method                        Yaw         Pitch       Roll        MAE
  ---------------------------- ----------- ----------- ----------- -----------
  3DDFA [@zhu2016face]          5.400       8.530       8.250       7.393
  Ruiz et al. [@ruiz2017fine]   6.470       6.559       5.436       6.155
  **Ours**                      **4.820**   **6.227**   **5.137**   **5.395**

  : Mean average error of Euler angles across different methods on the AFLW2000 dataset.

Pose Estimation on the AFLW2000 {#sssec:subsubhead}
-------------------------------

We adopt the same backbone network as [@ruiz2017fine]. The network was trained for 25 epochs on 300W-LP using Adam optimization [@kinga2015method] with a learning rate of 10$^{-6}$, $\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999 and ε = 10$^{-8}$. We normalize the data before training using the ImageNet mean and standard deviation for each color channel. Our method bins angles in the $\pm$99$^\circ$ range; we discard images with angles outside this range. Results can be seen in Table 1.

  Method                            Yaw        Pitch        Roll       MAE
  --------------------------------- ---------- ------------ ---------- ------------
  Liu et al. [@liu20163d]           6.0        6.1          5.7        5.94
  Ruiz et al. [@ruiz2017fine]       4.810      6.606        3.269      4.895
  Drouard [@drouard2017robust]      4.24       5.43         4.13       4.60
  DMLIR [@lathuiliere2017deep]      **3.12**   4.68         3.07       3.62
  MLP + Location [@gupta2018nose]   3.64       4.42         3.19       3.75
  CNN + Heatmap [@gupta2018nose]    3.46       3.49         **2.74**   3.23
  **Ours**                          3.4273     **2.6437**   2.9811     **3.0174**

  : Mean average error of Euler angles across different methods on the BIWI dataset with 8-fold cross-validation.
  Method                                      Yaw     Pitch   Roll   MAE
  ------------------------------------------- ------- ------- ------ -------
  Patacchiola et al. [@patacchiola2017head]   11.04   7.15    4.40   7.530
  KEPLER [@kumar2017kepler]                   6.45    5.85    8.75   7.017
  Ruiz et al. [@ruiz2017fine]                 6.26    5.89    3.82   5.324
  MLP + Location [@gupta2018nose]             6.02    5.84    3.56   5.14
  **Ours**                                    6.18    5.38    3.71   5.090
  CNN + Heatmap [@gupta2018nose]              5.22    4.43    2.53   4.06

  : Mean average error of Euler angles across different methods on the AFLW dataset.

Pose Estimation on the AFLW and BIWI Datasets {#sssec:subsubhead}
---------------------------------------------

We also test our method on the AFLW and BIWI datasets with the same parameter settings as in 4.2. Results can be seen in Table 2 and Table 3. Our method achieves state-of-the-art results on BIWI: its MAE is substantially lower than that of the base network [@ruiz2017fine], and it also outperforms the recent landmark-based CNN + Heatmap [@gupta2018nose] method. Our method performs particularly well on BIWI because the dataset is captured in a controlled environment and has better ground-truth annotations; this confirms the usefulness of our hybrid coarse-fine classification scheme when the annotations are precise. We also surpass all landmark-free methods and achieve competitive performance relative to all methods on AFLW, following the testing protocol in [@kumar2017kepler] (i.e., selecting 1,000 images for testing and the rest for training).

AFLW2000 Multi-Classification Ablation {#sssec:subsubhead}
--------------------------------------

  $\alpha$   $\beta_{1}$   $\beta_{2}$   $\beta_{3}$   $\beta_{4}$   $\beta_{5}$   MAE
  ---------- ------------- ------------- ------------- ------------- ------------- ------------
  2          1             0             0             0             0             5.7062
  2          3             1             1             1             1             5.6270
  2          5             3             1             1             1             5.6898
  2          7             5             3             1             1             **5.3953**
  2          9             7             5             3             1             5.5149

  : Ablation analysis: MAE across different classification loss weights on the AFLW2000 dataset.
  $\alpha$   $\beta_{1}$   $\beta_{2}$   $\beta_{3}$   $\beta_{4}$   $\beta_{5}$   MAE
  ---------- ------------- ------------- ------------- ------------- ------------- ------------
  0.1        7             5             3             1             1             5.4834
  1          7             5             3             1             1             5.6160
  2          7             5             3             1             1             **5.3953**
  4          7             5             3             1             1             5.6255

  : Ablation analysis: MAE across different regression loss weights on the AFLW2000 dataset.

In this part, we present an ablation study of the hybrid coarse-fine classification. We train ResNet50 with different coefficient settings. Results can be seen in Table 4 and Table 5. We observe the best results on the AFLW2000 dataset when the coefficients are 2, 7, 5, 3, 1, 1 in order; here $\alpha$, $\beta_{1}$, $\beta_{2}$, $\beta_{3}$, $\beta_{4}$, $\beta_{5}$ correspond to the weights of the regression term and of the 198-class, 66-class, 18-class, 6-class and 2-class classification branches.

conclusion {#sec:page}
==========

We present a hybrid classification scheme for precise head pose estimation without facial landmarks. Our proposed method achieves state-of-the-art results on the BIWI and AFLW2000 datasets and promising performance on the AFLW dataset. The hybrid coarse-fine classification framework proves beneficial for head pose estimation, and we believe it is not limited to this specific task: it may also help other classification problems such as digit recognition, which we leave for future work.

[^1]: https://github.com/haofanwang/accurate-head-pose\[web\]
--- author: - 'V. Alan Kostelecký,$^{a}$' - 'Enrico Lunghi,$^{a}$' - 'Nathan Sherrill,$^{a}$' - 'A.R. Vieira$^{b}$' bibliography: - 'paper.bib' title: Lorentz and CPT Violation in Partons --- Introduction {#sec:intro} ============ Deep inelastic scattering (DIS) and the Drell-Yan (DY) process are key tools in the study of quantum chromodynamics (QCD). The DIS cross section for electron-proton scattering depends only weakly on momentum transfer [@dis1; @dis2], and the scaling invariance of the associated form factors [@bjorken] implies that nucleons contain partons [@feynman]. The DY process [@dy70], which involves the production and decay of vector bosons in hadron collisions, is related by crossing symmetry to DIS and provides complementary information about the parton distribution functions (PDFs) [@dyexpt]. Both DIS and the DY process play a crucial role in investigations of perturbative QCD and can serve as probes for physics beyond the Standard Model (SM) [@disproc]. One interesting prospect for experimental signals beyond the SM is minuscule violations of Lorentz and CPT symmetry, which may originate from the Planck scale in an underlying theory combining quantum physics and gravity such as strings [@ks89; @kp91; @kp95]. Over the last two decades, this idea has been extensively tested via precision tests with gravity and with many SM particles and interactions [@tables], but comprehensive studies directly involving quarks remain challenging due primarily to complications in interpreting hadronic results in terms of the underlying QCD degrees of freedom. 
In this work, we develop factorization techniques for hadronic processes in the presence of Lorentz and CPT violation and apply them to DIS and the DY process, using the results to estimate attainable sensitivities in certain experiments at the Hadron-Elektron-Ringanlage (HERA) [@hera], at the electron-ion collider (EIC) proposed for Thomas Jefferson National Laboratory (JLab) or Brookhaven National Laboratory (BNL) [@hera], and at the Large Hadron Collider (LHC) [@Sirunyan:2018owv]. The methodology adopted in this work is grounded in effective field theory, which provides a quantitative description of tiny effects emerging from distances below direct experimental resolution [@sw]. The comprehensive realistic effective field theory for Lorentz violation, called the Standard-Model Extension (SME) [@ck97; @ck98; @ak04], is obtained by adding all Lorentz-violating terms to the action for general relativity coupled to the SM. Since violation of CPT symmetry implies Lorentz violation in realistic effective field theory [@ck97; @owg], the SME also characterizes general effects of CPT violation. Any given Lorentz-violating term is constructed as the coordinate-independent contraction of a coefficient for Lorentz violation with a Lorentz-violating operator. The operators can be classified according to mass dimension $d$, and terms with $d\leq 4$ in Minkowski spacetime yield a theory called the minimal SME that is power-counting renormalizable. Reviews of the SME can be found in, for example, Refs. [@tables; @review1; @review2; @review3]. We concentrate here on evaluating the effects on DIS and the DY process of coefficients for Lorentz violation controlling spin-independent SME operators involving the $u$ and $d$ quarks and having mass dimension four and five. The former are minimal SME operators preserving CPT, while the latter are nonminimal and violate CPT. In Sec.
\[sec:setup\], we establish the framework for the parton-model description of factorization in the presence of Lorentz and CPT violation. The application in the context of DIS is presented in Sec. \[sec:DIS\]. We demonstrate the compatibility of our factorization technique with the operator-product expansion (OPE) and with the Ward identities, and we obtain explicit results for the DIS cross section. In the quark sector, nonzero spin-independent Lorentz-violating operators of mass dimension four are controlled by $c$-type coefficients, while those of dimension five are governed by $a^{(5)}$-type ones. Sensitivities to these coefficients in existing and forthcoming DIS experiments at HERA and the EIC are estimated. In Sec. \[sec:DY\], we investigate factorization in the DY process. The cross sections for nonzero $c$- and $a^{(5)}$-type coefficients are derived, and attainable sensitivities from experiments at the LHC are estimated. A comparison of our DIS and DY results is performed, revealing the complementary nature of searches at lepton-hadron and hadron-hadron colliders. Our efforts here to explore spin-independent SME effects in the quark sector extend those in the literature, including studies of single and pair production of $t$ quarks at Fermi National Accelerator Laboratory (Fermilab) and at the LHC [@tquark; @bkl16; @ccp19], applications of chiral perturbation theory [@lvchpt1; @lvchpt2; @lvchpt3; @lvchpt4; @lvchpt5], estimates of attainable sensitivities from DIS [@klv17; @ls18; @kl19], and related investigations [@Karpikov:2016qvq; @michelsher19]. Spin-independent SME coefficients for CPT violation in the quark sector can also be constrained using neutral-meson interferometry [@ak98; @ek19] via oscillations of kaons [@ak00; @kr01; @iks01; @kaons1; @kaons2] and of $D$, $B_d$, and $B_s$ mesons [@ak01; @bmesons1; @kvk10; @bmesons2; @bmesons3; @Roberts:2017tmo; @bmesons4]. 
For $d=5$, these SME coefficients can trigger phenomenologically viable baryogenesis in thermal equilibrium [@bckp97; @digrezia06; @ho11; @mavromatos18], thereby avoiding the Sakharov condition of nonequilibrium processes [@as67]. Cosmic-ray observations imply a few additional bounds on ultrarelativistic combinations of quark-sector coefficients [@km13]. Other constraints on $d=5$ spin-independent CPT violation have been extracted from experiments with neutrinos, charged leptons, and nucleons [@km12; @km13; @gkv14; @kv15; @schreck16; @kv18; @icecube18]. Framework {#sec:setup} ========= In this section, we present the general procedure for factorization of the scattering cross section in the presence of quark-sector Lorentz violation. To extract the corresponding parton model, we restrict attention to the dominant physical effects occurring at tree level in the electroweak couplings and at zeroth order in the strong coupling. In the conventional Lorentz-invariant scenario, the parton-model picture of high-energy hadronic processes at large momentum transfer [@feynman] can be shown to emerge from a field-theoretic setting under suitable kinematical approximations [@Collins:2011zzd]. For many hadronic processes including DIS and the DY process, each channel contributing to the scattering cross section factorizes into a high-energy perturbative part and a low-energy nonperturbative part, with the latter described by PDFs and fragmentation functions of the hadronic spectators. The perturbative component is often called hard due to the large associated momentum transfer, while the nonperturbative component is called soft. The PDFs are universal in the sense that they are process independent. This factorization becomes most transparent in reference frames in which the dominant momentum regions of the perturbative subprocesses are approximately known. 
In these frames, asymptotic freedom and the large momentum transfer imply that internal interactions of the hadron constituents occur on a timescale much longer than that of the external probe. The participating constituents may then be treated as freely propagating states. The parton-model picture of scattering emerges by imposing the conservative kinematical restriction to massless and on-shell constituents with momenta collinear to the associated hadrons. For a hadron $H$ with momentum $p^\mu = (p^+,p^-,p_\perp)$ and mass $M$ in lightcone coordinates $p^\pm \equiv \tfrac{1}{\sqrt{2}}(p^0 \pm p^3)$ with $p_\perp \equiv (p^1,p^2)$, a boost from its rest frame along the 3 axis produces a momentum $p^\mu = (p^+, M^2/2p^+,0_\perp)$. A constituent of the hadron in the hadron rest frame has a momentum $k$ that scales at most as $k^\mu \sim (M, M, M)$. Under a large boost, the constituent inherits the large $+$ momentum because $k^\mu \sim (p^+, M^2/p^+,M)$ up to $\mathcal{O}(M/p^+)$ corrections. The ratio $\xi = k^+/p^+$ is boost invariant along the 3 axis and leads to the familiar scaling parametrization $k^\mu = \xi p^\mu$ of the parton momentum in the massless limit, which is a covariant expression valid in any frame. Scaling permits kinematical approximations that greatly simplify the calculation of the hadronic vertex contribution to the scattering amplitude, and it is known to hold in a wide variety of hadronic processes [@Collins:1989gx]. In the presence of Lorentz violation, the above perspective requires modification [@klv17]. We focus here on Lorentz-violating operators of arbitrary mass dimension that affect the free propagation of the internal fermion degrees of freedom, including both CPT-even and CPT-odd terms. For simplicity, we disregard possible flavor-changing couplings and limit attention to spin-independent effects. 
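The lightcone kinematics above can be checked numerically. The following is an illustrative Python sketch (the momenta and rapidity values are arbitrary, chosen only for demonstration), verifying that $2p^+p^- - p_\perp^2$ reproduces the invariant $p^2$ and that the momentum fraction $\xi = k^+/p^+$ is unchanged under observer boosts along the 3 axis:

```python
import numpy as np

def lightcone(p):
    """(p0, p1, p2, p3) -> (p_plus, p_minus, p_perp) with
    p^{+-} = (p0 +- p3)/sqrt(2), p_perp = (p1, p2)."""
    p0, p1, p2, p3 = p
    return ((p0 + p3) / np.sqrt(2.0),
            (p0 - p3) / np.sqrt(2.0),
            np.array([p1, p2]))

def minkowski_sq(p):
    """Invariant p^2 = p0^2 - p1^2 - p2^2 - p3^2."""
    p0, p1, p2, p3 = p
    return p0**2 - p1**2 - p2**2 - p3**2

def boost3(p, rapidity):
    """Boost along the 3 axis: p^+ -> e^y p^+, p^- -> e^-y p^-."""
    p0, p1, p2, p3 = p
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    return (ch * p0 + sh * p3, p1, p2, sh * p0 + ch * p3)
```

Since $p^+$ and $k^+$ rescale by the same factor $e^y$ under the boost, their ratio $\xi$ is boost invariant, which is the statement used above.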
For a single massless Dirac fermion, the corresponding gauge-invariant Lorentz- and CPT-violating Lagrange density $\mathcal{L}_\psi$ can be written in the form [@km13; @kl19] $$\begin{aligned} \mathcal{L}_\psi &= \tfrac{1}{2}\bar{\psi}(\gamma^\mu i D_\mu + \widehat{\mathcal{Q}})\psi + \text{h.c.} , \label{tilde}\end{aligned}$$ where $D_\mu$ is the usual covariant derivative and the operator $\widehat{\mathcal{Q}}$ describes both Lorentz-invariant and Lorentz-violating effects. The explicit form of $\mathcal{L}_\psi$ for $d\leq 6$ relevant for our purposes is contained in Table I of Ref. [@kl19]. The corresponding coefficients for Lorentz violation may be assumed perturbatively small based on current experimental results [@tables] and the restriction to observer concordant frames [@kl01]. In an inertial frame in the neighborhood of the Earth, all coefficients for Lorentz violation may be taken as spacetime constants, which maintains the conservation of energy and momentum [@ck97]. Field redefinitions and coordinate choices can be used to simplify $\widehat{\mathcal{Q}}$, which reduces the number of coefficients controlling observable effects [@ck98; @kl01; @colladay02; @ak04; @altschul06; @lehnert06; @kt11; @bonder15; @dk16]. In this work, we present specific calculations for the coefficients $c_f^{\mu\nu}$ at $d=4$ and $a_{f}^{(5)\lambda\mu\nu}$ at $d=5$, where $f=u,d$ spans the two nucleon valence-quark flavors. Other terms in Table I of Ref. [@kl19] involving coefficients of the $a$ type include $a_{f}^{\mu}$ at $d=3$ and $a_{{\text F}f}^{(5)\lambda\mu\nu}$ at $d=5$, but none of these contribute at leading order to the processes studied here. 
The field redefinitions ensure that the coefficients $c_f^{\mu\nu}$ and $a_{f}^{(5)\lambda\mu\nu}$ of interest can be taken to be symmetric in any pair of indices and to have vanishing traces, implying 9 independent observable components of the $c$ type and 16 independent observable components of the $a^{(5)}$ type [@dk16; @ek19]. Following standard usage in the literature, we denote the symmetric traceless parts of these coefficients as $c_f^{\mu\nu}$ and $a_{{\text S}f}^{(5)\lambda\mu\nu}$, where [@fkx17] $$\begin{aligned} a_{{\text S}f}^{(5)\lambda\mu\nu} &= \tfrac 13 \sum_{(\lambda\mu\nu)} ( a_{f}^{(5)\lambda\mu\nu} - \tfrac 16 a_{f}^{(5)\lambda\alpha\beta} \eta_{\alpha\beta} \eta^{\mu\nu} - \tfrac 13 a_{f}^{(5)\alpha\lambda\beta} \eta_{\alpha\beta} \eta^{\mu\nu}). \label{aS}\end{aligned}$$ At the quantum level, the theory leads to Lorentz-violating propagation and interaction. As a consequence, the conventional dispersion relation $k^2 = 0$ for the 4-momentum of the hadron constituent is modified. The modified dispersion relation can be derived from the Dirac equation by setting to zero the strong and electroweak couplings, converting to momentum space, and imposing the vanishing of the determinant of the matrix operator [@km13]. For the scenarios of interest here, the result can be written in the elegant form $$\begin{aligned} \widetilde{k}^2 = 0 , \label{eq:moddispgen}\end{aligned}$$ where $\widetilde{k}_\mu$ is the Fourier transform of the modified interaction-free Dirac operator. The hadron constituents then propagate along trajectories that are geodesics in a pseudo-Finsler geometry [@kr10; @ak11; @ek18; @schreck19; @silva19]. Unlike the Lorentz-invariant case, the modified dispersion relation typically involves a non-quadratic relationship between energy and 3-momentum controlled by the coefficients for Lorentz violation.
This feature prevents a straightforward identification of the lightcone components of $k$ and complicates attempts at factorization of hadronic processes. An additional challenge arises for the hadron constituents in the initial state during the time of interaction because a momentum parametrization in terms of external kinematics is desired. These points imply that $k$ is no longer the momentum relevant for scaling in a Lorentz-violating parton model, as the relation $k = \xi p$ is no longer consistent with Eq. . Instead, the momentum $\widetilde{k}$ plays the role of interest. To establish the parton model in the presence of Lorentz violation, we aim to determine the lightcone decomposition of the momentum $\widetilde{k}$ of an on-shell massless quark, which is subject to the condition . The perturbative nature of Lorentz violation implies that the frame appropriate for factorization differs at most from conventional frame choices by an $\mathcal{O}(\widehat{\mathcal{Q}})$ transformation. Since a large portion of the space of SME coefficients for nucleons is strongly constrained by experiment [@tables], we can reasonably neglect Lorentz-violating effects in the initial- and final-state hadrons. We therefore seek a frame in which $\widetilde{k}$ can be parametrized in terms of its parent hadron momentum $p$ and the parton coefficients for Lorentz violation in $\widehat{\mathcal{Q}}$. To retain the equivalent on-shell condition in a covariant manner, we choose $$\widetilde{k}^\mu = \xi p^\mu . \label{eq:tildek}$$ Since the effects of Lorentz violation are perturbative, one may still argue that $\widetilde{k}^\mu \sim \left(M, M, M\right)$ in the rest frame of the hadron. Performing an observer boost along the 3 axis yields $\widetilde{k} \sim (p^+, M^2/p^+,M)$, where now the variable $\xi \equiv \widetilde{k}^+/p^+$ plays the role of the parton momentum fraction. 
Note that the frame changes implemented by the observer boosts are accompanied by covariant transformations of the coefficients for Lorentz violation [@ck97]. The desired procedure is therefore to impose the conditions - and perform the factorization of the hadronic scattering amplitude working in an appropriate observer frame from which the calculation can proceed in parallel with the conventional case. The momentum $\widetilde{k}_\mu$ is defined for a parton via Eq. . However, other internal momenta appear in the scattering process. In DIS, for example, the initial parton momentum $k$ differs from the final parton momentum $k+q$ by the momentum $q$ of the vector boson. For calculational purposes, it is convenient to introduce a momentum $\widetilde{q}$ defined as the difference of the modified momenta for the final and initial partons, $$\begin{aligned} \widetilde{q} \equiv \widetilde{k+q} - \widetilde{k}. \label{eq:qtildedef}\end{aligned}$$ In the presence of Lorentz violation involving operators of dimensions $d=3$ and 4, the explicit form of $\widetilde{q}$ can be written in terms of $q$ and SME coefficients, independent of $k$. For $d>4$, however, the definition implies that $\widetilde{q}$ depends nontrivially on $k$ as well, which complicates the derivation of the cross section. In this work, we explore the implications of both these types of situations for DIS and the DY process. Deep inelastic scattering {#sec:DIS} ========================= In this section, we apply the general procedure outlined in Sec. \[sec:setup\] to inclusive lepton-hadron DIS. The special case of unpolarized electron-proton DIS mediated by conventional photon and $Z^0$ exchange in the presence of minimal quark-sector Lorentz violation has been studied and applied in the context of HERA data [@klv17] and the future EIC [@ls18]. Analogous results for nonminimal Lorentz and CPT violation have also been obtained [@kl19]. 
Here, we show how these results fit within the new formalism and provide both updated and new numerical estimates of attainable sensitivities to Lorentz violation. Effects on DIS of minimal Lorentz violation in the weak sector are considered in Ref. [@michelsher19]. Factorization of the hadronic tensor {#ssec:FactDIStensor} ------------------------------------ The inclusive DIS process $l + H \rightarrow l' + X$ describes a lepton $l$ scattering on a hadron $H$ into a final-state lepton $l'$ and an unmeasured hadronic state $X$. The interaction is mediated by a spacelike boson of momentum $q = l-l'$. It is convenient to introduce the dimensionless Bjorken variables $$x = \frac{-q^2}{2p\cdot q}, \quad y = \frac{p\cdot q}{p\cdot l}, \label{eq:Bjorkenxy}$$ where $p$ is the hadron momentum. The DIS limit is characterized by $-q^2 \equiv Q^2 \rightarrow \infty$ with $x$ fixed. This produces a final-state invariant mass much larger than the hadron mass $M$, which may therefore be neglected. Reviews of DIS and related processes include Refs. [@Manoharreview; @Jaffereview]. The observable of interest is the differential cross section $d\sigma$, which by its conventional definition is a Lorentz-scalar quantity built from the invariant amplitude, an initial-state flux factor, and a contribution from the final-state phase space. In principle, Lorentz violation could affect each of these, so care is required in calculating the cross section [@ck01]. In this work, Lorentz violation can enter only through the hadronic portion of the full scattering amplitude because the exchanged vector boson, the incoming particle flux, and the phase space of the outgoing particles are assumed conventional. The cross section as a function of the lepton phase-space variables $x$, $y$ and $\phi$ takes the form $$\begin{aligned} \fr{d\sigma,dxdyd\phi} = \fr{\alpha^2 y,2\pi Q^4}\sum_{i}R_i(L_i)_{\mu\nu}(\text{Im}T_i)^{\mu\nu}. 
\label{eq:tripleDISxsec}\end{aligned}$$ In this expression, the index $i$ denotes the neutral-current channels $i = \gamma, Z$ or the charged-current channel $i = W^{\pm}$, with corresponding lepton tensor $(L_i)_{\mu\nu}$ and forward amplitude $(T_i)^{\mu\nu}$. The factor $R_i$ denotes the ratio of the exchanged boson propagator to the photon propagator. Unitarity has been used to write the hadronic tensor $(W_i)^{\mu\nu}$ in terms of the imaginary part of its forward amplitude $(\text{Im}T_i)^{\mu\nu}$ via the optical theorem in the physical scattering region $q^2 < 0$. This operation remains valid in the SME context since all potential new effects are associated with hermitian operators [@fermionobservables]. The forward amplitude is defined as $$\begin{aligned} T_{\mu\nu} = i\int d^{4}w e^{iq\cdot w} \bra{p,s}{\text}{T} j^\dagger_{\mu}(w) j_{\nu}(0) \ket{p,s}_{c}, \label{eq:forwardCompton}\end{aligned}$$ where ${\text}{T} j^\dagger_{\mu}(w) j_{\nu}(0)$ is the time-ordered product of electroweak quark currents $j^\dagger_{\mu}(w)$, $j_{\nu}(0)$. The hadron spin vector $s^\mu$ satisfies $s^2 = -M^2$, $s\cdot p = 0$, and $c$ denotes the restriction to connected matrix elements. For simplicity, we suppress the subscript $c$, the channel label $i$, and possible flavor labels in the following discussion. Given that Eq.  in principle contains higher-order derivative terms, the generalized Euler-Lagrange equations must be used to derive the global $SU(N)$ current $j^\mu$. Note that only terms with $d\geq 4$ augment the current from its conventional form. We denote the general Dirac structure of these contributions to be $\Gamma^\mu$ and for simplicity write the current as $$j_{\psi \chi }^{\mu} = {:\mathrel{\bar{\psi}\Gamma^\mu\chi}:}, \label{eq:conscurrentGamma}$$ where typically $\psi \neq \chi$ and the associated charges are implicit. 
In the DIS limit, asymptotic freedom implies that the first-order electroweak interaction provides the dominant contribution to the hadronic portion of the scattering amplitude. We therefore evaluate Eq.  at zeroth order in the strong-interaction coupling, giving $$T^{\mu\nu} = i \int d^4 w e^{iq\cdot w} \bra{p,s}{:\mathrel{\bar{\psi}(w)\Gamma^\mu iS_{F}(w)\Gamma^\nu\psi(0)}:} + {:\mathrel{\bar{\psi}(0)\Gamma^\nu iS_{F}(-w)\Gamma^\mu\psi(w)}:}\ket{p,s}, \label{eq:Tfirststep}$$ with the Feynman propagator $$iS_{F}(x-y) = i\int_{C_F} \frac{d^4k}{(2\pi)^4} \frac{e^{-ik\cdot(x-y)}}{\slashed{\widetilde{k}}+ i\epsilon}. \label{eq:feynpropgen}$$ Unlike the conventional case, the structure $\Gamma^\mu S_F\Gamma^\nu$ can contain both even and odd powers of gamma matrices, which leads to additional contributions. Each term in Eq.  can be viewed as a matrix $X$ and can be expanded in a basis $\Gamma^A$ of gamma matrices as $X = x_A \Gamma^A$. The conventional completeness relation $\text{Tr}[\Gamma^A \Gamma_B] = 4 \delta^A_{\hphantom{A}B}$ implies $x_A = (1/4)\text{Tr}[\Gamma_A X]$. To match with the results common in the literature, we choose the basis $$\begin{aligned} &\Gamma_A = \{\mathbb{1}, \gamma_5,\gamma_\mu,\gamma_5\gamma_\mu, i\gamma_5\sigma_{\mu\nu} \}, \nonumber\\ &\Gamma^A = \{\mathbb{1}, \gamma_5,\gamma^\mu,\gamma^\mu\gamma_5, -i\gamma_5\sigma^{\mu\nu}/2 \}.\end{aligned}$$ With Dirac indices explicitly displayed, one has $$\begin{aligned} {:\mathrel{{\psi}_a(0)\bar{\psi}_b(x)}:} = -\frac{1}{4}\bar{\psi}(x)\Gamma_A\psi(0)\left(\Gamma^A\right)_{ab},\end{aligned}$$ giving $$\begin{aligned} &T^{\mu\nu} = -\fr{1,4}\int \fr{d^4k,(2\pi)^4} \left( \text{Tr} \left[\Gamma^\mu \fr{1,\gamma_\alpha \widetilde{k+q}^\alpha +i\epsilon}\Gamma^\nu\right] \int d^4w e^{-ik\cdot w} \bra{p,s}\bar{\psi}(w)\psi(0)\ket{p,s} \right. \nonumber\\ &\left. 
\hskip 100pt + \text{Tr}\left[\Gamma^\mu \fr{1,\gamma_\alpha \widetilde{k+q}^\alpha + i\epsilon}\Gamma^\nu\gamma_5\right] \int d^4w e^{-ik\cdot w}\bra{p,s}\bar{\psi}(w)\gamma_5\psi(0)\ket{p,s} \right. \nonumber\\ &\left. \hskip 100pt + \text{Tr}\left[\Gamma^\mu \fr{1,\gamma_\alpha \widetilde{k+q}^\alpha + i\epsilon}\Gamma^\nu\gamma^\rho\right] \int d^4w e^{-ik\cdot w}\bra{p,s}\bar{\psi}(w)\gamma_\rho\psi(0)\ket{p,s} \right. \nonumber\\ &\left. \hskip 70pt + \text{Tr}\left[\Gamma^\mu \fr{1,\gamma_\alpha \widetilde{k+q}^\alpha + i\epsilon}\Gamma^\nu\gamma^\rho\gamma_5\right] \int d^4w e^{-ik\cdot w} \bra{p,s}\bar{\psi}(w)\gamma_5\gamma_\rho\psi(0)\ket{p,s} \right. \nonumber\\ &\left. \hskip 70pt - \tfrac{1}{2}\text{Tr}\left[\Gamma^\mu \fr{1,\gamma_\alpha \widetilde{k+q}^\alpha + i\epsilon}\Gamma^\nu i\gamma_5\sigma^{\rho\sigma}\right] \int d^4w e^{-ik\cdot w} \bra{p,s}\bar{\psi}(w)i\gamma_5\sigma_{\rho\sigma}\psi(0)\ket{p,s} \right. \nonumber\\ &\left. \hskip 40pt + (q\leftrightarrow -q,0\leftrightarrow w, \mu\leftrightarrow \nu) \right). \label{eq:Tstep2}\end{aligned}$$ Note that normal ordering of operators is implied. Taking the imaginary part of $T^{\mu\nu}$, the terms that depend on $k+q$ or $k-q$ contribute only via scattering initiated by a quark or antiquark, respectively. The imaginary part of $T^{\mu\nu}$ comes solely from the propagator denominators because the combination of spatial integration, exponential factors, and matrix-element terms is hermitian. This feature is a consequence of translation invariance, which remains a symmetry within the SME framework when the coefficients for Lorentz violation are spacetime constants. 
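The completeness relation underlying the trace decomposition above can be verified numerically. The sketch below assumes the Dirac representation (a choice not fixed by the text) and checks $\text{Tr}[\Gamma_A \Gamma^B] = 4\delta_A^{\hphantom{A}B}$ on the subset $\{\mathbb{1}, \gamma_5, \gamma^\mu\}$ of the basis:

```python
# Numerical check of Tr[Gamma_A Gamma^B] = 4 delta_A^B for the subset
# {1, gamma_5, gamma^mu} of the Dirac basis (Dirac representation assumed;
# this is an illustrative sketch, not part of the derivation in the text).
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, s], [-s, Z2]]) for s in sig]         # gamma^1, gamma^2, gamma^3
g5 = 1j * g0 @ gs[0] @ gs[1] @ gs[2]                      # gamma_5 = i g0 g1 g2 g3

eta = np.diag([1.0, -1.0, -1.0, -1.0])
g_up = [g0] + gs                                          # gamma^mu
g_dn = [eta[mu, mu] * g_up[mu] for mu in range(4)]        # gamma_mu

upper = [np.eye(4, dtype=complex), g5] + g_up
lower = [np.eye(4, dtype=complex), g5] + g_dn

# Orthogonality matrix: should equal 4 times the identity on this subset.
ortho = np.array([[np.trace(lower[A] @ upper[B]) for B in range(6)]
                  for A in range(6)])
```

The full sixteen-element basis, including $\gamma_5\gamma_\mu$ and $i\gamma_5\sigma_{\mu\nu}$, satisfies the same relation; the subset suffices to illustrate why the factor $1/4$ appears in $x_A = (1/4)\text{Tr}[\Gamma_A X]$.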
The imaginary piece of the propagator takes the form $$\begin{aligned} \text{Im} \frac{1}{\slashed{\widetilde{k}} + i\epsilon} =-\pi \delta (\widetilde k^2) \theta(k^0) - \pi \delta (\widetilde{-k}^2) \theta(-k^0), \label{eq:opticaltheorem}\end{aligned}$$ where $\slashed{\widetilde{k}} = \gamma_\alpha\widetilde{k}^\alpha$ and the two terms correspond to particle and antiparticle. For coefficients controlling CPT-even effects one finds $\widetilde{-k} = - \widetilde{k}$, implying the particle and antiparticle have the same dispersion relation. For coefficients governing CPT violation, $\widetilde{k}$ lacks a definite parity in $k$, implying the particle and antiparticle have different dispersion relations that are related by changing the signs of the coefficients for CPT violation. In what follows we focus on the quark contribution, so $\widetilde k$ is calculated with the sign corresponding to a particle. Moreover, in applying the standard Cutkosky rules, the intermediate propagator in the diagram with an incoming quark uniquely forces the dispersion relation for the intermediate quark to be identical to that of the incoming quark, so that $\widetilde{k}^2 = (\widetilde{k+q})^2 = 0$. The relevant kinematics can be handled by working in lightcone coordinates and in the Breit frame, which in the conventional case is defined as the center-of-mass (CM) frame of the hadron and exchanged boson, $\vec{p} + \vec{q} = \vec{0}$. In light of Eq. , however, we must here introduce a modified Breit frame defined by the relation $\vec{p} + \vec{\widetilde{q}} = \vec{0}$.
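The stated parity properties of $\widetilde{k}$ in $k$ can be made concrete with toy coefficient values. The sketch below (coefficient magnitudes are invented for illustration) contrasts a CPT-even $c$-type shift, for which $\widetilde{-k} = -\widetilde{k}$, with a CPT-odd $a$-type shift, for which the particle and antiparticle relations are connected by flipping the sign of the coefficient:

```python
# Toy illustration (invented coefficient values) of the parity in k of the tilde
# momentum: a CPT-even c-type shift gives tilde(-k) = -tilde(k), so particle and
# antiparticle share one dispersion relation, while a CPT-odd a-type shift obeys
# tilde(-k; a) = -tilde(k; -a), relating the two by a sign flip of a.

ETA = [1.0, -1.0, -1.0, -1.0]  # metric signature (+,-,-,-)

def tilde_c(k, c):
    """CPT-even case: k~^mu = k^mu + c^{mu nu} k_nu."""
    return [k[mu] + sum(c[mu][nu] * ETA[nu] * k[nu] for nu in range(4))
            for mu in range(4)]

def tilde_a(k, a):
    """CPT-odd case: k~^mu = k^mu - a^mu."""
    return [k[mu] - a[mu] for mu in range(4)]

k = [5.0, 1.0, -2.0, 3.0]
minus_k = [-ki for ki in k]
c = [[1e-3 if (mu, nu) in {(0, 1), (1, 0)} else 0.0 for nu in range(4)]
     for mu in range(4)]
a = [1e-3, 0.0, 0.0, 0.0]

cpt_even_odd_in_k = all(abs(u + v) < 1e-12
                        for u, v in zip(tilde_c(k, c), tilde_c(minus_k, c)))
cpt_odd_sign_flip = all(abs(u + v) < 1e-12
                        for u, v in zip(tilde_a(k, [-ai for ai in a]),
                                        tilde_a(minus_k, a)))
```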
The hadron and shifted virtual boson kinematics may be parametrized as $$\begin{aligned} &p^\mu = \left(p^+,\frac{M^2}{2p^+}, 0_\perp\right), \nonumber\\ &\widetilde{q}^\mu = \left(-\widetilde{x}p^+, \frac{\widetilde{Q}^2}{2\widetilde{x}p^+}, 0_\perp\right) \label{eq:genBreitkin},\end{aligned}$$ where $$\label{eq:xtilde} \widetilde{x} = \frac{-\widetilde{q}^2}{2p\cdot \widetilde{q}}$$ with $-\widetilde{q}^2 \equiv \widetilde{Q}^2$. In writing Eq. , we neglect corrections of order $\mathcal{O}(M^2/Q)$ and the zeroth component $\widetilde{q}^0$ with respect to $\widetilde{Q}$. Note also that $\widetilde{q}$ differs from the physical boson momentum $q$ only if operators with $d\geq4$ are taken into consideration, so the modified Breit frame differs from the conventional one only in the presence of these operators. Consideration of Eq.  implies that $\widetilde{q}$ and hence $\widetilde{x}$ are functions of $k$, $q$, and the coefficients for Lorentz violation. Therefore, for nonminimal interactions the modified Breit frame depends on a polynomial in $\xi$. However, since additional dependence on powers of $\xi$ is accompanied by coefficients for Lorentz violation, the replacement $\xi \rightarrow x$ holds at leading order in Lorentz violation and so both $\widetilde{q}$ and $\widetilde{x}$ can be constructed event by event from the incident hadron and scattered lepton kinematics. Based on the discussion in Sec. \[sec:setup\], we can parametrize the large $+$ component of $\widetilde{k}^+$ as $\xi p^+$ with virtualities $\widetilde{k}^- \sim M^2/p^+$ and $\widetilde{k}_\perp \sim M$. This yields $$\begin{aligned} &\widetilde{k}^\mu = \left(\xi p^+, \widetilde{k}^-,\widetilde{k}_\perp\right) \nonumber\\ &\widetilde{k}'^\mu = \left((\xi-\widetilde{x}) p^+, \frac{\widetilde{Q}^2}{2\widetilde{x}p^+} + \widetilde{k}^-,\widetilde{k}_\perp\right) \label{eq:genDISkins_k},\end{aligned}$$ where $\widetilde{k}'^\mu = \widetilde{k+q}^\mu$. 
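The consistency of the parametrization in Eq.  with the definition of $\widetilde{x}$ can be checked directly in lightcone coordinates. The following sketch uses assumed toy values of $p^+$, $M$, $\widetilde{Q}$, and $\widetilde{x}$; it confirms that $-\widetilde{q}^2 = \widetilde{Q}^2$ holds exactly while $-\widetilde{q}^2/(2p\cdot\widetilde{q})$ reproduces $\widetilde{x}$ up to the neglected $\mathcal{O}(M^2/\widetilde{Q}^2)$ corrections:

```python
# Numerical sanity check (illustrative values) of the lightcone parametrization of
# the hadron and shifted boson momenta, using a.b = a^+ b^- + a^- b^+ - a_perp.b_perp.

def lc_dot(a, b):
    """Lightcone inner product for vectors written as (plus, minus, perp1, perp2)."""
    return a[0]*b[1] + a[1]*b[0] - a[2]*b[2] - a[3]*b[3]

p_plus, M, Q, xt = 50.0, 1.0, 20.0, 0.3          # assumed toy values
p = (p_plus, M**2 / (2.0*p_plus), 0.0, 0.0)       # hadron momentum
qt = (-xt*p_plus, Q**2 / (2.0*xt*p_plus), 0.0, 0.0)  # shifted boson momentum

Q2_check = -lc_dot(qt, qt)                        # reproduces Q^2 exactly
x_check = -lc_dot(qt, qt) / (2.0*lc_dot(p, qt))   # equals xt up to O(M^2/Q^2)
```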
The structure of these equations and of Eqs. - is standard but involves replacing conventional variables with tilde ones. In the usual scenario $\xi$ and $x$ differ by corrections of $\mathcal{O}(M^2/Q^2)$, implying the scaling $k'^+ \sim M^2/p^+, k'^- \sim p^+$, so the boson transfers the incident parton from the $+$ to $-$ lightcone direction. In the present case, the dominance of the $-$ component of $\widetilde{k'}$ over the $+$ component still persists because corrections from Lorentz-violating effects are suppressed relative to $p^+ \sim Q$. Proceeding with the spatial and momentum integrations in $T^{\mu\nu}$ requires a change of variables $k \rightarrow \widetilde{k}$ because only the latter momentum exhibits the scaling of interest. To evaluate the $w$ integration in a straightforward way, a transformation $w \rightarrow \hat{w}$ must be performed such that $k\cdot w = \widetilde{k}\cdot \hat{w}$. Neglecting the small components of $\widetilde{k}$ with respect to the large $+$ and $-$ components of $\widetilde{q}$, one finds that $\widetilde{k}^-$ and $\widetilde{k}_\perp$ can be disregarded in the hard scattering up to corrections of $\mathcal{O}(M/Q)$. This is the analogue in the modified Breit frame of the conventional result. The integrations over $\widetilde{k}^-$ and $\widetilde{k}_\perp$ thus bypass the traces, and the structures in the traces proportional to $\gamma^-$, $\gamma^-\gamma_5$, $\gamma^-\gamma_\perp^i\gamma_5$ provide the dominant contributions to $T^{\mu\nu}$ for a hadron with a large $+$ momentum and so are accompanied by large $+$ components in the hadronic matrix elements. It is thus reasonable to assume $\gamma^{\rho} \approx \gamma^-$ in the traces and $\gamma_\rho \approx \gamma^+$ in the matrix elements. Bearing these considerations and Eq. 
(\[eq:genDISkins\_k\]) in mind, we obtain $$\begin{aligned} T_f^{\mu\nu} \simeq \int \fr{d\widetilde{k}^+,\widetilde{k}^+}\text{Tr} &\left[\Gamma^\mu\fr{-1,\gamma_\alpha\widetilde{k+q}^\alpha + i\epsilon}\Gamma^\nu \fr{\slashed{\widetilde{k}},2} \right. \nonumber \\ &\times \left. \left(\mathbb{1}f_f(\widetilde{k}^+) - \gamma_5\lambda\Delta f_f(\widetilde{k}^+) + \gamma_5\gamma_\perp^i\lambda_\perp \Delta_\perp f_f(\widetilde{k}^+)\right)\right], \label{eq:Tstep3}\end{aligned}$$ where we have neglected diagrams proportional to $1/(\gamma_\alpha\widetilde{k-q}^\alpha + i\epsilon)$ because they vanish in the physical scattering region. The unintegrated PDFs here are defined as $$\begin{aligned} &f_f(\widetilde{k}^+,\ldots) \equiv \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w e^{-i \widetilde{k}\cdot \hat{w}} \bra{p,s}\bar{\psi}_f(w(\hat{w}))\fr{\gamma^+,2}\psi_f(0)\ket{p,s}, \nonumber\\ &\lambda \Delta f_f(\widetilde{k}^+,\ldots) \equiv \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w e^{-i \widetilde{k}\cdot \hat{w}} \bra{p,s}\bar{\psi}_f(w(\hat{w}))\fr{\gamma^+\gamma_5,2}\psi_f(0)\ket{p,s}, \nonumber\\ &\lambda_\perp \Delta_\perp f_f(\widetilde{k}^+,\ldots) \equiv \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w e^{-i \widetilde{k}\cdot \hat{w}} \bra{p,s}\bar{\psi}_f(w(\hat{w})) \fr{\gamma^+\gamma_\perp^i\gamma_5,4}\psi_f(0)\ket{p,s} , \label{eq:quarkpdf}\end{aligned}$$ where $\lambda$, $\lambda_\perp$ are the longitudinal and transverse target helicities and $\Delta f_f$, $\Delta_\perp f_f$ are the corresponding longitudinal and transverse polarized PDFs. We have also introduced the lightcone definitions of the gamma matrices, $\gamma^{\pm} = \tfrac{1}{\sqrt{2}}\big(\gamma^0 \pm \gamma^3\big)$ and $\gamma_\perp^i \in \{\gamma^1, \gamma^2\}$. The ellipses in the arguments on the left-hand side of Eq.  denote possible dependences on the coefficients for Lorentz violation.
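The algebra of the lightcone gamma matrices $\gamma^{\pm} = (\gamma^0 \pm \gamma^3)/\sqrt{2}$ can be confirmed numerically. The sketch below assumes the Dirac representation (a choice not made in the text) and verifies that $\gamma^{\pm}$ are nilpotent with $\{\gamma^+,\gamma^-\} = 2$, mirroring $n^2 = \bar{n}^2 = 0$ and $n\cdot\bar{n} = 1$:

```python
# Illustrative check (Dirac representation assumed) of the lightcone gamma algebra:
# (gamma^+)^2 = (gamma^-)^2 = 0 and {gamma^+, gamma^-} = 2.
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])          # gamma^0
g3 = np.block([[Z2, sz], [-sz, Z2]])          # gamma^3

gp = (g0 + g3) / np.sqrt(2.0)                 # gamma^+
gm = (g0 - g3) / np.sqrt(2.0)                 # gamma^-
anti = gp @ gm + gm @ gp                      # anticommutator {gamma^+, gamma^-}
```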
The factors $J_k, J_w$ are jacobians from the change of variables, which differ from unity at first order in Lorentz violation. These expressions represent the modified dominant twist-two PDFs. They differ from conventional results by the jacobians and by the dependences on $w(\hat{w})$ in the matrix elements. In the limit of vanishing coefficients for Lorentz violation, we have $J_k = J_w = 1$, $\widetilde k \rightarrow k$, $\hat w \rightarrow w$, and the PDFs reduce to functions of a single variable that can be expressed covariantly in terms of two light-like vectors $$\begin{aligned} &\bar n^\mu = \frac{1}{\sqrt{2}}(1,0,0,+1), \quad n^\mu = \frac{1}{\sqrt{2}}(1,0,0,-1), \label{eq:lightlike}\end{aligned}$$ with $n^2 = \bar n^2 = 0$, $n\cdot\bar n = 1$. In this basis, a generic four-vector $A^\mu$ can be expanded as $$\begin{aligned} A^\mu &= (n\cdot A)\bar{n}^\mu + (\bar{n}\cdot A)n^\mu + A^\mu_\perp,\end{aligned}$$ with $A^+ = n\cdot A$, $A^- = \bar{n}\cdot A$. We employ the basis and parametrize $w = \lambda n$ with $\lambda$ a positive constant. Since scaling $n$ by a positive constant implies scaling $\lambda$ oppositely, the PDFs are invariant under scaling of $n$. The only scalar combination allowed is $k\cdot n/p\cdot n = \xi$, so the PDFs can depend only on $\xi$.
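The lightlike basis and the decomposition of a generic four-vector can be checked in a few lines. The sample vector below is invented for illustration; the check confirms $n^2 = \bar{n}^2 = 0$, $n\cdot\bar{n} = 1$, and that the remainder $A_\perp$ has only transverse components:

```python
# Check (sample four-vector assumed) of the lightlike basis n, nbar and the
# decomposition A = (n.A) nbar + (nbar.A) n + A_perp in the metric (+,-,-,-).
import math

def mdot(a, b):
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

s = 1.0 / math.sqrt(2.0)
nbar = [s, 0.0, 0.0, s]
n = [s, 0.0, 0.0, -s]

A = [4.0, 1.0, -2.0, 3.0]                       # arbitrary sample four-vector
A_plus, A_minus = mdot(n, A), mdot(nbar, A)     # A^+ = n.A, A^- = nbar.A
A_perp = [A[mu] - A_plus*nbar[mu] - A_minus*n[mu] for mu in range(4)]
```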
Performing the $k^-$ and $\vec k_\perp$ integrations produces delta functions that set $w^+ = \vec w_\perp = 0$, which yields the standard result with PDFs as matrix elements of bilocal operators on the lightcone, $$\begin{aligned} &f_f(\xi) = \int \fr{d\lambda ,2\pi}e^{-i \xi p\cdot n \lambda} \bra{p}\bar{\psi}_f(\lambda n)\fr{\slashed{n},2}\psi_f(0)\ket{p}, \nonumber \\ &\lambda \Delta f_f(\xi) = \int \fr{d\lambda ,2\pi}e^{-i \xi p\cdot n \lambda} \bra{p,s}\bar{\psi}_f(\lambda n)\fr{\slashed{n}\gamma_5,2}\psi_f(0)\ket{p,s}, \nonumber \\ &\lambda_\perp \Delta_\perp f_f(\xi) = \int \fr{d\lambda ,2\pi}e^{-i \xi p\cdot n \lambda} \bra{p,s}\bar{\psi}_f(\lambda n)\fr{\slashed{n} \gamma_\perp^i\gamma_5,4}\psi_f(0)\ket{p,s}. \label{eq:quarkpdfunpolconv}\end{aligned}$$ Note that the rotational properties of the quark bilinear appearing in $f_f(\xi)$ imply this PDF is independent of the hadron spin $s$. In the presence of nonvanishing coefficients for Lorentz violation, the situation is more complicated. Explicit expressions at the level of Eq.  can be deduced by a similar procedure and yield scalar functions, but these are in general somewhat involved. As shown in Sec. \[sec:OPE\], the PDFs acquire additional dependence on the complete contraction of the coefficients for Lorentz violation with the hadron momentum. Taking the imaginary part of Eq.  by using Eq.  and integrating over the longitudinal variable sets $\xi$ to a function of $x$, $p$, $q$, and the coefficients for Lorentz violation. The resulting form of $T^{\mu\nu}$ is factorized and depicted in Fig. \[figure1\]. We have thus demonstrated that working in the modified Breit frame $\vec{p} + \widetilde{\vec{q}} = \vec{0}$ defined by Eq.  leads to factorization of $T^{\mu\nu}$. As in the conventional case, the PDFs in Eq.  emerge as nonlocal matrix elements evaluated along the $+$ lightcone direction. 
Since the PDFs remain scalar quantities and the perturbative portion of $T^{\mu\nu}$ is a covariant expression in the external momenta, the definition of the PDFs, the momentum fraction, and the cross section hold in any frame. Contraction with the lepton tensor $(L_i)^{\mu\nu}$ in the channels of interest and combining the result with the additional kinematical factors then yields the scattering cross section. The operator product expansion {#sec:OPE} ------------------------------ The hadronic tensor $W^{\mu\nu}$ and the forward amplitude $T^{\mu\nu}$ can also be calculated using the OPE approach [@klv17]. In this section, we sharpen our discussion by generalizing previous results and connecting to the PDFs in Eq. . The OPE considers the expansion of the product of spacelike-separated operators, such as the product of hadronic currents that frequently appears in scattering processes, as a sum of local operators in the short-distance limit. Note that the short-distance expansion of the currents occurs outside of the physical scattering region. For minimal $c$-type coefficients, a direct evaluation of the current product [@klv17] yields operators of the form $\bar{\psi}_f(0)\gamma^{\mu_1} (i\widetilde{\partial}^{\mu_2}) (i\widetilde{\partial}^{\mu_3})\ldots (i\widetilde{\partial}^{\mu_n})\psi_f(0)$. The calculation of the hadronic tensor requires matrix elements of these operators between hadron states. Taking tree-level matrix elements of these operators between quark states of momentum $k$ gives $$\bra{k}\bar{\psi}_f\gamma^{\mu_1} i\tilde{\partial}^{\mu_2}\cdots i\tilde{\partial}^{\mu_n}\psi_f\ket{k} \propto \widetilde{k}^{\mu_1}\cdots \widetilde{k}^{\mu_n}, \label{eq:quarkmatrixelOPE}$$ which is totally symmetric and traceless because $\widetilde{k}^2 = 0$. 
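The symmetric and traceless structure of the tree-level matrix elements in Eq.  follows from $\widetilde{k}^2 = 0$, which is easy to illustrate numerically for the rank-2 case with a sample lightlike vector:

```python
# Illustration (sample lightlike vector) that the tensor k~^mu k~^nu is symmetric
# and traceless with respect to eta_{mu nu} when k~^2 = 0, matching the structure
# of the leading-twist OPE matrix elements.

ETA = [1.0, -1.0, -1.0, -1.0]
kt = [5.0, 3.0, 0.0, 4.0]                     # lightlike: 25 - 9 - 0 - 16 = 0
T = [[kt[mu]*kt[nu] for nu in range(4)] for mu in range(4)]

k2 = sum(ETA[mu]*kt[mu]*kt[mu] for mu in range(4))      # k~^2
trace = sum(ETA[mu]*T[mu][mu] for mu in range(4))       # eta_{mu nu} T^{mu nu}
symmetric = all(T[mu][nu] == T[nu][mu] for mu in range(4) for nu in range(4))
```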
This suggests that only the symmetric and traceless parts of the operators $$\begin{aligned} \mathcal{O}^{\mu_1\cdots\mu_n}_f = \bar{\psi}_f(0)\gamma^{\{\mu_1} (i\tilde{D}^{\mu_2})(i\tilde{D}^{\mu_3})\ldots (i\tilde{D}^{\mu_n\}})\psi_f(0)-\text{traces} \label{eq:Optwist2}\end{aligned}$$ enter at leading twist, where $\tilde{D}^{\mu}$ represents the covariant extension of $\tilde{\partial}^{\mu}$. Moreover, the factorization analysis implies that the partons in the hard scattering have momentum $k^\mu$ such that $\widetilde k^\mu \propto p^\mu$, thus suggesting $$\begin{aligned} \bra{p}\mathcal{O}^{\mu_1\cdots\mu_n}_f\ket{p} = 2 \mathcal{A}_n^f p^{\mu_1}\cdots p^{\mu_n}, \label{eq:OPEprotonmatrixel}\end{aligned}$$ where the quantities $\mathcal{A}_n^f$ depend on the hadron momentum and on scalar contractions of the coefficients for Lorentz violation. For $n=2$, this result is supported directly by noting that $$\begin{aligned} \mathcal{O}^{\mu_1\mu_2}_f &= \theta_{f\alpha\beta} \left(\eta^{\alpha \mu_1} \eta^{\beta \mu_2} + \eta^{\alpha \mu_2} \eta^{\beta \mu_1} \right) - \text{traces},\end{aligned}$$ where $\theta^{\mu\nu}_f$ is the symmetric part of the energy-momentum tensor, and hence that $$\begin{aligned} \bra{p}{\mathcal{O}}_f^{\mu_1\mu_2}\ket{p} &= \bra{p}\theta_{f\alpha\beta}\ket{p} \left(\eta^{\alpha \mu_1} \eta^{\beta \mu_2} + \eta^{\alpha \mu_2} \eta^{\beta \mu_1} \right) -\text{traces} \propto p^{\mu_1} p^{\mu_2}, \label{eq:O2EMop}\end{aligned}$$ implying that $\mathcal{A}_2^f$ is the fraction of the total energy-momentum of the hadron carried by the parton. Given the form of Eq. 
, the prediction for the DIS cross section is identical to the factorization result if the matrix elements $\mathcal{A}_n^f$ yield the moments of the PDFs, $$\begin{aligned} \int d \widetilde k^+ (\widetilde k^+)^n f_f(\widetilde{k}^+) &= (n\cdot p)^{n+1} \mathcal{A}_{n+1}^f .\end{aligned}$$ To show that this indeed holds, consider the slightly more general case of coefficients for Lorentz violation $A^{\mu_1 \cdots \mu_{m+1}}$ with $m+1$ indices, for which we have $$\begin{aligned} f_f &= \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w e^{-i \widetilde{k}\cdot \hat{w}} \bra{p}\bar{\psi}_f(w(\hat{w}))\fr{\slashed{n},2}\psi_f(0)\ket{p} \nonumber\\ &\equiv \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w e^{-i \widetilde{k}\cdot \hat{w}} F(w(\hat w)) , \nonumber\\ \widetilde k^\mu &= k^\mu - A^{\mu k \cdots k} , \quad \hat w^\mu = w^\mu + A^{w \mu k \cdots k} , \nonumber\\ J_k &= 1 + \left(A^{\mu\nu\widetilde k \cdots \widetilde k} + A^{\mu\widetilde k \nu \widetilde k \cdots \widetilde k} + \cdots + A^{\mu\widetilde k \cdots \tilde k \nu} \right) \eta_{\mu\nu} , \quad J_w = 1 - A^{\mu\nu\widetilde k \cdots \widetilde k} \eta_{\mu\nu} , \nonumber\\ w(\hat w)^\mu &= \hat w^\mu - A^{\hat{w} \mu \widetilde k \cdots \widetilde k} .\end{aligned}$$ The following manipulations allow the removal of the explicit dependence on the jacobians $J_{k,w}$: $$\begin{aligned} f_f = & \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w e^{-i \widetilde{k}\cdot \hat{w}} F(\hat w - A^{\hat w \mu \widetilde k \cdots \widetilde k}) \nonumber\\ \stackrel{(1)}{=} & \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w e^{-i \widetilde{k}\cdot \hat{w}} \left( 1- A^{\hat w \mu \widetilde k \cdots \widetilde k} \frac{\partial}{\partial \hat w^\mu}\right) F(\hat w) \nonumber\\ \stackrel{(2)}{=} & \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 w,(2\pi)^4} J_k J_w e^{-i \widetilde{k}\cdot 
\hat{w}} \left( 1+ A^{\nu \mu \widetilde k \cdots \widetilde k} \eta_{\mu\nu} -i A^{\hat w \widetilde k \cdots \widetilde k} \right)F(\hat w) \nonumber\\ \stackrel{(3)}{=} & \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w F(\hat w) \left( 1+ A^{\nu \mu \widetilde k \cdots \widetilde k} \eta_{\mu\nu} + A^{\mu \widetilde k \cdots \widetilde k} \frac{\partial}{\partial \widetilde k^\mu} \right) e^{-i \widetilde{k}\cdot \hat w} \nonumber\\ \stackrel{(4)}{=} & \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w F(\hat w)e^{-i \widetilde{k}\cdot \hat{w}} \left( 1+ A^{\nu \mu \widetilde k \cdots \widetilde k} \eta_{\mu\nu} \right. \nonumber\\ & \left. -\left(A^{\mu\nu\widetilde k \cdots \widetilde k} + A^{\mu\widetilde k \nu \widetilde k \cdots \widetilde k} + \cdots + A^{\mu\widetilde k \cdots \tilde k \nu} \right) \eta_{\mu\nu} - A^{\mu \widetilde k \cdots \widetilde k} \frac{\partial}{\partial \widetilde k^\mu} \right) \nonumber\\ = & \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} J_k J_w F(\hat w)e^{-i \widetilde{k}\cdot \hat{w}} J_k^{-1} J_w^{-1} \left( 1 - A^{\mu \widetilde k \cdots \widetilde k} \frac{\partial}{\partial \widetilde k^\mu} \right) \nonumber\\ = & \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} F(\hat w)e^{-i \widetilde{k}\cdot \hat{w}} \left( 1 - A^{\mu \widetilde k \cdots \widetilde k} \frac{\partial}{\partial \widetilde k^\mu} \right) \nonumber\\ \stackrel{(5)}{=} & \int \fr{d\widetilde{k}^-d\widetilde{k}_\perp d^4 \hat{w},(2\pi)^4} F(\hat w)e^{-i \widetilde{k}\cdot \hat{w}} \left( 1 - A^{\mu \widetilde k \cdots \widetilde k} n_\mu \frac{\partial}{\partial \widetilde k^+} \right) . 
\label{eq:laststep}\end{aligned}$$ In step $(1)$ of this derivation we expanded $F$, in step $(2)$ we integrated by parts in $\hat w$, in step $(3)$ we expressed the term linear in $\hat w$ as a $\widetilde k$ derivative acting only on $\exp (-i \widetilde k \cdot \hat w)$, and in step $(4)$ we integrated by parts in $\widetilde k$ noting that $f$ is a distribution that must be integrated over a hard-scattering kernel. Finally, in step $(5)$ we used the fact that the hard scattering is a function of $\widetilde k^+$ alone and that in lightcone coordinates one has $$\begin{aligned} \frac{\partial}{\partial \widetilde k^\mu} &= n_\mu \frac{\partial}{\partial \widetilde k^+} + \bar n_\mu \frac{\partial}{\partial \widetilde k^-} + \frac{\partial}{\partial \widetilde k^\mu_\perp} .\end{aligned}$$ To proceed further, we observe that the integral over terms proportional to $(\widetilde{k}^-)^a$ and $(\widetilde{k}_\perp)^b$ with $a,b \geq 1$ produces delta functions $\delta^{(a)} (\hat w^+)$ and $\delta^{(b)} (\hat w_\perp)$. After integrating over $\hat w$, these yield higher-twist PDFs that we can neglect as higher order. This implies that we can set $\widetilde k^\mu = \widetilde k^+ \bar n^\mu$ in the last term of Eq. (\[eq:laststep\]), integrate over $\widetilde k^-$, $\widetilde k_\perp$, $\hat w^+$, and $\hat w_\perp$, and obtain $$\begin{aligned} f_f = & \int \frac{d\hat w^-}{2\pi} F(\hat w^- n) e^{-i \widetilde k^+ \hat w^-} \left( 1 - A^{n \bar n \cdots \bar n} \left(\widetilde k^{+}\right)^m\frac{\partial}{\partial \widetilde k^+} \right) \nonumber\\ = & \int \frac{d\hat w^-}{2\pi} e^{-i \widetilde k^+ \hat w^-} \left(1 + A^{n \bar n \cdots \bar n} (m-1) \left(\widetilde{k}^+\right)^{m-1}\right) F(w(\hat w^- n)).
\label{eq:pdfexplicit}\end{aligned}$$ To achieve the second line above, we integrate by parts in $\widetilde k^+$, replace one power of $\widetilde k^+$ with $i \partial (e^{-i \widetilde k^+ \hat w^-})/\partial \hat w^-$ in the term proportional to $\hat w^-$, integrate by parts in $\hat w^-$, and neglect higher-twist effects. The latter arise from derivatives with respect to $\hat w^+$ and $\hat w_\perp^\mu$. These expressions demonstrate that the PDF can still be written as a regular function and that for $m=1$ it reproduces the known result for the coefficient $c_f^{\mu\nu}$. To conclude the argument, we use Eq. (\[eq:pdfexplicit\]) to calculate the $n$th moment of the PDF, $$\begin{aligned} \int d \widetilde{k}^+ (\widetilde{k}^+)^n f_f = & \int d\hat w^- \frac{d\widetilde k^+}{2\pi} F(\hat w^- n) e^{-i \widetilde k^+ \hat w^-} \left( (\widetilde{k}^+)^n - A^{n \bar n \cdots \bar n} n (\widetilde k^+)^{m+n-1} \right) \nonumber\\ = & \int d\hat w^- F(\hat w^- n) \left[ (-i)^n \delta^{(n)} (\hat w^-) - A^{n \bar n \cdots \bar n} n (-i)^{m+n-1} \delta^{(m+n-1)} (\hat w^-) \right] \nonumber\\ = & i^n \frac{\partial^n}{\partial (\hat w^-)^n} \left( 1 - n A^{n \bar n \cdots \bar n} i^{m-1} \frac{\partial^{m-1}}{\partial ({\hat w}^-)^{m-1}}\right) F(\hat w^- n) \Big|_{\hat w^-=0} \nonumber\\ = & \left( i \frac{\partial}{\partial \hat w^-} - A^{n \bar n \cdots \bar n} i \frac{\partial}{\partial \hat w^-} \cdots i \frac{\partial}{\partial \hat w^-}\right)^n F(\hat w^- n) \Big|_{\hat w^-=0} \nonumber\\ = & \left[ n^\mu \left( i \frac{\partial}{\partial \hat w^\mu} - {A_\mu}^{\mu_1\cdots \mu_m} i \frac{\partial}{\partial \hat w^{\mu_1}} \cdots i \frac{\partial}{\partial \hat w^{\mu_m}}\right) \right]^n F(\hat w^- n) \Big|_{\hat w^-=0} \nonumber\\ \equiv & ( n^\mu \widetilde \partial_\mu)^n F(\hat w^- n) \Big|_{\hat w^-=0} \nonumber\\ = & \frac{1}{2} n^\nu n^{\nu_1} \cdots n^{\nu_n} \bra{p} i \widetilde \partial_{\nu_1} \cdots i \widetilde \partial_{\nu_n}
\bar{\psi}_f(\hat w^- n) \gamma_\nu \psi_f(0)\ket{p} \Big|_{\hat w^-=0} \nonumber\\ = & \frac{1}{2} n^\nu n^{\nu_1} \cdots n^{\nu_n} \bra{p} \mathcal{O}_f^{\nu\nu_1 \cdots \nu_{n}} \ket{p} \nonumber\\ = & (n\cdot p)^{n+1} \mathcal{A}_{n+1} ,\end{aligned}$$ which is the desired result. Note that for this derivation we have implicitly worked with the spin-independent basis of operators given in Eq.  to make connection with the spin-independent PDF in Eq. . We anticipate that a generalization of this result holds for the spin-dependent PDFs for a suitable choice of operator basis. Note also that the above matching of the factorization result to the OPE means that the PDFs cannot depend on additional scalar quantities, which thereby provides support for our approach. Minimal $c$-type coefficients {#sec:c} ----------------------------- As a first application of the above methods, we revisit the dominant effects of minimal CPT-even Lorentz violation on the $u$- and $d$-quark sectors in unpolarized electron-proton scattering mediated by photon exchange. In the massless limit, the relevant electromagnetic Lagrange density is [@klv17] $$\begin{aligned} \mathcal{L} = \sum_{f=u,d}\tfrac{1}{2}\bar{\psi}_{f} (\eta^{\mu\nu} + c_f^{\mu\nu}) \gamma_\mu i\overset{\text{\tiny$\leftrightarrow$}} {D}_{\nu}\psi_{f}, \label{eq:cmodel}\end{aligned}$$ where $\overset{\text{\tiny$\leftrightarrow$}} D_\nu = \overset{\text{\tiny$\leftrightarrow$}} \partial_\nu + 2 i e_f A_\nu$ with $e_f$ the quark charges. As noted in Sec. \[sec:setup\], the coefficients $c_f^{\mu\nu}$ are assumed symmetric and traceless. The inclusion of dimension-four Lorentz-violating operators produces a nonhermitian hamiltonian and corresponding unconventional time evolution of the external fields [@bkr98]. One method to handle this is to perform a fermion-field redefinition to obtain a hermitian hamiltonian and hence a unitary time evolution. 
This induces a noncovariant relationship between spinors in different observer frames [@bkr98; @kl01; @LehnertDirac]. An alternative approach is to introduce an unconventional scalar product in Hilbert space while preserving spinor observer covariance [@kt11; @pottinglehnert]. The two approaches are known to yield equivalent physical results at leading order in Lorentz violation. We adopt the second one in this work, as it preserves the compatibility of the PDF definitions with the various observer Lorentz transformations used in the methodology developed here. Details of this quantization procedure are given in Ref. [@pottinglehnert]. The dispersion relation for Eq.  is $$\widetilde{k}_f^2 =0 , \label{eq:disprelc}$$ where $\widetilde{k}_f^\mu \equiv (\eta^{\mu\nu} + c_f^{\mu\nu})k_\mu $. For these coefficients, the tilde operation is linear and thus can be applied to an arbitrary set of 4-vectors. As described in Sec. \[sec:setup\], the on-shell condition is satisfied by the parametrization $\widetilde{k} = \xi p$, where $p$ is the proton momentum. Note that this choice renders $\widetilde{k}$ independent of flavor. The physical momentum $k$ is thus given by $$k_f^\mu = \xi(p^\mu - c_f^{\mu p}), \label{ctheorymom}$$ where $c_f^{\mu p} \equiv c_f^{\mu\nu}p_\nu$. Note that $k$ can differ from $\widetilde{k}$ only by possible 4-vectors constructed from $\xi$, $p^\mu$, and $c_f^{\mu p}$, and the requirement implies that the only available 4-vector in this case is $c_f^{\mu p}$. The modified Breit frame fixed by $\vec{p} + \vec{\widetilde{q}}_f = 0$ with $\widetilde{q}_f^\mu = (\eta^{\mu\nu} + c_f^{\mu\nu})q_\nu$ is flavor dependent. However, no interference between the different flavor channels occurs at leading order because the DIS process is within the regime of incoherent scattering. Transforming to the modified Breit frame, we can apply Eqs. - with the appropriate tilde operation. 
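The statement that the parametrization $k_f^\mu = \xi(p^\mu - c_f^{\mu p})$ solves $\widetilde{k} = \xi p$ at first order can be checked with toy coefficient values. The sketch below uses an invented symmetric, $\eta$-traceless $c^{\mu\nu}$ and compares the residual $\widetilde{k} - \xi p$ against the naive choice $k = \xi p$, for which the mismatch appears already at first order:

```python
# First-order sketch (toy symmetric, eta-traceless c values) that with
# k^mu = xi (p^mu - c^{mu p}) the tilde momentum (eta + c)k reproduces xi p^mu
# up to terms quadratic in the coefficients for Lorentz violation.

ETA = [1.0, -1.0, -1.0, -1.0]
EPS = 1e-4
c = [[0.0] * 4 for _ in range(4)]
c[0][0] = EPS; c[3][3] = EPS          # eta-trace: c^00 - c^33 = 0
c[0][1] = c[1][0] = EPS               # symmetric off-diagonal entry

def tilde(k):
    """k~^mu = k^mu + c^{mu nu} k_nu."""
    return [k[mu] + sum(c[mu][nu] * ETA[nu] * k[nu] for nu in range(4))
            for mu in range(4)]

xi = 0.25
p = [10.0, 0.0, 0.0, 10.0]
c_mu_p = [sum(c[mu][nu] * ETA[nu] * p[nu] for nu in range(4)) for mu in range(4)]

k = [xi * (p[mu] - c_mu_p[mu]) for mu in range(4)]          # physical momentum
residual = max(abs(tilde(k)[mu] - xi * p[mu]) for mu in range(4))   # O(c^2)
naive = max(abs(tilde([xi * p[mu] for mu in range(4)])[mu] - xi * p[mu])
            for mu in range(4))                                     # O(c)
```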
The scattered parton has $k'^\mu = k^\mu + q^\mu$ by construction and also satisfies $(\widetilde{k_f+q_f})^2 = 0$, where by linearity of the tilde operation we have $\widetilde{k+q}_f^\mu = \widetilde{k}^\mu + \widetilde{q}_f^\mu$. In particular, this implies $\widetilde{q}_f^\mu = q^\mu + c_f^{\mu q}$. Note that the flavor dependence of $\widetilde{q}_f$ is thereby transferred to $\widetilde{k}_f'$. The unpolarized differential cross section can be written in the form [@klv17] $$\begin{aligned} \fr{d\sigma,dxdyd\phi} = \fr{\alpha^2 y,2\pi q^4}L_{\mu\nu}\text{Im}T^{\mu\nu}, \label{eq:tripleDISxsec_c}\end{aligned}$$ where the electron tensor in the massless limit is $L^{\mu\nu} = 2\left(l^\mu l'^\nu + l^\nu l'^\mu - (l\cdot l')\eta^{\mu\nu}\right)$, and the scattered electron momentum is parametrized as $l'^\mu = E'(1,\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$ in terms of the polar angle $\theta$ and the azimuthal angle $\phi$ defined relative to a chosen $z$ axis. The current is a modified vector current $$\begin{aligned} j_f^\mu = e_f \bar{\psi}_f\Gamma_f^\mu\psi_f, \label{eq:ccurrent}\end{aligned}$$ where $\Gamma_f^\mu = (\eta^{\mu\nu} + c_f^{\nu\mu})\gamma_\nu$ with $c_f^{\nu\mu}$ nonzero for $f=u$, $d$. Since only the vector part of the interaction survives, all relevant quantities are in place to construct the explicit cross section. The forward amplitude for a single flavor $f$ reads $$T_f^{\mu\nu} = \int \fr{d\xi,\xi}e_f^2 \text{Tr}\left[\Gamma_f^\mu\fr{-1,\xi\slashed{p} + \slashed{\widetilde{q}_f}+i\epsilon} \Gamma_f^\nu \fr{\xi\slashed{p},2}\right] f_f(\xi, c_f^{pp}). \label{eq:Tctheory}$$ Using the basis , we find $f_f(\xi,c_f^{pp})$ is given in covariant form by $$\begin{aligned} f_f(\xi,c_f^{pp}) &= \int\fr{d\lambda,2\pi}e^{-i \xi p\cdot n \lambda} \bra{p}\bar{\psi}(\lambda \widetilde{n}_f) \frac{\slashed{n}}{2}\psi(0)\ket{p} .
\label{eq:cunpolarizedpdf}\end{aligned}$$ Note the similarity between the PDF derived in the presence of Lorentz violation and the conventional PDF in Eq. . The PDF remains independent of spin since the coefficients $c_f^{\mu\nu}$ control spin-independent operators in the theory. In principle, the jacobian factors $J_k J_w$ resulting from the change of integration variables contain contributions proportional to the trace of the coefficients $c_f^{\mu\nu}$, but the latter vanish by assumption and hence are irrelevant. The explicit dependence of the matrix elements on the coefficients for Lorentz violation arises through the shifted variable $\widetilde{n}_f$, which induces an implicit dependence on a single scalar quantity $c_f^{pp}$. Further insight on this is provided in Sec. \[sec:c\]. The imaginary part of the propagator denominator may be calculated using Eq. , which yields $$2\text{Im}\fr{-1,(\xi p + \widetilde{q}_f)^2+i\epsilon} = 2\pi\fr{\widetilde{x}_f,\widetilde{Q}_f^{2}}\delta\left(\xi - \widetilde{x}_f\right), \label{eq:Impropc}$$ where $\widetilde{Q}_f^2 \equiv -\widetilde{q}_f^2$. In this particular case, the variable $\widetilde{x}_f$ corresponds to the generic definition because $(\xi p+\widetilde{q}_f)^2$ is linear in $\xi$. Using Eqs. -, we can verify the Ward identity $q_\mu W^{\mu\nu} = 0$. This must hold here because $2\text{Im}T^{\mu\nu} = W^{\mu\nu}$ in the physical scattering region defined by $\widetilde{k}_f^2 = (\widetilde{k}_f+\widetilde{q}_f)^2 = 0$ with $q^2 < 0$. Note that the Ward identity requires both the incident and scattered quark to be on shell. To leading order in the coefficients $c_f^{\mu\nu}$, we find $$\begin{aligned} \widetilde{x}_f = x\left(1+\fr{2c_f^{qq},q^2}\right) +\frac{x^2}{q^2}\left(c_f^{pq} + c_f^{qp}\right).\end{aligned}$$ The difference between $\widetilde{x}_f$ and the quantity $x_f'$ in Eq. (13) of Ref. 
[@klv17] is a single term proportional to $\xi^2c_f^{pp}$, which is removed in the current approach by the on-shell relation for the partons. Summing over all flavors, denoting $\text{Im}T^{\mu\nu} = \sum_f\text{Im}T^{\mu\nu}_f$, combining Eq.  with the numerator trace in Eq. , integrating over $\xi$, and contracting with $L_{\mu\nu}$ gives the explicit form of the cross section as $$\fr{d\sigma,dx dy d\phi} = \fr{\alpha^2y,2 Q^4}\sum_f e_f^2\fr{1,\widetilde{Q}_f^2} L_{\mu\nu}H_f^{\mu\nu}f_f(\widetilde{x}_f,c_f^{pp}), \label{eq:disxsecctheory}$$ where $$\begin{aligned} &H_f^{\mu\nu} \equiv \text{Tr}\left[\Gamma_f^\mu\left(\widetilde{\slashed{\hat{k}_f}} + \slashed{\widetilde{q}_f} \right) \Gamma_f^\nu\fr{\widetilde{\slashed{\hat{k}}_f} ,2}\right], \nonumber\\ &L_{\mu\nu}H_f^{\mu\nu} = 8 \left[2(\hat{k}_f\cdot l)(\hat{k}_f\cdot l') + \hat{k}_f\cdot(l-l')(l\cdot l') + 2(\hat{k}_f\cdot l)\left(c_f^{\hat{k}_fl'} + c_f^{l'\hat{k}_f} - c_f^{l'l'} \right) \right. \nonumber\\ & \hskip 50pt \left. + 2(\hat{k}_f\cdot l')\left(c_f^{\hat{k}_fl} + c_f^{l\hat{k}_f} + c_f^{ll}\right) - 2(l\cdot l')c_f^{\hat{k}_f\hat{k}_f}\right], \label{eq:Hfc}\end{aligned}$$ with $\hat{k}_f^\mu \equiv \widetilde{x}_f(p^\mu-c_f^{\mu p})$ and $\widetilde{\hat{k}^\mu_f} = \widetilde{x}_f p^\mu$. At leading order in Lorentz violation, corrections to $\hat{k}_f^\mu$ contribute only to the first line of Eq. . The above derivation provides an explicit demonstration that the hadronic tensor in the presence of Lorentz violation factorizes into a hard part proportional to $H_f^{\mu\nu}$ in Eq. . This contribution to the cross section resembles the vertex structure of an elastic partonic subprocess, which has a number of interesting implications. The covariant parametrization $\widetilde{k}^\mu = \xi p^\mu$ is so far motivated by SME considerations, factorization arguments, and via the OPE approach. 
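As a consistency check, the leading-order expression for $\widetilde{x}_f$ quoted above can be compared against the exact root of $(\xi p + \widetilde{q}_f)^2 = 0$ in the massless-hadron limit, using $\widetilde{q}_f^\mu = q^\mu + c_f^{\mu q}$. The following sketch (the momenta and the randomly generated symmetric traceless coefficients are purely illustrative, not values from the text) verifies agreement up to terms of second order in the coefficients:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric, signature (+,-,-,-)

def dot(a, b):
    return a @ eta @ b

# illustrative symmetric traceless c-type coefficients (hypothetical values)
rng = np.random.default_rng(1)
A = rng.normal(scale=1e-5, size=(4, 4))
c = 0.5 * (A + A.T)
c -= eta * np.trace(eta @ c) / 4.0       # enforce eta_{mu nu} c^{mu nu} = 0

p = np.array([10.0, 0.0, 0.0, 10.0])     # massless hadron moving along +z
q = np.array([2.0, 1.0, 0.5, -3.0])      # spacelike momentum transfer, q^2 < 0

qt = q + c @ (eta @ q)                   # q-tilde^mu = q^mu + c^{mu q}

# exact root of (xi p + q-tilde)^2 = 0 for p^2 = 0
x_exact = -dot(qt, qt) / (2.0 * dot(p, qt))

# leading-order expression from the text; c^{pq} = c^{qp} for symmetric c
x = -dot(q, q) / (2.0 * dot(p, q))
cqq = (eta @ q) @ c @ (eta @ q)          # c^{qq} = c^{mu nu} q_mu q_nu
cpq = (eta @ p) @ c @ (eta @ q)          # c^{pq} = c^{mu nu} p_mu q_nu
x_lo = x * (1.0 + 2.0 * cqq / dot(q, q)) + x**2 / dot(q, q) * (2.0 * cpq)

assert dot(q, q) < 0 and abs(dot(p, p)) < 1e-12
assert abs(x_exact - x_lo) < 1e-8        # residual is O(c^2)
```

The residual of the comparison scales quadratically with the size of the coefficients, as expected for a first-order expansion.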
The cross section is an observer scalar because it is composed of scalar kinematical objects, including the PDFs and the contraction of covariant tensor structures. Next, we demonstrate that the cross section may be decomposed into purely observer scalar quantities that can be interpreted in terms of partonic and hadronic quantities only when the choice $\widetilde{k}^\mu = \xi p^\mu$ is made. This supports the notion that, in the restricted kinematical regime of interest, the hard process can be viewed as if it were mediated by a massless on-shell SME parton scattering from the virtual photon. To see this, consider the forward spin-averaged elastic-scattering matrix element $M$ of a virtual photon of momentum $q$ scattering from a free massless SME quark of momentum $k$ with flavor $f$. Using the sum over fermion spins, we find this is given by [@pottinglehnert; @ck01] $$\begin{aligned} M = e_f^2\delta(\xi-\widetilde{x}_f) \fr{2\pi\widetilde{x}_f,\widetilde{Q}_f^2} \text{Tr}\left[\Gamma_f^\mu\left(\widetilde{\slashed{k}}_f + \slashed{\widetilde{q}_f} \right) \Gamma_f^\nu\fr{\widetilde{\slashed{k}}_f ,2} \fr{N(\vec{k}),2{\accentset{\approx}{k}}_f^0}\right], \label{eq:forwardpartonc}\end{aligned}$$ where $N(\vec{k})$ is the fermion-field normalization and ${\accentset{\approx}{k}}_f^\mu \equiv k^\mu + 2c_f^{\mu k }$. Note that this result is consistent with Eq. . In constructing a differential cross section that preserves Lorentz observer invariance, one typically forms the product of the differential decay rate and the initial-state flux factor. For general colliding species $A$, $B$, the flux factor may be expressed in terms of the beam densities $N(\vec{A}), N(\vec{B})$ and velocities $v_A^j, v_B^j$ as $$\begin{aligned} F & = N(\vec{A})N(\vec{B}) \sqrt{(\vec{v}_{A}-\vec{v}_{B})^2 - (\vec{v}_{A}\times\vec{v}_{B})^2}, \label{eq:fluxv}\end{aligned}$$ where the group velocity $v_{A,B}^j$ is defined as $$v_{A,B}^j = \frac{\partial k_{A,B}^0}{\partial k_{A,B}^j}. 
\label{eq:groupv}$$ For the $c$-type coefficients with $\widetilde{k}_f^2 = 0$, the group velocity is found to be [@ck01] $$v_g^j = \frac{{\accentset{\approx}{k}}_f^j}{{\accentset{\approx}{k}}_f^0}. \label{eq:groupvc}$$ Using Eqs. -, the flux for the collision of an electron of momentum $l^\mu$ and a quark of momentum $k_f^\mu$ can be expressed as $$F = N(\vec{k})N(\vec{l})\frac{\sqrt{({\accentset{\approx}{k}}_f\cdot l)^2 -{\accentset{\approx}{k}}_f^2 l^2}}{{\accentset{\approx}{k}}_f^0 l^0}. \label{eq:fluxmomenta}$$ Combining Eqs.  and and the associated leptonic vertex contribution with spin averaging, one sees the factor ${\accentset{\approx}{k}}_f^0 l^0$ cancels leaving a scalar quantity. A cross section for the partonic subprocess has thus been found that constitutes a substructure of the full hadronic cross section given in Eq. . The hadronic cross section may thus be expressed in terms of an integral over these partonic cross sections scaled by the ratio of the partonic flux factor to that of the conventional hadronic flux factor $2s$. This ratio is equal to unity for dimension-three operators but typically differs from unity for dimension-four and higher operators. However, it can at most produce a shift at first order in the coefficients for Lorentz violation. In contrast, if one instead chooses the parametrization $k = \xi p$, the above construction and interpretation of the hard-scattering process cannot be made. This alternative would represent an off-shell subprocess and would spoil electromagnetic gauge invariance. It also implies that the group velocity of the parton is exactly equal to that of the hadron as usual, which prevents the cancellation of the factor of ${\accentset{\approx}{k}}^0$ appearing in the trace without a concomitant unconventional redefinition of the flux. It follows that satisfactory partonic cross sections cannot be constructed in this alternative scenario, so a consistent interpretation of the hard scattering becomes unclear. 
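The group-velocity formula above can be checked the same way: solve the modified dispersion relation $\widetilde{k}_f^2 = 0$ numerically for $k^0(\vec{k})$, differentiate, and compare with ${\accentset{\approx}{k}}_f^j/{\accentset{\approx}{k}}_f^0$, where ${\accentset{\approx}{k}}_f^\mu = k^\mu + 2c_f^{\mu k}$. A sketch with illustrative (hypothetical) coefficient values:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

# illustrative symmetric traceless c-type coefficients (hypothetical values)
rng = np.random.default_rng(3)
A = rng.normal(scale=1e-4, size=(4, 4))
c = 0.5 * (A + A.T)
c -= eta * np.trace(eta @ c) / 4.0       # enforce eta_{mu nu} c^{mu nu} = 0

def disp(k0, kvec):
    # dispersion relation (k-tilde)^2 = 0 with k-tilde^mu = k^mu + c^{mu nu} k_nu
    k = np.concatenate(([k0], kvec))
    kt = k + c @ (eta @ k)
    return kt @ eta @ kt

def k0_onshell(kvec):
    # Newton iteration, starting from the Lorentz-invariant value |kvec|
    k0 = np.linalg.norm(kvec)
    for _ in range(30):
        h = 1e-7
        d = (disp(k0 + h, kvec) - disp(k0 - h, kvec)) / (2 * h)
        k0 -= disp(k0, kvec) / d
    return k0

kvec = np.array([0.3, -1.2, 2.0])
k = np.concatenate(([k0_onshell(kvec)], kvec))

# group velocity v^j = dk^0/dk^j from central differences of the dispersion relation
h = 1e-6
v_num = np.array([(k0_onshell(kvec + h * e) - k0_onshell(kvec - h * e)) / (2 * h)
                  for e in np.eye(3)])

# formula: v^j = kk^j / kk^0 with kk^mu = k^mu + 2 c^{mu nu} k_nu
kk = k + 2.0 * c @ (eta @ k)
v_formula = kk[1:] / kk[0]

assert np.allclose(v_num, v_formula, atol=1e-5)
```

The two determinations agree up to second-order terms in the coefficients and finite-difference noise.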
Note that this discussion pertains only to dimensionless $c$- and $d$-type coefficients for Lorentz violation, which produce nonscalar quantities from fermion spin sums as a consequence of the quantization procedure. Finally, we remark that the connection between the moments of the PDFs and the matrix elements of the operators in the OPE is comparatively straightforward for the $c$-type coefficients. The $n$th moment of the PDF is $$\begin{aligned} \int d\widetilde k^+ (\widetilde k^+)^n f_f &= \int d\hat w^- \bra{p}\bar{\psi}(w(\hat w^- n)) \fr{\slashed{n},2}\psi(0)\ket{p} \int \frac{d\widetilde k^+}{2\pi} (\widetilde k^+)^n e^{- i \widetilde k^+ \hat w^-} \nonumber\\ &= \int d\hat w^- \bra{p}\bar{\psi}(w(\hat w^- n)) \fr{\slashed{n},2}\psi(0)\ket{p} (-i)^n \delta^{(n)} (\hat w^-) \nonumber\\ &= i^n \bra{p} \frac{\partial^n}{\partial (\hat w^-)^n} \bar{\psi}(w(\hat w^- n)) \fr{\slashed{n},2}\psi(0)\ket{p} \Big|_{\hat w^-=0} .\end{aligned}$$ In this case, we have $$\begin{aligned} w^\mu (\hat w^- n) &= (\eta^{\mu\nu} + c_f^{\mu\nu}) n_\nu \hat w^- \nonumber\\ \frac{\partial}{\partial\hat w^-} &= \frac{\partial w^\mu}{\partial \hat w^-} \frac{\partial}{\partial w^\mu} = n^\mu (\eta_{\mu\nu} + {c_f}_{\mu\nu}) \partial^\nu = n^\mu \widetilde \partial_\mu ,\end{aligned}$$ and we therefore obtain $$\begin{aligned} \int d\widetilde k^+ (\widetilde k^+)^n f_f &= \frac{1}{2} n^\mu n^{\mu_1} \cdots n^{\mu_n} \bra{p} i \widetilde \partial_{\mu_1} \cdots i \widetilde \partial_{\mu_n} \bar{\psi}(0) \gamma_\mu \psi(0)\ket{p} .\end{aligned}$$ Taking advantage of the totally symmetric nature of the tensor $n^\mu n^{\mu_1} \cdots n^{\mu_n}$, the absence of trace contributions to the matrix element, and the replacement of regular derivatives with covariant ones as required by gauge invariance yields $$\begin{aligned} \int d\widetilde k^+ (\widetilde{k}^+)^n f_f &= \frac{1}{2} n_{\nu_1} \cdots n_{\nu_{n+1}} \bra{p} \hat{\mathcal{O}}_\psi^{\nu_1 \cdots \nu_{n+1}}\ket{p} = (n\cdot 
p)^{n+1} \mathcal{A}_{n+1}.\end{aligned}$$ Using the moments to reconstruct the whole PDF, we see that the only dependence on the coefficients $c_f^{\mu\nu}$ arises from the matrix elements $\mathcal{A}_{n+1}$, so that $f_f = f_f(\xi, c_f^{pp})$. Note that the PDF is a dimensionless quantity and so $c^{pp}$ has to appear in the combination $c^{pp}/\Lambda_{\rm QCD}^2$, which emphasizes the genuinely nonperturbative origin of this dependence. Nonminimal $a^{(5)}$-type coefficients {#sec:a} -------------------------------------- In the context of unpolarized DIS, the effects of nonzero flavor-diagonal quark coefficients $a_f^{(5)\mu\alpha\beta}$ controlling CPT-odd operators with mass dimension five have recently been studied [@kl19]. These coefficients stem from the nonminimal SME term $$\mathcal{L}_{\text{SME}} \supset -(a^{(5)})_{AB}^{\mu\alpha\beta}\bar{\psi}_{A}\gamma_\mu i D_{(\alpha}i D_{\beta)}\psi_{B} + \text{h.c.} \label{eq:a5model}$$ Nonzero proton coefficients $a_p^{(5)\mu\alpha\beta}$ were included in the DIS analysis of Ref. [@kl19] because current experiments constrain them only partially [@tables]. To avoid complications with modified kinematics for the external states and with the interpretation of proton matrix elements, we assume here conventional proton states so that $a_p^{(5)\mu\alpha\beta} = 0$. Incorporating effects of nonzero proton coefficients into the following analysis is an interesting open issue but lies outside our present scope. Note that the connection between quark and proton coefficients is under investigation in the context of chiral perturbation theory [@lvchpt1; @lvchpt2; @lvchpt3; @lvchpt4; @lvchpt5] and may provide insights along these lines. Following the method developed in Sec. 
\[sec:setup\], the quark momentum is parametrized at leading order in Lorentz violation as $$\begin{aligned} k_f^\mu = \xi p^\mu \pm \xi^2a_f^{(5)\mu p p}, \label{eq:a5parton}\end{aligned}$$ where the $+$ and $-$ signs correspond to particles and antiparticles, respectively. This expression matches Eq. (56) of Ref. [@kl19] for $a_p^{(5)\mu\alpha\beta} = 0$. The corresponding global $U(1)$ conserved current $j^\mu$ takes the form $$\begin{aligned} j^\mu_f = \bar{\psi}_f\left(\gamma^\mu - ia_f^{(5)\alpha\beta\mu}\gamma_\alpha \overset{\text{\tiny$\leftrightarrow$}}\partial_{\beta}\right)\psi_f, \label{eq:a5current}\end{aligned}$$ where we now define $$\Gamma_f^\mu = \gamma^\mu - ia_f^{(5)\alpha\beta\mu}\gamma_\alpha \overset{\text{\tiny$\leftrightarrow$}}\partial_{\beta}. \label{eq:Gammaa5}$$ Since the $a^{(5)}$-type coefficients control spin-independent operators and the current is a modified vector current, only the leading-twist unpolarized PDF $f_f(\xi)$ appears in $T^{\mu\nu}$, paralleling the case of the $c$-type coefficients. The choice is also required to satisfy the Ward identity. Using Eqs. - in the third term of Eq.  and transforming to the modified Breit frame using Eq.  for the quark momentum leads to the factorization of $T_{\mu\nu}$. After some calculation, we find the cross section to be $$\begin{aligned} \frac{d\sigma}{dx dy d\phi} = \frac{\alpha^2}{q^4}&\sum_f F_{2f} \left[\frac{ys^2}{\pi}\left[1+(1-y)^2\right]\delta_{\text{S}f} + \frac{y(y-2)s}{x}x_{\text{S}f} \right. \nonumber\\ &\left. \hskip 30pt -\frac{4}{x}\left(4x^2a_{\text{S}f}^{(5)ppk} + 6xa_{\text{S}f}^{(5)kpq} + 2a_{\text{S}f}^{(5)kqq}\right) \right. \nonumber\\ & \left. \hskip 30pt +2y\left(4x^2a_{\text{S}f}^{(5)ppp} + 4xa_{\text{S}f}^{(5)ppq} + 4xa_{\text{S}f}^{(5)kpp} + 2a_{\text{S}f}^{(5)kpq} + a_{\text{S}f}^{(5)pqq}\right) \right. \nonumber\\ &\left. 
\hskip 30pt + \frac{4y}{x}\left(2xa_{\text{S}f}^{(5)kkp} + a_{\text{S}f}^{(5)kkq}\right)\right], \label{eq:DISa5xsec}\end{aligned}$$ where $F_{2f} = e_f^2 f_f(x_{\text{S}f}')x_{\text{S}f}'$ with $x_{\text{S}f}' = x - x_{\text{S}f}$ and $$\begin{aligned} &\delta_{\text{S}f} = \frac{\pi}{ys} \left[1+\frac{2}{ys} \left(4x a_{\text{S}f}^{(5)ppq} + 2a_{\text{S}f}^{(5)pqq} + a_{\text{S}f}^{(5)pqq}\right)\right], \nonumber\\ &x_{\text{S}f} = -\frac{2}{ys}\left(2x^2 a_{\text{S}f}^{(5)ppq} + 3xa_{\text{S}f}^{(5)pqq} + a_{\text{S}f}^{(5)qqq}\right). \label{xSf}\end{aligned}$$ Note that this expression is consistent with the result obtained in Ref. [@kl19] in the limit $a_{p}^{(5)\mu\alpha\beta} = 0$ once the observability of the $a^{(5)}$-type quark coefficients is taken into account. Note also that the shifted Bjorken variable is distinct from the quantity $\widetilde{x}$ generically defined in Eq. , which serves as a placeholder parametrization mimicking the conventional case. This contrasts with the case of $c$-type coefficients evidenced in Eq.  because the imaginary part of the propagator denominator is quadratic in $\xi$. However, as described in Sec. \[ssec:FactDIStensor\], the replacement $\xi \rightarrow x$ is satisfactory for terms proportional to the coefficients for Lorentz violation and yields the explicit expression for $\widetilde{q}_{f}$ defining the modified Breit frame as $$\begin{aligned} \widetilde{q}_f^\mu = q^\mu - a_{\text{S}f}^{(5)\mu qq} - x (a_{\text{S}f}^{(5)\mu p q} + a_{\text{S}f}^{(5)\mu q p}). \label{eq:qtildea5}\end{aligned}$$ It is interesting to observe that if the scattering were initiated by an antiquark as opposed to a quark, the expression above acquires opposite signs at leading order in Lorentz violation, revealing that the modified Breit frame is both flavor and particle/antiparticle dependent. For the spin-independent PDF , the explicit expression at the level of Eq.  is illustrative. 
From general OPE considerations, the PDF can depend only on scalar combinations of $a_f^{(5)\mu\alpha\beta}$ and $p^\nu$. Furthermore, as the coefficients $a_{\text{S}f}^{(5)\mu\alpha\beta}$ are symmetric and traceless, the only possible combination is $a_{\text{S}f}^{(5)ppp}/\Lambda_{\rm QCD}^2$. Using Eq. , we find $$\begin{aligned} f_f (\xi, a_{\text{S}f}^{(5)ppp}) = \int \frac{d\lambda}{2\pi} e^{-i\xi p\cdot n \lambda} \bra{p}\bar{\psi}_f(\lambda n^\mu - a_{\text{S}f}^{(5)n\mu\bar{n}}\lambda \xi p^+) \fr{\slashed{n},2}\psi_f(0)\ket{p} \left( 1 + a_{\text{S}f}^{(5)n \bar n \bar n} \xi p^+ \right) \label{eq:a5unpolarizedpdf}\end{aligned}$$ as the explicit expression for the PDF. Estimated attainable sensitivities {#sec: Estimated constraints} ---------------------------------- In this section, we obtain estimates for the sensitivities to SME coefficients that are attainable in experiments studying unpolarized electron-proton DIS. Comparable results can be expected from dedicated analyses with HERA data [@hera] and future EIC data [@EICsummary; @eRHICdesign; @JLEICdesign]. We perform simulations with existing data and pseudodata using Eq.  for the $c$-type $u$- and $d$-quark coefficients for Lorentz violation and using Eq.  for the $a^{(5)}$-type coefficients. For simplicity, the analysis neglects the intrinsic dependence of the PDFs on the SME coefficients described in Sections \[sec:c\] and \[sec:a\] and given by Eqs.  and . These effects are genuinely nonperturbative and constitute an interesting open issue for future investigation [@Newphysproton]. Experiments performed on the Earth at a given location are sensitive to SME coefficients as they appear in the laboratory frame. However, all laboratory frames are noninertial due to the rotation of the Earth and its revolution about the Sun. 
The standard frame adopted to report and compare measurements of SME coefficients for Lorentz violation [@tables] is the Sun-centered frame [@km02; @bklr02; @bklr03], which is approximately inertial over experimental timescales. In the Sun-centered frame, the time $T$ has origin at the vernal equinox 2000, the $Z$ axis is aligned with the Earth’s rotation axis, the $X$ axis points from the Earth to the Sun at $T=0$, and the $Y$ axis completes the right-handed coordinate system. To an excellent approximation, the laboratory-frame coefficients are related to the coefficients in the Sun-centered frame by a rotation determined by the latitude of the experiment and by the local sidereal time $T_\oplus$, which is related to $T$ by an offset depending on the longitude of the laboratory [@kmm16]. Effects from the laboratory boost due to the rotation and revolution of the Earth are negligible for our present purposes. The rotation $\mathcal{R}$ from the electron-beam direction in the laboratory frame to the Sun-centered frame is given by [@klv17; @ck01] $$\begin{aligned} \mathcal{R} = \begin{pmatrix}\pm 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & \mp 1 & 0\end{pmatrix} \begin{pmatrix}\cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1\end{pmatrix} \begin{pmatrix}\cos\chi\cos\omega_{\oplus} T_{\oplus} & \cos\chi\sin\omega_{\oplus} T_{\oplus} & -\sin\chi \\ -\sin\omega_{\oplus} T_{\oplus} & \cos\omega_{\oplus} T_{\oplus} & 0 \\ \sin\chi\cos\omega_{\oplus} T_{\oplus} & \sin\chi\sin\omega_{\oplus} T_{\oplus} & \cos\chi\end{pmatrix}. \label{eq:rotation}\end{aligned}$$ In this expression, $\omega_{\oplus} \simeq 2\pi/(23~\mathrm{h}~56~\mathrm{min})$ is the Earth’s sidereal frequency. The angle $\chi$ is the colatitude of the laboratory, while $\psi$ is the orientation of the electron-beam momentum relative to the east cardinal direction. The final rotation in Eq.  orients the Earth-frame polar direction along the direction of the electron-beam momentum. 
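As a concrete sanity check, the rotation in Eq.  can be assembled numerically and verified to be a proper rotation for either sign choice and at all sidereal times. The sketch below uses illustrative placeholder angles $\chi$ and $\psi$:

```python
import numpy as np

T_sid = 86164.1                  # sidereal day in seconds (23 h 56 min 4 s)
omega = 2 * np.pi / T_sid        # Earth's sidereal frequency
chi, psi = np.deg2rad(34.6), np.deg2rad(20.0)   # illustrative laboratory angles

def R(T, sign=-1.0):
    # sign selects the upper (+1) or lower (-1) signs of the first factor
    M1 = np.array([[sign, 0.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.0, -sign, 0.0]])
    M2 = np.array([[np.cos(psi), np.sin(psi), 0.0],
                   [-np.sin(psi), np.cos(psi), 0.0],
                   [0.0, 0.0, 1.0]])
    wT = omega * T
    M3 = np.array([[np.cos(chi) * np.cos(wT), np.cos(chi) * np.sin(wT), -np.sin(chi)],
                   [-np.sin(wT), np.cos(wT), 0.0],
                   [np.sin(chi) * np.cos(wT), np.sin(chi) * np.sin(wT), np.cos(chi)]])
    return M1 @ M2 @ M3

# R should be orthogonal with det = +1 at every sidereal time, for either sign
for T in np.linspace(0.0, T_sid, 7):
    for sign in (+1.0, -1.0):
        r = R(T, sign)
        assert np.allclose(r @ r.T, np.eye(3), atol=1e-12)
        assert abs(np.linalg.det(r) - 1.0) < 1e-12
```

Since only the final factor depends on $T_\oplus$, coefficients transformed with $\mathcal{R}$ pick up harmonics of $\omega_\oplus$ from the $\cos\omega_\oplus T_\oplus$ and $\sin\omega_\oplus T_\oplus$ entries.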
As a consequence of the rotation $\mathcal{R}$, most coefficients in the laboratory frame acquire sidereal-time variation at harmonics of the sidereal frequency. As described in Refs. [@klv17; @ls18], DIS experiments are primarily sensitive to the subset of coefficients associated with sidereal-time variations because many systematic sources of uncertainty are correlated between different sidereal-time bins. We therefore focus here on estimating attainable sensitivities to this subset. For the symmetric traceless coefficients $c_f^{\mu\nu}$, the nine independent components in the Sun-centered frame can be chosen to have indices $TX$, $TY$, $TZ$, $XX$, $XY$, $XZ$, $YY$, $YZ$, and $ZZ$. Of these, the components with indices $TZ$, $ZZ$ and the sum of components with indices $XX$ and $YY$ have no effect on sidereal variations because they control rotationally invariant effects in the $X$-$Y$ plane. We thus find that at most six independent $c$-type observables for each quark flavor can be measured using sidereal variations, so we can extract estimated sensitivities to the 12 coefficient combinations $c_{f}^{TX}$, $c_{f}^{TY}$, $c_{f}^{XZ}$, $c_{f}^{YZ}$, $c_{f}^{XY}$, and $c_{f}^{XX}-c_{f}^{YY}$ with $f=u$, $d$. For the symmetric traceless coefficients $a_{{\text S}f}^{(5)\lambda\mu\nu}$, the 16 independent components in the Sun-centered frame can be chosen to have indices $TTT$, $TTX$, $TTY$, $TTZ$, $TXX$, $TXY$, $TXZ$, $TYY$, $TYZ$, $XXX$, $XXY$, $XXZ$, $XYY$, $XYZ$, $YYY$, $YYZ$ [@ek19]. The four combinations of components $TTT$, $TTZ$, $TXX + TYY$, and $ZXX + ZYY$ play no role in sidereal variations, leaving 12 $a^{(5)}$-type observables for each quark flavor. 
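The component counts used above follow from elementary combinatorics: a totally symmetric rank-$r$ tensor in four dimensions has $\binom{r+3}{r}$ independent components, and tracelessness imposes one condition for each component of a symmetric rank-$(r-2)$ tensor. A quick check of the counts quoted in the text:

```python
from math import comb

def sym_traceless_components(rank, dim=4):
    """Independent components of a totally symmetric, traceless tensor
    of the given rank in `dim` spacetime dimensions."""
    total = comb(dim + rank - 1, rank)
    traces = comb(dim + rank - 3, rank - 2) if rank >= 2 else 0
    return total - traces

assert sym_traceless_components(2) == 9    # c_f^{mu nu}: nine components
assert sym_traceless_components(3) == 16   # a_Sf^{(5) lambda mu nu}: 16 components

# removing the combinations insensitive to sidereal rotations leaves the
# observable counts stated in the text
assert sym_traceless_components(2) - 3 == 6    # six c-type observables per flavor
assert sym_traceless_components(3) - 4 == 12   # twelve a^(5)-type observables per flavor
```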
We can therefore determine estimated sensitivities to the 24 coefficient combinations $a_{\text{S}f}^{(5)TXX}-a_{\text{S}f}^{(5)TYY}$, $a_{\text{S}f}^{(5)XXZ}-a_{\text{S}f}^{(5)YYZ}$, $a_{\text{S}f}^{(5)TXY}$, $a_{\text{S}f}^{(5)TXZ}$, $a_{\text{S}f}^{(5)TYZ}$, $a_{\text{S}f}^{(5)XXX}$, $a_{\text{S}f}^{(5)XXY}$, $a_{\text{S}f}^{(5)XYY}$, $a_{\text{S}f}^{(5)XYZ}$, $a_{\text{S}f}^{(5)XZZ}$, $a_{\text{S}f}^{(5)YYY}$, and $a_{\text{S}f}^{(5)YZZ}$ with $f=u$, $d$. Inspection of these results reveals that the $c$-type coefficients control sidereal variations at frequencies up to $2\omega_\oplus$, while the $a^{(5)}$-type coefficients control ones up to $3\omega_\oplus$. We discuss first the pertinent details of the HERA collider, the corresponding dataset, and the procedure to estimate sensitivities. The HERA colatitude is $\chi \simeq \ang{34.6}$, and the electron/positron beam orientation is $\psi \simeq \ang{20}$ north of east for H1 and $\psi \simeq \ang{20}$ south of west for ZEUS. This implies the minus sign in Eq.  is appropriate for H1 and the plus sign for ZEUS. The data used here are combined electron- and positron-proton neutral-current measurements at an electron-beam energy of $E_e = 27.5$ GeV and proton-beam energies of $E_p$ = 920 GeV, 820 GeV, 575 GeV, and 460 GeV. Note that the use of positron-proton data is acceptable for studying both $c$- and $a^{(5)}$-type coefficients because the associated cross sections are invariant under interchange of electrons and positrons. In total, 644 cross-section measurements at a given fixed $x$ and $Q^2$ value are available [@hera]. In extracting the estimated sensitivities, we use the procedure employed in Ref. [@klv17]. 
For each measurement of the cross section at a given value of $x$ and $Q^2$, we generate 1000 Gaussian-distributed pseudoexperiments to form a $\chi^2$ function, each of which describes the potential outcome of splitting the dataset into four bins in sidereal time with the requirement that the weighted average of the binned cross sections is identical to the measured one. In forming the theoretical contribution from Lorentz-violating effects to the $\chi^2$ distribution, we use ManeParse [@Maneparse1; @Maneparse2] and the CT10 PDF set [@CT10] for the quark PDFs. The desired estimated sensitivity to each coefficient is extracted independently by minimizing the $\chi^2$ function at the 95% confidence level and setting the other coefficients to zero in accordance with the standard procedure in the field [@tables]. Further details can be found in Ref. [@klv17]. For the EIC, two EIC proposals currently exist: JLEIC at JLab and eRHIC at BNL [@JLEICdesign; @eRHICdesign]. Here, we present simulations yielding estimates for sensitivities to the coefficients for Lorentz violation that can be expected after one and ten years of data taking for both JLEIC and eRHIC. The kinematical potential for each collider is expected to be different in their first stage of running. JLEIC is expected to obtain a luminosity on the order of $10^{34}~\text{cm}^{-2}~\text{s}^{-1}$ with an electron beam energy range of $3 \leq E_{e} \leq 12$ GeV and a proton energy range of $20 \leq E_{p} \leq 100$ GeV, leading to a collider-frame energy range of roughly $15 \leq\sqrt{s}\leq 70$ GeV. The JLEIC colatitude is $\chi \approx \ang{52.9}$ with electron-beam orientations $\psi\approx\ang{47.6}$ and $\psi\approx\ang{-35.0}$ at the two collision points [@EICsummary]. 
In contrast, during its first stage eRHIC is expected to operate at a luminosity on the order of $10^{34}~\text{cm}^{-2}~\text{s}^{-1}$ with a beam energy range of $5 \leq E_{e} \leq 20$ GeV and $50 \leq E_{p} \leq 250$ GeV, leading to a collider-frame energy range of roughly $30 \leq\sqrt{s}\leq 140$ GeV. The eRHIC colatitude is $\chi \approx \ang{49.1}$ and the electron-beam orientations are approximately $\psi\approx\ang{-78.5}$ and $\psi\approx\ang{-16.8}$ [@eRHICdesign]. Further planned upgrades to each collider indicate a converging operational potential at the end of a ten-year time span. To derive estimated sensitivities, datasets of simulated reduced cross sections with associated uncertainties over a range of $(E_e, E_p)$ values characteristic of the JLEIC and eRHIC are adopted. All datasets are generated using HERWIG 6.4 [@Bahr:2008pv; @Bellm:2015jjp] at next-to-leading order, and estimates of detector systematics are based on those for the HERA collider [@hera]. The JLEIC dataset includes a total of 726 measurements spanning the ranges $x\in (9\times 10^{-3}, 9\times 10^{-1})$ and $Q^2 \in (2.5,2.2\times 10^3)$ GeV$^2$, with electron-beam energies $E_e = 5,10$ GeV and proton beam energies $E_p$ = 20, 60, 80, 100 GeV. These data have an overall point-to-point systematic uncertainty of 0.5% for $x <0.7$ and 1.5% for $x >0.7$, as well as a 1% luminosity error. The dataset for the eRHIC includes 1488 measurements spanning the ranges $x\in (1\times 10^{-4}, 8.2\times 10^{-1})$ and $Q^2 \in (1.3, 7.9\times 10^3)$ GeV$^2$, with $E_e$ = 5, 10, 15, 20 GeV and $E_p$ = 50, 100, 250 GeV. These data have an overall 1.6% point-to-point systematic uncertainty and a 1.4% luminosity error. As with the analysis of the HERA data, ManeParse and the CT10 PDF set are used for the quark PDFs. Additional details may be found in Ref. [@ls18]. 
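Before turning to the results, the sidereal-binning pseudoexperiment procedure can be illustrated in drastically simplified form. The sketch below uses one hypothetical cross-section measurement, four sidereal bins, and a single-harmonic template standing in for the full Lorentz-violating prediction; all numbers are illustrative, and the actual analysis uses the full set of measurements, PDFs, and coefficient templates described above:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_meas, err = 1.00, 0.02      # hypothetical measured cross section and uncertainty
n_bins = 4                        # sidereal-time bins

def pseudoexperiment():
    # binned values fluctuate with inflated per-bin errors, constrained so that
    # their average reproduces the actual measurement
    vals = rng.normal(sigma_meas, err * np.sqrt(n_bins), size=n_bins)
    return vals - vals.mean() + sigma_meas

# stand-in sidereal template (first harmonic, evaluated at bin centers)
phase = 2 * np.pi * (np.arange(n_bins) + 0.5) / n_bins
template = np.cos(phase)          # orthogonal to the constant mode

def bound_95(data):
    # least-squares fit of sigma(T) = sigma_meas + coef * template
    sig_bin = err * np.sqrt(n_bins)
    coef_hat = template @ (data - sigma_meas) / (template @ template)
    d_coef = sig_bin / np.sqrt(template @ template)
    return abs(coef_hat) + 1.96 * d_coef    # 95% C.L. bound on |coef|

bounds = [bound_95(pseudoexperiment()) for _ in range(1000)]
sensitivity = np.median(bounds)   # typical attainable bound over the ensemble
assert 0 < sensitivity < 0.2
```

In the full analysis the single amplitude `coef` is replaced by the laboratory-frame combination of Sun-centered coefficients predicted by Eqs.  and , and the ensemble median over all kinematic points yields the sensitivities tabulated below.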
--------------------------- ---------------- --------------- --------------- --------------- --------------- [**HERA**]{} [**JLEIC**]{} [**eRHIC**]{} [**JLEIC**]{} [**eRHIC**]{} $|c_{u}^{TX}|$ 13\. \[13.\] 2.2 \[11.\] 0.54 \[22.\] 0.14 \[19.\] 0.17 \[22.\] 13\. \[13.\] 2.0 \[10.\] 0.36 \[15.\] 0.13 \[17.\] 0.12 \[15.\] $|c_{u}^{TY}|$ 13\. \[13.\] 2.2 \[11.\] 0.54 \[22.\] 0.14 \[19.\] 0.17 \[22.\] 13\. \[13.\] 2.0 \[10.\] 0.38 \[15.\] 0.13 \[17.\] 0.12 \[15.\] $|c_{u}^{XZ}|$ 63\. \[66.\] 3.7 \[16.\] 0.73 \[30.\] 0.24 \[32.\] 0.23 \[30.\] 63\. \[66.\] 4.4 \[19.\] 1.7 \[69.\] 0.28 \[37.\] 0.54 \[70.\] $|c_{u}^{YZ}|$ 65\. \[65.\] 3.7 \[16.\] 0.73 \[30.\] 0.23 \[31.\] 0.23 \[29.\] 65\. \[65.\] 4.4 \[19.\] 1.7 \[69.\] 0.28 \[37.\] 0.53 \[71.\] $|c_{u}^{XY}|$ 31\. \[33.\] 14\. \[61.\] 1.9 \[82.\] 0.87 \[120.\] 0.61 \[80.\] 31\. \[33.\] 6.4 \[28.\] 0.79 \[35.\] 0.40 \[54.\] 0.26 \[34.\] $|c_{u}^{XX}-c_{u}^{YY}|$ 98\. \[100.\] 12\. \[52.\] 5.4 \[240.\] 0.74 \[100.\] 1.8 \[230.\] 98\. \[100.\] 12\. \[55.\] 3.9 \[170.\] 0.79 \[110.\] 1.2 \[160.\] $|c_{d}^{TX}|$ 51\. \[54.\] 8.9 \[170.\] 2.2 \[89.\] 0.57 \[75.\] 0.68 \[88.\] 51\. \[54.\] 8.1 \[150.\] 1.5 \[60.\] 0.51 \[67.\] 0.47 \[61.\] $|c_{d}^{TY}|$ 53\. \[53.\] 8.9 \[160.\] 2.2 \[89.\] 0.55 \[74.\] 0.68 \[87.\] 53\. \[53.\] 8.2 \[150.\] 1.5 \[61.\] 0.51 \[68.\] 0.47 \[62.\] $|c_{d}^{XZ}|$ 250\. \[260.\] 15\. \[240.\] 2.9 \[120.\] 0.96 \[130.\] 0.91 \[120.\] 250\. \[260.\] 18\. \[280.\] 6.6 \[270.\] 1.1 \[150.\] 2.1 \[280.\] $|c_{d}^{YZ}|$ 260\. \[260.\] 15\. \[240.\] 2.9 \[120.\] 0.94 \[130.\] 0.91 \[120.\] 260\. \[260.\] 18\. \[280.\] 6.9 \[280.\] 1.1 \[150.\] 2.1 \[280.\] $|c_{d}^{XY}|$ 130\. \[130.\] 55\. \[900.\] 7.5 \[330.\] 3.5 \[470.\] 2.4 \[320.\] 130\. \[130.\] 26\. \[420.\] 3.2 \[140.\] 1.6 \[220.\] 1.0 \[130.\] $|c_{d}^{XX}-c_{d}^{YY}|$ 390\. \[410.\] 47\. \[770.\] 22\. \[950.\] 3.0 \[400.\] 7.0 \[920.\] 390\. \[410.\] 50\. \[820.\] 15\. 
\[670.\] 3.1 \[420.\] 5.0 \[650.\] --------------------------- ---------------- --------------- --------------- --------------- --------------- : Best attainable sensitivities from DIS to individual coefficient components $c_{u}^{\mu\nu}$ and $c_{d}^{\mu\nu}$ estimated for HERA, JLEIC, and eRHIC. All values are in units of $10^{-5}$ and reflect the orientation giving the greatest sensitivity. Results with brackets are associated with uncorrelated systematic uncertainties between binned data, while results without brackets correspond to the assumption of 100% correlation between systematic uncertainties. We provide estimated attainable sensitivities on coefficient magnitudes for both electron beam orientations, as detailed in Ref. [@ls18]. For JLEIC and eRHIC, sensitivities are listed for both one-year and ten-year data-taking configurations. \[table1\] Consider first the $c$-type coefficients. A summary of the estimated attainable sensitivities is presented in Table \[table1\], and the distribution of pseudoexperiments as a function of $x, Q^2, y$ for the datasets most sensitive to Lorentz violation for the particular case of the coefficient $c_u^{TX}$ is shown in Fig. \[figure2\]. Overall, the HERA dataset [@hera] can provide sensitivity to Lorentz violation at roughly the $10^{-4}$ level for $u$ quarks and the $10^{-3}$ level for $d$ quarks. Both JLEIC and eRHIC can offer sensitivities at the $10^{-6} - 10^{-5}$ level for $u$ quarks and the $10^{-5} - 10^{-4}$ level for $d$ quarks. The reduction in sensitivity for the $d$ quark is primarily due to the difference in the squared charges $e_u^2$ and $e_d^2$. Although HERA operates at a larger collision energy and thus has a larger kinematical range, the integrated luminosity is roughly two orders of magnitude lower than that of either EIC, which leads to reduced statistics. 
The best attainable sensitivities appear for the low-$x$, low-$Q^2$, and large-$y$ region of the phase space or the deeply inelastic limit of all three colliders. Note that the other coefficients display a similar pattern in the distribution of sensitivities. Note also that the sensitivity to Lorentz violation presented here is slightly better in magnitude than the equivalent results for the EIC presented in Ref. [@ls18]. However, the distribution of sensitivities is somewhat different. In particular, the distribution is shifted to favor larger energies, with a clear preference for larger electron-beam energies and the low-$x$ region as opposed to the low-$y$ region. In addition, the grouping of the distribution is tighter and shows a clearer trend. The difference between the current and former works [@klv17; @ls18] originates in the alteration of the on-shell condition, which leads to the parametrization $\widetilde{k} = \xi p$ instead of $k = \xi p$. --------------------------------------------------- --------------- --------------- --------------- ---------------- ---------------- [**HERA**]{} [**JLEIC**]{} [**eRHIC**]{} [**JLEIC**]{} [**eRHIC**]{} $|a_{\text{S}u}^{(5)TXX}-a_{\text{S}u}^{(5)TYY}|$ 0.70 \[0.69\] 4.2 \[20.\] 2.4 \[20.\] 0.15 \[16.\] 0.42 \[20.\] $|a_{\text{S}u}^{(5)XXZ}-a_{\text{S}u}^{(5)YYZ}|$ 1.8 \[1.8\] 9.7 \[17.\] 5.6 \[12.\] 0.30 \[14.\] 0.82 \[12.\] $|a_{\text{S}u}^{(5)TXY}|$ 0.22 \[0.22\] 0.47 \[1.3\] 0.37 \[1.6\] 0.17 \[1.9\] 0.12 \[1.3\] $|a_{\text{S}u}^{(5)TXZ}|$ 0.44 \[0.46\] 0.13 \[0.37\] 0.14 \[0.61\] 0.046 \[0.50\] 0.045 \[0.48\] $|a_{\text{S}u}^{(5)TYZ}|$ 0.43 \[0.45\] 0.13 \[0.36\] 0.14 \[0.61\] 0.046 \[0.51\] 0.045 \[0.48\] $|a_{\text{S}u}^{(5)XXX}|$ 0.14 \[0.15\] 0.14 \[0.41\] 0.19 \[0.86\] 0.047 \[0.52\] 0.060 \[0.67\] $|a_{\text{S}u}^{(5)XXY}|$ 0.14 \[0.15\] 0.15 \[0.42\] 0.19 \[0.84\] 0.048 \[0.57\] 0.059 \[0.66\] $|a_{\text{S}u}^{(5)XYY}|$ 0.14 \[0.14\] 0.15 \[0.43\] 0.18 \[0.84\] 0.048 \[0.55\] 0.059 \[0.66\] 
$|a_{\text{S}u}^{(5)XYZ}|$ 0.99 \[1.0\] 0.70 \[2.0\] 0.50 \[2.1\] 0.29 \[3.0\] 0.16 \[1.6\] $|a_{\text{S}u}^{(5)XZZ}|$ 0.17 \[0.18\] 0.12 \[0.34\] 0.13 \[0.59\] 0.041 \[0.45\] 0.043 \[0.46\] $|a_{\text{S}u}^{(5)YYY}|$ 0.14 \[0.15\] 0.14 \[0.40\] 0.19 \[0.86\] 0.046 \[0.54\] 0.060 \[0.68\] $|a_{\text{S}u}^{(5)YZZ}|$ 0.17 \[0.18\] 0.12 \[0.33\] 0.14 \[0.59\] 0.040 \[0.46\] 0.043 \[0.47\] $|a_{\text{S}d}^{(5)TXX}-a_{\text{S}d}^{(5)TYY}|$ 5.2 \[4.9\] 29\. \[290.\] 9.6 \[400.\] 0.62 \[310.\] 1.7 \[400.\] $|a_{\text{S}d}^{(5)XXZ}-a_{\text{S}d}^{(5)YYZ}|$ 14\. \[14.\] 74\. \[250.\] 22\. \[240.\] 1.2 \[270.\] 3.3 \[240.\] $|a_{\text{S}d}^{(5)TXY}|$ 0.86 \[0.89\] 7.7 \[20.\] 1.5 \[32.\] 0.68 \[37.\] 0.48 \[25.\] $|a_{\text{S}d}^{(5)TXZ}|$ 1.8 \[1.8\] 2.0 \[5.6\] 0.56 \[12.\] 0.19 \[9.7\] 0.18 \[9.3\] $|a_{\text{S}d}^{(5)TYZ}|$ 1.7 \[1.8\] 2.0 \[5.4\] 0.57 \[12.\] 0.18 \[10.\] 0.18 \[9.4\] $|a_{\text{S}d}^{(5)XXX}|$ 0.58 \[0.60\] 2.3 \[6.2\] 0.75 \[17.\] 0.19 \[10.\] 0.24 \[13.\] $|a_{\text{S}d}^{(5)XXY}|$ 0.54 \[0.58\] 2.4 \[6.4\] 0.75 \[16.\] 0.19 \[11.\] 0.23 \[13.\] $|a_{\text{S}d}^{(5)XYY}|$ 0.56 \[0.58\] 2.4 \[6.6\] 0.73 \[16.\] 0.19 \[11.\] 0.24 \[13.\] $|a_{\text{S}d}^{(5)XYZ}|$ 4.3 \[4.4\] 11\. \[30.\] 2.0 \[40.\] 1.2 \[58.\] 0.64 \[32.\] $|a_{\text{S}d}^{(5)XZZ}|$ 0.68 \[0.71\] 1.9 \[5.3\] 0.53 \[12.\] 0.16 \[8.8\] 0.17 \[9.1\] $|a_{\text{S}d}^{(5)YYY}|$ 0.56 \[0.60\] 2.3 \[6.1\] 0.76 \[17.\] 0.19 \[11.\] 0.24 \[13.\] $|a_{\text{S}d}^{(5)YZZ}|$ 0.66 \[0.71\] 1.9 \[5.1\] 0.54 \[12.\] 0.16 \[9.0\] 0.17 \[9.1\] --------------------------------------------------- --------------- --------------- --------------- ---------------- ---------------- : Best attainable sensitivities from DIS to individual coefficient components $a_{\text{S}u}^{(5)\lambda\mu\nu}$ and $a_{\text{S}d}^{(5)\lambda\mu\nu}$ estimated for HERA, JLEIC, and eRHIC. All values are in units of $10^{-6}$ GeV$^{-1}$ and reflect the orientation giving the greatest sensitivity. 
Results with brackets are associated with uncorrelated systematic uncertainties between binned data, while results without brackets correspond to the assumption of 100% correlation between systematic uncertainties. We provide estimated attainable sensitivities on coefficient magnitudes for both electron beam orientations, as detailed in Ref. [@ls18]. For JLEIC and eRHIC, sensitivities are listed for both one-year and ten-year data-taking configurations. \[table2\] For the $a^{(5)}$-type coefficients, the overall estimated attainable sensitivities are presented in Table \[table2\]. An illustration of a distribution of sensitivities for the coefficient component $a_u^{(5)TTX}$ is displayed in Fig. \[figure3\]. Other components have similar distributions. These results represent first estimates for the $a^{(5)}$-type quark coefficients. Overall, attainable sensitivities at the level of $10^{-6} - 10^{-5}$ GeV$^{-1}$ are found for the HERA dataset and at the level of $10^{-7} - 10^{-6}$ GeV$^{-1}$ for the EIC. The $a_u^{(5)TTX}$ distribution shows a more dramatic cusp effect in $x$ than the $c_u^{TX}$ distribution in Fig. \[figure2\], which leads to sensitivity at both low and high $x$. However, the low-$x$, low-$Q^2$, and large-$y$ region still admits the most sensitivity. The overall shape of the distributions for both the $c$- and $a^{(5)}$-type coefficients can be understood via a plot of reduced cross sections as a function of $x$, depicted in Fig. \[figure4\]. The increased sensitivity at low $x$ is readily apparent, with the larger CM energy for HERA implying an onset of sensitivity to Lorentz violation at lower values of $x$ than for the EIC. It is also interesting to note the existence of a zero around the value of $x \sim 0.5$, which accounts for the corresponding feature seen in the distributions. 
The Drell-Yan process {#sec:DY}
=====================

Next, we turn our attention to studying corrections from Lorentz violation to the DY process, using an analogous approach to that adopted above for DIS. The DY process involves the interaction of two hadrons leading to lepton-pair production, $H_1+ H_2 \rightarrow l_1 + l_2 + X$, where all final hadronic states $X$ are summed over, as are the polarizations of the final-state leptons because they are unobserved. The total cross section is given by $$\sigma = \frac{1}{2s}\int\fr{d^{3}l_{1},(2\pi)^{3}2{l_{1}}^{0}} \fr{d^{3}l_{2},(2\pi)^{3}2{l_{2}}^{0}} \sum_{X}\prod_{i=1}^{n_{X}}\fr{d^{3}p_{i},(2\pi)^{3}2{p_{i}}^{0}} |\bra{l_1, l_2, X}\hat{T}\ket{p_1, s_1, p_2, s_2}|^{2}, \label{eq:drellyangenxsec}$$ where $|\bra{l_1, l_2, X}\hat{T}\ket{p_1, s_1, p_2, s_2}|^{2} \equiv (2\pi)^{4}\delta^{4} \left(p_{1} + p_{2} - l_{1} - l_{2} - p_{X}\right)|\mathcal{M}|^{2}$, $p_{X} \equiv \Sigma_{i=1}^{n_{X}}p_{i}$, and $q = l_1 + l_2$. We must consider all $n_{X}$ possible final hadronic states in the process because $X$ is unobserved. Note that the lepton spin labels are suppressed as they are summed over. The factor $1/2s$ is the usual hadronic flux factor. As with DIS, our treatment considers effects of Lorentz violation on $\mathcal{M}$ and in particular on the hadronic contribution. Since this process represents the head-on collision between two hadrons, it is simplest to work in the hadron-hadron CM frame with $\vec{p}_1 + \vec{p}_2 = \vec{0}$. The differential cross section then takes the form $$\begin{aligned} d\sigma = \frac{\alpha^2}{2s}\frac{1}{q^4}d^4 q \frac{d\Omega_l}{(2\pi)^4}\sum_i R_i (L_i)_{\mu\nu}(W_i)^{\mu\nu}, \label{eq:dydiffxsecgeneral}\end{aligned}$$ where $i$ denotes the sum over channels with ratios $R_i$ to the photon propagator. The momentum $q$ is $q = l_1 + l_2$, and the difference $l = l_1 - l_2$ has solid angle $d\Omega_l$ about the lepton-pair CM.
Note that $q^2 > 0$ for this process, in contrast to DIS where $q^2 < 0$.

Factorization of the hadronic tensor {#sec:factorizationDY}
------------------------------------

The object of primary interest in Eq.  is the hadronic tensor $W_{\mu\nu}$, which may be written as $$\begin{aligned} W_{\mu\nu} = \int d^4xe^{-iq\cdot x} \bra{p_1, s_1, p_2, s_2}j^\dagger_\mu(x)j_\nu(0)\ket{p_1, s_1, p_2, s_2}. \label{eq:drellyanhadronic}\end{aligned}$$ The dominant contribution to this object is displayed in Fig. \[figure5\]. The current product $j^\dagger_\mu(x)j_\nu(0)$ can be decomposed in a manner similar to the DIS case. However, we consider here the simple product of currents instead of a time-ordered product, as the latter offers no advantage for this process. We are again interested in the dominant effects of Lorentz violation at large $q^2\equiv Q^2>0$. The decomposition contains numerous Dirac structures, with certain combinations dominating at leading power in $Q$. Equation  is to be evaluated in the CM frame of the hadron-hadron collision. Given the high energy of this process in the massless hadron limit, we can parametrize without loss of generality the hadron momenta as $p_1 = p_1^+ \bar{n}$ and $p_2 = p_2^- n$, where $\bar{n}$ and $n$ are given in Eq. . Employing similar considerations as in Sec. \[ssec:FactDIStensor\], this implies that the dominant Dirac structures are proportional to $\{\gamma^-, \gamma^-\gamma_5,\gamma^-\gamma_\perp^i\gamma_5\}$ and $\{\gamma^+, \gamma^+\gamma_5,\gamma^+\gamma_\perp^i\gamma_5\}$ for $H_1$ and $H_2$, respectively. Considering Eq. , this leads to nine Dirac bilinear products constituting the leading-power behavior of $W^{\mu\nu}$, $$\begin{aligned} W^{\mu\nu} \simeq -&\frac{1}{16}\frac{1}{3} \int d^4 x e^{-iq\cdot x} \text{Tr}\left[\gamma^- \left(\bra{p_1, s_1}\bar{\chi}(x)\gamma^+\chi(0)\ket{p_1, s_1} \right. \right. \nonumber\\ &\left. \left.
+ \gamma_5\bra{p_1, s_1}\bar{\chi}(x)\gamma_5\gamma^+\chi(0)\ket{p_1, s_1} + \gamma_5\gamma_\perp^i \bra{p_1, s_1}\bar{\chi}(x)\gamma^+\gamma_\perp^i\gamma_5\chi(0)\ket{p_1, s_1} \right)\Gamma^\mu \right. \nonumber\\ &\left. \times \gamma^+\left(\bra{p_2, s_2}\bar{\psi}(0)\gamma^-\psi(x)\ket{p_2, s_2} + \gamma_5 \bra{p_2, s_2}\bar{\psi}(0)\gamma_5\gamma^-\psi(x)\ket{p_2, s_2} \right. \right. \nonumber\\ & \left.\left. +\gamma_5\gamma_\perp^i \bra{p_2, s_2}\bar{\psi}(0)\gamma^-\gamma_\perp^i\gamma_5\psi(x)\ket{p_2, s_2} \right)\Gamma^\nu \right]. \label{eq:DYhadronictensorigin}\end{aligned}$$ The factor of $1/3$ comes from the Fierz decomposition of the su(3) color algebra, which projects the matrix elements onto color-neutral combinations. Note also that the electroweak charges are implicit in the definitions of $\Gamma^\mu$, $\Gamma^\nu$. From Eq. (\[eq:DYhadronictensorigin\]), we define functions given by the Fourier components of the hadron matrix elements in the momenta $k_1$, $k_2$. Expressing the matrix elements in momentum space and performing the integration over the spacetime variable $x$ yields a four-dimensional delta function $\delta^4\left(q-k_1-k_2\right)$. Since the physical internal momenta $k_1$, $k_2$ are off shell in this context, a change of variables to tilde momenta must be performed. Unlike in DIS, one may work here in the conventional CM frame because the collinear component of the quark momentum comes with $\widetilde{k}$ instead of $k$. The change of variables $k_i \rightarrow \widetilde{k}_i$ produces jacobian factors $J_{k_1}$, $J_{k_2}$. The delta function can be expressed as $$\delta^4\left(q-k_1(\widetilde{k}_1 ) - k_2(\widetilde{k}_2 )\right) = \delta^4\left(\widetilde{q}-\widetilde{k}_1 - \widetilde{k}_2 \right), \label{eq:qtildedefDY}$$ which defines $\widetilde{q}$ for the DY process. This quantity typically depends on $k_1$ and $k_2$.
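At first order in the coefficients, these jacobian factors are fixed by the linear part of the tilde map. The small numerical sketch below assumes the first-order relation $\widetilde{k}^\mu \simeq k^\mu + c_f^{\mu\nu}k_\nu$ appropriate to the minimal $c$-type case, with placeholder coefficient values, and checks that the jacobian reduces to $1 - c_f{}^{\mu}{}_{\mu}$.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric, signature (+,-,-,-)
rng = np.random.default_rng(0)
eps = 1e-4
c = eps * rng.standard_normal((4, 4))    # placeholder c^{mu nu} (upper indices)

# ktilde^mu = k^mu + c^{mu nu} k_nu, i.e. ktilde = (I + c.eta) k
M = np.eye(4) + c @ eta                  # d ktilde / d k
J = 1.0 / abs(np.linalg.det(M))          # jacobian |det(dk / dktilde)|

# to first order, J = 1 - c^mu_mu (mixed trace)
trace_mixed = np.trace(c @ eta)
assert abs(J - (1.0 - trace_mixed)) < 1e-5
```

Any linear first-order map gives the analogous result; the sketch only checks the determinant expansion, not the full change of variables used in the text.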
Additional care is required here because one momentum obeys a modified particle dispersion relation while the other obeys the antiparticle relation. Performing a subsequent Fourier transform in the spatial variables $w_1$, $w_2$ followed by a change of variables to $\hat{w}_1$, $\hat{w}_2$ such that $k_i(\widetilde{k}_i)\cdot w_i = \widetilde{k}_i\cdot \hat{w}_i$ as discussed in Sec. \[ssec:FactDIStensor\], we obtain the generic contribution to $W^{\mu\nu}$ in the form $$\begin{aligned} \int d^4 \widetilde{k}_1d^4 \widetilde{k}_2 d^4 &\hat{w}_1 d^4 \hat{w}_2 J_{k_1} J_{k_2}J_{w_1} J_{w_2} H^{\text{new}}(\widetilde{k}_1,\widetilde{k}_2) \nonumber\\ &\times F(w_1(\hat{w}_1),p_1)\bar{F}(w_2(\hat{w}_2),p_2) e^{-i(\widetilde{k}_1\cdot \hat{w}_1 + \widetilde{k}_2\cdot \hat{w}_2)}. \label{eq:genericDYtensorterm}\end{aligned}$$ In this expression, $H^{\text{new}}(\widetilde{k}_1,\widetilde{k}_2)$ represents a Dirac structure combined with Eq. . Note that this is a new function because the change of variables alters the functional form of the momentum contractions with the Dirac matrices. Equation  resembles two copies of the analogous DIS result, cf. Eqs. -. Appealing to our interest in the leading-twist contributions and considering the portion of $W^{\mu\nu}$ constrained by $q^\mu$, we can approximate $H^{\text{new}}(\widetilde{k}_1,\widetilde{k}_2) \approx H^{\text{new}}(\widetilde{k}^+_1,\widetilde{k}^-_2)$, which is the leading term in the collinear expansion of the hard-scattering function [@Qiu:1990xxa]. In the kinematics of choice defining the magnitude and direction of $p_1$ and $p_2$, we have $\widetilde{k}_1^\mu \simeq (\widetilde{k}_1^+,0,{\textbf}{0}_\perp)$ and $\widetilde{k}_2^\mu \simeq (0,\widetilde{k}_2^-,{\textbf}{0}_\perp)$ in the approximation of the hard-scattering function $H^{\text{new}}(\widetilde{k}^+_1,\widetilde{k}^-_2)$. The dominant portion of the term Eq. 
is thus $$\begin{aligned} \label{eq:genericDYtensortermdominat} \int d\widetilde{k}^+_1d\widetilde{k}^-_2 &H^{\text{new}}(\widetilde{k}_1^+,\widetilde{k}_2^-) \nonumber\\ &\times\int d\widetilde{k}_1^- \widetilde{k}_{1_\perp} d\widetilde{k}_2^+ \widetilde{k}_{2_\perp} d^4 \hat{w}_1 d^4 \hat{w}_2 F(w_1(\hat{w}_1),p_1)\bar{F}(w_2(\hat{w}_2),p_2)e^{-i(\widetilde{k}_1\cdot \hat{w}_1 + \widetilde{k}_2\cdot \hat{w}_2)}.\end{aligned}$$ The placeholder functions $F$ and $\bar{F}$ are identified with the particle and antiparticle counterparts of the PDFs derived in the case of DIS, Eq. . With the above considerations, we obtain the final form of $W^{\mu\nu}$ as $$\begin{aligned} W_f^{\mu\nu} = &\frac{1}{4}\frac{1}{3}\frac{1}{p_1^+ p_2^-} \int d\xi_1 d\xi_2 \delta^4\left(\widetilde{q}(q,\xi_1p_1, \xi_2p_2) - \xi_1p_1 - \xi_2 p_2 \right) \nonumber \\ &\times \text{Tr}\left[\slashed{p}_1\left(\mathbb{1}f_f(\xi_1) + \gamma_5\lambda_1 \Delta f_f(\xi_1) + \gamma_5\gamma_\perp^i \lambda_{1\perp}\Delta_\perp f_f(\xi_1)\right)\Gamma^\mu(\xi_1p_1, \xi_2 p_2) \right. \nonumber \\ &\left. \hskip 20pt \times \slashed{p}_2\left(\mathbb{1}f_{\bar{f}}(\xi_2) - \gamma_5\lambda_2 \Delta f_{\bar{f}}(\xi_2) + \gamma_5\gamma_\perp^i \lambda_{2\perp} \Delta_\perp f_{\bar{f}}(\xi_2)\right)\Gamma^\nu(\xi_1p_1, \xi_2 p_2) \right], \label{eq:DYtensorgenfinal}\end{aligned}$$ where $\widetilde{k}^+_1 = \xi_1 p_1^+$ and $\widetilde{k}^-_2 = \xi_2 p_2^-$. Note the minus sign in front of the antiparton PDF, which is required for a consistent interpretation of the helicity asymmetry of the target state and the suppression of any potential implicit dependence on Lorentz violation. Also note that the matrices $\Gamma^\mu$ can be expressed as matrix functions of $\xi_1 p_1$, $\xi_2 p_2$ because $k$, $\widetilde{k}$ can be taken equal when contracted with the coefficients for Lorentz violation. 
Since $\widetilde{q}$ is a nonlinear function of $\widetilde{k}_1$ and $\widetilde{k}_2$, the integration over $\xi_1$, $\xi_2$ in Eq.  is awkward. However, an integration over $d^4q$ is required in calculating the total cross section and so $q$ can be parametrized as usual, $$q^\mu = x_1p_1^\mu + x_2p_2^\mu + q_\perp^\mu, \label{qDY}$$ which implies $ d^4q = p_1^+ p_2^- dx_1 dx_2 dq^2_\perp$. Since the argument of the delta function in Eq.  is then linear in $x_1$, $x_2$, and $q_\perp$, integration can instead first be performed over the latter variables, setting $x_i \approx \xi_i$ at leading order in Lorentz violation and thus fixing $\widetilde{q}$. Overall, we thus conclude that the basic ideas leading to the factorization of the forward amplitude $T^{\mu\nu}$ in DIS also lead to the factorization of $W^{\mu\nu}$ for the DY process. The hadronic tensor as expressed in Eq.  is in a form suitable for insertion into the differential cross section . Performing the integrations sets the momentum fractions of the participating partons equal to the fractions of $\widetilde{q}^{\pm}$ and $\widetilde{q}_\perp = 0$. Notice that in the above discussion we consider the situation of $k_1$ emanating from $p_1$ and $k_2$ from $p_2$. However, we must also include $k_2$ emanating from $p_1$ and $k_1$ from $p_2$. These contribute to the probability rather than to the amplitude and so represent another example of the incoherence of the partonic scattering. Note that each contribution separately satisfies the condition for electromagnetic gauge invariance.

Minimal $c$-type coefficients {#sec:dy-c}
-----------------------------

As a first application of the above methodology, we study the implications of Lorentz violation described by Eq.  on the unpolarized DY process at leading order in electromagnetic interactions. The final-state lepton pair now represents electrons, muons, or taus and their antiparticles. The only Dirac structure appearing is the vector current Eq. 
, with $\Gamma_f^\mu = (\eta^{\mu\nu} + c_f^{\nu\mu})\gamma_\nu$. In this limit, the hadronic tensor Eq.  reads $$\begin{aligned} W_f^{\mu\nu} = &-\fr{1,48}e_{f}^2 \text{Tr}\left[\Gamma_f^\mu\gamma^\rho\Gamma_f^\nu\gamma^\sigma\right] \int d^4 x e^{-iq\cdot x} \bra{p_1}\bar{\psi}_f(x)\gamma_\rho\psi_f(0)\ket{p_1} \bra{p_2}\bar{\psi}_f(0)\gamma_\sigma\psi_f(x)\ket{p_2}. \label{eq:DYWmunuc}\end{aligned}$$ Both the interacting parton and antiparton have parametrized momenta ${k_i}_f^\mu = \xi_i(p_i^\mu - c_f^{\mu p_i})$ where $i=1,2$. By performing the factorization procedure outlined in the previous section, we obtain $$\begin{aligned} W_f^{\mu\nu} = \int d\xi_1 d\xi_2 H_f^{\mu\nu}(\xi_1,\xi_2) \left[f_f(\xi_1,c_f^{p_1p_1})f_{\bar{f}}(\xi_2,c_f^{p_2p_2}) + f_f(\xi_2,c_f^{p_2p_2})f_{\bar{f}}(\xi_1,c_f^{p_1p_1})\right], \label{eq:DYunpolarixedWfhalzenmartinform}\end{aligned}$$ where the contribution to the hard-scattering function is $$\begin{aligned} H^{\mu\nu}_f(\xi_1,\xi_2) = \fr{2e_f^2,3\widetilde{s}}&\text{Tr}\left[(\eta^{\mu\alpha} + c_f^{\alpha\mu})\gamma_\alpha\fr{\xi_1 \slashed{p}_1,2}(\eta^{\nu\beta} + c_f^{\beta\nu})\gamma_\beta\fr{\xi_2\slashed{p}_2,2}\right] \nonumber\\ &\times (2\pi)^4\delta^4\left(q^\mu + \xi_1 c_f^{\mu p_1} + \xi_2 c_f^{\mu p_2} - \xi_1 p_1^\mu - \xi_2 p_2^\mu\right), \label{eq:omegaDYc} \end{aligned}$$ with $\widetilde{s} \equiv 2\widetilde{k}_1\cdot\widetilde{k}_2$, $\widetilde{q}_f^\mu =(\eta^{\mu\alpha} + c_f^{\mu\alpha})q_\alpha$. In adding the extra diagram, we have employed the symmetry $H_f^{\mu\nu}(\widetilde{k}_1^+, \widetilde{k}_2^-) = H_f^{\mu\nu}(\widetilde{k}_2^-, \widetilde{k}_1^+)$ in Eq. . The expression is similar to the conventional result for the partonic subprocess, and the discussion of Sec. \[sec:c\] applies with $l \leftrightarrow {\accentset{\approx}{k}}_2$. As expected, direct calculation shows this result satisfies the electromagnetic Ward identity, $q_\mu W^{\mu\nu} = 0$.
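This statement can also be cross-checked numerically at leading power. The sketch below is illustrative only: it adopts the metric $\mathrm{diag}(1,-1,-1,-1)$, random placeholder values for $c_f^{\mu\nu}$, and unit hadron energies; it builds the hard trace with the vertex factors quoted above and contracts it with $q_\mu$ evaluated on the delta-function support $q = k_1 + k_2$.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])        # metric, signature (+,-,-,-)
lower = lambda v: eta @ v                      # v_mu = eta_{mu nu} v^nu

rng = np.random.default_rng(1)
c = 1e-6 * rng.standard_normal((4, 4))         # placeholder c_f^{mu nu}

p1 = np.array([1.0, 0.0, 0.0, 1.0])            # massless hadrons, E_p = 1
p2 = np.array([1.0, 0.0, 0.0, -1.0])
xi1, xi2 = 0.3, 0.7

# parton momenta k^mu = xi (p^mu - c^{mu p}) and the delta-function support
k1 = xi1 * (p1 - c @ lower(p1))
k2 = xi2 * (p2 - c @ lower(p2))
q = k1 + k2

# Tr[gamma_a aslash gamma_b bslash] = 4 (a_a b_b + a_b b_a - eta_ab a.b), lower indices
def trace_low(a, b):
    al, bl = lower(a), lower(b)
    return 4.0 * (np.outer(al, bl) + np.outer(bl, al) - eta * (al @ b))

# vertex factor eta^{mu alpha} + c^{alpha mu}; numerically eta^{..} equals eta
V = eta + c.T
H = V @ trace_low(xi1 * p1, xi2 * p2) @ V.T    # hard trace H^{mu nu}

ward = lower(q) @ H                            # q_mu H^{mu nu}
assert np.all(np.abs(ward) < 1e-9)             # vanishes up to O(c^2)
```

The cancellation occurs between the $O(c)$ shift of $q$ fixed by the delta function and the $O(c)$ vertex insertions; either piece alone violates the identity.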
The unpolarized parton and antiparton PDFs are the only ones emerging in this process. The parton PDF takes the form found for DIS. The antiparton PDF has the definition $$\begin{aligned} \bar{f}_f(\xi,c_f^{pp}) = -\int\fr{d\lambda,2\pi}e^{+i\xi p\cdot {n} \lambda} \bra{p}\bar{\psi}(\lambda \widetilde{n})\frac{\slashed{n}}{2}\psi(0)\ket{p},\end{aligned}$$ and satisfies $\bar{f}_f(\xi,c_f^{pp}) = -f_f(-\xi,c_f^{pp})$. Notice here that the antiparticle PDF $f_{\bar{f}}$ has the same implicit dependence on the coefficients as the particle PDF because the $c$-type coefficients affect particles and antiparticles in the same way. Contracting the leptonic and hadronic tensors then yields the total cross section as $$\begin{aligned} \sigma = &\frac{2\alpha^2}{3s}\fr{1,Q^4} \int d\Omega_l \frac{d\xi_1}{\xi_1}\frac{d\xi_2}{\xi_2}\sum_f e_f^2 \left[(\widetilde{k}_{1}\cdot l_{1})(\widetilde{k}_{2}\cdot l_{2}) + (\widetilde{k}_{1}\cdot l_{2})(\widetilde{k}_{2}\cdot l_{1}) \right. \nonumber\\ & \left. \hskip 130pt + (\widetilde{k}_{1}\cdot l_{1})\left(c_f^{\widetilde{k}_{2}l_{2}} + c_f^{l_{2}\widetilde{k}_{2}}\right) + (\widetilde{k}_{1}\cdot l_{2})\left(c_f^{\widetilde{k}_{2}l_{1}} + c_f^{l_{1}\widetilde{k}_{2}}\right) \right. \nonumber\\ &\left. \hskip 130pt + (\widetilde{k}_{2}\cdot l_{1})\left(c_f^{\widetilde{k}_{1}l_{2}} + c_f^{l_{2}\widetilde{k}_{1}}\right) + (\widetilde{k}_{2}\cdot l_{2})\left(c_f^{\widetilde{k}_{1}l_{1}} + c_f^{l_{1}\widetilde{k}_{1}}\right) \right. \nonumber\\ &\left. \hskip 130pt - (\widetilde{k}_{1}\cdot \widetilde{k}_{2})\left(c_f^{l_{1}l_{2}} + c_f^{l_{2}l_{1}}\right) - (l_{1}\cdot l_{2})\left(c_f^{\widetilde{k}_{1}\widetilde{k}_{2}} + c_f^{\widetilde{k}_{2}\widetilde{k}_{1}} \right) \right] \nonumber\\ & \hskip 100pt \times \left(f_{f}(\xi_{1},c_f^{p_1p_1})f_{\bar{f}}(\xi_{2},c_f^{p_2p_2}) + f_{f}(\xi_{2},c_f^{p_2p_2})f_{\bar{f}}(\xi_{1},c_f^{p_1p_1})\right).
\label{eq:DYsigmac}\end{aligned}$$ Next, we make the kinematics explicit by parametrizing the colliding proton momenta as $p_1^\mu = E_{p}\left(1,0,0,1\right)$ and $p_2^\mu = E_{p}\left(1,0,0,-1\right)$, with $E_p \simeq |\vec{p}|$ and the final lepton momenta as $l_1^\mu = E_e\left(1,\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta\right)$, $l_2^\mu = E_e\left(1,-\sin\theta\cos\phi,-\sin\theta\sin\phi,-\cos\theta\right)$. Here, $\theta$ and $\phi$ are the usual polar and azimuthal angles with respect to the laboratory $z$ axis, chosen along the direction of motion of the two initial protons. The total CM energy is $s = (p_1 + p_2)^2 = 4E_p^2$. After performing the solid-angle integration, we find that Eq.  becomes $$\begin{aligned} \sigma = \fr{1,3}\int dx_1dx_2\sum_{f} \left[f_f(x_1)f_{\bar{f}}(x_2) +f_f(x_2)f_{\bar{f}}(x_1)\right] \sigma_f(\hat{s},c_f^{\mu\nu}). \label{eq:DYsigmacexplicit}\end{aligned}$$ Following the discussion in Sec. \[sec:c\], we have defined the equivalent partonic cross section as $$\begin{aligned} \sigma_f(\hat{s},c_f^{\mu\nu}) = \fr{4\pi\alpha^2e_f^2,3\hat{s}}\left(1 + c_f^{33} - c_f^{00} \right), \label{eq:partonic_c}\end{aligned}$$ where $\hat{s}$ is the invariant mass $s=(l_1+l_2)^2 = (k_1+k_2)^2$ of the lepton pair. In this last expression for the cross section, we suppress the dependence on $c_f^{p_1p_1}$ and $c_f^{p_2p_2}$ for brevity. The cross section as a function of $Q^2$ and other kinematical invariants is of interest because it is measured in experiments. In forming $d\sigma/dQ^2$ in a given frame, the results and must be converted using a delta function $\delta(Q^2-\hat{s})$. In the presence of Lorentz violation, this quantity may differ from the usual value $x_1 x_2 s$, so upon integration over $x_1$ and $x_2$ the PDFs may be constrained away from the normal condition $Q^2 = x_1 x_2 s$ at first order in the coefficients for Lorentz violation. Note that $0 \leq x_1$ and $x_2\leq 1$, as dictated by the external kinematics. 
This introduces yet another way in which Lorentz-violating effects can manifest themselves in observables of interest. In the cases that follow, we find that this shift in the delta-function argument leads to the dominant source of sensitivity to Lorentz violation in the DY process. Explicitly, we find $\hat{s} \equiv Q^2 = (k_1 + k_2)^2$ has the expression $$\begin{aligned} \hat{s} & = x_1x_2s\left[1 - \fr{1,2x_1x_2}\left( \left(x_1+x_2\right)^2c_f^{00} + \left(x_1-x_2\right)^2c_f^{33} - \left(x_1^2 - x_2^2\right)\left(c_f^{03}+c_f^{30}\right) \right)\right], \label{eq:hatsc}\end{aligned}$$ which shifts the evaluation of the derivatives. After some calculation, we find $$\begin{aligned} \fr{d\sigma,dQ^{2}} = &\fr{4\pi\alpha^{2},9 Q^{4}}\sum_{f}e_{f}^{2}\left[ \int_{\tau}^1 dx \fr{\tau,x} \left(1 + 2 (1+\fr{x^2,\tau})c_f^{00}\right) \left(f_{f}(x)f_{\bar{f}}(\tau/x) + f_{f}(\tau/x)f_{\bar{f}}(x)\right) \right. \nonumber\\ &\left. + \int_{\tau}^1 dx \fr{\tau,x} \left[\left(x-\fr{\tau,x}\right)c_f^{33} +\left(x+\frac{\tau}{x}\right) c_f^{00} \right] \left(f_{f}(x)f'_{\bar{f}}(\tau/x) + f'_{f}(\tau/x)f_{\bar{f}}(x)\right) \right], \label{eq:DYsigmadQ2c}\end{aligned}$$ where $\tau \equiv Q^2/s$ is the usual scaling variable with $0\leq \tau \leq 1$. Here, the notation $f'(y)$ denotes the derivative of the PDF evaluated at $y$. From the expression , we see that only the single coefficient $c_f^{33}$ controls the sidereal-time dependence of the cross section, since $c_f^{00}$ is invariant under rotations. The term $c_f^{03} = c_f^{30}$ is absent because it is multiplied by a factor $(x_1^2 - x_2^2)$ that is antisymmetric in $x_1\leftrightarrow x_2$, while the cross section is symmetric under this interchange. The result also has the interesting feature of being independent of time whenever $x_1 = x_2$. 
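These symmetry properties can be verified directly from the bracketed shift in the expression for $\hat{s}$ above. The sketch below codes that shift with placeholder coefficient values; the value of $s$ is purely illustrative.

```python
import numpy as np

def shat(x1, x2, s, c00, c33, c03p30):
    # first-order shift of the partonic invariant mass; c03p30 = c^{03} + c^{30}
    shift = ((x1 + x2)**2 * c00 + (x1 - x2)**2 * c33
             - (x1**2 - x2**2) * c03p30) / (2.0 * x1 * x2)
    return x1 * x2 * s * (1.0 - shift)

s = 1.69e8                 # illustrative: (13 TeV)^2 in GeV^2
x1, x2 = 0.2, 0.05

# the c^{03}+c^{30} piece flips sign under x1 <-> x2, so it cancels
# in the symmetric cross section
d1 = shat(x1, x2, s, 0.0, 0.0, 1e-5) - x1 * x2 * s
d2 = shat(x2, x1, s, 0.0, 0.0, 1e-5) - x1 * x2 * s
assert np.isclose(d1, -d2)

# the sidereal coefficient c^{33} multiplies (x1 - x2)^2 and drops out at x1 = x2
assert np.isclose(shat(0.1, 0.1, s, 0.0, 1e-5, 0.0), 0.01 * s)
```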
The time dependence can be explicitly revealed by expressing the single laboratory-frame coefficient controlling the time dependence in terms of coefficients in the Sun-centered frame, $$\begin{aligned} c_f^{33} &= c_f^{XX}\left(\cos\chi\sin\psi\cos\Omega_{\oplus}T_{\oplus} + \cos\psi\sin\Omega_{\oplus}T_{\oplus}\right)^2 \nonumber\\ & + c_f^{YY}\left(\cos\chi\sin\psi\sin\Omega_{\oplus}T_{\oplus} - \cos\psi\cos\Omega_{\oplus}T_{\oplus}\right)^2 \nonumber\\ & + 2c_f^{XY}\left(\cos\chi\sin\psi\cos\Omega_{\oplus}T_{\oplus} + \cos\psi\sin\Omega_{\oplus}T_{\oplus}\right) \left(\cos\chi\sin\psi\sin\Omega_{\oplus}T_{\oplus} - \cos\psi\cos\Omega_{\oplus}T_{\oplus}\right) \nonumber\\ & - 2c_f^{XZ}\sin\chi\sin\psi \left(\cos\chi\sin\psi\cos\Omega_{\oplus}T_{\oplus} + \cos\psi\sin\Omega_{\oplus}T_{\oplus}\right) \nonumber\\ & + 2c_f^{YZ}\sin\chi\sin\psi \left(\cos\chi\sin\psi\sin\Omega_{\oplus}T_{\oplus} - \cos\psi\cos\Omega_{\oplus}T_{\oplus}\right) + c_f^{ZZ}\sin^2\chi\sin^2\psi. \label{eq:c33rot}\end{aligned}$$ The reader is reminded that $\chi$ is the laboratory colatitude, $\psi$ is the angle north of east specifying the beam orientation, and $\Omega_\oplus T_\oplus$ is the local sidereal angle. Note that the first line of the expression represents the conventional result shifted by the factor $(1+c_f^{33} - c_f^{00})$, which stems from the modified partonic subprocess $q\bar{q} \rightarrow \gamma \rightarrow l\bar{l}$ encapsulated in Eq. . The remainder arises from the shifted argument in the delta function, leading to additional kinematical dependence and derivatives of the PDFs themselves. In the conventional case, the quantity $Q^4 d\sigma/dQ^2$ exhibits a scaling law in that it is a function only of $1/\tau = s/Q^2$. This scaling law persists at tree level in the DY process in the presence of Lorentz violation. In contrast, the $c$-type coefficients induce scaling violations in DIS.
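For concreteness, the rotation to the Sun-centered frame can be wrapped in a short function of the sidereal phase $\Omega_\oplus T_\oplus$; the laboratory angles and coefficient values below are placeholders.

```python
import numpy as np

def c33_lab(phase, chi, psi, cS):
    """Laboratory-frame c^{33} from Sun-centered coefficients, following the
    rotation above; phase = Omega_earth * T_earth, cS holds c^{JK} entries."""
    a = np.cos(chi) * np.sin(psi) * np.cos(phase) + np.cos(psi) * np.sin(phase)
    b = np.cos(chi) * np.sin(psi) * np.sin(phase) - np.cos(psi) * np.cos(phase)
    return (cS['XX'] * a**2 + cS['YY'] * b**2 + 2.0 * cS['XY'] * a * b
            - 2.0 * cS['XZ'] * np.sin(chi) * np.sin(psi) * a
            + 2.0 * cS['YZ'] * np.sin(chi) * np.sin(psi) * b
            + cS['ZZ'] * np.sin(chi)**2 * np.sin(psi)**2)

chi, psi = np.radians(40.0), np.radians(30.0)   # illustrative laboratory angles

# with only c^{ZZ} nonzero, c^{33} carries no sidereal-time dependence,
# as the last (phase-independent) term of the expression shows
cS = dict.fromkeys(['XX', 'YY', 'XY', 'XZ', 'YZ'], 0.0)
cS['ZZ'] = 1e-5
vals = [c33_lab(t, chi, psi, cS) for t in np.linspace(0.0, 2.0 * np.pi, 25)]
assert np.allclose(vals, vals[0])
```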
Nonminimal $a^{(5)}$-type coefficients {#sec:dy-a}
--------------------------------------

Next, we revisit the effect of nonzero $a^{(5)}$-type coefficients on the unpolarized DY process. The effects of the corresponding CPT-violating operators on the parton-antiparton collision have some interesting features. The same PDFs $f_f(\xi)$, $f_{\bar{f}}(\xi)$ emerge as in the analysis for $c$-type coefficients because the $a^{(5)}$-type coefficients also control spin-independent effects. Using the Feynman rules and noting Eqs.  and , we again find $W^{\mu\nu}_f$ takes the form . The perturbative contribution is now given by Eq.  with the replacements $$\begin{aligned} &(\eta^{\mu\alpha} + c_f^{\alpha\mu})\gamma_\alpha \rightarrow (\eta^{\mu\alpha} - a_{\text{S}f}^{(5)\alpha\beta\mu}) \gamma_\alpha(\xi_1p_1 + \xi_2 p_2)_\beta, \nonumber\\ &(\eta^{\mu\alpha} + c_f^{\mu\alpha})q_\alpha \rightarrow q^\mu \mp a_{\text{S}f}^{(5)\mu \alpha \beta} \widetilde{k}_{1_\alpha}\widetilde{k}_{1_\beta} \pm a_{\text{S}f}^{(5)\mu \alpha \beta} \widetilde{k}_{2_\alpha}\widetilde{k}_{2_\beta}.\end{aligned}$$ The upper signs in the latter expression hold for $k_1$ associated to a particle and $k_2$ to an antiparticle, while the lower signs hold for $k_1$ associated to an antiparticle and $k_2$ to a particle. The hard-scattering functions $H_f^{\mu\nu}(\widetilde{k}_1^+,\widetilde{k}_2^-)$ and $H_f^{\mu\nu}(\widetilde{k}_2^-, \widetilde{k}_1^+)$ now differ because $\widetilde{q}$ is asymmetric under the interchange $\widetilde{k}_1 \leftrightarrow \widetilde{k}_2$ due to the opposite-sign contributions from the $a^{(5)}$-type coefficients for quarks and antiquarks.
The two contributions are therefore distinct, and the hadronic tensor takes the factorized form $$W_f^{\mu\nu} = \int d\xi_1 d\xi_2\left[ H_f^{\mu\nu}(\xi_1,\xi_2)f_f(\xi_1)f_{\bar{f}}(\xi_2) + H_f^{\mu\nu}(\xi_2,\xi_1)f_f(\xi_2)f_{\bar{f}}(\xi_1)\right].$$ However, this has little relevance for the total cross section, as integrating over the entire available phase space gives identical contributions in each case. Explicitly, we find for the total cross section $$\begin{aligned} &\sigma = \frac{2\alpha^2}{3s}\fr{1,Q^4} \sum_f e_f^2\int d\Omega_l \frac{dx_1}{x_1}\frac{dx_2}{x_2} \left[ (\widetilde{k}_1\cdot l_1)(\widetilde{k}_2\cdot l_2) + (\widetilde{k}_1\cdot l_2)(\widetilde{k}_2\cdot l_1) \right. \nonumber\\ &\left. \hskip 100pt + (\widetilde{k}_1\cdot\widetilde{k}_2) \left(a_{\text{S}f}^{(5)l_1\widetilde{k}_1 l_2} + a_{\text{S}f}^{(5)l_1\widetilde{k}_2 l_2} + (l_1\leftrightarrow l_2)\right) \right. \nonumber\\ &\left. \hskip 100pt - \left(\left((\widetilde{k_1}\cdot l_1) \left( a_{\text{S}f}^{(5)\widetilde{k}_2\widetilde{k}_1 l_2} + a_{\text{S}f}^{(5)\widetilde{k}_2\widetilde{k}_2 l_2} + (l_1\leftrightarrow l_2) \right) \right) + (\widetilde{k}_1 \leftrightarrow \widetilde{k}_2) \right) \right. \nonumber\\ & \left. \hskip 100pt +(l_1\cdot l_2) \left( a_{\text{S}f}^{(5)\widetilde{k}_1\widetilde{k}_1 \widetilde{k}_2} + a_{\text{S}f}^{(5)\widetilde{k}_1\widetilde{k}_2 \widetilde{k}_2} + a_{\text{S}f}^{(5)\widetilde{k}_2\widetilde{k}_1 \widetilde{k}_1} + a_{\text{S}f}^{(5)\widetilde{k}_2\widetilde{k}_2 \widetilde{k}_1} \right) \right] \nonumber\\ & \hskip 80pt \times \left(f_{f}(x_{1},+)f_{\bar{f}}(x_{2},-) + f_{f}(x_{2},+)f_{\bar{f}}(x_{1},-)\right). \label{eq:DYdomegaa5}\end{aligned}$$ Here, we employ the notation $f_f(x,\pm)$ and $f_{\bar{f}}(x,\pm)$ to denote the sign dependences on the $a^{(5)}$-type scalar quantities that may appear in the PDFs, as discussed in Sec. \[sec:a\]. 
The differential distribution $d\sigma/dQ^2$ in terms of $\hat{s} = (k_1 + k_2)^2$ is required. At first order in Lorentz violation, it takes the general form $$\begin{aligned} \hat{s}_{\pm} = 2\widetilde{k}_1\cdot\widetilde{k}_2 \pm 2\left( a_{\text{S}f}^{(5)\widetilde{k}_1\widetilde{k}_1\widetilde{k}_1} - a_{\text{S}f}^{(5)\widetilde{k}_2\widetilde{k}_2\widetilde{k}_2} - a_{\text{S}f}^{(5)\widetilde{k}_1\widetilde{k}_2\widetilde{k}_2} + a_{\text{S}f}^{(5)\widetilde{k}_2\widetilde{k}_1\widetilde{k}_1}\right), \label{shata5}\end{aligned}$$ where the upper sign is for the particle with $k_1$ and the lower sign for the antiparticle. Using the CM-frame kinematics for the DY process, we obtain $$\begin{aligned} \hat{s}_{\pm} = sx_1 x_2 \pm sE_p&\left[ \tfrac{1}{2}a_{\text{S}f}^{(5)000}(x_1-x_2)(x_1+x_2)^2 - a_{\text{S}f}^{(5)003}(x_1+x_2)(x_1^2+x_2^2) \right. \nonumber\\ &\left. + \tfrac{1}{2}a_{\text{S}f}^{(5)033}(x_1-x_2)(x_1 + x_2)^2 - \tfrac{1}{2}a_{\text{S}f}^{(5)300}(x_1+x_2)(x_1-x_2)^2 \right. \nonumber\\ &\left. + a_{\text{S}f}^{(5)330}(x_1-x_2)(x_1^2+x_2^2) - \tfrac{1}{2}a_{\text{S}f}^{(5)333}(x_1+x_2)(x_1 - x_2)^2 \right]. \label{shatCM}\end{aligned}$$ Like the hard-scattering trace, this expression has symmetric and antisymmetric pieces in $x_1, x_2$. This differs from the result for the $c$-type coefficients, where the hard-scattering trace is symmetric and so only the symmetric parts of $\hat{s}$ contribute. Carrying out the calculation as before, we find $$\begin{aligned} \frac{d\sigma}{dQ^2} = \frac{4\pi\alpha^2}{9Q^4}\sum_f e_f^2\int_{0}^{1}dx &\left[\frac{\tau}{x} \left(1 + A_{\text{S}}(x,\tau/x)\right)f_{\text{S}f}(x,\tau/x) \right. \nonumber\\ &\left. 
-\frac{\tau}{sx^2}\left(A'_{\text{A}}(x,\tau/x)f_{\text{A}f}(x,\tau/x) + A_{\text{A}}(x,\tau/x)f'_{\text{A}f} \right) \right], \label{dsigmadQ2}\end{aligned}$$ where $$\begin{aligned} & f_{\text{S}f}(x,\tau/x) \equiv f_f(x)f_{\bar f}(\tau/x) + f_f(\tau/x)f_{\bar f}(x), \nonumber\\ & f_{\text{A}f}(x,\tau/x) \equiv f_f(x)f_{\bar f}(\tau/x) - f_f(\tau/x)f_{\bar f}(x), \nonumber\\ & f'_{\text{A}f}(x,\tau/x) \equiv f_f(x)f'_{\bar f}(\tau/x) - f'_f(\tau/x)f_{\bar f}(x), \label{pdfdefs}\end{aligned}$$ $$\begin{aligned} &A_{\text{S}} = E_p(x + \tau/x)\left(a_{\text{S}f}^{(5)110} + a_{\text{S}f}^{(5)220}\right), \label{AS}\end{aligned}$$ and $$\begin{aligned} &\begin{aligned} A_{\text{A}}(x,\tau/x) = sE_p&\left[\tfrac{1}{2}(x-\tau/x)(x+\tau/x)^2\left(a_{\text{S}f}^{(5)000} + a_{\text{S}f}^{(5)033}\right) \right. \nonumber\\ &\left. + a_{\text{S}f}^{(5)330}(x-\tau/x)(x^2+(\tau/x)^2) \right], &\end{aligned} \nonumber\\ &\begin{aligned} A'_{\text{A}}(x,\tau/x) = -\frac{s}{2x^2}E_p&\left[2(x^4 -2\tau x^2 + 3\tau^2)a_{\text{S}f}^{(5)330} \right. \\ &\left. - (x^2 - 3\tau)(x^2 + \tau)(a_{\text{S}f}^{(5)000} + a_{\text{S}f}^{(5)033})\right]. \end{aligned} \label{SS}\end{aligned}$$ The first line of the result represents a modification to the conventional result that is symmetric in $x_1$ and $x_2$. The analogous result for $c$-type coefficients involves a shift given by $c_f^{33} - c_f^{00} = c_f^{11} + c_f^{22}$ once trace considerations are taken into account, which has similarities with the combination of coefficients found in Eq. . The remaining terms result from the shifted delta function and the combinations antisymmetric in $x_1$ and $x_2$. Note also that the PDFs derived here are the same as those found in DIS. One new feature is that scaling violations are present by virtue of the mass dimensionality of the $a^{(5)}$-type coefficients. Also, since the DY process is more symmetric than DIS, a smaller set of coefficients for Lorentz violation appears in the cross section . 
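The symmetric and antisymmetric combinations above can be illustrated with toy densities; the functional forms below are placeholders, not fitted PDFs, and the kinematic factor is coded without the overall $sE_p$ normalization.

```python
import numpy as np

# toy stand-ins for f_f and f_{fbar} (placeholder shapes, not fitted PDFs)
f  = lambda x: np.sqrt(x) * (1.0 - x)**3
fb = lambda x: 0.1 * x**(-0.2) * (1.0 - x)**7

def f_S(x, y):          # symmetric combination
    return f(x) * fb(y) + f(y) * fb(x)

def f_A(x, y):          # antisymmetric combination
    return f(x) * fb(y) - f(y) * fb(x)

x, y = 0.4, 0.05
assert np.isclose(f_S(x, y), f_S(y, x))     # invariant under the swap
assert np.isclose(f_A(x, y), -f_A(y, x))    # flips sign under the swap

# both terms of A_A carry a factor (x - tau/x), so the antisymmetric piece
# vanishes on the symmetric point x = sqrt(tau); overall s E_p factor omitted
def A_A_struct(x, tau, a_comb, a330):
    return (0.5 * (x - tau / x) * (x + tau / x)**2 * a_comb
            + a330 * (x - tau / x) * (x**2 + (tau / x)**2))

tau = 0.09
assert abs(A_A_struct(np.sqrt(tau), tau, 1e-5, 1e-5)) < 1e-15
```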
Estimated attainable sensitivities and comparison with DIS {#sec:constraintsDY}
----------------------------------------------------------

               [**LHC**]{}
------------------------------------------------------ ------------------
$|c_{u}^{XZ}|$ 7.3 \[19\]
$|c_{u}^{YZ}|$ 7.1 \[19\]
$|c_{u}^{XY}|$ 2.7 \[7.0\]
$|c_{u}^{XX}-c_{u}^{YY}|$ 15 \[39\]
$|c_{d}^{XZ}|$ 72 \[180\]
$|c_{d}^{YZ}|$ 70 \[180\]
$|c_{d}^{XY}|$ 26 \[69\]
$|c_{d}^{XX}-c_{d}^{YY}|$ 150 \[400\]
$|a^{(5)TXX}_{\text{S}u} - a^{(5)TYY}_{\text{S}u}|$ 0.015 \[0.039\]
$|a^{(5)TXY}_{\text{S}u}|$ 0.0027 \[0.0070\]
$|a^{(5)TXZ}_{\text{S}u}|$ 0.0072 \[0.019\]
$|a^{(5)TYZ}_{\text{S}u}|$ 0.0070 \[0.018\]
$|a^{(5)TXX}_{\text{S}d} - a^{(5)TYY}_{\text{S}d}|$ 0.19 \[0.49\]
$|a^{(5)TXY}_{\text{S}d}|$ 0.034 \[0.088\]
$|a^{(5)TXZ}_{\text{S}d}|$ 0.090 \[0.23\]
$|a^{(5)TYZ}_{\text{S}d}|$ 0.089 \[0.23\]

  : Expected best sensitivities on individual coefficients $c_f^{JK}$ and $a_{\text{S}f}^{(5)TJK}$ from studies of the DY process at the LHC. Values are in units of $10^{-5}$ and $10^{-6}$ GeV$^{-1}$, respectively. Results with brackets are associated with uncorrelated systematic uncertainties between binned data, while results without brackets correspond to the assumption of 100% correlation between systematic uncertainties.

\[table3\]

In this section, we present estimated attainable sensitivities extracted from $d\sigma/dQ^2$ measurements of the DY process at the LHC and discuss the relative advantages of searches using DIS and the DY process. For definiteness, we consider CMS results for the DY process in the dielectron channel as presented in Ref. [@Sirunyan:2018owv]. These data involve a CM energy of $\sqrt{s} = 13$ TeV with a dielectron invariant mass of up to $Q = 60$ GeV, which lies safely below the $Z$ pole. They involve nine bins of width 5 GeV starting at 15 GeV. The colatitude of CMS is $\chi \approx \ang{46}$, and the orientation of the beamline is $\psi \approx \ang{-14}$.
With these values, applying the appropriate rotation matrices yields the relevant combinations of coefficients in the Sun-centered frame that affect the cross sections. We use the $d\sigma/d Q^2$ form of the cross sections for $c$- and $a^{(5)}$-type coefficients as given by Eqs.  and , respectively, and evaluate them at the median value of each $Q^2$ bin. Adopting a simulation strategy analogous to that for DIS in the case of purely uncorrelated systematic uncertainties, we list in Table \[table3\] the extracted estimated attainable sensitivities for both $c$- and $a^{(5)}$-type coefficients. Note that the set of coefficients affecting the DY process is smaller than that affecting DIS, which leads to fewer coefficient combinations controlling sidereal-time dependence and hence fewer independent sensitivities. For the $c$-type coefficients for the $u$ and $d$ quarks, the strongest estimated sensitivities are found to come from the lowest $Q^2$ bin and lie in the range $10^{-5} - 10^{-3}$. For the $a^{(5)}$-type coefficients, the best estimated sensitivities again arise from the lowest $Q^2$ bin and lie in the range $10^{-8} - 10^{-7}$ GeV$^{-1}$. The emergence of greater sensitivities at lower $Q^2$ and larger CM energy can be expected from the structure of the cross sections.

                 [**EIC**]{}   [**LHC**]{}
----------------------------------------------------- ------------- -------------
$|c^{XX}_{u}- c^{YY}_{u}|$ 0.74 15
$|c_{u}^{XY}|$ 0.26 2.7
$|c_{u}^{XZ}|$ 0.23 7.3
$|c_{u}^{YZ}|$ 0.23 7.1
$|a^{(5)TXX}_{\text{S}u}- a^{(5)TYY}_{\text{S}u}|$ 0.15 0.015
$|a^{(5)TXY}_{\text{S}u}|$ 0.12 0.0027
$|a^{(5)TXZ}_{\text{S}u}|$ 0.13 0.0072
$|a^{(5)TYZ}_{\text{S}u}|$ 0.13 0.0070

  : Comparison of estimated attainable sensitivities to equivalent $u$-quark coefficients at the EIC and the LHC. Values are in units of $10^{-5}$ and $10^{-6}$ GeV$^{-1}$ for the minimal and nonminimal coefficients, respectively.
\[table4\] It is interesting to compare the attainable sensitivities to Lorentz violation in DIS and the DY process. Table \[table4\] displays the estimated attainable sensitivities from DIS at the EIC and from the DY process at the LHC for the $u$-quark coefficient combinations that contribute to sidereal-time variations in both experiments. The prospective LHC sensitivities are weaker by an order of magnitude for minimal $c$-type coefficients, due to the dominance of the small statistical uncertainties at the EIC. In contrast, the prospective LHC sensitivities are better by an order of magnitude for the $a^{(5)}$-type coefficients, due primarily to the larger CM energy. The latter result supports the notion that higher-energy colliders have a comparative advantage in constraining coefficients with negative mass dimension since the dimensionless quantity measured in experiments is essentially the product of the coefficient and the collider energy. Given the current lack of direct constraints in the strongly interacting sector of the SME [@tables], all these results offer strong encouragement for searches for Lorentz and CPT violation in a variety of processes and using distinct collider experiments. Summary {#sec:summary} ======= In this work, we have performed a theoretical and phenomenological exploration of the effects of Lorentz and CPT violation in high-energy hadronic processes. The equivalent parton-model picture is derived in the presence of effects on freely propagating quarks emanating from the modified factorization procedure of the hadronic tensor in inclusive DIS. This leads to new definitions of the leading-twist PDFs and for the first time parametrizes and explains the potential nonperturbative dependence on Lorentz violation. The validity of this general treatment is confirmed using the alternative approach of the operator product expansion and via the electromagnetic Ward identities. Factorization is also demonstrated in the DY process. 
The PDFs derived for the DY process are identical to those found in DIS, supporting the conjecture that universality of the PDFs can be retained despite the presence of Lorentz violation. The phenomenological implications of this framework are explored by considering the special cases of unpolarized electron-proton DIS and the DY process mediated by photon exchange for the minimal $c$-type and nonminimal $a^{(5)}$-type coefficients. Our results show that searches for Lorentz violation at lepton-hadron and hadron-hadron colliders are complementary. The methodology presented in the present work opens the path for future studies of a multitude of related processes, including charged-current, polarized lepton-hadron, and hadron-hadron interactions, as well as investigations of higher-order effects such as QCD corrections. Acknowledgments =============== This work was supported in part by the U.S. Department of Energy under grant [DE]{}-SC0010120, by the Indiana Space Grant Consortium, and by the Indiana University Center for Spacetime Symmetries.
--- abstract: '$^{th}$ birthday.' author: - 'Nicolas Boizot and Jean-Paul Gauthier [^1]' title: Motion Planning for Kinematic Systems --- Optimal control, Subriemannian geometry, robotics, motion planning Introduction ============ Here we present the main lines of a theory of motion planning for kinematic systems, developed over roughly the past ten years in the papers [@GM; @GZ; @GZ2; @WIL; @GZ3; @ano; @jak]. One purpose of the paper is to survey the whole theory disseminated in these papers, but we also improve on the theory by treating one more case, in which “the fourth-order brackets are involved”. We also improve on several previous results (the periodicity of our optimal trajectories, for instance). A potential application of this theory is motion planning for kinematic robots. We will show several basic examples here. The theory starts from the seminal work of F. Jean, in the papers [@J1; @J2; @J3]. At the root of this point of view in robotics, there are also more applied authors such as J.P. Laumond [@lau]. See also [@liu]. We consider kinematic systems that are given under the guise of a vector distribution $\Delta$ over an $n$-dimensional manifold $M$. The rank of the distribution is $p$, and the corank is $k=n-p.\ $Motion planning problems will always be local problems in an open neighborhood of a given finite path $\Gamma$ in $M.$ Then we may always consider that $M=\mathbb{R}^{n}.$ From a control point of view, a kinematic system can be specified by a control system, linear in the controls, typically denoted by $\Sigma$:$$(\Sigma)\text{ }\dot{x}={\displaystyle\sum\limits_{i=1}^{p}} F_{i}(x)u_{i},\label{sys1}$$ where the $F_{i}$’s are smooth ($C^{\infty})$ vector fields that span the distribution $\Delta.$ The standard controllability assumption is made throughout, i.e.
the Lie algebra generated by the $F_{i}$’s is transitive on $M.$ Consequently, the distribution $\Delta$ is *completely nonintegrable*, and any smooth path $\Gamma:[0,T]\rightarrow M$ can be uniformly approximated by an admissible path $\gamma:[0,\theta]\rightarrow M$, i.e. a Lipschitz path which is almost everywhere tangent to $\Delta,$ i.e., a trajectory of (\[sys1\])$.$ This is precisely the *abstract answer* to the kinematic motion planning problem: *it is possible to approximate uniformly nonadmissible paths by admissible ones*. The purpose of this paper is to present a general constructive theory that solves this problem in a certain *optimal* way. More precisely, in this class of problems, it is natural to try to minimize a cost of the following form:$$J(u)={\displaystyle\int\limits_{0}^{\theta}} \sqrt{{\displaystyle\sum\limits_{i=1}^{p}} (u_{i})^{2}}dt,$$ for several reasons: 1. the optimal curves do not depend on their parametrization, 2. the minimization of such a cost produces a metric space (the associated distance is called the subriemannian distance, or the Carnot-Caratheodory distance), 3. minimizing such a cost is equivalent to minimizing the following quadratic cost $J_{E}(u)$ (called the *energy* of the path), in fixed time $\theta$:$$J_{E}(u)={\displaystyle\int\limits_{0}^{\theta}} {\displaystyle\sum\limits_{i=1}^{p}} (u_{i})^{2}dt.$$ The distance is defined as the minimum length of admissible curves connecting two points, and the length of the admissible curve corresponding to the control $u:[0,\theta]\rightarrow\mathbb{R}^{p}$ is just $J(u).$ In this presentation, another way to interpret the problem is as follows: the dynamics is specified by the distribution $\Delta$ (i.e. not by the vector fields $F_{i},$ but their span only). The cost is then determined by a Euclidean metric $g$ over $\Delta,$ specified here by the fact that the $F_{i}$’s form an orthonormal frame field for the metric.
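Point 3 deserves a one-line justification; by the Cauchy–Schwarz inequality (our remark, spelling out the standard argument):

```latex
J(u)^{2}=\left( \int_{0}^{\theta}\sqrt{\sum_{i=1}^{p}(u_{i})^{2}}\,dt\right)^{2}
\leq\theta\int_{0}^{\theta}\sum_{i=1}^{p}(u_{i})^{2}\,dt=\theta\,J_{E}(u),
```

with equality exactly when the speed $\sqrt{\sum_{i}(u_{i})^{2}}$ is constant. Since $J$ is parametrization-invariant, the minimizers of the energy $J_{E}$ in fixed time $\theta$ are the minimizers of the length $J$, reparametrized at constant speed.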
At this point we would like to make a more or less philosophical comment: there is, in the world of nonlinear control theory, a permanent twofold criticism of the optimal control approach: 1. the choice of the cost to be minimized is in general rather arbitrary, and 2. optimal control solutions may be non-robust. Some remarkable conclusions of our theory are the following: in reasonable dimensions and codimensions, the optimal trajectories are extremely robust, and in particular do not depend at all (modulo certain natural transformations) on the choice of the metric, but on the distribution $\Delta$ only. Even stronger: they depend only on the *nilpotent approximation along* $\Gamma$ (a concept that will be defined later on, which is a good local approximation of the problem). For many low values of the rank $p$ and corank $k,$ these nilpotent approximations have no parameters (hence they are in a sense universal). The *asymptotic optimal syntheses* (i.e. the phase portraits of the admissible trajectories that approximate $\Gamma$ up to a small $\varepsilon)$ are also universal. Given a motion planning problem, specified by a (nonadmissible) curve $\Gamma$ and a subriemannian structure (\[sys1\]), we will consider two distinct concepts, namely: 1. The *metric complexity* $MC(\varepsilon)$, which measures asymptotically the length of the best $\varepsilon$-approximating admissible trajectories$,$ and 2. The *interpolation entropy* $E(\varepsilon)$, which measures the length of the best admissible curves that interpolate $\Gamma$ with pieces of length $\varepsilon.$ The first concept was introduced by F. Jean in his basic paper [@J1]. The second concept is closely related to the entropy of F. Jean in [@J2], which is more or less the same as the Kolmogorov entropy of the path $\Gamma,$ for the metric structure induced by the Carnot-Caratheodory metric of the ambient space.
Throughout the paper, we will deal with *generic* problems only (but generic in the global sense, i.e. stable singularities are considered). That is, the set of motion planning problems on $\mathbb{R}^{n}$ is the set of couples $(\Gamma,\Sigma),$ endowed with the $C^{\infty}$ topology of uniform convergence over compact sets, and generic problems (or *problems in general position*) form an open, dense set in this topology. For instance, it means that the curve $\Gamma$ is always transversal to $\Delta$ (except maybe at isolated points, in the cases $k=1$ only). Another example is the case of a surface of degeneracy of the Lie bracket distribution $[\Delta,\Delta]$ in the $n=3,$ $k=1$ case. Generically, this surface (the Martinet surface) is smooth, and $\Gamma$ intersects it transversally at a finite number of points only. Throughout the paper, we will also illustrate our results with one of the following well-known academic examples: \[unic\]the unicycle: $$\dot{x}=\cos(\theta)u_{1},\text{ }\dot{y}=\sin(\theta)u_{1},\text{ }\dot{\theta}=u_{2}\label{unicycle}$$ \[ctrl\]the car with a trailer:$$\dot{x}=\cos(\theta)u_{1},\text{ }\dot{y}=\sin(\theta)u_{1},\text{ }\dot{\theta}=u_{2},\text{ }\dot{\varphi}=u_{1}-\sin(\varphi)u_{2}\label{cartrailer}$$ \[bpln\]the ball rolling on a plane:$$\dot{x}=u_{1},\text{ }\dot{y}=u_{2},\text{ }\dot{R}=\left[ \begin{array} [c]{ccc}0 & 0 & u_{1}\\ 0 & 0 & u_{2}\\ -u_{1} & -u_{2} & 0 \end{array} \right] R,\label{brp}$$ where $(x,y)$ are the coordinates of the contact point between the ball and the plane, and $R\in SO(3,\mathbb{R})$ is the right orthogonal matrix representing an orthonormal frame attached to the ball.
\[brpt\]the ball with a trailer$$\begin{aligned} \dot{x} & =u_{1},\text{ }\dot{y}=u_{2},\text{ }\dot{R}=\left[ \begin{array} [c]{ccc}0 & 0 & u_{1}\\ 0 & 0 & u_{2}\\ -u_{1} & -u_{2} & 0 \end{array} \right] R,\label{balltrailer}\\ \text{ }\dot{\theta} & =-\frac{1}{L}(\cos(\theta)u_{1}+\sin(\theta )u_{2}).\nonumber\end{aligned}$$ Typical motion planning problems are: 1. for Example \[ctrl\], *the parking problem*: the nonadmissible curve $\Gamma$ is $s\rightarrow(x(s),y(s),\theta(s),\varphi(s))=(s,0,\frac{\pi}{2},0),$ 2. for Example \[bpln\], the *full rolling with slipping problem*, $\Gamma:s\rightarrow(x(s),y(s),R(s))$ $=(s,0,Id),$ where $Id$ is the identity matrix. In Figures \[fig1\], \[fig2\] we show our approximating trajectories for both problems, which are in a sense universal. In Figure \[fig1\], of course, the $x$-scale is much larger than the $y$-scale. Up to now, our theory covers the following cases: (C1) The distribution $\Delta$ is one-step bracket-generating (i.e. $\dim[\Delta,\Delta]=n$), except maybe at generic singularities, (C2) The number of controls (the dimension of $\Delta)$ is $p=2,$ and $n\leq6.$ The paper is organized as follows: In the next section \[prereq\], we introduce the basic concepts, namely the metric complexity, the interpolation entropy, the nilpotent approximation along $\Gamma,$ and the normal coordinates, which will be our basic tools. Section \[results\] summarizes the main results of our theory, disseminated in our previous papers, with some complements and details. Section \[new\] is the detailed study of the case $n=6,$ $k=4,$ which corresponds in particular to Example \[brpt\], the ball with a trailer. In Section \[concl\], we state a certain number of remarks, expectations and conclusions.
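Before turning to the formal machinery, a minimal numerical sketch of Example \[unic\] (our illustration, not part of the theory): integrating the unicycle (\[unicycle\]) by Euler steps and accumulating the subriemannian length $J(u)$ of the resulting admissible path.

```python
import math

def unicycle_step(state, u, dt):
    """One Euler step of the unicycle: xdot = cos(theta) u1,
    ydot = sin(theta) u1, thetadot = u2."""
    x, y, th = state
    u1, u2 = u
    return (x + math.cos(th) * u1 * dt,
            y + math.sin(th) * u1 * dt,
            th + u2 * dt)

def integrate(state, controls, dt):
    """Integrate a list of (u1, u2) controls; return the final state and
    the subriemannian length J = integral of sqrt(u1^2 + u2^2) dt."""
    length = 0.0
    for u in controls:
        state = unicycle_step(state, u, dt)
        length += math.hypot(u[0], u[1]) * dt
    return state, length

# Drive straight for unit time with unit forward speed: the cart moves
# one unit along the x axis, and the admissible path has length J = 1.
n, dt = 1000, 1e-3
state, J = integrate((0.0, 0.0, 0.0), [(1.0, 0.0)] * n, dt)
```

Replacing the constant control by oscillating ones is exactly what the asymptotic optimal syntheses below do to track a nonadmissible curve such as the parking path.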
Basic concepts \[prereq\] ========================= In this section, we fix a generic motion planning problem $\mathcal{P=}(\Gamma,\Sigma).$ Throughout the paper there is a small parameter $\varepsilon$ (we want to approximate up to $\varepsilon),$ and certain quantities $f(\varepsilon),g(\varepsilon)$ go to $+\infty$ when $\varepsilon$ tends to zero. We say that such quantities are equivalent $(f\simeq g)$ if $\lim_{\varepsilon\rightarrow0}\frac{f(\varepsilon)}{g(\varepsilon)}=1.$ Also, $d$ denotes the subriemannian distance, and we consider the $\varepsilon$-subriemannian tube $T_{\varepsilon}$ and cylinder $C_{\varepsilon}$ around $\Gamma:$ $$\begin{aligned} T_{\varepsilon} & =\{x\in M\text{ }|\text{ }d(x,\Gamma)\leq\varepsilon\},\\ C_{\varepsilon} & =\{x\in M\text{ }|\text{ }d(x,\Gamma)=\varepsilon\}.\end{aligned}$$ Entropy versus metric complexity \[entcomp\] -------------------------------------------- \[mc\]The *metric complexity* $MC(\varepsilon)$ of $\mathcal{P}$ is $\frac{1}{\varepsilon}$ times the minimum length of an admissible curve $\gamma_{\varepsilon}$ connecting the endpoints $\Gamma(0),$ $\Gamma(T)$ of $\Gamma,$ and remaining in the tube $T_{\varepsilon}.$ \[mce\]The *interpolation entropy* $E(\varepsilon)$ of $\mathcal{P}$ is $\frac{1}{\varepsilon}$ times the minimum length of an admissible curve $\gamma_{\varepsilon}$ connecting the endpoints $\Gamma(0),\Gamma(T)$ of $\Gamma,$ and $\varepsilon$-interpolating $\Gamma$, that is, any segment of $\gamma_{\varepsilon}$ of length $\geq\varepsilon$ contains a point of $\Gamma.$ The quantities $MC(\varepsilon),E(\varepsilon)$ are functions of $\varepsilon$ which tend to $+\infty$ as $\varepsilon$ tends to zero. They are considered **up to equivalence**. The reason for dividing by $\varepsilon$ is that the second quantity then counts the number of $\varepsilon$-balls needed to cover $\Gamma,$ or the number of pieces of length $\varepsilon$ needed to interpolate the full path.
This is also the reason for the name “entropy”. An asymptotic optimal synthesis is a one-parameter family $\gamma_{\varepsilon}$ of admissible curves that realizes the metric complexity or the entropy. Our main purpose in the paper is twofold: 1. We want to estimate the metric complexity and the entropy, in terms of certain invariants of the problem. Actually, in all the cases treated in this paper, we will give explicit formulas. 2. We shall exhibit explicit asymptotic optimal syntheses realizing the metric complexity and/or the entropy. Normal coordinates\[normc\] --------------------------- Take a **parametrized** $k$-dimensional surface $S,$ transversal to $\Delta$ (maybe defined in a neighborhood of $\Gamma$ only)$,$ $$S=\{q(s_{1},...,s_{k-1},t)\in\mathbb{R}^{n}\},\text{ with }q(0,...,0,t)=\Gamma(t).$$ Such a *germ* exists if $\Gamma$ is not tangent to $\Delta.$ The exclusion of a neighborhood of an isolated point where $\Gamma$ is tangent to $\Delta$ (that is, where $\Gamma$ becomes almost admissible) will not affect the estimates presented later on (it will contribute a term of higher order in $\varepsilon).$ In the following, $\mathcal{C}_{\varepsilon}^{S}$ will denote the cylinder $\{\xi;$ $d(S,\xi)=\varepsilon\}.$ \[nco\] (Normal coordinates with respect to $S).$ There are mappings $x:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p},$ $y:\mathbb{R}^{n}\rightarrow \mathbb{R}^{k-1},$ $w:\mathbb{R}^{n}\rightarrow\mathbb{R},$ such that $\xi=(x,y,w)$ is a coordinate system on some neighborhood of $S$ in $\mathbb{R}^{n}$, with the following properties: 0\. $S(y,w)=(0,y,w),$ $\Gamma=\{(0,0,w)\}$ 1\. The restriction $\Delta_{|S}=\ker dw\cap_{i=1,..k-1}\ker dy_{i},$ the metric $g_{|S}=\sum_{i=1}^{p}(dx_{i})^{2},$ 2\. $\mathcal{C}_{\varepsilon}^{S}=\{\xi|\sum_{i=1}^{p}x_{i}{}^{2}=\varepsilon^{2}\},$ 3\. geodesics of the Pontryagin maximum principle ([@PMP]) meeting the transversality conditions w.r.t.
$S$ are the straight lines through $S,$ contained in the planes $P_{y_{0},w_{0}}=\{\xi|(y,w)=(y_{0},w_{0})\}.$ Hence, they are orthogonal to $S.$ These normal coordinates are unique up to changes of coordinates of the form $$\tilde{x}=T(y,w)x,(\tilde{y},\tilde{w})=(y,w),\label{ccor}$$ where $T(y,w)\in O(p),$ the $p$-orthogonal group. Normal forms, Nilpotent approximation along $\Gamma$\[nform\] ------------------------------------------------------------- ### Frames\[fra\] Let us denote by $F=(F_{1},...,F_{p})$ the orthonormal frame of vector fields generating $\Delta.$ Hence, we will also write $\mathcal{P}=(\Gamma,F).$ If a global coordinate system $(x,y,w)$, not necessarily normal, is given on a neighborhood of $\Gamma$ in $\mathbb{R}^{n},$ with $x\in\mathbb{R}^{p},$ $y\in\mathbb{R}^{k-1},$ $w\in\mathbb{R},$ then we write:$$\begin{aligned} F_{j} & =\sum_{i=1}^{p}\mathcal{Q}_{i,j}(x,y,w)\frac{\partial}{\partial x_{i}}+\sum_{i=1}^{k-1}\mathcal{L}_{i,j}(x,y,w)\frac{\partial}{\partial y_{i}}\label{QLM}\\ & +\mathcal{M}_{j}(x,y,w)\frac{\partial}{\partial w},\text{ }\nonumber\\ \text{\ \ \ }j & =1,...,p.\nonumber\end{aligned}$$ Hence, the SR metric is specified by the triple $(\mathcal{Q},\mathcal{L},\mathcal{M})$ of smooth $x,y,w$-dependent matrices. ### The general normal form\[gennf\] Fix a surface $S$ as in Section \[normc\] and a normal coordinate system $\xi=(x,y,w)$ for a problem $\mathcal{P}.$ \[normal\](Normal form, [@AG2]) There is a unique orthonormal frame $F=(\mathcal{Q},\mathcal{L},\mathcal{M})$ for ($\Delta,g)$ with the following properties: 1. $\mathcal{Q}(x,y,w)$ is symmetric, $\mathcal{Q}(0,y,w)=Id$ (the identity matrix), 2\. $\mathcal{Q}(x,y,w)x=x,$ 3\. $\mathcal{L}(x,y,w)x=0,$ $\mathcal{M}(x,y,w)x=0.$ 4\.
Conversely, if $\xi=(x,y,w)$ is a coordinate system satisfying conditions 1, 2, 3 above, then $\xi$ is a normal coordinate system for the SR metric defined by the orthonormal frame $F$ with respect to the parametrized surface $\{(0,y,w)\}.$ Clearly, this normal form is invariant under the changes of normal coordinates (\[ccor\]). Let us write: $$\begin{aligned} \mathcal{Q}(x,y,w) & =Id+Q_{1}(x,y,w)+Q_{2}(x,y,w)+...,\\ \mathcal{L}(x,y,w) & =0+L_{1}(x,y,w)+L_{2}(x,y,w)+...,\\ \mathcal{M}(x,y,w) & =0+M_{1}(x,y,w)+M_{2}(x,y,w)+...,\end{aligned}$$ where $Q_{r},L_{r},M_{r}$ are matrices depending on $\xi=(x,y,w),$ the coefficients of which have order $r$ w.r.t. $x$ (i.e. they are in the $r^{th}$ power of the ideal of $C^{\infty}(x,y,w)$ generated by the functions $x_{i},$ $i=1,...,p).$ In particular, $Q_{1}$ is linear in $x,$ $Q_{2}$ is quadratic, etc. Set $u=(u_{1},...,u_{p})\in\mathbb{R}^{p}.$ Then $\sum_{j=1}^{k-1}L_{1_{j}}(x,y,w)u_{j}$ $=L_{1,y,w}(x,u)$ is quadratic in $(x,u),$ and $\mathbb{R}^{k-1}$-valued. Its $i^{th}$ component is the quadratic expression denoted by $L_{1,i,y,w}(x,u)$. Similarly $\sum _{j=1}^{k-1}M_{1_{j}}(x,y,w)u_{j}$ $=M_{1,y,w}(x,u)$ is a quadratic form in $(x,u).$ The corresponding matrices are denoted by $L_{1,i,y,w},$ $i=1,...,k-1,$ and $M_{1,y,w}.$ The following was proved in [@AG2], [@char] for corank 1: \[norprop\] 1. $Q_{1}=0,$ 2. $L_{1,i,y,w},$ $i=1,...,k-1,$ and $M_{1,y,w}$ are skew-symmetric matrices. A first, very rough but useful estimate in normal coordinates is the following: \[propb\] If $\xi=(x,y,w)\in T_{\varepsilon},$ then: $$\begin{aligned} ||x||_{2} & \leq\varepsilon,\\ ||y||_{2} & \leq c\varepsilon^{2},\end{aligned}$$ for some $c>0.$ At this point, we shall split the problems under consideration into two distinct cases: first the 2-step bracket-generating case, and second, the 2-control case.
### Two-step bracket-generating case\[spe\] In that case, we set, in accordance with Proposition \[propb\], that $x$ has weight 1, and the $y_{i}$’s and $w$ have weight 2$.$ Then, the vector fields $\frac{\partial}{\partial x_{i}}$ have weight $-1$, and $\frac{\partial}{\partial y_{i}},\frac{\partial}{\partial w}$ have weight $-2.$ Inside a tube $T_{\varepsilon},$ we write our control system as a term of order $-1$, plus a residue that has a certain order w.r.t. $\varepsilon.\ $Here, $O(\varepsilon^{k})$ means a smooth term bounded by $c\varepsilon^{k}.$ We have, for a trajectory remaining inside $T_{\varepsilon}$: $$\begin{aligned} \dot{x} & =u+O(\varepsilon^{2});\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)}\label{estco1}\\ \dot{y}_{i} & =\frac{1}{2}x^{\prime}L^{i}(w)u+O(\varepsilon^{2});\text{ \ \ }i=1,...,k-1;\nonumber\\ \dot{w} & =\frac{1}{2}x^{\prime}M(w)u+O(\varepsilon^{2}),\nonumber\end{aligned}$$ where $L^{i}(w),M(w)$ are skew-symmetric matrices depending smoothly on $w.\ $ In (1) of (\[estco1\]), the term $O(\varepsilon^{2})$ may seem surprising: one would expect $O(\varepsilon).\ $It is due to (1) in Proposition \[norprop\].
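In the lowest-dimensional instance of this structure ($p=2$, corank $k=1$, so there are no $y$ variables and $M$ is the standard $2\times2$ skew-symmetric matrix), the order $-1$ part of (\[estco1\]) is the Heisenberg system $\dot{x}=u$, $\dot{w}=\frac{1}{2}(x_{1}u_{2}-x_{2}u_{1})$, and driving $x$ once around a circle of radius $\varepsilon$ advances $w$ by exactly the enclosed area $\pi\varepsilon^{2}$. A numerical sketch (our illustration, with $x(t)$ taken in closed form so that only $w$ is integrated):

```python
import math

def heisenberg_gain(eps, n=20000):
    """w-displacement of the Heisenberg system xdot = u,
    wdot = (x1*u2 - x2*u1)/2 along the circle
    x1 = eps*sin t, x2 = eps*(1 - cos t), t in [0, 2*pi];
    the result is the enclosed area pi*eps**2."""
    dt = 2.0 * math.pi / n
    w = 0.0
    for i in range(n):
        t = i * dt
        x1, x2 = eps * math.sin(t), eps * (1.0 - math.cos(t))
        u1, u2 = eps * math.cos(t), eps * math.sin(t)
        w += 0.5 * (x1 * u2 - x2 * u1) * dt
    return w
```

This quadratic gain, a displacement $O(\varepsilon^{2})$ in the $w$ direction for a loop of length $2\pi\varepsilon$, is the mechanism behind the $\frac{1}{\varepsilon}$-normalized complexity and entropy estimates of the previous section.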
In that case, we define the **Nilpotent Approximation** $\mathcal{\hat{P}}$ **along** $\Gamma$ of the problem $\mathcal{P}$ by keeping only the term of order $-1$: $$\begin{aligned} \dot{x} & =u;\label{nilap1}\\ (\mathcal{\hat{P})}\text{ \ \ \ \ \ \ \ \ }\dot{y}_{i} & =\frac{1}{2}x^{\prime}L^{i}(w)u;\text{ \ \ }i=1,...,k-1;\nonumber\\ \dot{w} & =\frac{1}{2}x^{\prime}M(w)u.\nonumber\end{aligned}$$ Consider two trajectories $\xi(t),\hat{\xi}(t)$ of $\mathcal{P}$ and $\mathcal{\hat{P}}$ corresponding to the same control $u(t),$ issued from the same point on $\Gamma,$ and both arclength-parametrized (which is equivalent to $||u(t)||=1).$ For $t\leq\varepsilon,$ we have the following estimates: $$||x(t)-\hat{x}(t)||\leq c\varepsilon^{3},||y(t)-\hat{y}(t)||\leq c\varepsilon^{3},||w(t)-\hat{w}(t)||\leq c\varepsilon^{3},\label{ff0}$$ for a suitable constant $c.\ $ \[dist1\]It follows that the distance (either $d$ or $\hat{d}$, the distance associated with the nilpotent approximation$)$ between $\xi(t)$ and $\hat{\xi}(t)$ is smaller than $\varepsilon^{1+\alpha}$ for some $\alpha>0.$ This fact comes from the estimate just given, together with the standard ball-box theorem ([@GRO]). It will be the key point in reducing the motion planning problem to that of its nilpotent approximation along $\Gamma$. ### The 2-control case\[2control\] ### Normal forms\[nffs\] In that case, we have the following general normal form, in normal coordinates. It was first proven in [@AmPetr92], in the corank-1 case. The proof holds in any corank, without modification. Consider normal coordinates with respect to any surface $\mathcal{S}$.
There are smooth functions $\beta(x,y,w),\gamma_{i}(x,y,w),\delta(x,y,w),$ such that $\mathcal{P}$ can be written (on a neighborhood of $\Gamma)$ as: $$\begin{aligned} \dot{x}_{1} & =(1+(x_{2})^{2}\beta)u_{1}-x_{1}x_{2}\beta u_{2},\text{ \ }\label{nf2}\\ \text{\ }\dot{x}_{2} & =(1+(x_{1})^{2}\beta)u_{2}-x_{1}x_{2}\beta u_{1},\nonumber\\ \dot{y}_{i} & =\gamma_{i}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\text{\ \ }\dot{w}=\delta(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\nonumber\end{aligned}$$ where moreover $\beta$ vanishes on the surface $\mathcal{S}$. The following normal forms can be obtained, on the tube $T_{\varepsilon},$ by changing coordinates in $\mathcal{S}$ in a certain appropriate way. It means that a trajectory $\xi(t)$ of $\mathcal{P}$ remaining in $T_{\varepsilon}$ satisfies: **Generic** $4-2$** case (see [@GZ3])**$:$$$\begin{aligned} \dot{x}_{1} & =u_{1}+O(\varepsilon^{3}),\dot{x}_{2}=u_{2}+O(\varepsilon^{3}),\\ \dot{y} & =(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{2}),\\ \dot{w} & =\delta(w)x_{1}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{3}).\end{aligned}$$ We define the nilpotent approximation as:$$\begin{aligned} (\mathcal{\hat{P}}_{4,2})\text{ \ \ }\dot{x}_{1} & =u_{1},\dot{x}_{2}=u_{2},\dot{y}=(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\\ \dot{w} & =\delta(w)x_{1}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}).\end{aligned}$$ Again, we consider two trajectories $\xi(t),\hat{\xi}(t)$ of $\mathcal{P}$ and $\mathcal{\hat{P}}$ corresponding to the same control $u(t),$ issued from the same point on $\Gamma,$ and both arclength-parametrized (which is equivalent to $||u(t)||=1).$ For $t\leq\varepsilon,$ we have the following estimates: $$||x(t)-\hat{x}(t)||\leq c\varepsilon^{4},||y(t)-\hat{y}(t)||\leq c\varepsilon^{3},||w(t)-\hat{w}(t)||\leq c\varepsilon^{4}.\label{ff1}$$ This implies that, for $t\leq\varepsilon,$ the distance ($d$ or $\hat{d})$ between $\xi(t)$ and $\hat{\xi}(t)$ is less than
$\varepsilon^{1+\alpha}$ for some $\alpha>0,$ and this will also be the key point in reducing our problem to the nilpotent approximation. **Generic** $5-2$** case (see [@ano])**$:$$$\begin{aligned} \dot{x}_{1} & =u_{1}+O(\varepsilon^{3}),\dot{x}_{2}=u_{2}+O(\varepsilon^{3}),\\ \dot{y} & =(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{2}),\\ \dot{z} & =x_{2}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{3}),\\ \dot{w} & =\delta(w)x_{1}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{3}).\end{aligned}$$ We define the nilpotent approximation as:$$\begin{aligned} (\mathcal{\hat{P}}_{5,2})\text{ \ \ }\dot{x}_{1} & =u_{1},\dot{x}_{2}=u_{2},\dot{y}=(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\\ \dot{z} & =x_{2}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\\ \dot{w} & =\delta(w)x_{1}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}).\end{aligned}$$ The estimates necessary to reduce to the nilpotent approximation are:$$\begin{aligned} ||x(t)-\hat{x}(t)|| & \leq c\varepsilon^{4},||y(t)-\hat{y}(t)||\leq c\varepsilon^{3},\label{ff2}\\ ||z(t)-\hat{z}(t)|| & \leq c\varepsilon^{4},||w(t)-\hat{w}(t)||\leq c\varepsilon^{4}.\nonumber\end{aligned}$$ **Generic** $6-2$** case (proven in the Appendix)**$:$$$\begin{aligned} \dot{x}_{1} & =u_{1}+O(\varepsilon^{3}),\dot{x}_{2}=u_{2}+O(\varepsilon^{3}),\label{nf62}\\ \dot{y} & =(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{2}),\nonumber\\ \dot{z}_{1} & =x_{2}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{3}),\nonumber\\ \dot{z}_{2} & =x_{1}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{3}),\nonumber\\ \dot{w} & =Q_{w}(x_{1},x_{2})(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{4}),\nonumber\end{aligned}$$ where $Q_{w}(x_{1},x_{2})$ is a quadratic form in $x$ depending smoothly on $w.$ We define the nilpotent approximation as:$$\begin{aligned} (\mathcal{\hat{P}}_{6,2})\text{ \ \ }\dot{x}_{1} &
=u_{1},\dot{x}_{2}=u_{2},\dot{y}=(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\label{nil62}\\ \dot{z}_{1} & =x_{2}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\dot{z}_{2}=x_{1}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\nonumber\\ \text{\ }\dot{w} & =Q_{w}(x_{1},x_{2})(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}).\nonumber\end{aligned}$$ The estimates necessary to reduce to the nilpotent approximation are:$$\begin{aligned} ||x(t)-\hat{x}(t)|| & \leq c\varepsilon^{4},||y(t)-\hat{y}(t)||\leq c\varepsilon^{3},\label{fff3}\\ ||z(t)-\hat{z}(t)|| & \leq c\varepsilon^{4},||w(t)-\hat{w}(t)||\leq c\varepsilon^{5}.\nonumber\end{aligned}$$ In fact, the proof of the reduction to this normal form, given in the Appendix, contains the other cases 4-2 and 5-2. ### Invariants in the 6-2 case, and the ball with a trailer Let us consider a one-form $\omega$ that vanishes on $\Delta^{\prime\prime}=[\Delta,[\Delta,\Delta]].\ $Set $\alpha=d\omega_{|\Delta},$ the restriction of $d\omega$ to $\Delta.$ Set $H=[F_{1},F_{2}],$ $I=[F_{1},H],$ $J=[F_{2},H],$ and consider the $2\times2$ matrix $A(\xi)=\left( \begin{array} [c]{cc}d\omega(F_{1},I) & d\omega(F_{2},I)\\ d\omega(F_{1},J) & d\omega(F_{2},J) \end{array} \right) .$ Due to the Jacobi identity, $A(\xi)$ is a symmetric matrix. It is also equal to $\left( \begin{array} [c]{cc}\omega([F_{1},I]) & \omega([F_{2},I])\\ \omega([F_{1},J]) & \omega([F_{2},J]) \end{array} \right) ,$ using the fact that $\omega([X,Y])=d\omega(X,Y)$ in restriction to $\Delta^{\prime\prime}.$ Let us consider a gauge transformation, i.e. a feedback that preserves the metric (i.e.
a change of orthonormal frame $(F_{1},F_{2})$ obtained by setting $\tilde{F}_{1}=\cos(\theta(\xi))F_{1}+\sin(\theta(\xi))F_{2},$ $\tilde{F}_{2}=-\sin(\theta(\xi))F_{1}+\cos(\theta(\xi))F_{2}).$ It is just a matter of tedious computations to check that the matrix $A(\xi)$ is changed into $\tilde{A}(\xi)=R_{\theta}A(\xi)R_{-\theta}.$ On the other hand, the form $\omega$ is defined modulo multiplication by a nonzero function $f(\xi),$ and the same holds for $\alpha,$ since $d(f\omega)=fd\omega+df\wedge\omega,$ and $\omega$ vanishes over $\Delta^{\prime\prime}.$ The following lemma follows: \[lem62inv\]The ratio $r(\xi)$ of the (real) eigenvalues of $A(\xi)$ is an invariant of the structure. Let us now consider the normal form (\[nf62\]), and compute the form $\omega=\omega_{1}dx_{1}+...+\omega_{6}dw$ along $\Gamma$ (that is, where $x,y,z=0).$ Computing all the brackets shows that $\omega_{1}=\omega_{2}=...=\omega_{5}=0.$ This also shows that in fact, along $\Gamma$, $A(\xi)$ is just the matrix of the quadratic form $Q_{w}.\ $We get the following: \[lem62inv1\] The invariant $r(\Gamma(t))$ of the problem $\mathcal{P}$ is the same as the invariant $\hat{r}(\Gamma(t))$ of the nilpotent approximation along $\Gamma.$ Let us compute the ratio $r$ for the ball with a trailer, Equation (\[balltrailer\]). We denote by $A_{1},A_{2}$ the two right-invariant vector fields over $SO(3,\mathbb{R})$ appearing in (\[balltrailer\]).
We have: $$\begin{aligned} F_{1} & =\frac{\partial}{\partial x_{1}}+A_{1}-\frac{1}{L}\cos(\theta)\frac{\partial}{\partial\theta},\\ F_{2} & =\frac{\partial}{\partial x_{2}}+A_{2}-\frac{1}{L}\sin(\theta)\frac{\partial}{\partial\theta},\\ \lbrack A_{1},A_{2}] & =A_{3},[A_{1},A_{3}]=-A_{2},[A_{2},A_{3}]=A_{1}.\end{aligned}$$ Then, we compute the brackets: $H=A_{3}-\frac{1}{L^{2}}\frac{\partial}{\partial\theta},$ $I=-A_{2}-\frac{1}{L^{3}}\sin(\theta)\frac{\partial}{\partial\theta},$ $J=A_{1}+\frac{1}{L^{3}}\cos(\theta)\frac{\partial}{\partial\theta},$ $[F_{1},I]=-A_{3}-\frac{1}{L^{4}}\frac{\partial}{\partial\theta},$ $[F_{1},J]=0=[F_{2},I],$ $[F_{2},J]=-A_{3}-\frac{1}{L^{4}}\frac{\partial}{\partial\theta}.$ Then: \[balltrailerratio\]For the ball with a trailer, the ratio $r(\xi)=1.$ These last two lemmas are a key point in Section \[new\]: they imply in particular that the system of geodesics of the nilpotent approximation is integrable in the Liouville sense, as we shall see. Results\[results\] ================== In this section, we summarize and comment on most of the results obtained in the papers [@GM; @GZ; @GZ2; @GZ3; @ano; @jak]. General results\[genres\] ------------------------- We need the concept of an $\varepsilon$-modification of an asymptotic optimal synthesis. Given a one-parameter family of (absolutely continuous, arclength-parametrized) admissible curves $\gamma_{\varepsilon}:$ $[0,T_{\gamma_{\varepsilon}}]\rightarrow\mathbb{R}^{n},$ **an** $\varepsilon$**-modification of** $\gamma_{\varepsilon}$ is another one-parameter family of (absolutely continuous, arclength-parametrized) admissible curves $\tilde{\gamma}_{\varepsilon}:$ $[0,T_{\tilde{\gamma}_{\varepsilon}}]\rightarrow\mathbb{R}^{n}$ such that for all $\varepsilon$ and for some $\alpha>0$, if $[0,T_{\gamma_{\varepsilon}}]$ is split into subintervals of length $\varepsilon,$ $[0,\varepsilon],$ $[\varepsilon,2\varepsilon],$ $[2\varepsilon,3\varepsilon],...$ then: 1\.
$[0,T_{\tilde{\gamma}_{\varepsilon}}]$ is split into corresponding intervals, $[0,\varepsilon_{1}],$ $[\varepsilon_{1},\varepsilon_{1}+\varepsilon_{2}],$ $[\varepsilon_{1}+\varepsilon_{2},\varepsilon _{1}+\varepsilon_{2}+\varepsilon_{3}],...$ with $\varepsilon\leq\varepsilon_{i}<\varepsilon(1+\varepsilon^{\alpha}),$ $i=1,2,...,$ 2. for each couple of an interval $I_{1}=[\tilde{\varepsilon}_{i},\tilde{\varepsilon}_{i}+\varepsilon]$ (with $\tilde{\varepsilon}_{0}=0,$ $\tilde{\varepsilon}_{1}=\varepsilon_{1},$ $\tilde{\varepsilon}_{2}=\varepsilon_{1}+\varepsilon_{2},...$) and the respective interval $I_{2}=[i\varepsilon,(i+1)\varepsilon],$ the derivatives $\frac{d}{dt}(\tilde{\gamma})$ and $\frac{d}{dt}(\gamma)$ coincide, i.e.:$$\frac{d}{dt}(\tilde{\gamma})(\tilde{\varepsilon}_{i}+t)=\frac{d}{dt}(\gamma)(i\varepsilon+t),\text{ for almost all }t\in\lbrack0,\varepsilon].$$ This concept of an $\varepsilon$**-modification** is used as follows: we will construct asymptotic optimal syntheses for the nilpotent approximation $\mathcal{\hat{P}}$ of problem $\mathcal{P}$. Then, the asymptotic optimal syntheses have to be slightly modified in order to realize the interpolation constraints for the original (non-modified) problem. This has to be done “slightly”, so that the lengths of the paths remain equivalent. In this section it is always assumed, though not restated, that **we consider generic problems only**. One first result is the following: \[eqnil\]In the 2-step bracket-generating, 4-2, 5-2, and 6-2 cases (without singularities), an asymptotic optimal synthesis \[relative to the entropy\] for $\mathcal{P}$ is obtained as an $\varepsilon$-modification of an asymptotic optimal synthesis for the nilpotent approximation $\mathcal{\hat{P}}.$ As a consequence, the entropy $E(\varepsilon)$ of $\mathcal{P}$ is equal to the entropy $\hat{E}(\varepsilon)$ of $\mathcal{\hat{P}}.$ This theorem is proven in [@GZ3].
However, we can easily get an idea of the proof, using the estimates of formulas (\[ff0\], \[ff1\], \[ff2\], \[ff3\]). All these estimates show that, if we apply an $\varepsilon$-interpolating strategy to $\mathcal{\hat{P}}$, and the same controls to $\mathcal{P}$, at time $\varepsilon$ (or length $\varepsilon$, since it is always possible to consider arclength-parametrized trajectories), the endpoints of the two trajectories are at subriemannian distance (either $d$ or $\hat{d})$ of order $\varepsilon^{1+\alpha},$ for some $\alpha>0.\ $Then the contribution to the entropy of $\mathcal{P}$, due to the correction necessary to interpolate $\Gamma$, will have higher order. Also, in the one-step bracket-generating case, we have the following equality: \[2pi\](one-step bracket-generating case, corank $k\leq3)$ The entropy is equal to $2\pi$ times the metric complexity: $E(\varepsilon)=2\pi MC(\varepsilon).$ The reason for this distinction between corank at most 3 and corank larger than 3 is very important, and will be explained in Section \[onestep\]. Another very important result is the following **logarithmic lemma**, which describes what happens in the case of a (generic) singularity of $\Delta.\ $ In the absence of such singularities, as we shall see, we shall always have formulas of the following type for the entropy (and the same for the metric complexity): $$E(\varepsilon)\simeq\frac{1}{\varepsilon^{p}}{\displaystyle\int\limits_{\Gamma}} \frac{dt}{\chi(t)},\label{ent1}$$ where $\chi(t)$ is a certain invariant along $\Gamma$. When the curve $\Gamma(t)$ transversally crosses a codimension-1 singularity (of $\Delta^{\prime},$ or $\Delta^{\prime\prime}),$ the invariant $\chi(t)$ vanishes. This may happen at isolated points $t_{i},$  $i=1,...,r.$ In that case, we always have the following: \[logl\](logarithmic lemma). The entropy (resp.
the metric complexity) satisfies:$$E(\varepsilon)\simeq-2\frac{\ln(\varepsilon)}{\varepsilon^{p}}\sum_{i=1}^{r}\frac{1}{\rho(t_{i})},\text{ \ \ where }\rho(t)=\left|\frac{d\chi(t)}{dt}\right|.$$ On the contrary, there are also generic codimension-1 singularities where the curve $\Gamma$, at isolated points, becomes tangent to $\Delta,$ or $\Delta^{\prime},...$ At these isolated points, the invariant $\chi(t)$ of Formula \[ent1\] tends to infinity. In that case, **the formula \[ent1\] remains valid** (the integral converges). Generic distribution in $\mathbb{R}^{3}$\[contact3\] ---------------------------------------------------- This is the simplest case, and it is important, since many cases just reduce to it. Let us describe it in detail. Generically, the 3-dimensional space $M$ contains a 2-dimensional singularity (called the Martinet surface, denoted by $\mathcal{M}).$ This singularity is a smooth surface, and (except at isolated points on $\mathcal{M}),$ the distribution $\Delta$ is not tangent to $\mathcal{M}.$ Generically, the curve $\Gamma$ crosses $\mathcal{M}$ transversally at a finite number of isolated points $t_{i},$ $i=1,...,r.\ $These points are not the special isolated points where $\Delta$ is tangent to $\mathcal{M}$ (this would not be generic). They are called Martinet points. This number $r$ can be zero. Also, there are other isolated points $\tau_{j},$ $j=1,...,l,$ at which $\Gamma$ is tangent to $\Delta$ (which means that $\Gamma$ is almost admissible in a neighborhood of $\tau_{j}).$ Outside $\mathcal{M}$, the distribution $\Delta$ is a contact distribution (a generic property).
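To see where the $-\ln(\varepsilon)$ factor of the logarithmic lemma comes from, here is a back-of-envelope numerical illustration of ours (not a proof, and the slope $\rho$ is an arbitrary sample value): if $\chi$ vanishes linearly at a crossing point, $\chi(t)\simeq\rho|t-t_{i}|,$ then truncating the integral of $dt/\chi(t)$ at distance $\varepsilon$ from the zero produces exactly the rate $-2\ln(\varepsilon)/\rho.$

```python
import math

rho = 2.0                     # hypothetical slope of chi at the crossing point

def chi(t):
    return rho * abs(t)       # chi vanishes linearly at t = 0

def truncated_integral(delta, a=1.0, n=400000):
    # midpoint rule for the integral of dt/chi(t) over delta < |t| < a
    h = (a - delta) / n
    return 2 * sum(1.0 / chi(delta + (i + 0.5) * h) for i in range(n)) * h

# ratio of the truncated integral to the predicted rate -2*ln(delta)/rho
ratios = [truncated_integral(d) / (-2 * math.log(d) / rho)
          for d in (1e-2, 1e-3, 1e-4)]
```

As the cutoff shrinks, the ratios stay at 1: the divergence of the integral is purely logarithmic, with coefficient $2/\rho$.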
Let $\omega$ be a one-form that vanishes on $\Delta$ and that is 1 on $\dot{\Gamma}$, defined up to multiplication by a function which is 1 along $\Gamma.$ Along $\Gamma,$ the restriction 2-form $d\omega_{|\Delta}$ can be made into a skew-symmetric endomorphism $A(\Gamma(t))$ of $\Delta$ (skew-symmetric with respect to the scalar product over $\Delta),$ by duality: $<A(\Gamma(t))X,Y>=d\omega(X,Y).$ Let $\chi(t)$ denote the modulus of the eigenvalues of $A(\Gamma(t)).$ We have the following: \[dim3\]1. If $r=0,$ $MC(\varepsilon)\simeq\frac{2}{\varepsilon^{2}}{\displaystyle\int\limits_{\Gamma}} \frac{dt}{\chi(t)}.\ $At points where $\chi(t)\rightarrow+\infty,$ the formula is convergent. 2. If $r\neq0,$ $MC(\varepsilon)\simeq-2\frac{\ln(\varepsilon)}{\varepsilon^{2}}\sum_{i=1}^{r}\frac{1}{\rho(t_{i})},$   where $\rho(t)=\left|\frac{d\chi(t)}{dt}\right|.$ 3. $E(\varepsilon)=2\pi MC(\varepsilon).$ Let us describe the asymptotic optimal syntheses. They are shown in Figures \[Figcontact\], \[figmar\]. [M1AOH502]{} Figure \[Figcontact\] concerns the case $r=0$ (everywhere contact type). The points where the distribution $\Delta$ is not transversal to $\Gamma$ are omitted (again, they do not change anything). Hence $\Delta$ is also transversal to the cylinders $C_{\varepsilon}$, for $\varepsilon$ small. Therefore, $\Delta$ defines (up to sign) a vector field $X_{\varepsilon}$ on $C_{\varepsilon},$ tangent to $\Delta,$ that can be chosen of length 1. The asymptotic optimal synthesis consists of: 1. reaching $C_{\varepsilon}$ from $\Gamma(0),$ 2. following a trajectory of $X_{\varepsilon},$ 3. joining $\Gamma(t).\ $Steps 1 and 3 cost $2\varepsilon,$ which is negligible w.r.t. the full metric complexity.
To get the optimal synthesis for the interpolation entropy, one has to make the same construction, but starting from a subriemannian cylinder $C_{\varepsilon}^{\prime}$ tangent to $\Gamma.$ In normal coordinates, in that case, the $x$-trajectories are just circles, and the corresponding optimal controls are just trigonometric functions, with period $\frac{2\pi}{\varepsilon}.$ Figure \[figmar\] concerns the case $r\neq0$ (crossing the Martinet surface). At a Martinet point, the vector field $X_{\varepsilon}$ has a limit cycle, which is not tangent to the distribution. The asymptotic optimal strategy consists of: a. following a trajectory of $X_{\varepsilon}$ till reaching the height of the center of the limit cycle, b. crossing the cylinder, with a negligible cost $2\varepsilon,$ c. following a trajectory of the opposite vector field $-X_{\varepsilon}.$ The strategy for the entropy is similar, but using the tangent cylinder $C_{\varepsilon}^{\prime}.$ [M1AOH503]{} The one-step bracket-generating case\[onestep\] ----------------------------------------------- For corank $k\leq3,$ the situation is very similar to the 3-dimensional case. It can be completely reduced to it. For details, see [@GZ2]. At this point, a strange fact appears: there is the limit corank $k=3.$ Only for $k>3$ do new phenomena appear. Let us now explain the reason for this. Let us consider the following mapping $\mathcal{B}_{\xi}:\Delta_{\xi}\times\Delta_{\xi}\rightarrow T_{x}M/\Delta_{\xi},$ $(X,Y)\rightarrow\lbrack X,Y]+\Delta_{\xi}.\ $It is a well-defined **tensor mapping**, which means that it actually applies to vectors (and not to vector fields, as one would expect from the definition).
This is due to the following formula, for a one-form $\omega:$ $\ d\omega(X,Y)=X\cdot\omega(Y)-Y\cdot\omega(X)-\omega([X,Y]).$ Let us call $I_{\xi}$ the image by $\mathcal{B}_{\xi}$ of the product of two unit balls in $\Delta_{\xi}.\ $The following holds: \[convexity\] For a generic $\mathcal{P}$, for $k\leq3,$ the sets $I_{\Gamma(t)}$ are **convex**. This theorem is shown in [@GZ2], with the consequences that we will state just below. This is no longer true for $k>3,$ the first catastrophic case being the case 10-4 (a $p=4$ distribution in $\mathbb{R}^{10}).$ The intermediate cases $k=4,5$ in dimension 10 are interesting, since on some open subsets of $\Gamma,$ the convexity property may hold or not. These cases are studied in the paper [@ano]. The main consequence of this convexity property is that everything reduces (outside singularities where the logarithmic lemma applies) to the 3-dimensional contact case, as is shown in the paper [@GZ2]. We briefly summarize the results. Consider the one-forms $\omega$ that vanish on $\Delta$ and that are 1 on $\dot{\Gamma},$ and again, by the duality w.r.t. the metric over $\Delta,$ define $d\omega_{|\Delta}(X,Y)=<AX,Y>,$ for vector fields $X,Y$ in $\Delta.$ Now we have, along $\Gamma,$ a ($k-1)$-parameter affine family of skew-symmetric endomorphisms $A_{\Gamma(t)}$ of $\Delta_{\Gamma(t)}.$ Say, $A_{\Gamma(t)}(\lambda)=A_{\Gamma(t)}^{0}+{\displaystyle\sum\limits_{i=1}^{k-1}} \lambda_{i}A_{\Gamma(t)}^{i}.$ Set $\chi(t)=\inf_{\lambda}||A_{\Gamma (t)}(\lambda)||=||A_{\Gamma(t)}(\lambda^{\ast}(t))||.$ Outside isolated points of $\Gamma$ (that count for nothing in the metric complexity or in the entropy), the $t$-one-parameter family $A_{\Gamma (t)}(\lambda^{\ast}(t))$ can be smoothly block-diagonalized (with $2\times2$ blocks), using a gauge transformation along $\Gamma$.
After this gauge transformation, the 2-dimensional eigenspace corresponding to the largest (in modulus) eigenvalue of $A_{\Gamma(t)}(\lambda^{\ast}(t))$ corresponds to the first two coordinates in the distribution, and to the first two controls. In the asymptotic optimal synthesis, all other controls are put to zero \[here the convexity property is used\], and the picture of the asymptotic optimal synthesis is exactly that of the 3-dimensional contact case. We still have the formulas: $$MC(\varepsilon)\simeq\frac{2}{\varepsilon^{2}}{\displaystyle\int\limits_{\Gamma}} \frac{dt}{\chi(t)},\text{ \ \ }E(\varepsilon)=2\pi MC(\varepsilon).$$ The case $k>3$ was first treated in [@GZ3] in the 10-dimensional case, and was completed in general in [@jak]. In that case, the situation does not reduce to the 3-dimensional contact case: the optimal controls, in the asymptotic optimal synthesis for the nilpotent approximation, are still trigonometric controls, but with different periods that are successive integer multiples of a given basic period. New invariants $\lambda_{\theta(t)}^{j}$ appear, and the formula for the entropy is:$$E(\varepsilon)\simeq\frac{2\pi}{\varepsilon^{2}}\int_{0}^{T}\frac{\sum _{j=1}^{r}j\lambda_{\theta}^{j}}{\sum_{j=1}^{r}(\lambda_{\theta}^{j})^{2}}d\theta,$$ the optimal controls being of the form: $$\begin{aligned} u_{2j-1}(t) & =-\sqrt{\frac{j\lambda_{\theta(t)}^{j}}{\sum_{j=1}^{r}j\lambda_{\theta(t)}^{j}}}\sin\left(\frac{2\pi jt}{\varepsilon}\right),\label{controls}\\ u_{2j}(t) & =\sqrt{\frac{j\lambda_{\theta(t)}^{j}}{\sum_{j=1}^{r}j\lambda_{\theta(t)}^{j}}}\cos\left(\frac{2\pi jt}{\varepsilon}\right),\text{ \ \ }j=1,...,r,\nonumber\\ u_{2r+1}(t) & =0\text{ if }p\text{ is odd}.\nonumber\end{aligned}$$ These last formulas hold in the free case only (i.e. the case where the corank $k=\frac{p(p-1)}{2},$ the dimension of the second homogeneous component of the free Lie algebra with $p$ generators). The non-free case is more complicated (see [@jak]).
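Whatever the values of the invariants, the controls (\[controls\]) are arclength-parametrized: the squared norms sum to $\sum_{j}j\lambda^{j}/\sum_{j}j\lambda^{j}=1.$ A quick numerical sketch of ours (the values of $\lambda^{j}$, $r=3$, and $\varepsilon$ below are arbitrary sample data, not taken from the papers):

```python
import math

lam = [0.7, 0.2, 0.1]                 # hypothetical invariants lambda^j, j = 1..r
eps = 0.05
S = sum((j + 1) * lam[j] for j in range(len(lam)))   # sum_j j * lambda^j

def controls(t):
    # the trigonometric controls of the free case, frequencies j/eps
    us = []
    for j in range(1, len(lam) + 1):
        a = math.sqrt(j * lam[j - 1] / S)
        us.append(-a * math.sin(2 * math.pi * j * t / eps))
        us.append(a * math.cos(2 * math.pi * j * t / eps))
    return us

# squared Euclidean norm of the control vector at a few sample times
norms = [sum(u * u for u in controls(t)) for t in (0.0, 0.013, 0.037)]
```

Each pair $(u_{2j-1},u_{2j})$ contributes $j\lambda^{j}/S$ to the squared norm, so the total is identically 1, independently of $t$.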
To prove all the results in this section, one has to proceed as follows: 1. use the theorem of reduction to the nilpotent approximation (\[eqnil\]), and 2. use the Pontryagin maximum principle on the normal form of the nilpotent approximation, in normal coordinates. The 2-control case, in $\mathbb{R}^{4}$ and $\mathbb{R}^{5}.$ ------------------------------------------------------------- These cases correspond respectively to the car with a trailer (Example \[ctrl\]) and the ball on a plate (Example \[bpln\]). We also use Theorem \[eqnil\] of reduction to the nilpotent approximation, and we consider the normal forms $\mathcal{\hat{P}}_{4,2},$ $\mathcal{\hat{P}}_{5,2}$ of Section \[nffs\]. In both cases, we change the variable $w$ for $\tilde{w}$ such that $d\tilde{w}=\frac{dw}{\delta(w)}.\ $ We look for arclength-parametrized trajectories of the nilpotent approximation (i.e. $(u_{1})^{2}+(u_{2})^{2}=1)$ that start from $\Gamma(0),$ and reach $\Gamma$ in fixed time $\varepsilon,$ maximizing ${\displaystyle\int\limits_{0}^{\varepsilon}} \dot{w}(\tau)d\tau.$ Abnormal extremals do not come into the picture, and optimal curves correspond to the hamiltonian $$H=\sqrt{(PF_{1})^{2}+(PF_{2})^{2}},$$ where $P$ is the adjoint vector. It turns out that, in our normal coordinates, the same trajectories are optimal for both the $4$-$2$ and the $5$-$2$ case (one just has to notice that the solution of the $4$-$2$ case meets the extra interpolation condition corresponding to the 5-2 case).
Setting as usual $u_{1}=\cos(\varphi)=PF_{1},u_{2}=\sin(\varphi)=PF_{2},$ we get $\dot{\varphi}=P[F_{1},F_{2}],\ddot{\varphi}=-P[F_{1},[F_{1},F_{2}]]PF_{1}-P[F_{2},[F_{1},F_{2}]]PF_{2}.$ At this point, we have to notice that only the components $P_{x_{1}},P_{x_{2}}$ of the adjoint vector $P$ are not constant (the hamiltonian in the nilpotent approximation depends only on the $x$-variables); therefore, $P[F_{1},[F_{1},F_{2}]]$ and $P[F_{2},[F_{1},F_{2}]]$ are constant (the third brackets are also constant vector fields). Hence, $\ddot{\varphi}=\lambda\cos(\varphi)+\mu\sin(\varphi)=\lambda\dot{x}_{1}+\mu\dot{x}_{2}$ for appropriate constants $\lambda,\mu.$ It follows that, for another constant $k,$ we have, for the optimal curves of the nilpotent approximation, in normal coordinates $x_{1},x_{2}:$$$\begin{aligned} \dot{x}_{1} & =\cos(\varphi),\dot{x}_{2}=\sin(\varphi),\\ \dot{\varphi} & =k+\lambda x_{1}+\mu x_{2}.\end{aligned}$$ \[remcurv\]1. This means that we are looking for curves in the $x_{1},x_{2}$ plane whose curvature is an affine function of the position. 2. In the two-step bracket-generating case (contact case), optimal curves were circles, i.e. curves of constant curvature. 3. The conditions of $\varepsilon$-interpolation of $\Gamma$ say that these curves must be periodic (there will be more details on this point in the next section), that the area of a loop must be zero $(y(\varepsilon)=0),$ and finally (in the 5-2 case) that another moment must be zero. It is easily seen that such a curve, meeting these interpolation conditions, must be an elliptic curve of elastica type. The periodicity and vanishing-area requirements imply that it is the unique periodic elastic curve shown on Figure \[elastica\], parametrized in a certain way.
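The affine-curvature system above is a pendulum in disguise: since $\ddot{\varphi}=\lambda\dot{x}_{1}+\mu\dot{x}_{2}=\lambda\cos(\varphi)+\mu\sin(\varphi),$ the quantity $E=\frac{1}{2}\dot{\varphi}^{2}-\lambda\sin(\varphi)+\mu\cos(\varphi)$ is a first integral. A small numerical sketch of ours (the constants $k,\lambda,\mu$ and the step size are arbitrary sample values):

```python
import math

k, lam, mu = 0.5, 1.0, -0.3          # arbitrary sample constants

def rhs(x1, x2, phi):
    # dx1/dt = cos(phi), dx2/dt = sin(phi), dphi/dt = k + lam*x1 + mu*x2
    return math.cos(phi), math.sin(phi), k + lam * x1 + mu * x2

def energy(x1, x2, phi):
    dphi = k + lam * x1 + mu * x2
    return 0.5 * dphi * dphi - lam * math.sin(phi) + mu * math.cos(phi)

def rk4(state, h):
    def add(s, d, c):
        return tuple(si + c * di for si, di in zip(s, d))
    k1 = rhs(*state)
    k2 = rhs(*add(state, k1, h / 2))
    k3 = rhs(*add(state, k2, h / 2))
    k4 = rhs(*add(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.0, 0.0, 0.1)
E0 = energy(*state)
for _ in range(2000):                # integrate up to t = 2 with h = 1e-3
    state = rk4(state, 1e-3)
drift = abs(energy(*state) - E0)
```

The drift of $E$ stays at the level of the integrator error, which is the pendulum-energy mechanism underlying the elastica solutions discussed next.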
[M1AOH504]{} The formulas are, in terms of the standard Jacobi elliptic functions:$$\begin{aligned} u_{1}(t) & =1-2dn(K(1+\frac{4t}{\varepsilon}))^{2},\\ u_{2}(t) & =-2dn(K(1+\frac{4t}{\varepsilon}))sn(K(1+\frac{4t}{\varepsilon }))\sin(\frac{\varphi_{0}}{2}),\end{aligned}$$ where $\varphi_{0}\approx130{{}^\circ}$ (following [@Love], p. 403), or more precisely $\varphi_{0}=130.692{{}^\circ}$ following Mathematica$^{\textregistered},$ with $k=\sin(\frac{\varphi_{0}}{2}),$ and $K(k)$ is the quarter period of the Jacobi elliptic functions. The trajectory on the $x_{1},x_{2}$ plane, shown on Figure \[elastica\], has equations: $$\begin{aligned} x_{1}(t) & =-\frac{\varepsilon}{4K}[\frac{-4Kt}{\varepsilon}+2(Eam(\frac {4Kt}{\varepsilon}+K)-Eam(K))],\\ x_{2}(t) & =k\frac{\varepsilon}{2K}cn(\frac{4Kt}{\varepsilon}+K).\end{aligned}$$ In Figure \[fig2\], one can clearly see, at the contact point of the ball with the plane, a trajectory which is a “repeated small deformation” of this basic trajectory. The formula for the entropy is, in both the 4-2 and 5-2 cases:$$E(\varepsilon)=\frac{3}{2\sigma\varepsilon^{3}}\int_{\Gamma}\frac{dt}{\delta(t)},$$ where $\sigma$ is a universal constant, $\sigma\approx0.00580305.$ Details of computations on the 4-2 case can be found in [@GZ3], and in [@ano] for the 5-2 case.
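As a sanity check, these controls are indeed arclength-parametrized: using only the standard identity $dn^{2}=1-k^{2}sn^{2},$ the quantity $u_{1}^{2}+u_{2}^{2}$ collapses to 1 identically. A sketch of ours (with $s$ standing for $sn^{2}$ at an arbitrary point of the period):

```python
import math

phi0 = math.radians(130.692)          # the angle quoted in the text
k2 = math.sin(phi0 / 2) ** 2          # square of the elliptic modulus k

def norm_sq(s):
    # s plays the role of sn^2; dn^2 = 1 - k^2 sn^2 is the standard identity
    dn2 = 1.0 - k2 * s
    u1 = 1.0 - 2.0 * dn2
    u2_sq = 4.0 * dn2 * s * k2        # (-2 * dn * sn * sin(phi0/2))^2
    return u1 * u1 + u2_sq

vals = [norm_sq(s) for s in (0.0, 0.1, 0.37, 0.8, 1.0)]
```

Algebraically, $(1-2dn^{2})^{2}+4\,dn^{2}sn^{2}k^{2}=(2k^{2}sn^{2}-1)^{2}+4k^{2}sn^{2}(1-k^{2}sn^{2})=1,$ whatever the modulus.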
The ball with a trailer\[new\] ============================== We start by using Theorem \[eqnil\] to reduce to the nilpotent approximation along $\Gamma:$ $$\begin{aligned} (\mathcal{\hat{P}}_{6,2})\text{ \ \ }\dot{x}_{1} & =u_{1},\dot{x}_{2}=u_{2},\dot{y}=(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\\ \dot{z}_{1} & =x_{2}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\dot{z}_{2}=x_{1}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\nonumber\\ \text{\ }\dot{w} & =Q_{w}(x_{1},x_{2})(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}).\nonumber\end{aligned}$$ By Lemma \[balltrailerratio\], we can consider that $$Q_{w}(x_{1},x_{2})=\delta(w)((x_{1})^{2}+(x_{2})^{2})\label{mainf}$$ where $\delta(w)$ is **the main invariant**. In fact, it is the only invariant for the nilpotent approximation along $\Gamma.$ Moreover, if we reparametrize $\Gamma$ by setting $dw:=\frac{dw}{\delta(w)},$ we can consider that $\delta(w)=1.$ Then, we want to maximize $\int\dot{w}dt$ in fixed time $\varepsilon,$ with the interpolation conditions: $x(0)=0,y(0)=0,z(0)=0,w(0)=0,$ $x(\varepsilon )=0,y(\varepsilon)=0,z(\varepsilon)=0.$ From Lemma \[periodl\] in the appendix, we know that the optimal trajectory is smooth and periodic (of period $\varepsilon).$ Clearly, the optimal trajectory also has to be a length minimizer, so we have to consider the usual hamiltonian for length: $H=\frac{1}{2}((P.F_{1})^{2}+(P.F_{2})^{2}),$ in which $P=(p_{1},...,p_{6})$ is the adjoint vector. It is easy to see that the abnormal extremals do not come into the picture (they cannot be optimal with our additional interpolation conditions), and in fact, we will show that **the hamiltonian system corresponding to the hamiltonian $H$ is integrable**. This integrability property is no longer true in the general 6-2 case. **It holds only for the ball with a trailer.** As usual, we work in Poincaré coordinates, i.e.
we consider level $\frac{1}{2}$ of the hamiltonian $H,$ and we set: $$u_{1}=PF=\sin(\varphi),\text{ \ \ }u_{2}=PG=\cos(\varphi).$$ Differentiating twice, we get $$\dot{\varphi}=P[F,G],\text{ }\ddot{\varphi}=-PFFG.PF-PGFG.PG,$$ where $FFG=[F,[F,G]]$ and $GFG=[G,[F,G]].\ $ We set $\lambda=-PFFG,$ $\mu=-PGFG.\ $We get that: $$\ddot{\varphi}=\lambda\sin(\varphi)+\mu\cos(\varphi).\label{phi2}$$ Now, we compute $\dot{\lambda}$ and $\dot{\mu}.$ We get, with similar notations as above for the brackets (we bracket from the left): $$\begin{aligned} \dot{\lambda} & =PFFFG.PF+PGFFG.PG,\\ \dot{\mu} & =PFGFG.PF+PGGFG.PG,\end{aligned}$$ and computing the brackets, we see that $GFFG=FGFG=0.\ $Also, since the hamiltonian does not depend on $y,z,w,$ we get that $p_{3},p_{4},p_{5},p_{6}$ are constants. Computing the brackets $FFG$ and $GFG$ , we get that $$\lambda=\frac{3}{2}p_{4}+p_{6}x_{1},\text{ \ }\mu=\frac{3}{2}p_{5}+p_{6}x_{2},$$ and then, $\dot{\lambda}=p_{6}\sin(\varphi)$ and $\dot{\mu}=p_{6}\cos (\varphi).$ Then, by (\[phi2\]), $\ddot{\varphi}=\frac{\lambda\dot{\lambda}}{p_{6}}+\frac{\mu\dot{\mu}}{p_{6}},$ and finally:$$\begin{aligned} \dot{x}_{1} & =\sin(\varphi),\text{ \ \ }\dot{x}_{2}=\cos(\varphi ),\label{sysint}\\ \dot{\varphi} & =K+\frac{1}{2p_{6}}(\lambda^{2}+\mu^{2}),\nonumber\\ \dot{\lambda} & =p_{6}\sin(\varphi),\text{ \ }\dot{\mu}=p_{6}\cos (\varphi).\nonumber\end{aligned}$$ Setting $\omega=\frac{\lambda}{p_{6}},\delta=\frac{\mu}{p_{6}},$ we obtain:$$\begin{aligned} \dot{\omega} & =\sin(\varphi),\text{ \ }\dot{\delta}=\cos(\varphi),\\ \dot{\varphi} & =K+\frac{p_{6}}{2}(\omega^{2}+\delta^{2}).\end{aligned}$$ It means that the plane curve $(\omega(t),\delta(t))$ has a curvature which is a quadratic function of the distance to the origin. Then, the optimal curve ($x_{1}(t),x_{2}(t))$ projected to the horizontal plane of the normal coordinates has a curvature which is a quadratic function of the distance to some point. 
Following the computation (\[eqcur\]) in the appendix, this system of equations is integrable. Summarizing all the results, we get the following theorem. \[mainth\](**asymptotic optimal synthesis for the ball with a trailer**) The asymptotic optimal synthesis is an $\varepsilon$-modification of the one of the nilpotent approximation, which has the following properties, in projection to the horizontal plane $(x_{1},x_{2})$ in normal coordinates: 1. It is a closed smooth periodic curve, whose curvature is a quadratic function of the position — in fact a function of the squared distance to some point. 2. The area and the 2$^{nd}$ order moments $\int_{\Gamma}x_{1}(x_{2}dx_{1}-x_{1}dx_{2})$ and $\int_{\Gamma}x_{2}(x_{2}dx_{1}-x_{1}dx_{2})$ are zero. 3. The entropy is given by the formula: $E(\varepsilon)=\frac{\sigma }{\varepsilon^{4}}\int_{\Gamma}\frac{dw}{\delta(w)},$ where $\delta(w)$ is the main invariant from (\[mainf\]), and $\sigma$ is a universal constant. In fact we can go a little bit further and integrate the system (\[sysint\]) explicitly. Set $\bar{\lambda}=\cos(\varphi)\lambda-\sin(\varphi)\mu,$ $\bar{\mu}=\sin(\varphi)\lambda+\cos(\varphi)\mu.\ $We get:$$\begin{aligned} \frac{d\bar{\lambda}}{dt} & =-\bar{\mu}(K+\frac{1}{2p_{6}}(\bar{\lambda}^{2}+\bar{\mu}^{2})),\\ \frac{d\bar{\mu}}{dt} & =p_{6}+\bar{\lambda}(K+\frac{1}{2p_{6}}(\bar{\lambda }^{2}+\bar{\mu}^{2})).\end{aligned}$$ This is a 2-dimensional (integrable) hamiltonian system. The hamiltonian is:$$H_{1}=-p_{6}\bar{\lambda}-\frac{p_{6}}{2}(K+\frac{1}{2p_{6}}(\bar{\lambda }^{2}+\bar{\mu}^{2}))^{2}.$$ This hamiltonian system is therefore integrable, and solutions can be expressed in terms of hyperelliptic functions. A little numerics now allows us to show, in Figure \[fig62\], the optimal $x$-trajectory in the horizontal plane of the normal coordinates.
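One can check the conservation of $H_{1}$ numerically. In the sketch below (our own illustration; $p_{6},$ $K,$ the initial point, and the step size are arbitrary sample values), a Runge-Kutta integration of the $(\bar{\lambda},\bar{\mu})$ system leaves $H_{1}$ constant up to integrator error:

```python
p6, K = 2.0, 0.3                      # arbitrary sample constants

def Phi(l, m):
    # the common factor K + (lambda^2 + mu^2) / (2 p6)
    return K + (l * l + m * m) / (2 * p6)

def rhs(l, m):
    # d(lambda_bar)/dt and d(mu_bar)/dt
    return -m * Phi(l, m), p6 + l * Phi(l, m)

def H1(l, m):
    return -p6 * l - (p6 / 2) * Phi(l, m) ** 2

def rk4(l, m, h):
    k1 = rhs(l, m)
    k2 = rhs(l + h/2*k1[0], m + h/2*k1[1])
    k3 = rhs(l + h/2*k2[0], m + h/2*k2[1])
    k4 = rhs(l + h*k3[0], m + h*k3[1])
    return (l + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            m + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

l, m = 0.5, -0.2
h0 = H1(l, m)
for _ in range(20000):                # integrate up to t = 20 with h = 1e-3
    l, m = rk4(l, m, 1e-3)
drift = abs(H1(l, m) - h0)
```

Indeed, $\dot{\Phi}=\bar{\mu}$ along the flow, so $\dot{H}_{1}=-p_{6}\dot{\bar{\lambda}}-p_{6}\Phi\dot{\Phi}=p_{6}\bar{\mu}\Phi-p_{6}\Phi\bar{\mu}=0.$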
[M1AOH505]{} In Figure \[figmov\], we show the motion of the ball with a trailer on the plane (motion of the contact point between the ball and the plane). Here, the problem is to move along the $x$-axis, keeping constant the frame attached to the ball and the angle of the trailer. [M1AOH506]{} Expectations and conclusions\[concl\] ===================================== Some movies of minimum entropy for the ball rolling on a plane and the ball with a trailer are visible on the website \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*. Universality of some pictures in normal coordinates --------------------------------------------------- Our first conclusion is the following: there are certain universal pictures for the motion planning problem, in corank at most 3, and in rank 2, with 4 brackets at most (could be 5 brackets at a singularity, with the logarithmic lemma). These figures are: in the two-step bracket-generating case, a circle; for the third bracket, the periodic elastica; for the 4$^{th}$ bracket, the plane curve of Figure \[fig62\]. They are periodic plane curves whose curvature is, respectively: a constant, a linear function of the position, a quadratic function of the position. [M1AOH507]{} This is, as shown on Figure \[global\], the clear beginning of a series. Robustness ---------- As one can see, in many cases (2 controls, or corank $k\leq3),$ our strategy is extremely robust in the following sense: the asymptotic optimal syntheses do not depend, from the qualitative point of view, on the metric chosen. They depend only on the number of brackets needed to generate the space. The practical importance of normal coordinates ---------------------------------------------- The main practical problem of implementation of our strategy comes with the $\varepsilon$-modifications. How to compute them, and how to implement them? In fact, the $\varepsilon$-modifications count at higher order in the entropy.
But, if not applied, they may cause deviations that are not negligible. The high order w.r.t. $\varepsilon$ in the estimates of the error between the original system and its nilpotent approximation (Formulas \[ff0\], \[ff1\], \[ff2\], \[ff3\]) makes these deviations very small. This is why the use of our concept of a nilpotent approximation along $\Gamma,$ based upon normal coordinates, is very efficient in practice. On the other hand, when a correction appears to be needed (after a non-negligible deviation), it corresponds to brackets of lower order. For example, in the case of the ball with a trailer (4$^{th}$ bracket), the $\varepsilon$-modification corresponds to brackets of order 2 or 3. The optimal pictures corresponding to these orders can still be used to perform the $\varepsilon$-modifications. Final conclusion ---------------- This approach, to optimally approximate nonadmissible paths of nonholonomic systems, looks very efficient, and in a sense, universal. Of course, the theory is not complete, but the cases under consideration (first, 2-step bracket-generating, and second, two controls) correspond to many practical situations. But there is still a lot of work to do in order to cover all interesting cases. However, the methodology to go ahead is rather clear. Appendix\[app\] =============== Appendix 1: Normal form in the 6-2 case\[apnf62\] ------------------------------------------------- We start from the general normal form (\[nf2\]) in normal coordinates:$$\begin{aligned} \dot{x}_{1} & =(1+(x_{2})^{2}\beta)u_{1}-x_{1}x_{2}\beta u_{2},\text{ \ }\\ \text{\ }\dot{x}_{2} & =(1+(x_{1})^{2}\beta)u_{2}-x_{1}x_{2}\beta u_{1},\\ \text{ \ }\dot{y}_{i} & =(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})\gamma_{i}(y,w),\text{ }\\ \text{\ }\dot{w} & =(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})\delta(y,w)\end{aligned}$$ We will make a succession of changes of parametrization of the surface $\mathcal{S}$ (w.r.t. which normal coordinates were constructed).
These coordinate changes will always preserve the fact that $\Gamma(t)$ is the point $x=0,y=0,w=t.$ Recall that $\beta$ vanishes on $\mathcal{S},$ and since $x$ has order $1,$ we can already write on $T_{\varepsilon}$: $\dot{x}=u+O(\varepsilon^{3}).$ $\ $One of the $\gamma_{i}$’s (say $\gamma_{1})$ has to be nonzero (if not, $\Gamma$ is tangent to $\Delta^{\prime}).$ Then, $y_{1}$ has order 2 on $T_{\varepsilon}.$ Set, for $i>1,$ $\tilde{y}_{i}=y_{i}-\frac{\gamma_{i}}{\gamma_{1}}y_{1}.\ $Differentiating, we get that $\frac{d\tilde{y}_{i}}{dt}=\dot{y}_{i}-\frac{\gamma_{i}}{\gamma_{1}}\dot{y}_{1}+O(\varepsilon^{2}),$ and $z_{1}=\tilde{y}_{2},$ $z_{2}=\tilde{y}_{3}$ have order 3. We also set $w:=w-\frac{\delta}{\gamma_{1}}y_{1},$ and we are at the following point:$$\begin{aligned} \dot{x} & =u+O(\varepsilon^{3}),\text{ \ }\dot{y}=(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})\gamma_{1}(w)+O(\varepsilon^{2}),\\ \dot{z}_{i} & =(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})L_{i}(w).x+O(\varepsilon^{3}),\\ \dot{w} & =(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})\delta (w).x+O(\varepsilon^{3}),\end{aligned}$$ where $L_{i}(w).x,$ $\delta(w).x$ are linear in $x.$ The function $\gamma _{1}(w)$ can be put to 1 in the same way by setting $y:=\frac{y}{\gamma _{1}(w)}.$ Now let $T(w)$ be an invertible 2$\times2$ matrix.
Set $\tilde{z}=T(w)z.$ It is easy to see that we can choose $T(w)$ so that we get: $$\begin{aligned} \dot{x} & =u+O(\varepsilon^{3}),\text{ \ }\dot{y}=(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})+O(\varepsilon^{2}),\\ \dot{z}_{i} & =(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})x_{i}+O(\varepsilon^{3}),\\ \dot{w} & =(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})\delta (w).x+O(\varepsilon^{3}).\end{aligned}$$ Another change, of the form $w:=w+L(w).x,$ where $L(w).x$ is linear in $x,$ kills $\delta(w)$ and brings us to $\dot{w}=(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})O(\varepsilon^{2}).\ $This $O(\varepsilon^{2})$ can be of the form $Q_{w}(x)+h(w)y+O(\varepsilon^{3})$ where $Q_{w}(x)$ is quadratic in $x.$ If we kill $h(w),$ we get the expected result. This is done with a change of coordinates of the form: $w:=w+\varphi(w)\frac{y^{2}}{2}.$ Appendix 2: Plane curves whose curvature is a function of the distance to the origin\[curvaturedist\] ----------------------------------------------------------------------------------------------------- This result was already known, see [@Singer]. However, we provide here a very simple proof. Consider a plane curve $(x(t),y(t))$ whose curvature is a function of the distance from the origin, i.e.:$$\dot{x}=\cos(\varphi),\dot{y}=\sin(\varphi),\dot{\varphi}=k(x^{2}+y^{2}).\label{eqcur}$$ Equation \[eqcur\] is integrable. Set $\bar{x}=x\cos(\varphi)+y\sin(\varphi),$ $\bar{y}=-x\sin(\varphi )+y\cos(\varphi).$ Then $k(\bar{x}^{2}+\bar{y}^{2})=k(x^{2}+y^{2}).$ Just computing, one gets: $$\begin{aligned} \frac{d\bar{x}}{dt} & =1+\bar{y}k(\bar{x}^{2}+\bar{y}^{2}),\label{ham}\\ \frac{d\bar{y}}{dt} & =-\bar{x}k(\bar{x}^{2}+\bar{y}^{2}).\nonumber\end{aligned}$$ We now show that (\[ham\]) is a hamiltonian system. Since we are in dimension 2, it is then always Liouville-integrable.
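In fact, the first integral can be written down explicitly (a check of ours, not spelled out in the text): let $K$ be any antiderivative of $k.$ Then, along the flow of (\[ham\]), $$H(\bar{x},\bar{y})=\bar{y}+\frac{1}{2}K(\bar{x}^{2}+\bar{y}^{2}),\qquad\frac{dH}{dt}=\dot{\bar{y}}+k\,(\bar{x}\dot{\bar{x}}+\bar{y}\dot{\bar{y}})=-\bar{x}k+k\bigl(\bar{x}(1+\bar{y}k)-\bar{y}\bar{x}k\bigr)=0,$$ so the level curves of $H$ are exactly the trajectories.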
Looking for a hamiltonian $H$ with $\frac{d\bar{x}}{dt}=\frac{\partial H}{\partial\bar{y}},$ $\frac{d\bar{y}}{dt}=-\frac{\partial H}{\partial\bar{x}},$ we have to solve the system of PDE’s: $$\begin{aligned} \frac{\partial H}{\partial\bar{y}} & =1+\bar{y}k(\bar{x}^{2}+\bar{y}^{2}),\\ \frac{\partial H}{\partial\bar{x}} & =\bar{x}k(\bar{x}^{2}+\bar{y}^{2}).\end{aligned}$$ But the Schwartz integrability conditions are satisfied: $\frac{\partial^{2}H}{\partial\bar{x}\partial\bar{y}}=\frac{\partial^{2}H}{\partial\bar {y}\partial\bar{x}}=2\bar{x}\bar{y}k^{\prime}.$ Appendix 3: periodicity of the optimal curves in the 6-2 case \[periodicity\] ----------------------------------------------------------------------------- We consider the nilpotent approximation $\mathcal{\hat{P}}_{6,2}$ given in formula \[nil62\]: $$\begin{aligned} (\mathcal{\hat{P}}_{6,2})\text{ \ \ }\dot{x}_{1} & =u_{1},\dot{x}_{2}=u_{2},\dot{y}=(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\\ \dot{z}_{1} & =x_{2}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\dot{z}_{2}=x_{1}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\nonumber\\ \text{\ }\dot{w} & =Q_{w}(x_{1},x_{2})(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}).\nonumber\end{aligned}$$ We consider the particular case of the ball with a trailer. Then, according to Lemma \[balltrailerratio\], the ratio $r(\xi)=1.$ It follows that the last equation can be rewritten $\dot{w}=\delta (w)((x_{1})^{2}+(x_{2})^{2})(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})$ for some never-vanishing function $\delta(w)$ (vanishing would contradict the full rank of $\Delta^{(4)}$).
We can change the coordinate $w$ for $\tilde{w}$ such that $d\tilde{w}=\frac{dw}{\delta(w)}.$ We finally get: $$\begin{aligned} (\mathcal{\hat{P}}_{6,2})\text{ \ \ }\dot{x}_{1} & =u_{1},\dot{x}_{2}=u_{2},\dot{y}=(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\label{finalnil}\\ \dot{z}_{1} & =x_{2}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\dot{z}_{2}=x_{1}(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2}),\nonumber\\ \ \dot{w} & =((x_{1})^{2}+(x_{2})^{2})(\frac{x_{2}}{2}u_{1}-\frac{x_{1}}{2}u_{2})\nonumber\end{aligned}$$ This is a right-invariant system on $\mathbb{R}^{6}$ with coordinates $\xi=(\varsigma,w)=(x,y,z,w),$ for a certain nilpotent Lie group structure over $\mathbb{R}^{6}$ (denoted by $G).$ It is easily seen (just expressing right invariance) that the group law is of the form $(\varsigma_{2},w_{2})(\varsigma_{1},w_{1})=$ $(\varsigma_{1}\ast\varsigma_{2},w_{1}+w_{2}+\Phi(\varsigma_{1},\varsigma_{2})),$ where $\ast$ is the multiplication of another Lie group structure on $\mathbb{R}^{5},$ with coordinates $\varsigma$ (denoted by $G_{0}).$ In fact, $G$ is a central extension of $\mathbb{R}$ by $G_{0}.$ \[periodl\]The trajectories of (\[finalnil\]) that maximize ${\displaystyle\int} \dot{w}dt$ in fixed time $\varepsilon,$ with interpolating conditions $\varsigma(0)=\varsigma(\varepsilon)=0,$ have a periodic projection on $\varsigma$ (i.e. $\varsigma(t)$ is smooth and periodic of period $\varepsilon).$ 1. Due to the invariance of (\[finalnil\]) with respect to the $w$ coordinate, it is equivalent to consider the problem with the more restrictive terminal conditions $\varsigma(0)=\varsigma(\varepsilon)=0,$ $w(0)=0.$ 2. The scheme of this proof also works to show periodicity in the 4-2 and 5-2 cases. The idea for the proof was given to us by A. Agrachev. Let $(\varsigma,w_{1}),(\varsigma,w_{2})$ be initial and terminal points of an optimal solution of our problem.
By right translation by $(\varsigma^{-1},0),$ this trajectory is mapped into another trajectory of the system, with initial and terminal points $(0,w_{1}+\Phi(\varsigma,\varsigma^{-1}))$ and $(0,w_{2}+\Phi(\varsigma,\varsigma^{-1})).\ $Hence, this trajectory has the same value of the cost ${\displaystyle\int} \dot{w}dt.$ We see that the optimal cost is in fact independent of the $\varsigma$-coordinate of the initial and terminal condition. Therefore, the problem is the same as maximizing ${\displaystyle\int} \dot{w}dt$ but with the (larger) endpoint condition $\varsigma(0)=\varsigma(\varepsilon)$ (free). Now, we can apply the general transversality conditions of Theorem 12.15, page 188 of [@AS]. It says that the initial and terminal covectors $(p_{\varsigma}^{1},p_{w}^{1})$ and $(p_{\varsigma}^{2},p_{w}^{2})$ are such that $p_{\varsigma}^{1}=p_{\varsigma}^{2}.\ $This is enough to show periodicity. [99]{} A.A. Agrachev, H.E.A. Chakir, J.P. Gauthier, Subriemannian Metrics on R$^{3},$ in Geometric Control and Nonholonomic Mechanics, Mexico City 1996, pp. 29-76, Proc. Can. Math. Soc. 25, 1998. A.A. Agrachev, J.P. Gauthier, Subriemannian Metrics and Isoperimetric Problems in the Contact Case, in honor of L. Pontryagin, 90th birthday commemoration, Contemporary Maths, Tome 64, pp. 5-48, 1999 (Russian). English version: Journal of Mathematical Sciences, Vol 103, N${{}^\circ}6,$ pp. 639-663. A.A. Agrachev, J.P. Gauthier, On the subanalyticity of Carnot Caratheodory distances, Annales de l’Institut Henri Poincaré, AN 18, 3 (2001), pp. 359-382. A.A. Agrachev, Y. Sachkov, Control Theory from the Geometric Viewpoint, Springer Verlag Berlin Heidelberg, 2004. G. Charlot, Quasi Contact SR Metrics: Normal Form in $\mathbb{R}^{2n}$, Wave Front and Caustic in $\mathbb{R}^{4};$ Acta Appl. Math., 74, N${{}^\circ}3,$ pp. 217-263, 2002. H.E.A. Chakir, J.P. Gauthier, I.A.K. Kupka, Small Subriemannian Balls on R$^{3},$ Journal of Dynamical and Control Systems, Vol 2, N${{}^\circ}3,$ pp.
359-421, 1996. F.H. Clarke, Optimization and nonsmooth analysis, John Wiley & Sons, 1983. J.P. Gauthier, F. Monroy-Perez, C. Romero-Melendez, On Complexity and Motion Planning for Corank One SR Metrics, COCV, Vol 10, pp. 634-655, 2004. J.P. Gauthier, V. Zakalyukin, On the codimension one Motion Planning Problem, JDCS, Vol. 11, N${{}^\circ}1,$ January 2005, pp. 73-89. J.P. Gauthier, V. Zakalyukin, On the One-Step-Bracket-Generating Motion Planning Problem, JDCS, Vol. 11, N${{}^\circ}2,$ April 2005, pp. 215-235. J.P. Gauthier, V. Zakalyukin, Robot Motion Planning, a wild case, Proceedings of the Steklov Institute of Mathematics, Vol 250, pp. 56-69, 2005. J.P. Gauthier, V. Zakalyukin, On the motion planning problem, complexity, entropy, and nonholonomic interpolation, Journal of Dynamical and Control Systems, Vol. 12, N${{}^\circ}3$, July 2006. J.P. Gauthier, V. Zakalyukin, Entropy estimations for motion planning problems in robotics, Volume in honor of Dmitry Victorovich Anosov, Proceedings of the Steklov Mathematical Institute, Vol. 256, pp. 62-79, 2007. J.P. Gauthier, B. Jakubczyk, V. Zakalyukin, Motion planning and fastly oscillating controls, SIAM Journal on Control and Optimization, Vol. 48 (5), pp. 3433-3448, 2010. M. Gromov, Carnot Caratheodory Spaces Seen from Within, in: A. Bellaiche, J.J. Risler (Eds.), Sub-Riemannian Geometry, Birkhauser, pp. 79-323, 1996. F. Jean, Complexity of Nonholonomic Motion Planning, International Journal of Control, Vol 74, N${{}^\circ}$8, pp. 776-782, 2001. F. Jean, Entropy and Complexity of a Path in SR Geometry, COCV, Vol 9, pp. 485-506, 2003. F. Jean, E. Falbel, Measures and transverse paths in SR geometry, Journal d’Analyse Mathématique, Vol. 91, pp. 231-246, 2003. T. Kato, Perturbation theory for linear operators, Springer Verlag, 1966, pp. 120-122. J.P. Laumond (editor), Robot Motion Planning and Control, Lecture Notes in Control and Information Sciences 229, Springer Verlag, 1998. L. Pontryagin, V. Boltyanski, R. Gamkrelidze, E. 
Mishchenko, The Mathematical Theory of Optimal Processes, Wiley, New York, 1962. A.E.H. Love, A Treatise on the Mathematical Theory of Elasticity, Dover, New York, 1944. H.J. Sussmann, G. Lafferriere, Motion planning for controllable systems without drift, in Proceedings of the IEEE Conference on Robotics and Automation, Sacramento, CA, April 1991, IEEE Publications, New York, 1991, pp. 109-148. D.A. Singer, Curves whose curvature depends on distance from the origin, The American Mathematical Monthly, Vol. 106, No. 9, pp. 835-841, 1999. H.J. Sussmann, W.S. Liu, Lie Bracket extensions and averaging: the single bracket generating case, in Nonholonomic Motion Planning, Z. X. Li and J. F. Canny Eds., Kluwer Academic Publishers, Boston, 1993, pp. 109-148. [^1]: The authors are with LSIS, laboratoire des sciences de l’information et des systèmes, UMR CNRS 6168, Domaine universitaire de Saint Jérôme, Avenue Escadrille Normandie Niemen, 13397 MARSEILLE Cedex 2, France. J.P. Gauthier is also with INRIA team GECO.
--- abstract: 'In this work we present kiwiPy, a Python library designed to support robust message-based communication for high-throughput, big-data applications, while being general enough to be useful wherever high volumes of messages need to be communicated in a predictable manner. KiwiPy relies on RabbitMQ, an industry-standard message broker, while providing a simple and intuitive interface that can be used in both multithreaded and coroutine-based applications. To demonstrate some of kiwiPy’s functionality we give examples from AiiDA, a high-throughput simulation platform, where kiwiPy is used as a key component of the workflow engine.' author: - Martin Uhrin - 'Sebastiaan P. Huber' bibliography: - 'ms.bib' title: 'kiwiPy: Robust, high-volume, messaging for big-data and computational science workflows' ---
--- author: - 'Jörg Schmeling and St[é]{}phane Seuret' title: On measures resisting multifractal analysis --- [*Dedicated to Victor Afraimovich on the occasion of his 65th birthday.*]{} Introduction {#sec1} ============ Let $\mu$ be a probability measure on a metric space $(X,d)$. For $x\in\supp(\mu)$ define $$d_\mu(x):={\mathop{{\underline {\hbox{{\rm lim}}}}}}_{r\to 0}\frac{\log\mu (B(x,r))}{\log r}$$ where $B(x,r)$ is the ball of radius $r$ centered at $x$. For $\a\ge 0$ we will consider the level sets $$D_\mu(\a):=\{x\in\supp(\mu)\, :\, d_\mu(x)=\a\}.$$ The multifractal spectrum of $\mu$ is given by $$f_\mu(\a):= \begin{cases} -\infty & \mbox{if } D_\mu(\a) =\emptyset, \\ \dim_H D_\mu(\a) & \mbox{otherwise.} \end{cases}$$ The dimension of a measure $\mu$ is defined as $$\label{defdimmu} \dim_H\mu:=\inf\{\dim_HZ\, :\, \mu(Z)=1\}.$$ It is well-known that $$\label{esup} \dim_H\mu=\esup d_\mu(x),$$ $\esup$ standing for the $\mu$-essential supremum. Hence it is likely that the graph of the function $f_\mu$ touches the diagonal at $\alpha=\dim_H\mu$. This phenomenon happens for any Gibbs measure associated with a Hölder potential invariant under a dynamical system, and we may wonder if this is a general property for measures, invariant measures or ergodic measures. In this note we will give examples of invariant measures that have a multifractal spectrum as far as possible off the diagonal. Indeed these measures can be chosen to be invariant under linear transformations of the circle. We will also remark that the same situation does not occur for ergodic measures, for which the multifractal spectrum always touches the diagonal. 
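A standard example of the touching phenomenon (ours, added only for orientation) is the Bernoulli measure $\mu_p$ giving independent probabilities $p$ and $1-p$ to the dyadic digits $0$ and $1$:

```latex
% By the strong law of large numbers the digit frequencies of mu_p-a.e. x
% converge, so the local dimension is constant almost everywhere:
d_{\mu_p}(x) = -\big(p\log_2 p + (1-p)\log_2(1-p)\big) =: H(p)
\qquad \text{for } \mu_p\text{-a.e. } x.
% Hence dim_H mu_p = H(p), the level set D_{mu_p}(H(p)) has full mu_p-measure,
% and f_{mu_p}(H(p)) = H(p): the spectrum meets the diagonal at
% alpha = dim_H mu_p.
```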
\[main\] For given $a,b$ with $0\leq a\leq b\leq 1$, there is a probability measure $\mu$ supported on a compact Cantor set $K\subset [0,1]$ with the following properties: - $\mu (I)>0$ for all non-empty open sets (in the relative topology) in $K$, - $\dim_H\mu =b$, - if $S= \{ d _\mu(x): x\in K\}$ is the support of the multifractal spectrum of $\mu$, then $a=\min S$ and $b=\max S$. In particular, $d_\mu (x)\in [a,b]$ for all $x\in\supp(\mu) =K$, - $D_\mu(\a)$ contains at most one point for all $\a\ge 0$. The exponent at which the multifractal spectrum touches the diagonal, when it exists, is characterized by many properties. Let us introduce two other spectra for measures. For all integers $j\geq 1$, we denote by $\mathcal{G}_j$ the set of dyadic intervals of generation $j$ included in $[0,1]$, i.e. the intervals $[k2^{-j}, (k+1)2^{-j})$, $k\in \{0,\cdots,2^j-1\}$. The Legendre spectrum of a Borel probability measure whose support is included in the interval $[0,1]$ is the map $$L_\mu: \ \alpha \geq 0 \mapsto \inf _{q\in \mathbb{R}} \ (\, q\alpha - \tau_\mu(q) \,) \ \ \ \in \R^+\cup\{-\infty\},$$ where the scaling function $\tau_\mu$ is defined for $q\in \R$ as $$\tau_\mu(q):={\mathop{{\underline {\hbox{{\rm lim}}}}}}_{j\to +\infty} \frac{1}{-j} \log_2 \sum_{I\in \mathcal{G}_j} \mu(I)^q,$$ the sum being taken over the dyadic intervals with non-zero $\mu$-mass. The Legendre spectrum is always defined on some interval $I \subset \R^+\cup\{+\infty\}$ (the extremal exponents may or may not belong to this interval), and is concave on its support. It is a trivial matter that there is at least one exponent $\alpha_\mu \geq 0$ such that $$\label{Ltouches} L_\mu(\alpha_\mu)=\alpha_\mu.$$ Comparing (\[defdimmu\]), (\[esup\]) and (\[Ltouches\]), obviously when there is a unique exponent such that $f_\mu(\alpha)=\alpha$, then this exponent is also the dimension of the measure $\mu$ and also the one satisfying (\[Ltouches\]). 
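For orientation (our example, not from the paper), both $\tau_\mu$ and $L_\mu$ can be evaluated in closed form for the binomial measure with weights $(p,1-p)$ on dyadic intervals, and the fixed point (\[Ltouches\]) can be checked numerically:

```python
# Illustration (ours, not from the paper): scaling function and Legendre
# spectrum of the binomial measure with weights (p, 1-p).  Here
# sum_{I in G_j} mu(I)^q = (p^q + (1-p)^q)^j, so the liminf defining
# tau_mu is an exact limit:
#   tau_mu(q) = (1/-j) * log2((p^q + (1-p)^q)^j) = -log2(p^q + (1-p)^q).
import math

p = 0.3

def tau(q):
    return -math.log2(p ** q + (1 - p) ** q)

def legendre(alpha, qgrid):
    # L_mu(alpha) = inf_q (q*alpha - tau(q)), approximated on a finite grid
    return min(q * alpha - tau(q) for q in qgrid)

qgrid = [-20 + 0.01 * k for k in range(4001)]   # q in [-20, 20]

# The spectrum touches the diagonal at alpha_mu = dim_H mu = H(p):
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
print(tau(1.0), legendre(H, qgrid) - H)   # both numerically ~0
```

For this measure the three spectra coincide; the construction in the paper is designed so that they do not.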
The large deviations spectrum of a Borel probability measure whose support is included in the interval $[0,1]$ is defined as $$LD_\mu(\alpha) = \lim_{\e \to 0} \ {\mathop{{\underline {\hbox{{\rm lim}}}}}}_{j\to \infty} \ \frac{ \log_2 N_j(\alpha,\e) } {j}$$ where $$\label{defNJ} N_j(\alpha,\e)\!:=\#\! \left\{ I \in \mathcal{G}_j : 2^{-j (\a+\e)} \leq \mu(I ) \leq 2^{-j (\a-\e)} \right\} \!.$$ By convention, if $N_j(\alpha,\e)=0$ for some $j$ and $\e$, then $ LD_\mu(\alpha) =-\infty$. This spectrum describes the asymptotic behavior of the number of dyadic intervals of $\mathcal{G}_j$ having a given $\mu$-mass. The fact that the values of the large deviations spectrum are accessible for real data (by algorithms based on log-log estimates) makes it interesting from a practical standpoint. In the paper [@RIEDI] for instance, it is proved that the concave hull of $f_\mu$ coincides with the Legendre spectrum of $\mu$ on the support of this Legendre spectrum. One always has for all exponents $\alpha\geq 0$ $$f_\mu(\alpha) \leq LD_\mu(\alpha) \leq L_\mu(\alpha),$$ and when the two spectra $f_\mu$ and $L_\mu$ coincide at some $\alpha\geq 0$, one says that the [*multifractal formalism*]{} holds at $\alpha$. Actually, when the multifractal formalism holds, the three spectra (multifractal, large deviations and Legendre) coincide. For the measure we are going to construct, the multifractal formalism does not hold at $\alpha_\mu$, nor at any exponent. This is the reason why we claim that this measure is “as far as possible” from being multifractal. 
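As a sketch of why the large deviations spectrum is "accessible" (ours, not from the paper), for the binomial measure the counts $N_j(\alpha,\e)$ can be evaluated exactly from binomial coefficients, and the log-log estimate recovers the spectrum up to an $O(\e)$ error:

```python
# Sketch (ours): N_j(alpha, eps) for the binomial measure with weights
# (p, 1-p).  A dyadic interval of generation j whose address contains k
# zeros has mass p^k (1-p)^(j-k), and there are C(j, k) such intervals,
# so N_j can be counted without enumerating all 2^j intervals.
import math

p = 0.3

def N_j(alpha, eps, j):
    count = 0
    for k in range(j + 1):
        # -(1/j) log2 of the common mass of the C(j, k) intervals
        mass_exp = -(k * math.log2(p) + (j - k) * math.log2(1 - p)) / j
        if alpha - eps <= mass_exp <= alpha + eps:
            count += math.comb(j, k)
    return count

H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # = dim_H mu
j, eps = 2000, 0.02
est = math.log2(N_j(H, eps, j)) / j
print(est)  # log-log estimate of LD_mu(H) = H, up to an O(eps) error
# off the support of the spectrum the counts vanish (LD_mu = -infinity):
print(N_j(2.0, 0.02, 200))
```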
\[main2\] For the measure $\mu$ of Theorem \[main\], we have: - $f_\mu(\alpha) = 0$ for every $\alpha \in S$, and $f_\mu(\alpha) = - \infty$ for every $\alpha \in [a,b]\setminus S$, - $ \ LD_\mu(\alpha) = \alpha$ for every $\alpha \in S$, and $LD_\mu(\alpha) = - \infty$ for every $\alpha \in \mathbb{R}_+\setminus S$, - $ \ L_\mu(\alpha) = \alpha$ for every $\alpha \in [a,b]$, and is $-\infty$ elsewhere.\ The scaling function of $\mu$ is $$\tau_\mu(q) = \begin{cases} b(q-1) & \mbox{ if } q\leq 1,\\ a(q-1) & \mbox{ if } q>1. \end{cases}$$ Hence the three spectra differ drastically. The article is organized as follows. Section \[ergodic\] discusses the difference between ergodic and invariant measures with regard to our problem. Section \[sec\_main\] contains the construction of a measure $\mu$ supported by a Cantor set whose multifractal spectrum does not touch the diagonal. In Section \[SecLD\], we compute the Legendre and the large deviations spectra of $\mu$. Ergodic and Invariant measures {#ergodic} ============================== First we prove that the multifractal spectrum of ergodic measures always touches the diagonal. \[ergspec\] Let $\mu$ be an ergodic probability measure invariant under a $C^1$–diffeomorphism $T$ of a compact manifold $M$. Then $f_\mu(\dim_H\mu)=\dim_H\mu$. Since $T$ is a smooth diffeomorphism on a compact manifold, both the norm $\|D_xT\|$ and the conorm $\|(D_xT)^{-1}\|^{-1}$ are bounded on $M$. Hence, there is a $C>1$ such that for any $x\in M$ and any $r>0$ $$B(Tx,C^{-1}r)\subset T(B(x,r))\subset B(Tx,Cr).$$ This immediately implies that $d_\mu$ is an invariant (and of course measurable) function. By ergodicity of $\mu$ it takes exactly one value for $\mu$–a.e. $x\in M$. By (\[esup\]) this value equals $\dim_H\mu$. Contrary to what happens for ergodic measures, a general invariant measure behaves as badly as a general probability measure. We will illustrate this on a simple example. 
Consider the (rational) rotation $x\to x+\frac12 \pmod 1$ on the unit circle $\T=\R/\Z$. This transformation is not uniquely ergodic and has plenty of invariant measures. By the Ergodic Decomposition Theorem the space $M_{inv}$ of invariant measures equals $$\left\{\mu:=\frac12\int_{[0,1/2]}(\delta_x+\delta_{x+1/2})\, d\nu (x) \, :\, \nu \mbox{ is a probability measure on $[0,1/2)$}\right\}.$$ W.l.o.g. assume that $x\in [0,1/2)$ and $r>0$ is sufficiently small. Then $$\mu (B(x,r))=\frac12\int_{B(x,r)}\, d\nu = \frac12\nu (B(x,r)).$$ Hence, $$d_\mu(x)=d_\nu(x)\qquad\mbox{and}\qquad f_\mu(\a)=f_\nu(\a).$$ In particular, using the example built in the following sections, there is a measure with a multifractal spectrum not touching the diagonal, which cannot happen for an ergodic measure. The main construction {#sec_main} ===================== We will represent the numbers $x$ in $[0,1]$ by their dyadic expansion, i.e. $x=\sum_{j\geq 1} x_j2^{-j}$, $x_j\in \{0,1\}$. The construction will avoid the dyadic numbers so that no ambiguity will occur. For $x\in [0,1]$, the prefix of order $J$ of $x$ is $x_{|J} = \sum_{j= 1}^J x_j2^{-j}$. We will also use the notation $x=x_1x_2\cdots x_j\cdots$, and $x_{|J}= x_1\cdots x_J$. A cylinder $C=[x_1x_2\cdots x_J]$ consists of the real numbers $x$ with prefix of order $J$ equal to $x_1x_2\cdots x_J$. The length $J$ of such a cylinder is denoted by $|C|=J$. We denote by $\mathcal{G}_J$ the cylinders of length $J$. The concatenation of two cylinders $C_1=[x_1 \cdots x_J]$ and $C_2=[y_1 \cdots y_{J'}]$ is the cylinder $[x_1 \cdots x_J y_1 \cdots y_{J'}]$, and is denoted $C_1C_2$. We state some facts about subshifts of finite type. First we remark that given any non-empty interval $I\subset [0,\log 2]$ there is a mixing subshift of finite type that has entropy $h_{top}(\Sigma)\in I$. We denote the set of all mixing subshifts of finite type by $\Sf$. For $\Sigma\in\Sf$ the unique measure of maximal entropy is denoted by $\mu_\Sigma$. 
By standard theorems, there is a constant $M_\Sigma$ depending only on $\Sigma$ such that for any cylinder $C_J \in\Sigma$ of length $J$ $$M_\Sigma^{-1} \, 2^{-h_{top}(\Sigma)J}<\mu_\Sigma(C_J)< M_\Sigma \, 2^{-h_{top}(\Sigma)J}.$$ In addition, for the same constant $M_\Sigma$, we have $$M_\Sigma^{-1} \, 2^{h_{top}(\Sigma)J}<\#\{C\in \mathcal{G}_J: C\in \Sigma\} < M_\Sigma \, 2^{h_{top}(\Sigma)J}.$$ Of course the last two double-sided inequalities are complementary. We now proceed to the construction of the measure $\mu$ of Theorem \[main\]. [**Step 1:**]{} We fix a map $\Sigma\colon \bigcup_{J=1}^\infty \{0,1\}^J\to \Sf$ with the property that $$h_{top}(\Sigma(y_1\cdots y_J))\in (b-a)\left[\sum_{j=1}^{J-1}\frac{2y_j}{3^j}+\left(\frac{2y_J}{3^J},\frac{2y_J+1}{3^J} \right) \right]+a.$$ This map is increasing in the sense that if $y_1\cdots y_J < y'_1\cdots y'_J$ (using the lexicographic order), then $h_{top}(\Sigma(y_1\cdots y_J)) < h_{top}(\Sigma(y'_1\cdots y'_J))$. [**Step 2:**]{} For $\Sigma\in\Sf$ and $\delta >0$, define $$N(\Sigma,\delta) := \min\Big\{ J : \ \forall \ j\geq J, \ \forall \ C_j \in\Sigma \mbox{ of length } j, \ \ 2^{-(h_{top}(\Sigma)+\delta)j}<\mu_\Sigma(C_j )< 2^{-(h_{top}(\Sigma)-\delta)j}, \ \mbox{ and } \ \forall \ j\geq J, \ \ 2^{ (h_{top} (\Sigma)-\delta) j} < \# \{C\in \mathcal{G}_j: C\in \Sigma \}< 2^{(h_{top}(\Sigma)+\delta)j} \Big\}.$$ The numbers $N(\Sigma,\delta)$ allow us to estimate the time we have to wait until we see an almost precise value of the local entropy for a given subshift of finite type. Moreover, we also have control of the number of cylinders of length $j\geq N(\Sigma,\delta)$ in $\Sigma$. 
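These double-sided bounds can be checked numerically (our illustration, not part of the construction) on the simplest mixing subshift of finite type, the golden-mean shift, where $h_{top}$ and the cylinder counts are explicit:

```python
# Numerical check (ours) of the cylinder-count bound for the golden-mean
# shift: binary sequences with no factor "11".  Its transfer matrix
# A = [[1, 1], [1, 0]] has Perron eigenvalue the golden ratio, so
# h_top = log2(golden ratio), and the number of admissible words of
# length J is a Fibonacci number, hence within a constant factor
# M_Sigma of 2^(h_top * J).
import math

lam = (1 + math.sqrt(5)) / 2     # Perron eigenvalue of [[1, 1], [1, 0]]
h_top = math.log2(lam)           # topological entropy in base-2 units

def n_words(J):
    # admissible words of length J, counted by their last symbol
    a, b = 1, 1                  # words ending in 0 / ending in 1
    for _ in range(J - 1):
        a, b = a + b, a          # 0 may follow anything, 1 only follows 0
    return a + b

M = 2.0                          # a crude constant that suffices here
for J in range(1, 40):
    ratio = n_words(J) / 2 ** (h_top * J)
    assert 1 / M < ratio < M     # the double-sided count bound above
print(h_top, n_words(10))
```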
We then set $$\delta_{J} = \frac{b-a}{6 \cdot 2^J} \ \mbox{ and } \ \ N_J := \max\Big\{N\Big(\Sigma(y_1\cdots y_J) , \delta_{J} \Big)\, :\, y_1\cdots y_J \in \{0,1\}^J\Big\}.$$ [**Step 3:**]{} Let $y_1\cdots y_J\in \{0,1\}^J$. For a given cylinder $C_j$ of length $j$ in $\Sigma(y_1\cdots y_J)$, there is a smallest integer $m_{C_j}$ such that for every cylinder $C'_m$ of length $m\geq m_{C_j}$ in $\Sigma(y_1...y_{J-1})$, we have $$\begin{aligned} \label{eq1} \ \ 2^{-(h_{top}(\Sigma(y_1\cdots y_{J-1}))+\delta_J)(m+j)} < && \hspace{-6mm} \mu_{\Sigma(y_1\cdots y_{J-1})}(C'_m)\cdot\mu_{\Sigma(y_1\cdots y_J)}(C_j)\\ \nonumber && \ \ \ < \ 2^{-(h_{top}(\Sigma(y_1\cdots y_{J-1}))-\delta_J)(m+j)}.\end{aligned}$$ This property holds, since we know that it holds for large $m$. Then, let $$m_j:= \max\{m_{C_j} : C_j \in \Sigma(y_1\cdots y_J) \mbox{ and } |C_j|=j\}.$$ By construction, for every cylinder $C_j$ of length $j$, for every integer $m \geq m_j$, for every cylinder $C'_m \in \Sigma(y_1...y_{J-1})$, (\[eq1\]) is true. Then, we set $$M(y_1\cdots y_J) := \max \Big\{m_j : j \in \{1,2,\cdots, N_J\} \Big\}.$$ The numbers $M(y_1\cdots y_J)$ allow us to estimate how long the cylinders in the prefix subshift have to be to control the local entropy at a concatenated cylinder. Finally, for every $J\geq 1$, we define the integer $$\begin{aligned} M_J & := & \max\Big\{M\big(y_1\cdots y_J \big)\, :\, y_1\cdots y_J \in \{0,1\}^J \Big\}.\end{aligned}$$ [**Step 4:**]{} Choose a lacunary sequence $(L_J)_J$ with $$\frac{L_J}{\sum_{j=1}^J (M_j+ N_j)}\ge 2 \ \ \mbox{ and} \ \ \frac{L_{J+1}}{L_J} \geq \frac{2}{ \delta_{J+1} } .$$ Now we are ready to proceed with the construction of the measure $\mu$. [**Step 5:**]{} We will construct the measure by induction on dyadic cylinders. We set $K_1:=[0,1]$ and start with labelling the cylinder $[0]$ with $y_1=0$ and $[1]$ with $y_1=1$. 
For a subshift of finite type $\Sigma\in\Sf$ we denote by $\Sigma\vert_J$ all non-empty dyadic cylinders in $\Sigma$ of length $J\in\N$. Now we define $$K_2:=[0]\Sigma (0)\vert_{L_2}\cup [1]\Sigma (1)\vert_{L_2}.$$ We will label a cylinder $C_{L_2+1}$ in $[0]\Sigma (0)\vert_{L_2}$ (a similar labelling for $[1]\Sigma (1)\vert_{L_2}$) by $y_1y_2(C_{L_2+1})=00$ iff $$C_{L_2+1}\cap \Big[\min\{ x\in [0]\Sigma (0)\vert_{L_2}\},\min\{ x\in [0]\Sigma (0)\vert_{L_2}\}+\frac12\diam [0]\Sigma (0)\vert_{L_2}\Big]\ne\emptyset,$$ and by $y_1 y_2(C_{L_2+1})=01$ else. This way we have that for every $y_1y_2\in \{0,1\}^2$, $$\diam\left(\bigcup_{y_1 y_2(C_{1+L_2})=y_1y_2} C_{1+L_2}\right)\le\frac14.$$ Assume that for $J\geq 2$, we have defined $K_J$ as the union of cylinders of length $1+L_2+\cdots +L_J$ labelled by binary sequences $y_1\cdots y_J$ of length $J$. Moreover assume that for the defining cylinders of $K_J$, we managed the construction so that for every $y_1 \cdots y_J \in \{0,1\}^J$, $$\diam\left(\bigcup_{y_1\cdots y_J(C_{1+L_2+\cdots +L_J})=y_1\cdots y_J} C_{1+L_2+\cdots +L_J}\right)\le\frac1{2^J}.$$ We define the Cantor set at the $J+1$-th generation as $$K_{J+1}:=\bigcup_{ y_1\cdots y_ J \in\{0,1\}^J}\, \,\bigcup_{C\in K_J: \, y_1\cdots y_J(C)=y_1\cdots y_J} C\Sigma (y_1\cdots y_J)\vert_{L_{J+1}}.$$ As above, we will label a cylinder $C_{1+L_2+\cdots +L_{J+1}}$ in $C\Sigma (y_1\cdots y_J)\vert_{L_{J+1}}$ (where the cylinder $C$ is labelled $y_1\cdots y_J(C)= y_1\cdots y_J$) by the word $y_1\cdots y_{J+1}(C_{1+L_2+\cdots +L_{J+1}})=y_1\cdots y_J0$ if and only if the cylinder $C_{1+L_2+\cdots +L_{J +1}}$ has non-empty intersection with the interval $$\begin{aligned} \Big[&&\!\!\!\!\!\!\! \min\{ x\in C\Sigma (y_1\cdots y_J)\vert_{L_{J+1}}\},\\ &&\! \!\!\!\! \ \!\!\!\!\!\! \ \min\{ x\in C\Sigma (y_1\cdots y_J)\vert_{L_{J+1}}\}+\frac12\diam C\Sigma (y_1\cdots y_J)\vert_{L_{J+1}} \Big],\end{aligned}$$ and by $y_1\cdots y_{J+1}(C_{1+L_2+\cdots +L_{J+1}})=y_1\cdots y_J 1$ else. 
This way we ensure that $$\label{diam} \diam\left(\bigcup_{y_1\cdots y_{J+1}(C_{1+L_2+\cdots +L_{J+1}})=y_1\cdots y_{J+1}} C_{1+L_2+\cdots +L_{J+1}}\right)\le\frac1{2^{J+1}}.$$ [**Step 6:**]{} We define the Cantor set $$K:=\bigcap_{J\geq 2} K_J.$$ It has the following properties: - $K$ is compact, - for $x\in K$, we have a labelling sequence $\uy (x) =y_1\cdots y_J\cdots \in\{0,1\}^\infty$, and we will use the obvious notation $y_1\cdots y_J(x)$, - by the choice of the labelling and the function $\Sigma$ we have for any $x\in K$ that the limit $$h(x):=\lim_{J\to\infty}h_{top}(\Sigma(y_1\cdots y_J(C_{1+L_2+\cdots L_J}(x))))$$ exists, where $C_{1+L_2+\cdots L_J}(x)$ denotes the unique dyadic cylinder of length $1+L_2+\cdots L_J$ containing $x$. - for $(x,x')\in K^2$, we have $\uy(x)=\uy(x')\iff x=x'$ (this is immediate from (\[diam\])). More precisely if $x<x'$ then $\uy(x)<\uy(x')$ (in lexicographical order) and by the choice of the function $\Sigma$ $$h(x)<h(x').$$ - $\dim_HK=b$. [**Step 7:**]{} We define the measure $\mu$ on the cylinder sets $$\left\{C\, :\, |C|=1+L_2+\cdots L_J , \ J\geq 2 \mbox{ and } C\cap K\ne\emptyset\right\}.$$ Any such cylinder can be written as $$\label{eq3} C=C_1C_2\cdots C_J, \mbox{ where $ |C_j|=L_j$ and $ C_j\cap\Sigma(y_1\cdots y_j(C_1\cdots C_j))\ne\emptyset $}.$$ Then we set $$\mu(C)\, := \, \frac12\prod_{j=2}^J \mu_{\Sigma(y_1\cdots y_j(C_1\cdots C_j))}(C_j).$$ These cylinder sets clearly generate a ring of subsets, and hence by Carathéodory’s extension theorem we get a measure on $[0,1]$ with support $K$. It has the following properties: - $\supp(\mu) =K$, - for $x\in K$ we have $$d_\mu(x)=\frac{h(x)}{\log2}\in [a,b],$$ - for $I\cap K\ne\emptyset$ with $I$ an interval we have that $\mu(I)>0$, - from item d) in [**Step 6**]{} combined with the previous item, if $(x,x')\in K^2$ and $x<x'$, then $$d_\mu(x)<d_\mu(x').$$ Hence $D_\mu(\a)$ consists of at most one point. - $\dim_H\mu=b$ since $\esup d_\mu=b$. 
In the above statements, only item b) needs an explanation. Once it is proved, items c), d) and e) will follow directly using obvious arguments. For every $x\in K$, $\displaystyle d_\mu(x)= \frac{h(x)}{\log 2}$. The point is to prove that the liminf used when defining $d_\mu(x)$ is in fact a limit, and that it coincides with $h(x)$. Let us first prove that $$\label{eq2} \frac{\log_2 \mu( C_{1+L_2+\cdots L_J}(x) ) }{ -( 1+L_2+\cdots L_J)} \ {\longrightarrow} \ h(x)$$ when $J\to +\infty$. Once (\[eq2\]) is proved, we will have to take care of the generations between $1+L_2+\cdots L_J$ and $1+L_2+\cdots L_{J+1}$. Let $J\geq 1$. We use the decomposition (\[eq3\]) of the cylinder $C_{1+L_2+\cdots L_J}(x)$. By our choice for $L_J$ in Step 4, we have $$\begin{aligned} \mu( C_{1+L_2+\cdots L_J}(x) ) & = & \frac{1}{2}\, \prod_{j=2}^J \mu_{\Sigma(y_1\cdots y_j(x))}(C_j)\\ &\leq & \prod_{j=2}^J 2^{ - \big( h_{top}\big(\Sigma(y_1\cdots y_j(x))\big)- \delta_ { j} \big ) L_j}\\ &\leq & {2^{ - \big( h_{top}\big(\Sigma(y_1\cdots y_J(x))\big)- \delta_ { J} \big) L_J} } 2^{- P_JL_J },\end{aligned}$$ where $$\begin{aligned} P_J := \sum_{j=2}^{J-1} \big( h_{top}\big(\Sigma(y_1\cdots y_j(x)) \big)-\delta_ { j} \big) \frac{L_j}{L_J} \, \geq \, \sum_{j=2}^{J-1} a \frac{L_j}{L_J} \, \geq \,\delta_ {J},\end{aligned}$$ the last inequality following from Step 4 and the definition of $\delta_J$. Hence, $$\begin{aligned} \label{eq4} \mu( C_{1+L_2+\cdots L_J}(x) ) &\leq & {2^{ - \big( h_{top}\big(\Sigma(y_1\cdots y_J(x))\big)- 2\delta_ {J}\big) L_J} } .\end{aligned}$$ The same inequality in Step 4 ensures that $|C_{1+L_2+\cdots L_J}(x) | = 2^{-(1+L_2+\cdots L_J)} $ is upper- and lower-bounded respectively by $2^{-L_J (1 - \delta_ J)}$ and $2^{-L_J (1 + \delta_ J)}$. 
We deduce that $$\label{majmin1} \mu( C_{1+L_2+\cdots L_J}(x) ) \leq |C_{1+L_2+\cdots L_J}(x) | ^{ \big( h_{top}\big(\Sigma(y_1\cdots y_J(x))\big)- 2\delta_ {J} \big)\big(1 -\delta_ {J}\big) }.$$ The same arguments yield the converse inequality $$\label{majmin2} \mu( C_{1+L_2+\cdots L_J}(x) ) \geq |C_{1+L_2+\cdots L_J}(x) | ^{ \big( h_{top}\big(\Sigma(y_1\cdots y_J(x)\big) +2\delta_{J} \big)\big(1 +\delta_ {J}\big) },$$ and taking logarithms, (\[eq2\]) follows. Let now $n$ be an integer in $\{1, \cdots, L_{J+1}-1\}$, and consider $C_{1+L_2+\cdots L_J +n}(x)$. We write $C_{1+L_2+\cdots L_J +n}(x) = C_1\cdots C_J C_{J+1}$ with $|C_j|=L_j$ for every $j \leq J$, and $|C_{J+1}|=n$. - [**If $1 \leq n \leq N_{J+1}$:**]{} we get $$\begin{aligned} \mu(C_{1+L_2+\cdots L_J +n}(x)) & = & \frac{1}{2}\prod_{j=2}^{J+1}\mu_{\Sigma(y_1\cdots y_j(x))}(C_j)\\ & = & \frac{1}{2}\prod_{j=2}^{J-1}\mu_{\Sigma(y_1\cdots y_j(x))}(C_j)\\ && \ \ \ \times \mu_{\Sigma(y_1\cdots y_J(x))}(C_J) \cdot \mu_{\Sigma(y_1\cdots y_{J+1} (x))}(C_{J+1})\\ & \leq &{2^{ - \big( h_{top}\big(\Sigma(y_1\cdots y_{J-1}(x))\big)- 2\delta_ {{J-1}}\big) L_{J-1}} } \\ && \ \ \ \times 2^{-(h_{top}(\Sigma(y_1\cdots y_J (x)))- \delta_ { J})(L_J+n)},\end{aligned}$$ where (\[eq4\]) and (\[eq1\]) have been used to bound from above respectively the first and the second product. Using the same arguments as above, we see that $$\begin{aligned} \label{majmin3,5} \mu(C_{1+L_2+\cdots L_J + n}(x)) & \leq & |C_{1+L_2+\cdots L_J + n }(x))| ^{h_{top}(\Sigma(y_1\cdots y_J(x)))- \delta'_J} ,\end{aligned}$$ where $(\delta'_J)_{J\geq 2}$ is some other positive sequence converging to zero when $J$ tends to infinity. 
- [**If $N_{J+1}+1 \leq n \leq L_{J+1} -1$:**]{} we have $$\begin{aligned} \mu(C_{1+L_2+\cdots L_J +n}(x)) & = & \frac{1}{2}\prod_{ j =2}^{J+1}\mu_{\Sigma(y_1\cdots y_j (x))}(C_j)\\ & = & \frac{1}{2}\prod_{j=2}^{J}\mu_{\Sigma(y_1\cdots y_j(x))}(C_j) \times \mu_{\Sigma( y_1\cdots y_{J+1} (x))}(C_{J+1})\\ & \leq &{2^{ - \big( h_{top}\big(\Sigma( y_1\cdots y_J (x))\big) - 2\delta_ {{J }}\big) L_J} } \\ && \cdot 2^{-(h_{top}(\Sigma(y_1\cdots y_{J+1} (x)))-\delta_ {J+1}) n },\end{aligned}$$ where (\[eq4\]) and Step 2 of the construction have been used to bound from above respectively the first and the second product. Using the same arguments as above, we see that $$\label{majmin4} \mu(C_{1+L_2+\cdots L_J + n }(x)) \leq |C_{1+L_2+\cdots L_J + n}(x))| ^{ h_{J,n}} ,$$ where $ h_{J,n} $ is a real number between $ h_{top}\big(\Sigma(y_1\cdots y_J(x))\big)-2\delta_ {{J }} $ and $ h_{top}(\Sigma(y_1\cdots y_{J+1} (x)))-\delta_ { {J+1}} $, which gets closer and closer to the exponent $ h_{top}(\Sigma(y_1\cdots y_{J+1} (x)))- \delta_ { {J+1}} $ when $n$ tends to $L_{J+1}$. In particular, $h_{J,n}$ converges to $h(x) $ when $J$ tends to infinity, uniformly in $n\in \{1, \cdots, L_{J+1}-1\}$. - The converse inequalities are proved using the same ideas. To finish the proof of Theorem \[main\], we make the following observations. By construction, we see that the support $S$ of the multifractal spectrum of $\mu$ is actually the image of the middle-third Cantor set by the map $\alpha \mapsto a+ (b-a)\alpha$. We deduce that $S \subset [a,b]$, $\min(S)=a$ and $\max(S)=b$, and that $D_\mu(\alpha)$ contains either 0 or 1 point, for every $\alpha \geq 0$. This proves parts iii) and iv) of Theorem \[main\], and also part i) of Theorem \[main2\]. The large deviations and the Legendre spectra {#SecLD} ============================================= We prove Theorem \[main2\]. 
Recall that the Cantor set $K$ is the support of $\mu$ and that $S =\{ d_\mu(x):x\in K\}$ is the image of the middle-third Cantor set by an affine map. The large deviations spectrum ----------------------------- First, let $\alpha \in S$, and let $x_\alpha$ be the unique point such that $d_\mu(x_\alpha)=\alpha$. We will use the labelling $y_1\cdots y_j(x_\alpha)$, since by construction one has $\alpha = a+(b-a)\sum_{j\geq 1} 2y_j(x_\alpha)3^{-j}$. Let $\e>0$. Due to our construction, there exists a real number $\eta(\e)$, that converges to zero when $\e$ tends to zero, such that $|h_{top}(\Sigma(y_1\cdots y_j(x))) -\alpha | \leq 2\e$ implies that $|x-x_\alpha|\leq \eta(\e)$. By construction, there exists a generation $J_\e$ such that for every $j\geq J_\e$, $|h_{top}(\Sigma(y_1\cdots y_j(x_\alpha) )) - \alpha|\leq \e $. Moreover, $J_\e$ can be chosen large enough that $\delta_{J_\e} \leq \e /2$. Observe that if $\tilde C$ is a cylinder of generation $j\geq J_\ep$ such that $$\label{ineg11} |\tilde C| ^{\alpha+\e} \leq \mu(\tilde C) \leq |\tilde C| ^{\alpha-\e} $$ is satisfied, then by (\[majmin3,5\]), (\[majmin4\]) and our choice for $J_\ep$, $\tilde C$ is necessarily included in a cylinder $ C$ of generation $J_\ep$ such that $$\label{ineg10} |y_1\cdots y_{J_\ep}(x_\alpha) - y_1\cdots y_{J_\ep}(C)| \leq \eta(\ep).$$ Hence, to bound from above the number $N_j(\alpha,\ep)$ (defined by (\[defNJ\])), it is sufficient to count the number of cylinders $\tilde C$ of generation $j$ included in the cylinders $ C$ of generation $J_\ep$ such that (\[ineg10\]) holds. Let us denote by $M_{\alpha,\ep} $ the number of cylinders $ C$ of generation ${J_\ep}$ satisfying (\[ineg10\]), and fix such a cylinder $C_{J_\ep}$. Obviously, all the subshifts of finite type $\Sigma$ which are used in the construction of $K$ inside $C_{J_\ep}$ have a topological entropy which satisfies $|h_{top}(\Sigma) -\alpha| \leq 2\e$. 
Hence, it is an easy deduction of the preceding considerations that the number of cylinders of generation $j$ included in $C_{J_\ep}$ is lower- and upper-bounded by $$2^{ (\alpha-2\ep) j} < \# \{C\in \mathcal{G}_j: C\subset C_{J_\ep} \mbox{ and } C \cap K \neq \emptyset \}< 2^{(\alpha+2\e)j} .$$ Consequently, $$N_j(\alpha,\ep) \leq M_{\alpha,\ep} 2^{(\alpha+2\e)j}.$$ Taking the liminf of $\displaystyle \frac{\log_2 N_j(\alpha,\ep) }{j}$ when $j$ tends to infinity, and letting $\ep$ go to zero, we find that $LD_\mu(\alpha) \leq \alpha$. The lower bound follows from what precedes. Indeed, in the above proof, all the cylinders $C\in \mathcal{G}_j$ satisfying $C\subset C_{J_\ep}$ and $C \cap K \neq \emptyset$ verify $$|C|^{\alpha+3\ep} \leq \mu(C) \leq |C|^{\alpha-3\ep}.$$ Hence $$M_{\alpha,\ep} 2^{ (\alpha-2\ep) j} \leq N_j(\alpha,3\ep).$$ By taking a liminf and letting $\ep$ go to zero, we get that $LD_\mu(\alpha) \geq \alpha$. If $\alpha \notin S$, then there exists $\ep>0$ such that $[\alpha-2\ep,\alpha+2\ep]\cap S =\emptyset$. Hence, using again (\[majmin3,5\]), (\[majmin4\]) and choosing $J$ sufficiently large so that $\delta_J\leq \ep/2$, one sees that for every cylinder $C$ of generation $j\geq J_\ep$ such that $C\cap K \neq \emptyset$, $\mu(C)\notin [ |C|^{\alpha+\ep} , |C|^{\alpha-\ep}]$. Consequently, $N_j(\alpha,\ep)=0$ and $LD_\mu(\alpha)=-\infty$. The Legendre spectrum --------------------- Finally, we compute the Legendre spectrum. Obviously $\tau_\mu(1)=0$ and $\tau_\mu(0)= -\dim_B \mu =-b$, where $\dim_B$ stands for the Minkowski (box) dimension. This is actually relatively easy with what precedes. 
Indeed, we proved that for every $\ep>0$, if $j$ is large enough, then all cylinders $C$ of generation $j$ such that $C\cap K \neq \emptyset$ satisfy $$2^{- j (b+\ep)} \leq \mu(C) \leq 2^{-j(a-\ep)}.$$ Let us cover the set $S=\{\alpha\geq 0: D_\mu(\alpha)\neq \emptyset\}$ by a finite set of intervals $(I_n)_{n=1,\cdots, N}$ of the form $I_n= [\alpha_n-\ep,\alpha_n+\ep]$, where for every $n\in \{1,2,\cdots, N\}$, $\alpha_n \in S$, and $\alpha_1=a$ and $\alpha_N=b$. For every $n$, the estimates above yield that if $j$ is large, $$2^{j(\alpha_n-\ep_n)} \leq N_j(\alpha_n,\ep) \leq 2^{j(\alpha_n+\ep_n)},$$ where $\ep_n$ is some positive real number converging to zero when $\ep$ goes to zero. Hence we find that for $q>0$, $$\sum_{n=1}^N 2^{j (\alpha_n - \ep_n)} 2 ^{-qj(\alpha_n+ \ep)} \leq \sum_{C\in \mathcal{G}_j} \ \mu(C)^q \leq \sum_{n=1}^N 2^{j(\alpha_n+\ep_n)}2^{-qj(\alpha_n-\ep)}.$$ If $q>1$, then the right hand-side term is equivalent to $2^{j(a(1-q) +\ep_1 +q\ep)}$, and the left hand-side term is equivalent to $2^{j(a(1-q) -\ep_1 -q\ep)}$. Hence, by taking liminf when $j$ tends to infinity, we obtain $\tau_\mu(q) = a(q-1)$. If $q\in (0,1)$, then the right hand-side term is equivalent to $2^{j(b(1-q) +\ep_N +q\ep)}$, and the left hand-side term is equivalent to $2^{j(b(1-q) -\ep_N-q\ep)}$. We deduce that $\tau_\mu(q) = b(q-1)$. Finally, when $q<0$ one has $$\sum_{n=1}^N 2^{j (\alpha_n - \ep_n)} 2 ^{-qj(\alpha_n+ \ep_n)} \leq \sum_{C\in \mathcal{G}_j} \ \mu(C)^q \leq \sum_{n=1}^N 2^{j(\alpha_n+\ep_n)}2^{-jq(\alpha_n-\ep_n)}.$$ The same estimates yield that $\tau_\mu(q)= b(q-1)$. [99]{} L. Barreira, Y. Pesin, and J. Schmeling, [*On a general concept of multifractality: multifractal spectra for dimensions, entropies, and Lyapunov exponents. Multifractal rigidity*]{}, Chaos 7 (1997) 27–38. L. Barreira, B. Saussol and J. Schmeling, *Higher-dimensional multifractal analysis*, J. Math. Pures Appl. 81 (2002) 67–91. R. Bowen, *Entropy for non-compact sets*, Trans. Amer. 
Math. Soc. 184 (1973) 125–136. Y. Pesin, [*Dimension theory in dynamical systems,*]{} University of Chicago Press, Chicago, 1997. Y. Pesin and H. Weiss, [*The multifractal analysis of Gibbs measures: motivation, mathematical foundation, and examples,*]{} Chaos 7 (1997) 89–106. R. Riedi, [*Multifractal processes,*]{} in P. Doukhan et al. (eds.), Theory and Applications of Long-Range Dependence, Birkhäuser (2003) 625–716.
--- abstract: 'We report on $g$, $r$ and $i$ band observations of the Interstellar Object [[1I/‘Oumuamua ]{}]{}(1I) taken on 2017 October 29 from 04:28 to 08:40 UTC by the Apache Point Observatory (APO) 3.5m telescope’s ARCTIC camera. We find that 1I’s colors are $g-r=0.41\pm0.24$ and $r-i=0.23\pm0.25$, consistent with visible spectra [@Masiero2017; @Ye2017; @Fitzsimmons2017] and most comparable to the population of Solar System C/D asteroids, Trojans, or comets. We find no evidence of any cometary activity at a heliocentric distance of 1.46 au, approximately 1.5 months after 1I’s closest approach to the Sun. Significant brightness variability was seen in the $r$ observations, with the object becoming notably brighter towards the end of the run. By combining our APO photometric time series data with the Discovery Channel Telescope (DCT) data of @Knight2017, taken 20 h later on 2017 October 30, we construct an almost complete lightcurve with a most probable single-peaked lightcurve period of $P \simeq 4$ h. Our results imply a double-peaked rotation period of 8.14 $\pm$ 0.02 h, with a peak-to-trough amplitude of 1.5-2.1 mag. Assuming that 1I’s shape can be approximated by an ellipsoid, the amplitude constraint implies that 1I has an axial ratio of 3.5 to 10.3, which is strikingly elongated. Assuming that 1I is rotating above its critical breakup limit, our results are compatible with 1I having modest cohesive strength and with it having obtained its elongated shape during a tidal distortion event before being ejected from its home system.' author: - 'Bryce T. Bolin' - 'Harold A. Weaver' - 'Yanga R. Fernandez' - 'Carey M. Lisse' - Daniela Huppenkothen - 'R. Lynne Jones' - Mario Jurić - Joachim Moeyens - 'Charles A. Schambeau' - 'Colin. T. Slater' - Željko Ivezić - 'Andrew J.
Connolly' title: 'APO Time Resolved Color Photometry of Highly-Elongated Interstellar Object [[1I/‘OUMUAMUA]{}]{}' --- Introduction {#s.Introduction} ============ The discovery and characterization of protoplanetary disks have provided ample observational evidence that icy comet belts and rocky asteroid belts exist in other planetary systems [[[*e.g.*]{}]{} @Lisse2007; @Oberg2015; @Nomura2016; @Lisse2017]. However, these observations have consisted of distant collections of millions of objects spanning large ranges of temperature, astrocentric distance and composition. Until now, it has been impossible to bring the level of detailed analysis possible for our own local small body populations to the large, but unresolved, groups of comets and asteroids in exoplanetary disks. The observation and discovery of interstellar objects have been considered before [@Cook2016; @Engelhardt2017], but the apparition of [[1I/‘Oumuamua ]{}]{}is the first opportunity to study up close an asteroid-like object that formed outside of the Solar System. This is a unique opportunity to measure the basic properties (size, shape, rotation rate, color) of a small body originating in another planetary system, and to compare it directly to the properties of cometary nuclei and asteroids in our own. Such measurements may shed light on how and where 1I formed within its planetary system, as well as provide a basis for comparison to potential Solar System analogs. In this work, we describe APO/ARCTIC imaging photometry in three bands ($g$, $r$ and $i$) taken to meet three scientific goals: (a) measure the color of the object’s surface, to compare with our own small body populations; (b) perform a deep search for cometary activity in the form of an extended coma; and (c) constrain the object’s rotation period to make an initial assessment of structural integrity.
Observations {#s.Observations} ============ Imaging observations of 1I were acquired on 2017 October 29 (UTC) using the ARCTIC large format CCD camera [@Huehnerhoff2016] on the Apache Point Observatory’s 3.5m telescope. 1I was at that time at a 0.53 au geocentric distance, 1.46 au from the Sun and at a phase angle of 23.8$^{{^{\circ}}}$. The camera was used in full frame, quad amplifier readout, 2x2 binning mode with rotating SDSS $g$, $r$ and $i$ filters and a pixel scale of 0.22". The integration time on target for each frame was 180 sec, and 71 frames were acquired between 58055.1875 MJD (04:30 UT) and 58055.3611 MJD (08:40 UT). Bias frames were taken before observing the target, and instrument flat fields were obtained on the sky at the end of the night. Absolute calibration was obtained using nearby SDSS flux calibrators in the field. A similar observing strategy was used over the last 8 years in the SEPPCON distant cometary nucleus survey [@Fernandez2016]. The weather was photometric throughout the night and the seeing remained stable. Owing to its hyperbolic orbit, the object was fading rapidly in brightness after its discovery on 2017 October 18 and was observed as soon as possible with APO Director’s Discretionary time while within $\sim$0.5 au of the Earth. The main observing sequence began with two $g$ and one $r$ exposures, followed by 30 exposures taken in the following sequence: two $g$, two $r$ and one $i$, repeating six times. Two additional $r$ and one $g$ exposure were taken at the end of the 30 exposure $g$, $r$ and $i$ observing sequence. By the end of the main observing sequence, 15 $g$, 15 $r$ and 6 $i$ exposures had been obtained, for a total of 36 exposures. We used non-sidereal guiding matched to the rate of 1I, causing the background stars to trail (Fig. \[fig.field\]). The motion of 1I on the sky avoided blending with the trailed field stars, and its position within the frame was arranged to avoid cosmetic defects on the chip.
The fields centered on the sky position of 1I contained sufficient bright SDSS standard stars for photometric calibration. ![Mosaic of $g$, $r$ and $i$ images. The top and center panels are median stacks of the 15 180 s $g$ and $r$ exposures, respectively; the detection of 1I in the $g$ stack is low SNR and more diffuse than the detection in the $r$ stack. The bottom panel is a median stack of the 6 180 s exposures in the $i$ filter.[]{data-label="fig.field"}](g.png "fig:") The colors of [[1I/‘OUMUAMUA]{}]{} {#s.photometry} ================================== ![Measured $g-r$ vs. $r-i$ colors of [[1I/‘Oumuamua ]{}]{}in context with moving objects observed with SDSS [@Ivezic2001; @Juric2002]. Some data points’ error bars are smaller than the plotting symbol. Colors derived from detections in the SDSS Moving Object Catalog (MOC) [@Ivezic2002] with a corresponding D or P, S, or C Bus-DeMeo taxonomic classification [@DeMeo2013] and $g-r$ and $r-i$ photometric errors smaller than 0.1 magnitudes are shown in the background contours; the illustrated contour intervals enclose 80, 90 and 95% of the objects in each class. Trans-Neptunian Objects (TNOs) generally move too slowly to be identified in the MOC; however, @Ofek2012 cross-matched orbits of known (at the time) TNOs with reported photometry from SDSS Data Release 7. Colors of these objects are shown, with photometric errors, as red triangles. Comets also do not show up in the SDSS MOC, but @Solontoi2012 searched for comets in SDSS catalogs using cuts on the catalogs directly and by cross-matching against known objects. Colors and photometric error bars of the resulting sample are shown with blue circles. Our measured $g-r$ and $r-i$ colors and photometric errors of 1I are shown by the black star; colors from @Masiero2017 and @Ye2017 are included as a green circle and gray square, respectively.[]{data-label="fig:colors"}](colors.png "fig:") The position of [[1I/‘Oumuamua ]{}]{}in our field, and the input rates used to track the object, were nearly exact despite its very high apparent angular rate of motion ($\sim$3'/h), implying that the ephemeris solution we used was accurate. We did report astrometric details of our observations to the Minor Planet Center to help refine the orbit further [@Weaver2017]. To measure colors, individual frames in our data set were bias subtracted and flat-fielded before being stacked in a robust average. Statistical outlier pixels were removed from the average stack of frames. The frames were stacked in two sets: one set centered on the motion of 1I, and the other stacked sidereally. All 15 $g$ frames were stacked to create combined 1I-centered and star-centered images with an equivalent exposure time of 2700 s. All 6 $i$ frames were stacked into a single exposure with the equivalent of 1080 s of exposure time. Only the first 15 $r$ frames, taken at approximately the same time as the $g$ and $i$ frames, were stacked for the purpose of comparing the photometry of the $r$ band 1I detection with the $g$ and $i$ band detections. The 15 $g$, 15 $r$ and 6 $i$ frames were taken between 4.6 and 6.5 UTC, so they should have covered the same part of the rotation phase of 1I, eliminating any differences in brightness between the color detections due to rotational change in brightness.
Between 6.5 and 8.6 UTC, only $r$ exposures were taken. 1I was brighter during this time compared to earlier in the night, so frames were stacked in shorter sequences of 2-6 frames, as appropriate to reach a SNR $\gtrsim$ 10. Aperture photometry was applied to the detections in the $g$, $r$ and $i$ frames. An aperture radius of 1.1" with a sky annulus between 3.3" and 4.4" was used to measure the flux. An aperture radius of 6.6" and a sky annulus between 8.8" and 10.0" was used for the standard stars. The median sky background in the sky annulus was subtracted from the aperture flux in both the non-sidereally and sidereally stacked frames to minimize the potential effect of artifacts on the photometry. The SDSS solar analogue star located at RA 23:48:32.355, $\delta$ +05:11:37.45 with $g$ = 16.86, $r$ = 16.41 and $i$ = 16.22 was used to calibrate the photometry in the $g$ and $i$ average stacks, and in the average stack corresponding to the first 36 $r$ frames. The difference in air-mass between frames in the $g$, $r$ and $i$ average stacks used to calculate colors was only $\sim$10$\%$. Following the 36th $r$ frame, additional SDSS catalogue standard stars were used, as the telescope’s tracking of 1I eventually took the solar analogue star out of the imager’s field of view. $g$, $r$ and $i$ magnitudes were measured at the SNR $\gtrsim$ 5 level: $$\begin{aligned} g & = & 23.51 \pm 0.22 \\ r & = & 23.10 \pm 0.09 \\ i & = & 22.87 \pm 0.23\end{aligned}$$ A complete list of our photometric measurements is available in Table \[t.photometry\]. The photometric uncertainties are dominated by statistical photon noise because the effect of changing rotational brightness should have been averaged out, as the exposures in the different bands were taken at approximately the same time. The catalog magnitude uncertainty for the magnitude $\sim$16.5 SDSS standard stars is within 0.01 magnitudes.
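The aperture-plus-annulus photometry described above can be sketched in a few lines. This is a toy illustration on a synthetic frame with made-up pixel radii and counts, not the actual reduction pipeline:

```python
import numpy as np

def aperture_photometry(img, x0, y0, r_ap, r_in, r_out):
    """Sum source counts in a circular aperture after subtracting the
    median per-pixel sky level estimated in a surrounding annulus."""
    yy, xx = np.indices(img.shape)
    rr = np.hypot(xx - x0, yy - y0)
    ap = rr <= r_ap
    ann = (rr >= r_in) & (rr <= r_out)
    sky = np.median(img[ann])          # median sky in the annulus
    return img[ap].sum() - sky * ap.sum()

# synthetic frame: flat sky of 100 counts/pixel plus a boxy "source"
img = np.full((51, 51), 100.0)
img[23:28, 23:28] += 40.0              # 5x5 source, 40 counts/pixel above sky
flux = aperture_photometry(img, 25, 25, r_ap=5, r_in=15, r_out=20)
assert abs(flux - 25 * 40.0) < 1e-6    # recovers the 1000 source counts
# differential calibration against a standard star of known magnitude
# would then follow as m_target = m_std - 2.5*log10(flux_target/flux_std)
```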
  MJD           Filter   Total time (s)   m$_{apparent}$
  ------------- -------- ---------------- ------------------
  58055.23427   $g$      2700             23.51 $\pm$ 0.22
  58055.23432   $i$      1080             22.88 $\pm$ 0.23
  58055.23436   $r$      2700             23.12 $\pm$ 0.09
  58055.28729   $r$      1080             22.37 $\pm$ 0.11
  58055.29892   $r$      720              22.18 $\pm$ 0.11
  58055.30778   $r$      720              22.22 $\pm$ 0.11
  58055.31447   $r$      360              22.37 $\pm$ 0.07
  58055.31923   $r$      360              22.64 $\pm$ 0.08
  58055.32369   $r$      360              22.66 $\pm$ 0.09
  58055.32852   $r$      360              22.44 $\pm$ 0.07
  58055.33295   $r$      360              22.55 $\pm$ 0.07
  58055.33737   $r$      360              22.73 $\pm$ 0.07
  58055.34395   $r$      720              23.12 $\pm$ 0.08
  58055.35438   $r$      900              23.46 $\pm$ 0.11

  : Photometry[]{data-label="t.photometry"}

Our measured colors, $$\begin{aligned} g - r &=& 0.41 \pm 0.24 \\ r - i &=& 0.23 \pm 0.25 \end{aligned}$$ are consistent with the reported colors and the Palomar and William Herschel Telescope optical spectra from @Masiero2017, @Fitzsimmons2017 and @Ye2017. When compared to the colors of known objects in our Solar System (see Fig. \[fig:colors\]), 1I’s colors are consistent with rocky small bodies in our Solar System. The majority of $r$ band detections in the image stacks used in the lightcurve have an uncertainty of $<$0.1 mag, as seen in the top left panel of Fig. \[fig:sinusoid\]. The lightcurve of [[1I/‘OUMUAMUA]{}]{} {#s.period} ====================================== The data obtained in this paper do not allow for an unambiguous measurement of lightcurve amplitude and periodicity. We therefore added to our dataset the measurements reported by @Knight2017 (henceforth referred to as the ’DCT dataset’). Expected secular changes in the magnitude were removed prior to fitting the data by assuming an inverse-square dependence on the distances from the Earth and Sun, and by assuming a linear phase function with slope 0.02 mag deg$^{-1}$. The combined data set is shown in Fig. \[fig:sinusoid\].
Even with the extended dataset, estimating the lightcurve period using the Lomb-Scargle periodogram [@Lomb1976; @Scargle1982] was inconclusive due to the sparse sampling pattern and the limited time baseline of the observations. This motivated us to apply a more sophisticated method: a direct Bayesian approach to model the observed lightcurve and estimate the period and amplitude of the periodic variation. Simple Sinusoidal Model {#s.sinusoidal} ----------------------- We begin by modeling the lightcurve with a simple sinusoidal signal of the form: $$\lambda_i = A \sin(2 \pi t_i/P + \phi) + b \; ,$$ where $\lambda_i$ is the model magnitude at time step $t_i$, $A$, $P$ and $\phi$ are the amplitude, period and phase of the sinusoid, respectively, and $b$ denotes the constant mean of the lightcurve. This sinusoidal model is equivalent in concept to the generalized Lomb-Scargle (LS) periodogram [@Lomb1976; @Scargle1982], but the difference is that the LS periodogram assumes a well-sampled lightcurve, which cannot be guaranteed here (for more details, see @ivezic2014statistics). We model the data using a Gaussian likelihood and choose a flat prior on the period between 1 and 24 h, consistent with periods observed for similar sources known in the Solar System [@Pravec2002]. We fit a simple sinusoidal model with the expectation that asteroids with significant elongation, as will be discussed for 1I in Section \[s.results\], have double-peaked rotation curves [@Harris2014], so that the true rotation period is double that of the simple sinusoidal model. We choose a flat prior for $b$ between 20 and 25 magnitudes, and an exponential prior for the logarithm of the amplitude between $-20$ and $20$. For the phase $\phi$, we use a Von Mises distribution, as appropriate for angles, in order to incorporate the phase-wrapping in the parameters correctly, with a scale parameter $\kappa = 0.1$ and a mean of $\mu=0$, corresponding to a fairly weak prior.
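As a concrete illustration of this sinusoidal model (not the paper's actual emcee analysis), the sketch below fits $\lambda_i = A\sin(2\pi t_i/P + \phi) + b$ to synthetic photometry by a brute-force period scan, solving the remaining parameters by linear least squares at each trial period; all numbers here are made up:

```python
import numpy as np

def fit_sinusoid(t, mag, periods):
    """For each trial period P, solve the linearized model
    mag ~ A1*sin(2*pi*t/P) + A2*cos(2*pi*t/P) + b  by least squares
    (equivalent to A, phi via A = hypot(A1, A2)), and return the
    period and amplitude minimizing the residual sum of squares."""
    best = None
    for P in periods:
        w = 2 * np.pi * t / P
        X = np.column_stack([np.sin(w), np.cos(w), np.ones_like(t)])
        coef = np.linalg.lstsq(X, mag, rcond=None)[0]
        rss = ((mag - X @ coef) ** 2).sum()
        if best is None or rss < best[0]:
            best = (rss, P, np.hypot(coef[0], coef[1]))
    return best[1], best[2]

# synthetic lightcurve mimicking the quoted P = 4.07 h, A = 0.64 mag
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 30, 120))                     # hours
mag = 22.8 + 0.64 * np.sin(2 * np.pi * t / 4.07) + rng.normal(0, 0.05, t.size)
P, A = fit_sinusoid(t, mag, np.linspace(3.5, 4.5, 2001))
assert abs(P - 4.07) < 0.02 and abs(A - 0.64) < 0.05
```

Unlike this point estimate, the Bayesian treatment in the text also yields credible intervals for all parameters.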
We sampled the posterior distribution of the parameters using Markov Chain Monte Carlo (MCMC) as implemented in the *Python* package *emcee* [@ForemanMackey2013]. This reveals well-constrained, nearly Gaussian distributions for all relevant parameters. We summarize the marginalized posterior distributions in terms of their posterior means as well as the $0.16$ and $0.84$ percentiles, corresponding to $1\sigma$ credible intervals. These are: $$\begin{aligned} P_{\rm sin\, model} & = & 4.07 \pm 0.01\, {\rm hours} \\ A_{\rm sin\, model} & = & 0.64 \pm 0.05\, {\rm mag}\end{aligned}$$ for the period and the amplitude, respectively. In Fig. \[fig:sinusoid\], we show the observed lightcurve along with models drawn from the posterior distribution of the parameters. In particular, the sinusoidal model slightly underestimates the minimum brightness in the DCT data set, as seen in the right panel of Fig. \[fig:sinusoid\]. This is likely due to deviations from the sinusoidal shape, which compel the model to fit the wings adequately rather than the peak. Gaussian Process Model {#s.gaussian} ---------------------- Figure \[fig:sinusoid\] indicates that the strictly sinusoidal model is too simplistic to adequately model the more complex lightcurve shape of the object. We therefore turn to a more complex model that, while still periodic, allows for non-sinusoidal as well as double-peaked lightcurve shapes. In short, instead of modelling the lightcurve directly as above, we model the *covariance* between data points, a method commonly referred to as Gaussian Processes (GPs; see @rasmussen2006gaussian for a pedagogical introduction). This approach has recently been successfully deployed in a range of astronomical applications [e.g., @Angus2017; @Jones2017]. The covariance matrix between data points is modelled by a so-called covariance function or kernel.
Different choices are appropriate for different applications, and here we choose a strictly periodic kernel of the following form [@Mackay1998]: $$k(t_i, t_j) = C \exp{\left( -\frac{\sin^2{(\pi |t_i - t_j|/P)}}{d^2} \right)}$$ for time stamps $t_i$ and $t_j$. In this framework, the amplitude $C$ corresponds to the amplitude of the covariance between data points and is thus not comparable to the amplitude in the sinusoidal model above. The period $P$, on the other hand, retains exactly the same meaning. The model also gains an additional parameter $d$ describing the length scale of variations within a single period. It is defined with respect to the period, with $d \gg P$ leading to sinusoidal variations, whereas increasingly small values result in increasingly complex harmonic content within each period. We use a Gaussian Process, as implemented in the Python package *george* [@george], with the covariance function defined above to model the combined DCT and APO data sets. For the period, we use the same prior as for the sinusoidal model, but assume priors on the logarithms of the amplitude ($-100 < \log(C) < 100$) and of the length scale of within-period variations, $\Gamma = 1/d^2$ ($-20 < \log(\Gamma) < 20$). As before, we use *emcee* to draw MCMC samples from the posterior probability. In Fig. \[fig:gp\], we show the posterior distributions for the period, amplitude and $\Gamma$ parameter. The marginalized posterior probability distribution for the period is in broad agreement with the sinusoidal model at $P$ = 4.07 h. We inferred what the expected profile would look like if the period were twice that inferred by both the sinusoidal and Gaussian Process models, in order to guide additional observations of 1I, either improved photometry from existing observations or future observations before the object becomes too faint as it leaves the Solar System.
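A minimal numpy sketch of this periodic kernel (written with the conventional negative exponent, and not using the *george* implementation employed in the paper) makes its key properties explicit:

```python
import numpy as np

def periodic_kernel(t1, t2, C=1.0, P=4.07, d=1.0):
    """Strictly periodic covariance k = C * exp(-sin^2(pi*|dt|/P) / d^2)."""
    dt = np.abs(np.subtract.outer(t1, t2))
    return C * np.exp(-np.sin(np.pi * dt / P) ** 2 / d ** 2)

t = np.linspace(0, 10, 8)
K = periodic_kernel(t, t)
# the kernel is symmetric, maximal (= C) on the diagonal, exactly periodic
# in the lag, and positive semidefinite as a covariance matrix must be:
assert np.allclose(K, K.T)
assert np.allclose(np.diag(K), 1.0)
assert np.isclose(periodic_kernel(0.0, 4.07).item(), 1.0)
assert np.linalg.eigvalsh(K).min() > -1e-8
```

Small values of $d$ shrink the off-diagonal covariance within a period, which is what admits the more complex harmonic content described above.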
We took the parameters with the highest posterior probability, doubled the period, and computed the $1\sigma$ credible intervals for the model lightcurve admitted by this particular Gaussian Process with these parameters (Fig. \[fig:gp\], lower panel). This figure shows that if a double-peaked profile were present, roughly half of it would be well constrained by current observations (indicated by narrow credible intervals). The second peak of the profile, however, is considerably less well constrained due to the lack of data points. Observations in that part of phase space, in particular near the minimum and maximum of that second peak, could help pin down the exact shape. We have made the data and analysis tools used to arrive at our results available online[^1]. Results and discussion {#s.results} ====================== 1I was challenging to characterize with the Apache Point Observatory due to its faintness. At first, it was impossible to locate 1I by eye in our single 180 s integrations, but as the night progressed it became distinct, indicating a significant brightening in less than 4 h. A similar behavior was reported by @Knight2017 in observations from the DCT 4m on the next night (Fig. 2). Combining the two datasets, we find a most likely lightcurve period of 4.07 h, as described in Section \[s.period\]; phasing the data to this period produces a well-structured, near-sinusoidal lightcurve, as seen in the bottom panel of Fig. \[fig:sinusoid\]. The peak-to-trough amplitude of the lightcurve, almost 2 magnitudes, is unusual compared to the population of asteroids in the Solar System, which usually have peak-to-trough amplitudes of $<$0.75 mag [@Warner2009]. We estimate the size of 1I from a clean set of 4 $r$ band photometric images taken in the middle of our run at around 07:53, when the telescope pointing and focus had stabilized.
Using the $r$ = 15.23 magnitude reference star UCAC4 ID 477-131394 with 4.08 $\times 10^6$ DN sky-subtracted counts, we find that our 2.42 $\times 10^{3}$ DN sky-subtracted counts from 1I in 180 s translate into a 22.44 $r$ magnitude object at a heliocentric distance of 1.458 au and geocentric distance of 0.534 au. Assuming the $r$ band zero-point to be 3.631 $\times 10^3$ Jy, this yields an in-band flux density of 3.03 $\times10^{-17}$ W m$^{-2}$ ${\mathrm{\mu m}}^{-1}$. Using a solar flux density of 1.90 $\times 10^3$ W m$^{-2}$ ${\mathrm{\mu m}}^{-1}$ at the $r$ band central wavelength of 0.624 ${\mathrm{\mu m}}$, we find an effective radius of 0.130 km for a comet-like surface albedo of 0.03. This size estimate is likely an upper limit because it is based on data taken near 1I’s peak in brightness. 1I remained a point source throughout the night, even in a stacked image of all the APO $r$ band data. This is unlike many of our distant comet program targets [@Fernandez2016], which we have over 10 years’ worth of experience observing for size, rotation rate, and signs of activity. The object was well-detected in multiple 180 s $r$ band images, but it took all 15 of our $g$ band 180 s exposures and all 6 of our $i$ band 180 s exposures to obtain a detection at $\sim$5 SNR. As discussed above and shown in Fig. \[fig:colors\], the colors of 1I are consistent with an origin in the inner part of its home planetary system rather than in the outer part, where comets originate. The peak-to-trough amplitude of our lightcurve, determined by the difference between the minimum and maximum brightness [@Barucci1982] of 1I, is $A_{\rm peak,\rm difference}$ = 2.05 $\pm$ 0.53 mag, as seen in Fig. \[fig:sinusoid\]. The corresponding value from the sinusoidal model is $A_{\rm peak,\rm sin\, model}$ = 2$A_{\rm sin\, model}$ = 1.28 $\pm$ 0.1 mag.
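The magnitude-to-flux conversion above can be reproduced in a few lines. This is a rough sketch using the constants quoted in the text; the small offset from the quoted $3.03\times10^{-17}$ W m$^{-2}$ ${\mathrm{\mu m}}^{-1}$ comes from rounding of the adopted constants:

```python
ZP_JY = 3631.0    # assumed r band zero point, Jy (1 Jy = 1e-26 W m^-2 Hz^-1)
LAM = 0.624e-6    # r band central wavelength, m
C = 2.998e8       # speed of light, m/s

r_mag = 22.44
f_nu = ZP_JY * 1e-26 * 10 ** (-0.4 * r_mag)   # W m^-2 Hz^-1
f_lam = f_nu * C / LAM ** 2 * 1e-6            # W m^-2 um^-1 (per-wavelength)
assert 2.8e-17 < f_lam < 3.2e-17              # ~ the quoted 3.03e-17
```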
The angle between the observer and the Sun as seen from the asteroid, or the phase angle, $\alpha$, can affect the measured lightcurve peak-to-trough amplitude. @Zappala1990a found that peak-to-trough amplitudes increase with the phase angle $\alpha$ according to $$\label{eq.amphase} \Delta m (\alpha = 0^{{^{\circ}}}) = \frac{\Delta m(\alpha)}{1 + s\alpha}$$ where $s$ is the slope of the increase in peak-to-trough magnitude with $\alpha$. @Zappala1990a and @Gutierrez2006 found that $s$ varies with taxonomic type and with asteroid surface topography. We adopt a value of $s$ = 0.015 mag deg$^{-1}$ for primitive asteroids, as described in @Zappala1990a and as expected for 1I, but note that a different value of $s$ would result in a different value of the peak-to-trough magnitude. $\alpha$ at the time of the APO and DCT observations was 24$^{{^{\circ}}}$, which according to Eq. \[eq.amphase\] corrects $A_{\rm peak,\rm difference}$ and $A_{\rm peak,\rm sin\, model}$ by a factor of 0.73, so that $A_{\rm peak,\rm difference}$ $\simeq$ 1.51 and $A_{\rm peak,\rm sin\, model}$ $\simeq$ 0.94. Asteroids are assumed in the general case to have a simple triaxial prolate shape with an axial ratio $a$:$b$:$c$, where $b$ $\geq$ $a$ $\geq$ $c$ [@Binzel1989]. As a result, the aspect angle between the observer’s line of sight and the rotational pole of the asteroid, $\theta$, can modify the measured peak-to-trough amplitude, as the rotational cross section with respect to the observer increases or decreases for different $\theta$, $a$, $b$ and $c$ [@Barucci1982; @Thirouin2016]. We consider the possibility that we are observing 1I at some average angle $\theta$, and estimate the peak-to-trough magnitude that would be observed at $\theta$ = 90$^{{^{\circ}}}$.
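Numerically, the phase-angle correction of Eq. \[eq.amphase\] works out as follows (the correction factor $1/(1+s\alpha) \approx 0.735$ is quoted as 0.73 in the text):

```python
s = 0.015      # amplitude-phase slope for primitive asteroids, mag/deg
alpha = 24.0   # phase angle at the time of the APO and DCT observations, deg

factor = 1.0 / (1.0 + s * alpha)   # ~0.735
A_diff = 2.05 * factor             # corrected A_peak,difference
A_sin = 1.28 * factor              # corrected A_peak,sin model
assert round(A_diff, 2) == 1.51 and round(A_sin, 2) == 0.94
```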
From @Thirouin2016, the difference between the peak-to-trough magnitude observed at aspect angle $\theta$ and that observed at $\theta = 90^{{^{\circ}}}$, $\Delta m_{\rm diff} \; = \; \Delta m (\theta) - \Delta m (\theta = 90^{{^{\circ}}})$, as a function of $\theta$, $a$, $b$ and $c$ is $$\hspace*{-0.75cm} \label{eq.viewingmag} \Delta m_{\rm diff} = -1.25\mathrm{log}\left( \frac{b^2\cos^2\theta \; + \; c^2\sin^2\theta}{a^2\cos^2\theta \; + \; c^2\sin^2\theta} \right )$$ (note the sign: the amplitude is maximal for an equator-on view, so $\Delta m_{\rm diff} \leq 0$). Assuming $a$ = $c$, Eq. \[eq.viewingmag\] implies that $\Delta m$ will be at least $\sim$0.6 magnitudes fainter on average compared to $\Delta m(\theta= 90^{{^{\circ}}})$ under the assumptions $b/a$ $>$ 3 and $a$ = $c$. We can estimate upper limits for the peak-to-trough magnitudes at $\theta = 90^{{^{\circ}}}$ by re-calculating $A_{\rm peak,\rm difference}$ and $A_{\rm peak,\rm sin\, model}$, assuming that the data used for their calculation are representative of the average aspect angle and that $b/a$ $>$ 3 and $a$ = $c$. Using Eq. \[eq.viewingmag\], $$\begin{aligned} A_{\rm max,\rm difference} = A_{\rm peak,\rm difference} - \Delta m_{\rm diff} \\ A_{\rm max,\rm sin\, model} = A_{\rm peak,\rm sin\, model} - \Delta m_{\rm diff}\end{aligned}$$ which results in $A_{\rm max,\rm difference}$ = 2.11 $\pm$ 0.53 and $A_{\rm max,\rm sin\, model}$ = 1.54 $\pm$ 0.1.
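The $\sim$0.6 mag aspect-angle penalty can be checked directly. The sketch below assumes the sign convention $\Delta m_{\rm diff} \leq 0$ (consistent with the usage $A_{\rm max} = A_{\rm peak} - \Delta m_{\rm diff}$ above) and evaluates the expression at a representative aspect angle of 60$^{\circ}$:

```python
import math

def dm_diff(theta_deg, b_over_a, c_over_a=1.0):
    """Eq. (viewingmag) with a = 1: amplitude change relative to an
    equator-on (theta = 90 deg) view; <= 0 by construction."""
    th = math.radians(theta_deg)
    num = (b_over_a * math.cos(th)) ** 2 + (c_over_a * math.sin(th)) ** 2
    den = math.cos(th) ** 2 + (c_over_a * math.sin(th)) ** 2
    return -1.25 * math.log10(num / den)

# an equator-on view loses nothing; at theta = 60 deg (representative of
# an isotropic spin-pole distribution), b/a = 3 and a = c give ~0.6 mag
assert abs(dm_diff(90.0, 3.0)) < 1e-12
assert abs(dm_diff(60.0, 3.0) + 0.60) < 0.01
```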
Additionally, our conservative estimates of the contributions of the phase angle and the aspect angle to the measured peak-to-trough amplitude via Eqs. \[eq.amphase\] and \[eq.viewingmag\] may also result in differences between our measurements and those of @Meech2017. Assuming 1I is a prolate triaxial body with an axial ratio $a$:$b$:$c$, where $b \geq a \geq c$, and that the lightcurve variation in magnitude is wholly due to the changing projected surface area (consistent with the sinusoidal shape of our phased lightcurve), we obtain an upper limit of $b/a$ = 6.91 $\pm$ 3.41 from $b/a \; = \; 10^{0.4\Delta M}$ [@Binzel1989], where $\Delta M \; = \; A_{\rm max,\rm difference}$. A more conservative estimate of the upper limit on the peak-to-trough amplitude is given by using $A_{\rm max,\rm sin\, model}$ for $\Delta M$, resulting in $b/a$ = 4.13 $\pm$ 0.48. The uncertainty in $b/a$ = 6.91 $\pm$ 3.41, obtained using $\Delta M \; = \; A_{\rm max,\rm difference}$, is dominated by the uncertainty of the magnitude measurements, in contrast to $A_{\rm max,\rm sin\, model}$. The uncertainty of $A_{\rm max,\rm sin\, model}$ is determined by the spread of compatible values for the amplitude within the uncertainties of all data points in the lightcurve, and is probably more statistically robust than using the difference between the minimum and maximum brightness data points in the lightcurve. However, this must be tempered by the fact that the true peak-to-trough amplitude may be underestimated due to the sparseness of data points, as described in Section \[s.period\]. Therefore, we assume that the true axial ratio $b/a$ lies between 3.5 $\lesssim$ $b/a$ $\lesssim$ 10.3. These limits are based on generalized assumptions, and more accurately determining the true value of $b/a$ would require additional observations at different $\theta$ and at rotational phases not covered by our observations, as discussed in Section \[s.gaussian\].
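The amplitude-to-axial-ratio conversion is a one-liner; note that the quoted 6.91 presumably comes from an unrounded amplitude, while the rounded 2.11 mag gives 6.98:

```python
# axial ratio from lightcurve amplitude: b/a = 10**(0.4 * dM)  [@Binzel1989]
def axial_ratio(delta_m):
    return 10 ** (0.4 * delta_m)

assert round(axial_ratio(1.54), 2) == 4.13   # conservative (sin model) limit
assert abs(axial_ratio(2.11) - 6.98) < 0.01  # quoted as 6.91 in the text
```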
This large value of $A_{\rm peak,\rm difference}$ or $A_{\rm peak,\rm sin\, model}$ suggests that the modulation seen in the lightcurve is due to the rotation of an elongated triaxial body dominated by the second harmonic, resulting in a bimodal, double-peaked lightcurve [@Harris2014; @Butkiewicz2017]. Thus, we obtain a double-peaked period of $P_{\rm rotation}$ = 2$P_{\rm sin\, model}$ = 8.14 $\pm$ 0.02 h. Non-triaxial asteroid shapes can result in lightcurves exceeding two peaks per rotation period, but this case is ruled out as unlikely because the large amplitude of the lightcurve strongly favors an elongated object [@Harris2014]. An alternative explanation is that the lightcurve variation is due to surface variations in the reflectivity of the asteroid. Surface variations result in single-peaked lightcurves [@Barucci1989], but the similarity of the colors and spectra of 1I obtained in observations taken at different times [@Masiero2017; @Fitzsimmons2017; @Ye2017] does not suggest significant variation across the object’s surface. Asteroid elongations with 3.5 $\lesssim$ $b/a$ $\lesssim$ 10.3 are uncommon in the Solar System, where the majority of asteroids have $b/a$ $<$ 2.0 [@Cibulkova2016; @Cibulkova2017]. Only a few known Solar System asteroids have $b/a$ $>$ 4 (e.g., the asteroid Elachi with $b/a$ $\sim$ 4, comparable to our lower limit on $b/a$ for 1I; @Warner2011). Smaller asteroids have been observed to have statistically higher elongations than larger asteroids [@Pravec2008; @Cibulkova2016]. Smaller asteroids and comets have weaker surface gravity and may be under large structural stress imposed by their rotation, resulting in plasticity of their structure [@Harris09; @Hirabayashi2014a; @Hirabayashi2015], or may become reconfigured after fracturing due to rotational stress [@Hirabayashi2016].
To examine the possibility that rotational stress might explain the large elongation of 1I, we examine the existing evidence for rotational breakup of asteroids in the Solar System. Asteroids in the Solar System have been observed to undergo rotational breakup into fragments, such as active asteroids spun up by thermal recoil forces [@Rubincam2000; @Jewitt2015a]. Additionally, active comets and asteroids can become spun up due to the sublimation of volatiles [@Samarasinha2013; @Steckloff2016]. The critical breakup period for a strengthless rotating ellipsoid with axial ratio $b/a$ is given by [@Jewitt2017a; @Bannister2017] $$\label{eq.critperiod} P_{critical \; period} \; = \; (b/a) \left ( \frac{3 \pi}{G \rho} \right )^{1/2}$$ where $\rho$ is the asteroid density and $G$ is the gravitational constant. Fig. \[fig:criticalperiod\] shows the value of $P_{critical \; period}$ in h for the values of $b/a$ allowed by our results and for $\rho$ spanning different Solar System asteroid taxonomic types: comets and D-types with 0.5-1.0 ${\,\mathrm{g \; cm^{-3}}}$, B and C-types with 1.2-1.4 ${\,\mathrm{g \; cm^{-3}}}$, S-types with 2.3 ${\,\mathrm{g \; cm^{-3}}}$, X-types with 2.7 ${\,\mathrm{g \; cm^{-3}}}$, rubble piles with 3.3 ${\,\mathrm{g \; cm^{-3}}}$ and M-types with 4 ${\,\mathrm{g \; cm^{-3}}}$ [@Lisse1999; @Britt2002; @AHearn2005; @Fujiwara2006; @Carry2012]. As seen in Fig. \[fig:criticalperiod\], the observed $\sim$8 h rotational period of 1I is shorter than the critical breakup period over most of the $b/a$ vs. $\rho$ phase space covering typical asteroid densities for 3.5 $\lesssim$ $b/a$ $\lesssim$ 10.3. 1I would have to have $\rho$ $>$ 6 ${\,\mathrm{g \; cm^{-3}}}$ (i.e., approaching the density of pure iron, beyond the known range of $\rho$ for asteroids in the Solar System, @Carry2012) for $b/a$ $>$ 6 to be stable with a period of $\sim$8 h for zero cohesive strength.
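Eq. \[eq.critperiod\] can be evaluated directly to confirm the statements above (a minimal sketch in SI units):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def critical_period_h(b_over_a, rho_g_cm3):
    """Critical breakup period (hours) for a strengthless ellipsoid,
    P = (b/a) * sqrt(3*pi / (G*rho))  [Eq. critperiod]."""
    rho = rho_g_cm3 * 1000.0  # g/cm^3 -> kg/m^3
    return b_over_a * math.sqrt(3 * math.pi / (G * rho)) / 3600.0

# a b/a = 6 body needs rho ~ 6 g/cm^3 for the ~8.1 h spin to be critical...
assert abs(critical_period_h(6, 6.0) - 8.1) < 0.1
# ...while at a comet-like 1 g/cm^3 the same shape is unstable at 8.14 h
assert critical_period_h(6, 1.0) > 8.14
```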
Assuming 1I has 4 $<$ $b/a$ $<$ 5, near the lower limit of the possible $b/a$ values from our lightcurve analysis, rotational stability is compatible with 3 ${\,\mathrm{g \; cm^{-3}}}$ $<$ $\rho$ $<$ 4 ${\,\mathrm{g \; cm^{-3}}}$, a reasonable range for S and M type asteroids [@Fujiwara2006; @Carry2012]. 1I would not be stable even at the lower limit $b/a$ $\approx$ 3.5 if it had a density $<$ 2 ${\,\mathrm{g \; cm^{-3}}}$, such as found for Solar System C-type asteroids and comets, assuming it has no structural strength. Assuming zero cohesive strength, 1I would be rotating near its breakup limit for $b/a$ $>$ 5 if it has $\rho$ $\lesssim$ 4.0 ${\,\mathrm{g \; cm^{-3}}}$, a reasonable density for most asteroid types in the solar system, as indicated by Eq. \[eq.critperiod\], and might then be shedding material visible as a coma. Rotational breakup has been observed for Solar System asteroids through the resulting activity, as in the cases of P/2013 P5 and P/2013 R3 [@Bolin2013; @Hill2013b; @Jewitt2015b; @Jewitt2017; @Vokrouhlicky2017a]. However, deep stacking of detections of 1I in our own $r$ images, as well as in the images of others, has revealed no detectable presence of a coma [@Knight2017; @Williams2017]. This suggests that 1I may actually have cohesive strength keeping it from disrupting or shedding material that would be detectable as a coma.
If the lower limit on the cohesive strength, $Y$, at the equatorial surface of an asteroid or comet is given by $$\label{eqn.cohesivestrength} Y \; = \; 2 \pi^2 P^{-2} \; b^2 \; \rho$$ [@Lisse1999], then the minimum cohesive strength required to stabilize an object with $b$ = 0.48 km (assuming a mean radius of 0.18 km and $b/a$ = 7), 1 ${\,\mathrm{g \; cm^{-3}}}$ $<$ $\rho$ $<$ 4.0 ${\,\mathrm{g \; cm^{-3}}}$, and a spin period of 8.14 h is only 5 Pa $\lesssim$ $Y$ $\lesssim$ 20 Pa, comparable to the bulk strength of comet nuclei or the cohesive strength of extremely weak materials like talcum powder or beach sand held together mainly by inter-grain friction [@Sanchez2014; @Kokotanekova2017]. Thus even a real rubble pile or comet nucleus, influenced by inter-block frictional forces, could be stable given our measurements. The implication is that either 1I has a higher $\rho$ than is thought possible for asteroids in the solar system and is strengthless, or it has non-zero cohesive strength for 3.5 $<$ $b/a$ $<$ 10.3 and 1.0 ${\,\mathrm{g \; cm^{-3}}}$ $<$ $\rho$ $<$ 7.0 ${\,\mathrm{g \; cm^{-3}}}$, as seen in Fig. \[fig:cohesivestrength\]. The apparent large axial ratio of 1I thus does not seem to have originated from rotational disruption, given its present $\sim$8 h rotation period, although it has been shown that asteroids can exhibit structural plasticity under rotational stress without disrupting, as may be the case for the asteroid Cleopatra [@Hirabayashi2014a]. Thermal recoil forces such as the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect [@Rubincam2000; @Bottke2006; @Vokrouhlicky2015] could have modified its rotation rate to structurally altering or disruptive rotation periods while 1I was in its home system.
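A direct evaluation of Eq. \[eqn.cohesivestrength\] with the figures quoted above (a sketch; the unit handling is ours):

```python
import math

def cohesive_strength_pa(P_hours, b_km, rho_g_cm3):
    # Eq. [eqn.cohesivestrength]: Y = 2 pi^2 P^-2 b^2 rho, evaluated in SI
    P = P_hours * 3600.0       # spin period in s
    b = b_km * 1.0e3           # semi-axis in m
    rho = rho_g_cm3 * 1.0e3    # density in kg m^-3
    return 2.0 * math.pi ** 2 * b ** 2 * rho / P ** 2

# b = 0.48 km, P = 8.14 h, density range 1-4 g cm^-3
Y_lo = cohesive_strength_pa(8.14, 0.48, 1.0)
Y_hi = cohesive_strength_pa(8.14, 0.48, 4.0)
print(round(Y_lo, 1))  # ~5 Pa
print(round(Y_hi, 1))  # ~21 Pa
```

This reproduces the 5 Pa $\lesssim$ $Y$ $\lesssim$ 20 Pa range quoted above to within rounding.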
YORP modification of the spin rate would have had to occur while 1I was still close to its host star: YORP modification of an asteroid’s spin rate is thermally driven, has a greater effect on asteroids closer to their star, and becomes ineffective at heliocentric distances exceeding $\sim$10 au for 100 m scale asteroids [@Vokrouhlicky2006b; @Vokrouhlicky2007]. Perhaps the fact that 1I has a shape potentially produced by YORP, with its spin period later slowed, provides additional evidence, when combined with its colors and spectra, that 1I did not originate too far from its host star before being ejected from its home system. Another explanation for the elongated shape of 1I is that it acquired this shape when it was ejected from its home system during a close encounter with a planet or a star. It is known that asteroids and comets can be ejected from the solar system during close encounters with planets [@Granvik2017]. During these close encounters, objects can pass within the Roche limit of the planet, subjecting their structure to tidal forces. We can eliminate the possibility that 1I experienced tidal disruption during its passage through the solar system, because it came no closer than 10 times the Roche limit distance from the Sun during its perihelion passage on 2017 September 09. It has been shown that tidal forces can completely disrupt the structure of comets and asteroids, as in the complete disruption of Comet Shoemaker-Levy 9 during its close encounter with Jupiter [@Shoemaker1995; @Asphaug1996] and the tidal distortion of the asteroid Geographos during close encounters with the Earth [@Bottke1999; @Durech2008; @Rozitis2014a]. Modeling of asteroids and comets under the stress of tidal forces reveals that one result of a tidal encounter is that their structures become elongated [@Solem1996; @Richardson1998; @Walsh2015].
Furthermore, in the case of complete tidal disruption of an asteroid or comet, the fragmentation of the parent body can result in fragments with elongated shapes [@Walsh2006; @Richardson2009]. Therefore, 1I could have attained its elongated structure either by experiencing tidal distortion itself or by being produced as a fragment of a larger body undergoing complete tidal disruption. We can examine the possibility that the highly elongated shape of 1I, with cohesive strength, was shaped by tidal forces during a close encounter with a gas giant planet. The scaling for the tidal disruption distance of a comet-like body with an assumed cohesive strength $<$ 65 Pa, consistent with the range of possible cohesive strengths for a body with 4 $<$ $b/a$ $<$ 7 and 0.7 ${\,\mathrm{g \; cm^{-3}}}$ $<$ $\rho$ $<$ 7.0 ${\,\mathrm{g \; cm^{-3}}}$ as seen in Fig. \[fig:cohesivestrength\], is, from @Asphaug1996, $$1 < \; \frac{d}{R} \; < \; \left ( \frac{\rho_{1\mathrm{I}}}{\rho_{planet}}\right )^{-1/3}$$ where $d$ and $R$ are the close-passage distance and the planet radius. From this limit we can predict how close 1I would have had to pass by a gas giant to be tidally disrupted. Assuming $\rho_{planet}$ = 1.33 ${\,\mathrm{g \; cm^{-3}}}$, the density of Jupiter [@Simon1994], 1I would have to have $\rho$ $<$ 1.3 ${\,\mathrm{g \; cm^{-3}}}$ for a close enough encounter with the gas giant planet to be tidally disrupted while $d/R$ $>$ 1. A $\rho$ $\simeq$ 1.0 to 1.4 ${\,\mathrm{g \; cm^{-3}}}$ is possible for C and D type asteroids in the solar system [@Carry2012], and a cohesive strength $<$ 65 Pa is allowed by the range of $b/a$ described by our data; therefore, tidal disruption is a possible mechanism for the formation of 1I’s shape. Conclusion {#s.discussion} ========== We observed the interstellar asteroidal object 1I/‘Oumuamua from the Apache Point Observatory on 29 Oct 2017 from 04:28 to 08:40 UTC.
Three-color photometry and time-domain observations were obtained in the $g$, $r$ and $i$ bands when the object was as bright as magnitude 22 and as faint as magnitude 23. An unresolved object with reddish color and variable brightness was found. The results of our observations are consistent with the point-source nature and reddish color found by other observers [@Masiero2017; @Fitzsimmons2017; @Ye2017]. We conclude that the high elongation of 1I is possibly the result of tidal distortion or of structural plasticity due to rotational stress. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank the reviewer of our manuscript, Matthew Knight, for providing a thorough review and helpful suggestions for improving the quality of the manuscript. Our work is based on observations obtained with the Apache Point Observatory 3.5-meter telescope, which is owned and operated by the Astrophysical Research Consortium. We thank the Director (Nancy Chanover) and Deputy Director (Ben Williams) of the Astrophysical Research Consortium (ARC) 3.5m telescope at Apache Point Observatory for their enthusiastic and timely support of our Director’s Discretionary Time (DDT) proposals. We also thank Russet McMillan and the rest of the APO technical staff for their assistance in performing the observations just two days after our DDT proposals were submitted. We thank Ed Lu, Sarah Tuttle, and Ben Weaver for fruitful discussions and advice that made this paper possible. BTB would like to acknowledge the generous support of the B612 Foundation and its Asteroid Institute program. MJ and CTS wish to acknowledge the support of the Washington Research Foundation Data Science Term Chair fund and the University of Washington Provost’s Initiative in Data-Intensive Discovery. BTB, DH, RLJ, MJ, JM, MLG, CTS, ECB and AJC wish to acknowledge the support of the DIRAC (Data Intensive Research in Astronomy and Cosmology) Institute at the University of Washington.
Joachim Moeyens thanks the LSSTC Data Science Fellowship Program, his time as a Fellow has benefited this work. We would also like to thank Marco Delbó, Alan Fitzsimmons, Robert Jedicke and Alessandro Morbidelli for constructive feedback and discussion when planning this project. Funding for the creation and distribution of the SDSS Archive has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Korean Scientist Group, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. Funding for the Asteroid Institute program is provided by B612 Foundation, W.K. Bowes Jr. Foundation, P. Rawls Family Fund and two anonymous donors in addition to general support from the B612 Founding Circle (K. Algeri-Wong, B. Anders, G. Baehr, B. Burton, A. Carlson, D. Carlson, S. Cerf, V. Cerf, Y. Chapman, J. Chervenak, D. Corrigan, E. Corrigan, A. Denton, E. Dyson, A. Eustace, S. Galitsky, The Gillikin Family, E. Gillum, L. Girand, Glaser Progress Foundation, D. Glasgow, J. Grimm, S. Grimm, G. Gruener, V. K. Hsu $\&$ Sons Foundation Ltd., J. Huang, J. D. Jameson, J. Jameson, M. Jonsson Family Foundation, S. Jurvetson, D. Kaiser, S. Krausz, V. Lašas, J. Leszczenski, D. Liddle, S. Mak, G.McAdoo, S. McGregor, J. Mercer, M. Mullenweg, D. 
Murphy, P. Norvig, S. Pishevar, R. Quindlen, N. Ramsey, R. Rothrock, E. Sahakian, R. Schweickart, A. Slater, T. Trueman, F. B. Vaughn, R. C. Vaughn, B. Wheeler, Y. Wong, M. Wyndowe, plus six anonymous donors). natexlab\#1[\#1]{} , M. F., [Belton]{}, M. J. S., [Delamere]{}, W. A., [et al.]{} 2005, Science, 310, 258 , S., [Foreman-Mackey]{}, D., [Greengard]{}, L., [Hogg]{}, D. W., & [O’Neil]{}, M. 2014 , R., [Morton]{}, T., [Aigrain]{}, S., [Foreman-Mackey]{}, D., & [Rajpaul]{}, V. 2017, ArXiv e-prints, arXiv:1706.05459 , E., & [Benz]{}, W. 1996, , 121, 225 , [Robitaille]{}, T. P., [Tollerud]{}, E. J., [et al.]{} 2013, , 558, A33 , M. T., [Schwamb]{}, M. E., [Fraser]{}, W. C., [et al.]{} 2017, ArXiv e-prints, arXiv:1711.06214 , M. A., [Capria]{}, M. T., [Harris]{}, A. W., & [Fulchignoni]{}, M. 1989, , 78, 311 , M. A., & [Fulchignoni]{}, M. 1982, Moon and Planets, 27, 47 , R. P., [Farinella]{}, P., [Zappalà]{}, V., & [Cellino]{}, A. 1989, in Asteroids II, ed. R. P. [Binzel]{}, T. [Gehrels]{}, & M. S. [Matthews]{}, 416–441 Bolin, B. T., Weaver, H. A., Fernandez, Y. R. et al., doi:10.5281/zenodo.1068467 , B., [Denneau]{}, L., [Micheli]{}, M., [et al.]{} 2013, Central Bureau Electronic Telegrams, 3639 , Jr., W. F., [Richardson]{}, D. C., [Michel]{}, P., & [Love]{}, S. G. 1999, , 117, 1921 , Jr., W. F., [Vokrouhlick[ý]{}]{}, D., [Rubincam]{}, D. P., & [Nesvorn[ý]{}]{}, D. 2006, Annual Review of Earth and Planetary Sciences, 34, 157 , D. T., [Yeomans]{}, D., [Housen]{}, K., & [Consolmagno]{}, G. 2002, Asteroids III, 485 , M., [Kwiatkowski]{}, T., [Bartczak]{}, P., [Dudzi[ń]{}ski]{}, G., & [Marciniak]{}, A. 2017, , 470, 1314 , B. 2012, , 73, 98 , H., [[Ď]{}urech]{}, J., [Vokrouhlick[ý]{}]{}, D., [Kaasalainen]{}, M., & [Oszkiewicz]{}, D. A. 2016, , 596, A57 , H., [Nortunen]{}, H., [[Ď]{}urech]{}, J., [et al.]{} 2017, ArXiv e-prints, arXiv:1709.05640 , N. V., [Ragozzine]{}, D., [Granvik]{}, M., & [Stephens]{}, D. C. 2016, , 825, 51 , F. E., & [Carry]{}, B. 
2013, , 226, 723 , J., [Vokrouhlick[ý]{}]{}, D., [Kaasalainen]{}, M., [et al.]{} 2008, , 489, L25 , T., [Jedicke]{}, R., [Vere[š]{}]{}, P., [et al.]{} 2017, , 153, 133 , Y. R., [Weaver]{}, H. A., [Lisse]{}, C. M., [et al.]{} 2016, in American Astronomical Society Meeting Abstracts, Vol. 227, American Astronomical Society Meeting Abstracts, 141.22 , A., [Hyland]{}, M., [Jedicke]{}, R., [Snodgrass]{}, C., & [Yang]{}, B. 2017, Central Bureau Electronic Telegrams, 4450 Foreman-Mackey, D. 2016, The Journal of Open Source Software, 24, doi:10.21105/joss.00024 , D., [Hogg]{}, D. W., [Lang]{}, D., & [Goodman]{}, J. 2013, , 125, 306 , A., [Kawaguchi]{}, J., [Yeomans]{}, D. K., [et al.]{} 2006, Science, 312, 1330 , M., [Morbidelli]{}, A., [Vokrouhlick[ý]{}]{}, D., [et al.]{} 2017, , 598, A52 , P. J., [Davidsson]{}, B. J. R., [Ortiz]{}, J. L., [Rodrigo]{}, R., & [Vidal-Nu[ñ]{}ez]{}, M. J. 2006, , 454, 367 , A. W., [Fahnestock]{}, E. G., & [Pravec]{}, P. 2009, Icarus, 199, 310 , A. W., [Pravec]{}, P., [Gal[á]{}d]{}, A., [et al.]{} 2014, , 235, 55 , R. E., [Bolin]{}, B., [Kleyna]{}, J., [et al.]{} 2013, Central Bureau Electronic Telegrams, 3658 , M. 2015, , 454, 2249 , M., & [Scheeres]{}, D. J. 2014, , 780, 160 , M., [Scheeres]{}, D. J., [Chesley]{}, S. R., [et al.]{} 2016, , 534, 352 , J., [Ketzeback]{}, W., [Bradley]{}, A., [et al.]{} 2016, in , Vol. 9908, Ground-based and Airborne Instrumentation for Astronomy VI, 99085H Ivezi[ć]{}, [Ž]{}., Connolly, A. J., VanderPlas, J. T., & Gray, A. 2014, Statistics, Data Mining, and Machine Learning in Astronomy: A Practical Python Guide for the Analysis of Survey Data (Princeton University Press) , [Ž]{}., [Tabachnik]{}, S., [Rafikov]{}, R., [et al.]{} 2001, , 122, 2749 , [Ž]{}., [Lupton]{}, R. H., [Juri[ć]{}]{}, M., [et al.]{} 2002, , 124, 2943 , D., [Agarwal]{}, J., [Li]{}, J., [et al.]{} 2017, , 153, 223 , D., [Agarwal]{}, J., [Weaver]{}, H., [Mutchler]{}, M., & [Larson]{}, S. 2015, , 798, 109 , D., [Hsieh]{}, H., & [Agarwal]{}, J. 
2015, [The Active Asteroids]{}, ed. P. [Michel]{}, F. E. [DeMeo]{}, & W. F. [Bottke]{}, 221–241 , D., [Luu]{}, J., [Rajagopal]{}, J., [et al.]{} 2017, ApJL, 850, L36 , D. E., [Stenning]{}, D. C., [Ford]{}, E. B., [et al.]{} 2017, ArXiv e-prints, arXiv:1711.01318 , M., [Ivezi[ć]{}]{}, [Ž]{}., [Lupton]{}, R. H., [et al.]{} 2002, , 124, 1776 , M. M., [Protopapa]{}, S., [Kelley]{}, M. S. P., [et al.]{} 2017, ApJL, 850, L5 , R., [Snodgrass]{}, C., [Lacerda]{}, P., [et al.]{} 2017, , 471, 2974 , C. M., [Beichman]{}, C. A., [Bryden]{}, G., & [Wyatt]{}, M. C. 2007, , 658, 584 , C. M., [Sitko]{}, M. L., [Marengo]{}, M., [et al.]{} 2017, , 154, 182 , C. M., [Fern[á]{}ndez]{}, Y. R., [Kundu]{}, A., [et al.]{} 1999, , 140, 189 , N. R. 1976, , 39, 447 MacKay, D. J. 1998, NATO ASI Series F Computer and Systems Sciences, 168, 133 , J. 2017, ArXiv e-prints, arXiv:1710.09977, submitted to ApJ. Meech, K. J., Weryk, R., Micheli, M., [et al.]{} 2017, Nature, EP , H., [Tsukagoshi]{}, T., [Kawabe]{}, R., [et al.]{} 2016, , 819, L7 , K. I., [Guzm[á]{}n]{}, V. V., [Furuya]{}, K., [et al.]{} 2015, , 520, 198 Ofek, E. O. 2012, The Astrophysical Journal, 749, 10 , P., [Harris]{}, A. W., & [Michalowski]{}, T. 2002, [Asteroid Rotations]{}, ed. W. F. [Bottke]{}, Jr., A. [Cellino]{}, P. [Paolicchi]{}, & R. P. [Binzel]{}, 113–122 , P., [Harris]{}, A. W., [Vokrouhlick[ý]{}]{}, D., [et al.]{} 2008, , 197, 497 Rasmussen, C. E., & Williams, C. K. 2006, Gaussian processes for machine learning, Vol. 1 (MIT press Cambridge) , D. C., [Bottke]{}, W. F., & [Love]{}, S. G. 1998, Icarus, 134, 47 , D. C., [Michel]{}, P., [Walsh]{}, K. J., & [Flynn]{}, K. W. 2009, , 57, 183 , B., & [Green]{}, S. F. 2014, , 568, A43 , D. P. 2000, , 148, 2 , N. H., & [Mueller]{}, B. E. A. 2013, , 775, L10 , P., & [Scheeres]{}, D. J. 2014, Meteoritics and Planetary Science, 49, 788 , J. D. 1982, , 263, 835 , E. M. 1995, , 22, 1555 , J. L., [Bretagnon]{}, P., [Chapront]{}, J., [et al.]{} 1994, , 282, 663 , J. C., & [Hills]{}, J. 
G. 1996, , 111, 1382 , M., [Ivezi[ć]{}]{}, [Ž]{}., [Juri[ć]{}]{}, M., [et al.]{} 2012, , 218, 571 , J. K., & [Jacobson]{}, S. A. 2016, , 264, 160 , A., [Moskovitz]{}, N., [Binzel]{}, R. P., [et al.]{} 2016, , 152, 163 , D., [Pravec]{}, P., [Durech]{}, J., [et al.]{} 2017, , 598, A91 , D., [Bottke]{}, W. F., [Chesley]{}, S. R., [Scheeres]{}, D. J., & [Statler]{}, T. S. 2015, Asteroids IV, 509 , D., [Breiter]{}, S., [Nesvorn[ý]{}]{}, D., & [Bottke]{}, W. F. 2007, , 191, 636 , D., [Bro[ž]{}]{}, M., [Bottke]{}, W. F., [Nesvorn[ý]{}]{}, D., & [Morbidelli]{}, A. 2006, , 182, 118 , K. J., & [Jacobson]{}, S. A. 2015, [Formation and Evolution of Binary Asteroids]{}, ed. P. [Michel]{}, F. E. [DeMeo]{}, & W. F. [Bottke]{}, 375–393 , K. J., & [Richardson]{}, D. C. 2006, , 180, 201 , B. D., & [Harris]{}, A. W. 2011, , 216, 610 , B. D., [Harris]{}, A. W., & [Pravec]{}, P. 2009, , 202, 134 , H. A., [Bolin]{}, B. T., [Fernandez]{}, Y. R., [et al.]{} 2017, Minor Planet Electronic Circulars, 2017 , G. V. 2017, Minor Planet Electronic Circulars , Q.-Z., [Zhang]{}, Q., [Kelley]{}, M. S. P., & [Brown]{}, P. G. 2017, ApJL, 850, L8 , V., [Cellino]{}, A., [Barucci]{}, A. M., [Fulchignoni]{}, M., & [Lupishko]{}, D. F. 1990, , 231, 548
--- abstract: 'We derive the mean-field equations characterizing the dynamics of a rumor process that takes place on top of complex heterogeneous networks. These equations are solved numerically by means of a stochastic approach. First, we present analytical and Monte Carlo calculations for homogeneous networks and compare the results with those obtained by the numerical method. Then, we study the spreading process in detail for random scale-free networks. The time profiles for several quantities are numerically computed, which allow us to distinguish among different variants of rumor spreading algorithms. Our conclusions are directed to possible applications in replicated database maintenance, peer to peer communication networks and social spreading phenomena.' author: - Yamir Moreno - Maziar Nekovee - 'Amalio F. Pacheco' title: Dynamics of Rumor Spreading in Complex Networks --- Introduction {#section0} ============ During the last years, many systems have been analyzed from the perspective of graph theory [@doro; @bara02]. It turns out that seemingly diverse systems such as the Internet, the World Wide Web (WWW), metabolic and protein interaction networks and food webs, to mention a few examples, share many topological properties [@strogatz]. Among these properties, the fact that one can go from one node (or element) of the network to another node passing by just a few others is perhaps the most popular property, known as “six degrees of separation” or small-world (SW) property [@strogatz; @ws98]. The SW feature has been shown to improve the performance of many dynamical processes as compared to regular lattices; a direct consequence of the existence of key shortcuts that speed up the communication between otherwise distant nodes and of the shorter path length among any two nodes on the net [@doro; @bara02; @strogatz]. 
However, it has also been recognized that there are at least two types of networks fulfilling the SW property but radically different as soon as dynamical processes are run on top of them. The first type can be called “exponential networks” since the probability of finding a node with connectivity (or degree) $k$ different from the average connectivity ${\langle k \rangle}$ decays exponentially fast for large $k$ [@ama00]. The second kind of networks comprises those referred to as “scale-free” (SF) networks [@bar99]. For these networks, the probability that a given node is connected to $k$ other nodes follows a power-law of the form $P(k) \sim k^{-\gamma}$, with the remarkable feature that $\gamma\le 3$ for most real-world networks [@doro; @bara02]. The heterogeneity of the connectivity distribution in scale-free networks greatly impacts the dynamics of processes that they support. One of the most remarkable examples is that an epidemic disease will pervade an infinite-size SF network regardless of its spreading rate [@pv01a; @moreno02; @virusreview; @av03; @n02b]. The change in the behavior of the processes is so radical in this case that it has been claimed that the standard epidemiological framework should be carefully revisited. This might be bad news for epidemiologists and those fighting natural and computer viruses. On the other hand, in a number of important technological and commercial applications, it is desirable to spread the “epidemic” as quickly and efficiently as possible, not to prevent it from spreading. Important examples of such applications are epidemic (or rumor-based) protocols for data dissemination and resource discovery on the Internet [@vogels; @ep3; @p2p; @ep1], and marketing campaigns using rumor-like strategies (viral marketing).
The above applications, and their dynamics, have passed almost unnoticed [@zanette; @liu] by the physics community working on complex networks, despite the fact that they have been extensively studied by computer scientists and sociologists [@ep1; @dkbook]. The problem here consists of designing an epidemic (or rumor-mongering) algorithm in such a way that the dissemination of data or information from any node of a network reaches the largest possible number of remaining nodes. Note that in this case, in contrast to epidemic modeling, one is free to design the rules of epidemic infection in order to reach the desired result, instead of having to model an existing process. Furthermore, in a number of applications, such as peer-to-peer file sharing systems [@p2p] built on top of the Internet and grid computing [@grid], the connectivity distribution of the nodes can also be changed in order to maximize the performance of such protocols. In this paper we study in detail the dynamics of a generic rumor model [@dk64] on complex scale-free topologies through analytic and numerical studies, and investigate the impact of the interaction rules on the efficiency and reliability of the rumor process. We first solve the model analytically for the case of exponential networks in the infinite time limit and then introduce a stochastic approach to deal with the numerical solution of the mean-field rate equations characterizing the system’s dynamics. The method [@pre98; @jgr; @mgp03] is used to obtain accurate results for several quantities when the topology of random SF networks is taken into account, without using large and expensive Monte Carlo (MC) simulations. The rest of the paper is organized as follows. Section II is devoted to introducing the rumor model and to deriving the mean-field rate equations used throughout the paper. In Section III we deal with the stochastic approach, and compare its performance with analytical and MC calculations in homogeneous systems.
We extend the method to the case of power-law distributed networks and present the results obtained for this kind of networks in Sections IV and V. Finally, the paper is rounded off in the last Section, where conclusions are given. Rumor Model in Homogeneous Networks {#section1} =================================== The rumor model is defined as follows. Each of the $N$ elements of the network can be in three different states. Following the original terminology and the epidemiological literature [@dkbook], these three classes correspond to ignorant, spreader and stifler nodes. Ignorants are those individuals who have not heard the rumor and hence are susceptible to being informed. The second class comprises active individuals that are spreading the rumor. Finally, stiflers are those who know the rumor but are no longer spreading it. The spreading process evolves by directed contacts of the spreaders with others in the population. When a spreader meets an ignorant, the latter turns into a new spreader with probability $\lambda$. The decay of the spreading process may be due to a mechanism of “forgetting” or to spreaders learning that the rumor has lost its “news value”. We assume the latter hypothesis as the most plausible, so that contacting spreaders become stiflers with probability $\alpha$ if they encounter another spreader or a stifler. Note that since we design our rumor strategy in such a way that the fraction of the population which ultimately learns the rumor is the maximum possible, we have assumed that contacts of the type spreader-spreader are directed, that is, only the contacting individual loses interest in propagating the rumor further. Therefore, there is no double transition to the stifler class. In a homogeneous system, the original rumor model due to Daley and Kendall [@dk64] can be described in terms of the densities of ignorants, spreaders, and stiflers, $i(t)$, $s(t)$, and $r(t)$, respectively, as a function of time.
Besides, we have the normalization condition, $$i(t)+s(t)+r(t)=1. \label{eq1}$$ In order to obtain an analytical insight and a way to later test our numerical approach, we first study the rumor model on top of exponentially distributed networks. These include models of random graphs as well as the Watts and Strogatz (WS) small-world model [@ws98; @strogatz]. This model produces a network made up of $N$ nodes with at least $m$ links to other nodes. The resulting connectivity distribution in the random graph limit of the model [@ws98] takes the form $$P(k)=\frac{m^{k-m}}{(k-m)!}e^{-m},$$ which gives an average connectivity ${\langle k \rangle}=2m$. Hence, the probability that a node has a degree $k\gg {\langle k \rangle}$ decays exponentially fast and the network can be regarded as homogeneous. The mean-field rate equations for the evolution of the three densities satisfy the following set of coupled differential equations: $$\begin{aligned} \frac{d i(t)}{d t} & = & - \lambda {\langle k \rangle}i(t) s(t), \label{eq2}\\ \frac{d s(t)}{d t} & = & \lambda {\langle k \rangle}i(t) s(t) - \alpha {\langle k \rangle}s(t) [s(t)+r(t)], \label{eq3}\\ \frac{d r(t)}{d t} & = & \alpha {\langle k \rangle}s(t) [s(t)+r(t)], \label{eq4}\end{aligned}$$ with the initial conditions $i(0)=(N-1)/N$, $s(0)=1/N$ and $r(0)=0$. The above equations state that the density of spreaders increases at a rate proportional to the spreading rate $\lambda$, the average number of contacts of each individual ${\langle k \rangle}$ and to the densities of ignorant and spreader individuals, $i(t)$ and $s(t)$, respectively. On the other hand, the annihilation mechanism considers that spreaders decay into the stifler class at a rate $\alpha {\langle k \rangle}$ times the density of spreaders and of non-ignorant individuals $1-i(t)=s(t)+r(t)$. The system of differential equations (\[eq2\]-\[eq4\]) can be analytically solved in the infinite time limit when $s(\infty)=0$. 
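Before taking that limit, the system (\[eq2\]-\[eq4\]) can also be integrated numerically; the following sketch uses scipy with illustrative parameter values of our choosing ($\lambda = \alpha = 1$, ${\langle k \rangle} = 6$, $N = 10^4$):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rumor_rhs(t, y, lam, alpha, k_mean):
    i, s, r = y
    di = -lam * k_mean * i * s            # Eq. (eq2)
    dr = alpha * k_mean * s * (s + r)     # Eq. (eq4)
    ds = -di - dr                         # Eq. (eq3), by conservation
    return [di, ds, dr]

N, lam, alpha, k_mean = 1.0e4, 1.0, 1.0, 6.0
sol = solve_ivp(rumor_rhs, (0.0, 50.0), [(N - 1) / N, 1.0 / N, 0.0],
                args=(lam, alpha, k_mean), rtol=1e-9, atol=1e-12)
r_inf = sol.y[2, -1]
print(round(r_inf, 4))  # ~0.7968 for these parameters
```

By $t = 50$ the spreader density has decayed to essentially zero, and the final stifler density agrees with the infinite-time value obtained analytically below.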
Using equation (\[eq1\]), we have that $\int_{0}^{\infty} s(t) dt=r_{\infty}=lim_{t\rightarrow\infty}r(t) $. Introducing the new variable $\beta=1+\lambda/\alpha$ we obtain the transcendental equation, $$r_{\infty}=1-e^{-\beta r_{\infty}}. \label{eq5}$$ Equation (\[eq5\]) always admits the trivial solution $r_{\infty}=0$, but at the same time it also has another physically relevant solution [*for all*]{} values of the parameters $\lambda$ and $\alpha$. This can be easily appreciated since the condition, $$\frac{d}{dr_{\infty}}\left. \left(1-e^{-\beta r_{\infty}}\right ) \right |_{{r_{\infty}}=0}>1,$$ reduces to $\lambda/\alpha > 0$. That is, there is no “rumor threshold” contrary to the case of epidemic spreading [@moreno02]. This strikingly different behavior does not come from any difference in the growth mechanism of $s(t)$ $-$the two are actually the same$-$, but from the disparate rules for the decay of the spreading process. On the other hand, this result also points out that a mathematical model for the spreading of rumors can be constructed in many different ways. The results of this paper, however, indicate that the presence of spreader annihilation terms due to spreader-spreader and spreader-stifler interactions is very relevant for practical implementations [@mnv03; @nm03]. We shall come back to this point later on. Stochastic Numerical Approach {#section2} ============================= Recently [@mgp03], we have introduced a numerical technique [@pre98] to deal with the mean-field rate equations appearing in epidemic-like models. It solves the differential equations by calculating the passage probabilities for the different transitions. The main advantage of this method, as compared to MC simulations, is its modest memory and CPU time requirements for large system sizes. Besides, we do not have to generate any network. Instead, we produce a sequence of integers distributed according to the desired connectivity distribution $P(k)$. 
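The nontrivial root of Eq. (\[eq5\]) is easily found numerically; a minimal sketch using scipy (the bracketing choice is ours, exploiting the fact that $r - (1 - e^{-\beta r})$ has slope $1 - \beta < 0$ at the origin and is positive at $r = 1$):

```python
import math
from scipy.optimize import brentq

def r_infinity(lam, alpha):
    # nontrivial root of r = 1 - exp(-beta r), beta = 1 + lam/alpha
    beta = 1.0 + lam / alpha
    return brentq(lambda r: r - (1.0 - math.exp(-beta * r)), 1e-9, 1.0)

for alpha in (1.0, 0.5, 0.25, 0.2, 0.1):
    print(alpha, round(r_infinity(1.0, alpha), 4))
```

With $\lambda = 1$, the values reproduce the Eq. (\[eq5\]) column of Table \[table1\] to within a few times $10^{-4}$, and $r_\infty \to 1$ as $\alpha \to 0$, illustrating the absence of a rumor threshold.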
The numerical procedure here proceeds as follows. At each time step, until the end of the rumor spreading process, the following steps are performed:

  $\alpha$   Eq. (\[eq5\])   MC      SNA
  ---------- --------------- ------- -------
  1          0.7968          0.813   0.802
  0.5        0.9404          0.962   0.954
  0.25       0.9930          0.986   0.987
  0.2        0.9974          0.996   0.997
  0.1        0.9999          0.998   0.999

  : Density of stiflers at the end of the rumor spreading process. Results are shown for 5 different values of $\alpha$ for each method considered. Monte Carlo (MC) simulations were performed in a WS network with ${\langle k \rangle}=6$ and $N=10^4$ nodes. The same system size was used in the stochastic numerical approach (SNA).[]{data-label="table1"}

1. Identify from the mean-field rate equations the transition probabilities per time unit from one state into the following one, that is, from the $i$ class to the $s$ class, $W_{i\rightarrow s}$, and finally to the $r$ class, $W_{s\rightarrow r}$.

2. Calculate the mean time interval, $\tau$, for one transition to occur. This is determined as the inverse of the sum of all the transition probabilities: $\tau=1/(W_{i\rightarrow s}+W_{s\rightarrow r})$.

3. Stochastically decide which transition actually takes place, noting that the probabilities of the two transitions are given by $\Pi_{i\rightarrow s}=W_{i\rightarrow s}\tau$ and $\Pi_{s\rightarrow r}=W_{s\rightarrow r}\tau$, respectively, and materializing the choice by generating a random number between 0 and 1.

The numerical algorithm described above does not depend on the topological features of the network on top of which the rumor dynamics is taking place. Indeed, all the topological information, including correlations, enters in the computation of the transition probabilities. We should note here that the present results are obtained for uncorrelated networks. The method could also be applied to correlated networks without explicit generation of them.
In that case, one should work with the two-point correlation function $P(k,k')$ [@mgp03] instead of using $P(k)$. On the other hand, a correlated network could be built up as in [@vw]. In order to gain confidence with the method and to show its soundness, we show in Table \[table1\] the values of $r_{\infty}$ obtained from Eq. (\[eq5\]), MC simulations and the stochastic approach for homogeneous networks. In this case, the transition probabilities are the same for all the elements within a given class ($i$, $s$ or $r$) irrespective of their actual connectivities. From equations (\[eq2\]-\[eq4\]) we get $$\begin{aligned} W_{i\rightarrow s}(t) & = & N \lambda{\langle k \rangle}i(t)s(t), \\ W_{s\rightarrow r}(t) & = & N \alpha{\langle k \rangle}s(t)[s(t)+r(t)],\end{aligned}$$ for the transitions from the ignorant to the spreader class and from the spreader to the stifler class, respectively. It can be seen from Table \[table1\] that the difference between the SNA results and the MC simulations is less than $1.4\%$, indicating the reliability of the SNA approach. The remaining small differences between the SNA and the MC results are mainly due to the fact that the homogeneous SNA model does not take into account the exponentially decaying fluctuations in the connectivity of WS networks. On the other hand, MC simulations of the rumor dynamics for a network made up of $N=10^4$ nodes, averaged over at least 10 different network realizations and 1000 iterations, took several hours. This method takes up to a few days when the system size is increased and the value of $\alpha$ is decreased. By contrast, the stochastic approach is very fast: for the same parameter values, the numerical simulation takes around 5 minutes of CPU time on a 2.0 GHz P4 PC. Therefore, having such a method allows us to scrutinize very efficiently and accurately the whole phase diagram and time profiles of the process under study.
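The homogeneous version of the algorithm is compact enough to sketch in full (an illustration with our own parameter and seed choices, not the authors' code; $N = 2000$ here to keep the runs fast):

```python
import random

def sna_homogeneous(N, lam, alpha, k_mean, rng):
    # integer counts of ignorants, spreaders, stiflers
    ni, ns, nr = N - 1, 1, 0
    t = 0.0
    while ns > 0:
        i, s, r = ni / N, ns / N, nr / N
        w_is = N * lam * k_mean * i * s            # W_{i->s}
        w_sr = N * alpha * k_mean * s * (s + r)    # W_{s->r}
        tau = 1.0 / (w_is + w_sr)                  # step 2
        t += tau
        if rng.random() < w_is * tau:              # step 3
            ni, ns = ni - 1, ns + 1
        else:
            ns, nr = ns - 1, nr + 1
    return nr / N, t

rng = random.Random(42)
runs = [sna_homogeneous(2000, 1.0, 1.0, 6.0, rng)[0] for _ in range(10)]
mean_r = sum(runs) / len(runs)
print(round(mean_r, 3))  # compare with the alpha = 1 row of Table 1 (~0.80)
```

Tracking integer class counts (rather than floating-point densities) keeps the normalization exact and guarantees termination when the spreader class empties.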
In what follows, we analyze in detail the dynamics of the rumor spreading process by numerically solving the mean-field rate equations for SF networks.

power-law distributed networks
==============================

The heterogeneity of the connectivity distribution inherent to SF networks significantly affects the dynamical evolution of processes that take place on top of these networks [@pv01a; @moreno02; @virusreview; @av03; @n02b; @havlin01; @newman00; @bar00; @mv02]. We have learned in recent years that the fluctuations of the connectivity distribution, ${\langle k^2 \rangle}$, cannot be neglected even for finite-size systems [@virusreview]. Thus, the system of differential equations (\[eq2\]-\[eq4\]) should be modified accordingly. In particular, we must take into account that nodes not only can be in three different states, but also belong to different connectivity classes $k$. Let us denote by $i_k(t)$, $s_k(t)$, and $r_k(t)$ the densities of ignorants, spreaders and stiflers with connectivity $k$, respectively. In addition, we have that $i_k(t)+s_k(t)+r_k(t)=1$. The mean-field rate equations now read $$\begin{aligned} \frac{d i_k(t)}{d t} & = & - \lambda k i_k(t)\sum_{k'} \frac{k' P(k')s_{k'}(t)}{{\langle k \rangle}}, \label{eq9}\\ \frac{d s_k(t)}{d t} & = & \lambda k i_k(t)\sum_{k'} \frac{k' P(k')s_{k'}(t)}{{\langle k \rangle}}\nonumber\\ & & -\alpha k s_k(t) \sum_{k'} \frac{k' P(k')[s_{k'}(t)+r_{k'}(t)]}{{\langle k \rangle}} , \label{eq10}\\ \frac{d r_k(t)}{d t} & = & \alpha k s_k(t) \sum_{k'} \frac{k' P(k')[s_{k'}(t)+r_{k'}(t)]}{{\langle k \rangle}} , \label{eq11}\end{aligned}$$ where $P(k)$ is the connectivity distribution of the nodes and $\sum_{k'} k'P(k')s_{k'}(t)/{\langle k \rangle}$ is the probability that any given node points to a spreader. We start from a randomly selected spreader and all the remaining nodes in the ignorant class. The summation in Eq.
(\[eq10\]) stands for the probability that a node points to a spreader or a stifler. Note that, as before, we do not allow for double transitions from the spreader to the stifler class. Next, we compute the respective transition probabilities. In this case, we should also consider that transitions from one state into another take place within connectivity classes. Thus, the transition probabilities depend on $k$ as well. From Eq. (\[eq10\]) we obtain $$\begin{aligned} W_{i\rightarrow s}(t,k) & = & \lambda k N P(k) i_k(t) \sum_{k'}\frac{k'P(k')s_{k'}(t)}{{\langle k \rangle}}, \label{eq12}\\ W_{s\rightarrow r}(t,k) & = & \alpha k N P(k) s_k(t) \sum_{k'}\frac{k' P(k')[s_{k'}(t)+r_{k'}(t)]}{{\langle k \rangle}}, \nonumber\\ & & \label{eq13}\end{aligned}$$ where all the topological information is contained. Finally, for the mean time interval $\tau$ after $i-1$ transitions, we find at each time step $$\tau=\frac{1}{W_{i\rightarrow s}(t)+W_{s\rightarrow r}(t)}, \label{eq14}$$ with $W_{i\rightarrow s}(t)=\sum_k W_{i\rightarrow s}(t,k)$, $W_{s\rightarrow r}(t)=\sum_k W_{s\rightarrow r}(t,k)$ and $t=\sum_{j=1}^{i-1}\tau_j$, where the $\tau_j$s are the mean times of the $i-1$ previous transitions. At this point, the identification of which transition takes place and which connectivity class is affected proceeds as defined in step 3 of the previous section.

results and discussion
======================

The stochastic method described above can be used to explore several quantities characterizing the dynamics of the rumor spreading process. Throughout the rest of the paper we set $\lambda=1$ without loss of generality and vary the value of $\alpha$. We first generated a sequence of integers distributed according to $P(k)\sim k^{-\gamma}$ with $\gamma=3$ and ${\langle k \rangle}=6$.
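A single event of the degree-class version of the algorithm, combining Eqs. (\[eq12\])-(\[eq14\]) with a power-law degree distribution like the one just described, might look as follows. This is a sketch of ours, not the authors' code; the degree range, normalization, and data layout are illustrative assumptions.

```python
import random

def one_event(Pk, ik, sk, rk, N, lam, alpha, rng):
    """One event of the degree-class stochastic approach: the rates
    W_{i->s}(t,k) and W_{s->r}(t,k) of Eqs. (12)-(13), the waiting
    time tau of Eq. (14), and a stochastic choice of (transition, class)."""
    mean_k = sum(k * p for k, p in Pk.items())
    # probability that an edge points to a spreader / spreader-or-stifler
    theta_s = sum(k * p * sk[k] for k, p in Pk.items()) / mean_k
    theta_sr = sum(k * p * (sk[k] + rk[k]) for k, p in Pk.items()) / mean_k
    W_is = {k: lam * k * N * p * ik[k] * theta_s for k, p in Pk.items()}
    W_sr = {k: alpha * k * N * p * sk[k] * theta_sr for k, p in Pk.items()}
    total = sum(W_is.values()) + sum(W_sr.values())
    tau = 1.0 / total                       # Eq. (14)
    u = rng.random() * total                # pick an event ~ its rate
    for k in Pk:
        u -= W_is[k]
        if u < 0.0:
            return tau, "i->s", k
        u -= W_sr[k]
        if u < 0.0:
            return tau, "s->r", k
    return tau, "s->r", max(Pk)             # guard against rounding

# illustrative power-law degree distribution P(k) ~ k^{-3} on 3..100
ks = range(3, 101)
Z = sum(k ** -3.0 for k in ks)
Pk = {k: k ** -3.0 / Z for k in ks}
```

Iterating `one_event`, accumulating `tau` into the elapsed time, and updating the densities of the chosen class reproduces the full time profile of the process.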
As initial condition we use $r_k(t=0)=0$, and $$s_k(t=0)= \begin{cases} \frac{1}{NP(k)} & k=k_{i} \\ 0 & \text{otherwise} \end{cases}$$ where $k_i$ is the connectivity of the randomly chosen initial spreader. The results are then averaged over at least $1000$ different choices of $k_i$. One of the most important practical aspects of any rumor-mongering process is whether or not it reaches a high number of individuals. This quantity is simply given by the final density of stiflers and is called the [*reliability*]{} of the rumor process. However, it is also of great importance for potential applications that higher levels of reliability are reached as fast as possible; the time needed to reach them constitutes a practical measure of the cost associated with such levels of stiflers. For example, in technological applications, where one may consider several strategies [@mnv03; @nm03], it is possible to define a key global quantity, the efficiency of the process, which is the ratio between the reliability and the traffic imposed on the network. For these applications it is not only important to have high levels of reliability but also to achieve these with the lowest possible load resulting from the epidemic protocol’s message passing traffic. This is important in order to avoid network congestion and also to reduce the amount of processing power used by nodes participating in the rumor process. In order to analyze, from a global perspective, this trade-off between reliability and cost, we use time as a practical measure of efficiency. We call a rumor process less efficient than another if it needs more time to reach the same level of reliability. Figure \[figure1\] shows the time evolution of the density of stiflers for several values of the parameter $\alpha$. It turns out, as expected, that the number of individuals who finally learn the rumor increases as the probability of becoming a stifler decreases.
On the other hand, the time it takes for $R(t)$ to reach its asymptotic value slightly increases with $\alpha^{-1}$, but clear differences do not arise for the two extreme values of $\alpha$. In fact, for a given time after the beginning of the rumor propagation, the density of stiflers scales with the inverse of $\alpha$. This behavior is further corroborated in the inset, where the growth of the density of spreaders as time goes on is shown for the same values of the parameter $\alpha$. While the peaks of the curves get larger and larger, the times at which the maxima are reached are of the same order of magnitude, and thus the mean times of the spreading processes do not differ significantly. Figure \[figure2\] shows another aspect worth taking into account when dealing with rumor algorithms. For a given level of reliability, it is also of interest to know the distribution of ignorants (or stiflers) by classes $k$. The figure shows a coarse-grained picture of Fig. \[figure1\], where the density of ignorants $i_k$ according to the connectivity of the individuals has been represented for different values of $\alpha$. The results indicate that the probability of having an ignorant with a connectivity $k$ at the end of the rumor propagation decays exponentially fast for large connectivity values, with a sharp cut-off $k_c$ that depends on $\alpha$. In fact, $k_c$ is always well below the natural cut-off of the connectivity distribution ($\sim 10^2$) even for small values of $\alpha$. This implies that hubs effectively learn the rumor. We can further scrutinize the dynamics of the rumor spreading process by looking at the final density of stiflers when the initial spreader has a given connectivity $k_i$.
Figure \[figure3\] represents the reliability as a function of time (in units of $\alpha^{-1}$) when the rumor starts propagating from a node of connectivity $k_i=k_{min}=3$, $k_i={\langle k \rangle}=6$, $k_i=20$ and $k_i=k_{max}\sim 280$ for two different values of $\alpha$: $0.1$ (main figure) and $1.0$ (inset). Interestingly, the final value of $R(t)$ does not depend on the initial seed, but reaches the same level irrespective of the connectivity of the very first spreader $k_i$. This is a genuine behavior of the rumor dynamics and is the opposite of what has been observed in other epidemic models like the SIR model [@moreno02], where the final number of recovered individuals strongly depends on the connectivity of the initially infected individuals. However, a closer look at the spreading dynamics tells us that not everything is the same for different initial spreaders. The figure also indicates that as the connectivity of the seed is increased, the time it takes for the rumor to reach the asymptotic value decreases, so that for a fixed time length the number of individuals in the stifler class is higher when $k_i$ gets larger. This feature suggests an interesting alternative for practical applications: start propagating the rumor from the most connected nodes. Even in the case that no direct link exists between a node that is willing to spread an update and a hub, a dynamical (or temporal) shortcut to a well-connected node could be created in order to speed up the process. With this procedure, the density of stiflers at the intermediate stages of the spreading process could differ by as much as $30\%$ for moderate values of $\alpha$. This translates into lower costs, because one can always implement an algorithm that kills off the actual spreading when a given level of reliability is reached. Note, however, that this behavior depends slightly on $\alpha$: the differences are always appreciable, but become more important as $\alpha$ increases.
Finally, we have exploited the speed of the stochastic approach used here to explore the consequences of implementing different annihilation rules for the rumor spreading decay. In particular, we consider that the spreading process dies out proportionally only to the number of spreaders ($ss$ interactions) or to the number of stiflers ($sr$ interactions). This modifies the terms entering in the sum of Eqs. (\[eq10\]-\[eq11\]) so that now the transition probabilities from the $s$ into the $r$ class read $$\begin{aligned} W_{s\rightarrow r}^{ss}(t,k) & = & \alpha k N P(k) s_k(t) \sum_{k'}\frac{k'P(k')s_{k'}(t)}{{\langle k \rangle}}, \label{eq15}\\ W_{s\rightarrow r}^{sr}(t,k) & = & \alpha k N P(k) s_k(t) \sum_{k'}\frac{k' P(k')r_{k'}(t)}{{\langle k \rangle}}, \label{eq16}\end{aligned}$$ respectively. Table \[table2\] summarizes the reliability of the process as a function of $\alpha$ for the three mechanisms considered [@note1]. The results indicate that in both variants, the final density of stifler individuals is higher than for the “classical” setting. However, in order to evaluate the efficiency of the process from a global perspective, we must look at the time evolution of the densities as we did before.

  $\alpha$   $R^{s(s+r)}$   $R^{sr}$   $R^{ss}$
  ---------- -------------- ---------- ----------
  1          0.592          0.857      0.985
  0.9        0.635          0.886      0.989
  0.8        0.674          0.911      0.991
  0.7        0.710          0.938      0.993
  0.6        0.766          0.960      0.993
  0.5        0.818          0.967      0.997
  0.4        0.871          0.980      0.997
  0.3        0.925          0.997      0.998
  0.2        0.962          0.999      0.999
  0.1        0.988          0.999      0.999

  : Density of stiflers at the end of the rumor spreading process. Results are shown for 10 different values of $\alpha$ for each annihilation term considered. Simulations were performed for a network with ${\langle k \rangle}=6$ and $N=10^4$ nodes. See the text for further details.[]{data-label="table2"}

In Figs.
\[figure4\] and \[figure5\] we have represented the time (in units of $\alpha^{-1}$) profiles of $R(t)$ and $S(t)$ for each decay term and several values of $\alpha$. From the figures, it is clear that while the final density of stiflers increases when modifying the original decay rules, the time needed to reach such high levels of reliability also increases. This is due to the fact that the tails of the densities of spreaders decay more slowly than before. In particular, it is noticeable that when only spreader-spreader interactions are taken into account in the decay mechanism, the lifetime of the propagation process is more than twice as long as for the other two settings. This means that this implementation is not very suitable for practical applications, as the costs associated with the process rise as well. On the other hand, the performance of the spreader-stifler setting seems to depend on the value of $\alpha$ in such a way that it is more efficient, in terms of both reliability and time consumption, for large $\alpha$, but not in the middle region of the parameter space. In summary, the present results indicate that the original model works quite well under any condition, while other variants can be considered depending on the value of $\alpha$ used and the type of applications they are designed for.

conclusions
===========

In this paper, we have analyzed the spreading dynamics of rumor models in complex heterogeneous networks. We have first introduced a useful stochastic method that allows us to obtain meaningful time profiles for the quantities characterizing the propagation process. The method is based on the numerical solution of the mean-field rate equations describing the model, and contrary to Monte Carlo simulations, there is no need to generate the network explicitly. This saves memory and allows a fast exploration of the whole evolution diagram of the process.
The processes studied here are of great practical importance since epidemic data dissemination might become the standard practice in multiple technological applications. The results show that there is a delicate balance between different levels of reliability and the costs (in terms of time) associated with them. In this sense, our study may open new paths in the use of rumor-mongering processes for replicated database maintenance, reliable group communication and peer-to-peer networks [@dave; @deering; @vogels; @ep3; @p2p; @ep1]. Besides, as shown here, the behavior and features of the different algorithms one may implement are not trivial and depend on the type of mechanisms used for both the creation and the annihilation terms. It is worth noting here that we have studied the simplest possible set of rumor algorithms, but other ingredients such as memory must be incorporated in more elaborate models [@mnv03; @nm03]. Of further interest would be a more careful exploration of the possibility of using dynamical shortcuts for a more efficient spreading of the updates. Our results suggest that it would be more economical to start from hubs and then kill off the updating process when a given level of reliability is reached than to start at random and let the process die out by itself. Preliminary studies of more elaborate models aimed at implementing a practical protocol confirm our results [@nm03]. This feature is especially relevant for the understanding and modeling of social phenomena such as the spreading of new ideas or the design of efficient marketing campaigns. We would like to thank A. Vespignani for many useful discussions and comments. Y. M. thanks M. Vázquez-Prada for helpful comments and the hospitality of BT Exact, U.K., where parts of this work were carried out. Y. M. acknowledges financial support from the Secretaría de Estado de Educación y Universidades (Spain, SB2000-0357).
This work has been partially supported by the Spanish DGICYT project BFM2002-01798. [99]{} S. N. Dorogovtsev and J. F. F. Mendes, Adv. Phys. [**51**]{}, 1079 (2002). R. Albert and A.-L. Barabási, Rev. Mod. Phys. [**74**]{}, 47 (2002). S. H. Strogatz, Nature (London) [**410**]{}, 268 (2001). D. J. Watts and H. S. Strogatz, Nature [**393**]{}, 440 (1998). L. A. N. Amaral, A. Scala, M. Barthélémi, and H. E. Stanley, [*Proc. Nat. Acad. Sci.*]{} [**97**]{}, 11149 (2000). A.-L. Barabási, and R. Albert, Science [**286**]{}, 509 (1999); A.-L. Barabási, R. Albert, and H. Jeong, Physica A [**272**]{}, 173 (1999). R. Pastor-Satorras, and A. Vespignani, Phys. Rev. Lett. [**86**]{}, 3200 (2001). Y. Moreno, R. Pastor-Satorras, and A. Vespignani, Eur. Phys. J. B [**26**]{}, 521 (2002). R. Pastor-Satorras, and A. Vespignani, [*Handbook of Graphs and Networks: From the Genome to the Internet*]{}, eds. S. Bornholdt and H.G. Schuster (Wiley-VCH, Berlin, 2002). A. Vázquez, and Y. Moreno, Phys. Rev. E [**67**]{}, 015101(R) (2003). M. E. J. Newman, Phys. Rev. E [**66**]{}, 016128 (2002). R. Cohen, K. Erez, D. ben-Avraham, and S. Havlin, [*Phys. Rev. Lett.*]{} [**85**]{}, 4626 (2000). D. S. Callaway, M. E. J. Newman, S. H. Strogatz, and D. J. Watts, [*Phys. Rev. Lett.*]{} [**85**]{}, 5468 (2000). R. Albert, H. Jeong, and A.-L. Barabási, Nature [**406**]{}, 378 (2000). Dave Kosiur, “IP Multicasting: The Complete Guide to Interactive Corporate Networks”, Wiley Computer Publishing, John Wiley & Sons, Inc, New York (1998). S. Deering, “Multicast routing in internetworks and extended LANs”, in Proc. ACM Sigcom ’88, page 55-64, Stanford, CA, USA (1988). Vogels, W., van Renesse, R. and Birman, K., “The Power of Epidemics: Robust Communication for Large-Scale Distributed Systems”, in the Proceedings of HotNets-I, Princeton, NJ (2002). A.-M. Kermarrec, A Ganesh and L. Massoulie, ”Probabilistic reliable dissemination in large-scale systems”, IEEE Trans. Parall. Distr. Syst., in press (2003). , ed. 
A. Oram (O’Reilly & Associates, Inc., Sebastopol, CA, 2001). A. J. Demers, D. H. Greene, C. Hauser, W. Irish, and J. Larson, [*Epidemic Algorithms for Replicated Database Maintenance*]{}. In Proc. of the Sixth Annual ACM Symposium on Principles of Distributed Computing, Vancouver, Canada, 1987. I. Foster and C. Kesselman, eds., [The Grid: Blueprint for a Future Computing Infrastructure]{}, Morgan Kaufman, San Francisco (1999). D. H. Zanette, Phys. Rev. E [**64**]{}, 050901(R) (2001). Z. Liu, Y.-C. Lai, and N. Ye, Phys. Rev. E [**67**]{}, 031911 (2003). D. J. Daley and J. Gani, [*Epidemic Modeling*]{} (Cambridge University Press, Cambridge UK, 2000). D. J. Daley, and D. G. Kendall, Nature [**204**]{}, 1118 (1964). J. B. Gómez, Y. Moreno, and A. F. Pacheco, Phys. Rev. E [**58**]{}, 1528 (1998). Y. Moreno, A. M. Correig, J. B. Gómez, and A. F. Pacheco, J. Geophys. Res. [**B 106**]{}, 6609 (2001). Y. Moreno, J. B. Gómez, and A. F. Pacheco, Phys. Rev. E [**68**]{}, 035103(R) (2003). Y. Moreno, M. Nekovee, and A. Vespignani, [*cond-mat/0311212*]{} (2003). M. Nekovee and Y. Moreno, in preparation. A. Vazquez and M. Weigt, Phys. Rev. E [**67**]{}, 027101 (2003). Y. Moreno, A. Vázquez, Europhy. Lett. [**57**]{}, 765 (2002). Note that for the $sr$ variant, the initial conditions should include the existence of at least one stifler. This has been taken into account by randomly selecting a node among all the remaining ignorants. Hence the initial conditions are now $I(0)=N-2$, $S(0)=1$, and $R(0)=1$.
---
abstract: |
    Every self-similar group acts on the space ${X^\omega}$ of infinite words over some alphabet $X$. We study the Schreier graphs ${\Gamma}_w$ for $w\in{X^\omega}$ of the action of self-similar groups generated by bounded automata on the space ${X^\omega}$. Using sofic subshifts we determine the number of ends for every Schreier graph ${\Gamma}_w$. Almost all Schreier graphs ${\Gamma}_w$ with respect to the uniform measure on ${X^\omega}$ have one or two ends, and we characterize bounded automata whose Schreier graphs have two ends almost surely. The connection with (local) cut-points of limit spaces of self-similar groups is established.

    **Keywords**: self-similar group, Schreier graph, end of graph, bounded automaton, limit space, tile, cut-point

    **Mathematics Subject Classification 2010**: 20F65, 05C63, 05C25\
author:
- 'Ievgen Bondarenko, Daniele D’Angeli, Tatiana Nagnibeda [^1]'
title: '**Ends of Schreier graphs and cut-points of limit spaces of self-similar groups**'
---

Introduction
============

One of the fundamental properties of fractal objects is self-similarity, which means that pieces of an object are similar to the whole object. In the last twenty years the notion of self-similarity has successfully penetrated into algebra. This led to the development of many interesting constructions such as self-similar groups and semigroups, iterated monodromy groups, self-iterating Lie algebras, permutational bimodules, etc. The first examples of self-similar groups showed that these groups enjoy many fascinating properties (torsion, intermediate growth, finite width, just-infiniteness and many others) and provide counterexamples to several open problems in group theory. Later, Nekrashevych showed that self-similar groups appear naturally in dynamical systems as iterated monodromy groups of self-coverings and provide combinatorial models for iterations of self-coverings.
Self-similar groups are defined by their action on the space $X^{*}$ of all finite words over a finite alphabet $X$ — one of the most basic self-similar objects. A faithful action of a group $G$ on $X^{*}$ is called self-similar if for every $x\in X$ and $g\in G$ there exist $y\in X$ and $h\in G$ such that $g(xv)=yh(v)$ for all words $v\in X^{*}$. The self-similarity of the action is reflected in the property that the action of any group element on a piece $xX^{*}$ (all words with the first letter $x$) of the space $X^{*}$ can be identified with the action of another group element on the whole space $X^{*}$. We can also imagine the set $X^{*}$ as the vertex set of a regular rooted tree with edges $(v,vx)$ for $x\in X$ and $v\in X^{*}$. Then every self-similar group acts by automorphisms on this tree. Alternatively, self-similar groups can be defined as groups generated by the states of invertible Mealy automata; such groups are also known as automaton groups or groups generated by automata. All these interpretations come from different applications of self-similar groups in diverse areas of mathematics: geometric group theory, holomorphic dynamics, fractal geometry, automata theory, etc. (see [@self_sim_groups; @fractal_gr_sets; @GNS] and the references therein). In this paper we consider Schreier graphs of self-similar actions of groups. Given a group $G$ generated by a finite set $S$ and acting on a set $M$, one can associate to it the (simplicial) Schreier graph $\Gamma(G,M, S)$: the vertex set of the graph is the set $M$, and two vertices $x$ and $y$ are adjacent if and only if there exists $s\in S\cup S^{-1}$ such that $s(x)=y$. Schreier graphs are generalizations of the Cayley graph of a group, which corresponds to the action of a group on itself by multiplication from the left. Every self-similar group $G$ preserves the length of words in its action on the space $X^{*}$.
We then have a family of natural actions of $G$ on the sets $X^n$ of words of length $n$ over $X$. From these actions one gets a family of corresponding finite Schreier graphs $\{\Gamma_n\}_{n\geq 1}$. It was noticed in [@barth_gri:spectr_Hecke] that for a few self-similar groups the graphs $\{{\Gamma}_n\}_{n\geq 1}$ are substitution graphs (see [@Previte]) — they can be constructed by a finite collection of vertex replacement rules — and, normalized to have diameter one, they converge in the Gromov-Hausdorff metric to certain fractal spaces. However, in general, the Schreier graphs of self-similar groups are neither substitutional nor self-similar in any studied way (see discussion in [@PhDBondarenko Section I.4]). Nevertheless, the observation from [@barth_gri:spectr_Hecke] led to the notion of the limit space of a self-similar group introduced by Nekrashevych [@self_sim_groups], which usually has a fractal structure. Although the finite Schreier graphs $\{{\Gamma}_n\}_{n\geq 1}$ of a group do not necessarily converge to the limit space, they form a sequence of combinatorial approximations to it. Any self-similar action can be extended to the set ${X^\omega}$ of right-infinite words over $X$ (the boundary of the tree $X^{*}$). Therefore we can also consider the uncountable family of Schreier graphs $\{\Gamma_w\}_{w\in{X^\omega}}$ corresponding to the action of the group on the orbit of $w$. Each Schreier graph $\Gamma_w$ can be obtained as a limit of the sequence $\{\Gamma_n\}_{n\geq 1}$ in the space $\mathcal{G}^{*}$ of (isomorphism classes of) pointed graphs with pointed Gromov-Hausdorff topology. The map $\theta: X^{\omega}\rightarrow \mathcal{G}^{*}$ sending a point $w$ to the isomorphism class of the pointed graph $(\Gamma_w,w)$ pushes forward the uniform probability measure on the space $X^{\omega}$ to a probability measure on the space of Schreier graphs. This measure is the so-called Benjamini-Schramm limit of the sequence of finite graphs $\{\Gamma_n\}_{n\geq 1}$.
Therefore the family of Schreier graphs ${\Gamma}_w$ and the limit space represent two limiting constructions associated to the action and to the sequence of finite Schreier graphs $\{\Gamma_n\}_{n\geq 1}$. Structure of these Schreier graphs as well as some of their properties such as spectra, expansion, growth, random weak limits, probabilistic models on them, have been studied in various works over the last ten years, see [@barth_gri:spectr_Hecke; @gri_zuk:lampl_group; @gri_sunik:hanoi; @PhDBondarenko; @ddmn:GraphsBasilica; @ddn:Ising; @ddn:Dimer; @GrowthSch; @MN_AMS; @ZigZag:bond]. Most of the studied self-similar groups are generated by the so-called bounded automata introduced by Sidki in [@sidki:circ]. The structure of bounded automata is clearly understood, which allows one to deal fairly easily with groups generated by such automata. The main property of bounded automaton groups is that their action is concentrated along a finite number of “directions” in the tree $X^{*}$. Every group generated by a bounded automaton belongs to an important class of contracting self-similar groups, which appear naturally in the study of expanding (partial) self-coverings of topological spaces and orbispaces, as their iterated monodromy groups [@self_sim_groups; @img]. Moreover, all iterated monodromy groups of post-critically finite polynomials are generated by bounded automata. The limit space of an iterated monodromy group of an expanding (partial) self-covering $f$ is homeomorphic to the Julia set of the map $f$. In the language of limit spaces, groups generated by bounded automata are precisely those finitely generated self-similar groups whose limit spaces are post-critically finite self-similar sets (see [@bondnek:pcf]). Such sets play an important role in the development of analysis on fractals (see [@kigami:anal_fract]).
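To make the finite Schreier graphs $\{\Gamma_n\}$ concrete, the following sketch (ours, not from the paper) computes them for the classical binary adding machine, the self-similar action generated by $a$ with $a(0w)=1w$ and $a(1w)=0\,a(w)$; for this action $\Gamma_n$ is a cycle of length $2^n$.

```python
from itertools import product

def odometer(word):
    """Binary adding machine: a(0w) = 1w, a(1w) = 0 a(w),
    i.e. addition of 1 with carry, least significant digit first."""
    if not word:
        return word
    if word[0] == "0":
        return "1" + word[1:]
    return "0" + odometer(word[1:])

def schreier_edges(n):
    """Unordered edges {v, a(v)} of the Schreier graph of <a> on X^n."""
    verts = ["".join(p) for p in product("01", repeat=n)]
    return {frozenset((v, odometer(v))) for v in verts}
```

Since $a$ acts on $X^n$ as addition of $1$ modulo $2^n$, the orbit of any vertex exhausts all $2^n$ words, and the edge set forms a single cycle.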
The main goal of this paper is to investigate the ends of the Schreier graphs $\{{\Gamma}_w\}_{w\in{X^\omega}}$ of self-similar groups generated by bounded automata and the corresponding limit spaces. The number of ends is an important asymptotic invariant of an infinite graph. Roughly speaking, each end represents a topologically distinct way to move to infinity inside the graph. The most convenient way to define an end in an infinite graph $\Gamma$ is by the equivalence relation on infinite rays in $\Gamma$, where two rays are declared equivalent if their tails lie in the same connected component of $\Gamma \setminus F$ for any finite subgraph $F$ of $\Gamma$. Any equivalence class is an *end* of the graph $\Gamma$. The number of ends is a quasi-isometric invariant. The Cayley graph of an infinite finitely generated group can have one, two or infinitely many ends. Two-ended groups are virtually infinite cyclic, and the celebrated theorem of Stallings characterizes finitely generated groups with an infinite number of ends.

Plan of the paper and main results {#plan-of-the-paper-and-main-results .unnumbered}
----------------------------------

Our main results can be summarized as follows.

- Given a group generated by a bounded automaton we exhibit a constructive method that determines, for a given right-infinite word $w$, the number of ends of the Schreier graph $\Gamma_w$ (Section \[Section\_Ends\]).

- We show that the Schreier graphs $\Gamma_w$ of a group generated by a bounded automaton have either one or two ends almost surely, and there are only finitely many Schreier graphs with more than two ends (Section \[Section\_number\_ends\], Proposition \[prop\_more\_ends\], Theorem \[th\_classification\_two\_ends\]).

- We classify bounded automata generating groups whose Schreier graphs $\Gamma_w$ have almost surely two ends (Theorem \[th\_classification\_two\_ends\]).
In the binary case, these groups agree with the class of groups defined by Šunić in [@Zoran] (Theorem \[th\_Sch\_2ended\_binary\]).

- We exhibit a constructive method that describes cut-points of limit spaces of groups generated by bounded automata (Section \[section\_component\]).

- In particular, we get a constructive method that describes cut-points of Julia sets of post-critically finite polynomials (Section \[section\_component\]).

- We show that a punctured limit space has one or two connected components almost surely (Theorem \[thm\_punctured\_tile\_ends\]).

- We classify contracting self-similar groups whose limit space is homeomorphic to an interval or a circle (Corollary \[cor\_limsp\_interval\_circle\]).

The paper is organized as follows. First, we determine the number of connected components in the Schreier graph ${\Gamma}_n\setminus v$ with the vertex $v$ removed. The answer comes from a finite deterministic acceptor automaton over the alphabet $X$ (Section \[finite automaton section\]) so that, given a word $v=x_1x_2\ldots x_n$, the automaton returns the number of components in ${\Gamma}_n\setminus v$ (Theorem \[thm\_number\_of\_con\_comp\]). Using this automaton we determine the number of finite and infinite connected components in ${\Gamma}_w\setminus w$ for any $w\in{X^\omega}$. By establishing the connection between the number of ends of the Schreier graph ${\Gamma}_w$ and the number of infinite components in ${\Gamma}_w\setminus w$ (Proposition \[prop\_ends\_limit\]), we determine the number of ends of ${\Gamma}_w$ (Theorem \[thm\_number\_of\_ends\]). If a self-similar group acts transitively on $X^n$ for all $n\in\mathbb{N}$, the action on ${X^\omega}$ is ergodic with respect to the uniform measure on ${X^\omega}$, and therefore the Schreier graphs ${\Gamma}_w$ for $w\in{X^\omega}$ have the same number of ends almost surely.
For a group generated by a bounded automaton the “typical” number of ends is one or two, and we show that in most cases it is one, by characterizing completely the bounded automata generating groups whose Schreier graphs ${\Gamma}_w$ have almost surely two ends (Theorem \[th\_classification\_two\_ends\]). In the binary case we show that automata giving rise to groups whose Schreier graphs have almost surely two ends correspond to the adding machine or to one of the automata defined by Šunić in [@Zoran] (Theorem \[th\_Sch\_2ended\_binary\]). In Section \[Section\_Cut-points\] we recall the notion of the limit space of a contracting self-similar group, and study the number of connected components in a punctured limit space for groups generated by bounded automata. We show that the number of ends in a typical Schreier graph ${\Gamma}_w$ coincides with the number of connected components in a typical punctured neighborhood (punctured tile) of the limit space (Theorem \[thm\_punctured\_tile\_ends\]). In particular, this number is equal to one or two. This fact is well-known for connected Julia sets of polynomials. While Zdunik [@zdunik] and Smirnov [@smirnov] proved that almost every point of a connected polynomial Julia set is a bisection point only when the polynomial is conjugate to a Chebyshev polynomial, we describe bounded automata whose limit spaces have this property. Moreover, we provide a constructive method to compute the number of connected components in a punctured limit space (Section \[cut-points section\]). Finally, using the results about ends of Schreier graphs, we classify contracting self-similar groups whose limit space is homeomorphic to an interval or a circle (Corollary \[cor\_limsp\_interval\_circle\]). This result agrees with the description of automaton groups whose limit dynamical system is conjugate to the tent map given by Nekrashevych and Šunić in [@img]. 
In Section \[Section\_Preliminaries\] we recall all needed definitions concerning self-similar groups, automata and their Schreier graphs. In Section 5 we illustrate our results by performing explicit computations for three concrete examples: the Basilica group, the Gupta-Fabrykowski group, and the iterated monodromy group of $z^2+i$.

**Acknowledgments.** A substantial part of this work was done while the first author was visiting the Geneva University, whose support and hospitality are gratefully acknowledged.

Preliminaries {#Section_Preliminaries}
=============

In this section we review the basic definitions and facts concerning self-similar groups, bounded automata and their Schreier graphs. For more detailed information and for further references, see [@self_sim_groups].

Self-similar groups and automata
--------------------------------

Let $X$ be a finite set with at least two elements. Denote by $X^{*}=\{x_1x_2\ldots x_n | x_i\in X, n\geq 0\}$ the set of all finite words over $X$ (including the empty word denoted $\emptyset$) and by $X^n$ the set of words of length $n$. The length of a word $v=x_1x_2\ldots x_n\in X^n$ is denoted by $|v| = n$. We shall also consider the sets ${X^\omega}$ and ${X^{-\omega}}$ of all right-infinite sequences $x_1x_2\ldots$, $x_i\in X$, and left-infinite sequences $\ldots x_2x_1$, $x_i\in X$, respectively, endowed with the product topology of the discrete sets $X$. For an infinite sequence $w=x_1x_2\ldots$ (or $w=\ldots x_2x_1$) we use the notation $w_n=x_1x_2\ldots x_n$ (respectively, $w_n=x_n\ldots x_2x_1$). For a word $v$ we use the notations $v^{\omega}=vv\ldots$ and $v^{-\omega}=\ldots vv$. The *uniform Bernoulli measure* on each space ${X^\omega}$ and ${X^{-\omega}}$ is the product measure of uniform distributions on $X$. The *shift $\sigma$* on the space ${X^\omega}$ (respectively, on ${X^{-\omega}}$) is the map which deletes the first (respectively, the last) letter of a right-infinite (respectively, left-infinite) sequence.
### Self-similar groups A faithful action of a group $G$ on the set $X^{*}\cup{X^\omega}$ is called *self-similar* if for every $g\in G$ and $x\in X$ there exist $h\in G$ and $y\in X$ such that $$g(xw)=yh(w)$$ for all $w\in X^{*}\cup {X^\omega}$. The element $h$ is called the *restriction* of $g$ at $x$ and is denoted by $h=g|_x$. Inductively one defines the restriction $g|_{x_1x_2\ldots x_n}=g|_{x_1}|_{x_2}\ldots |_{x_n}$ for every word $x_1x_2\ldots x_n\in X^{*}$. Restrictions have the following properties: $$g(vu)=g(v)g|_v(u),\qquad g|_{vu}=g|_{v}|_{u}, \qquad (g\cdot h)|_v=g|_{h(v)}\cdot h|_v$$ for all $g,h\in G$ and $v,u\in X^{*}$ (we are using left actions, so that $(g h)(v)=g(h(v))$). If $X=\{1,2,\ldots,d\}$ then every element $g\in G$ can be uniquely represented by the tuple $(g|_1,g|_2,\ldots,g|_d)\pi_g$, where $\pi_g$ is the permutation induced by $g$ on the set $X$. It follows from the definition that every self-similar group $G$ preserves the length of words under its action on the space $X^{*}$, so that we have an action of the group $G$ on the set $X^n$ for every $n$. The set $X^{*}$ can be naturally identified with a rooted regular tree, where the root is labeled by the empty word $\emptyset$, the first level is labeled by the elements of $X$, and the $n$-th level corresponds to $X^n$. The set ${X^\omega}$ can be identified with the boundary of the tree. Every self-similar group acts by automorphisms on this rooted tree and by homeomorphisms on its boundary. ### Automata and automaton groups Another way to introduce self-similar groups is through input-output automata and automaton groups. A transducer *automaton* is a quadruple $(S,X,{t},{o})$, where $S$ is the set of states of the automaton; $X$ is an alphabet; ${t}: S\times X \rightarrow S$ is the transition map; and ${o}: S\times X \rightarrow X$ is the output map. We will use the notation $S$ for both the set of states and the automaton itself.
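As a minimal illustration of the self-similar recursion $g(xw)=y\,h(w)$, the following sketch implements the binary adding machine $a=(1,a)\sigma$ (the automaton mentioned in the introduction), which adds $1$ to a binary number written with the least significant digit first. All function names here are illustrative, not from the paper.

```python
# Sketch of a self-similar action in wreath-recursion form: the binary
# adding machine a = (1, a)sigma over X = {0, 1}.

def adding_machine(word):
    """Act on a finite word over X = {0, 1}: binary addition of 1 with carry."""
    if not word:
        return word
    x, rest = word[0], word[1:]
    if x == 0:
        # a(0w) = 1w : the output letter flips, the restriction is trivial
        return (1,) + rest
    # a(1w) = 0 a(w) : output 0, the restriction is a itself (the carry)
    return (0,) + adding_machine(rest)

print(adding_machine((1, 1, 0)))  # (0, 0, 1): 3 + 1 = 4 in least-significant-first binary
```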
An automaton is *finite* if it has finitely many states, and it is *invertible* if, for all $s\in S$, the transformation ${o}(s, \cdot):X\rightarrow X$ is a permutation of $X$. An automaton can be represented by a directed labeled graph whose vertices are identified with the states and which, for every state $s\in S$ and every letter $x\in X$, has an arrow from $s$ to ${t}(s,x)$ labeled by $x|{o}(s,x)$. This graph contains complete information about the automaton, and we will identify them. When talking about paths and cycles in automata we always mean directed paths and cycles in the corresponding graph representations. Every state $s\in S$ of an automaton defines a transformation on the set $X^{\ast}\cup X^{\omega}$, which is again denoted by $s$ by abuse of notation, as follows. Given a word $x_1x_2\ldots$ over $X$, there exists a unique path in $S$ starting at the state $s$ and labeled by $x_1|y_1$, $x_2|y_2$, … for some $y_i\in X$. Then $s(x_1x_2\ldots)=y_1y_2\ldots$. We always assume that our automata are minimal, i.e., that different states define different transformations. The state that defines the identity transformation is denoted by $1$. An automaton $S$ is invertible precisely when all transformations defined by its states are invertible. In this case one can consider the group generated by these transformations under composition of functions, which is called the *automaton group* generated by $S$ and is denoted by $G(S)$. The natural action of every automaton group on the space ${X^\omega}$ is self-similar, and vice versa, every self-similar action of a group $G$ can be given by the automaton with the set of states $G$ and arrows $g\rightarrow g|_x$ labeled by $x|g(x)$ for all $g\in G$ and $x\in X$.
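The transformation defined by a state, obtained by following the unique path labeled $x_1|y_1$, $x_2|y_2$, …, can be sketched directly from the maps $t$ and $o$. The data below again encode the adding machine; the dictionary representation is an illustrative choice, not notation from the paper.

```python
# Sketch: the transformation on X^* defined by a state of an automaton
# (S, X, t, o). States are '1' (trivial) and 'a' (the adding machine).

t = {('a', 0): '1', ('a', 1): 'a', ('1', 0): '1', ('1', 1): '1'}  # transition map
o = {('a', 0): 1, ('a', 1): 0, ('1', 0): 0, ('1', 1): 1}          # output map

def act(state, word):
    """Follow the unique path starting at `state` labeled x_1|y_1, x_2|y_2, ..."""
    out = []
    for x in word:
        out.append(o[(state, x)])
        state = t[(state, x)]   # the current state is the restriction so far
    return tuple(out)

print(act('a', (1, 1, 0)))  # (0, 0, 1), matching the wreath-recursion computation
```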
### Contracting self-similar groups A self-similar group $G$ is called *contracting* if there exists a finite set ${\mathcal{N}}\subset G$ with the property that for every $g\in G$ there exists $n\in\mathbb{N}$ such that $g|_v\in{\mathcal{N}}$ for all words $v$ of length greater than or equal to $n$. The smallest set ${\mathcal{N}}$ with this property is called the *nucleus* of the group. It is clear from the definition that $h|_x\in{\mathcal{N}}$ for every $h\in{\mathcal{N}}$ and $x\in X$, and therefore the nucleus ${\mathcal{N}}$ can be considered as an automaton. Moreover, every state of ${\mathcal{N}}$ has an incoming arrow, because otherwise the minimality of the nucleus would be violated. Also, the nucleus is symmetric, i.e., $h^{-1}\in{\mathcal{N}}$ for every $h\in{\mathcal{N}}$. A self-similar group $G$ is called *self-replicating* (or recurrent) if it acts transitively on $X$, and the map $g\mapsto g|_x$ from the stabilizer $Stab_G(x)$ to the group $G$ is surjective for some (equivalently, every) letter $x\in X$. It can be shown that a self-replicating group acts transitively on $X^n$ for every $n\geq 1$. It is also easy to see ([@self_sim_groups Proposition 2.11.3]) that if a finitely generated contracting group is self-replicating, then its nucleus ${\mathcal{N}}$ is a generating set. Schreier graphs vs tile graphs of self-similar groups {#section schreier vs tile} ----------------------------------------------------- Let $G$ be a group generated by a finite set $S$ and let $H$ be a subgroup of $G$. The *(simplicial) Schreier coset graph ${\Gamma}(G,S,H)$* of the group $G$ is the graph whose vertices are the left cosets $G/H=\{gH : g\in G\}$, and two vertices $g_1H$ and $g_2H$ are adjacent if there exists $s\in S$ such that $g_2H=sg_1H$ or $g_1H=sg_2H$.
If $G$ is a group acting on a set $M$, then the corresponding (simplicial) *Schreier graph ${\Gamma}(G,S,M)$* is the graph with the set of vertices $M$, where two vertices $v$ and $u$ are adjacent if there exists $s\in S$ such that $s(v)=u$ or $s(u)=v$. If the action $(G,M)$ is transitive, then the Schreier graph ${\Gamma}(G,S,M)$ is isomorphic to the Schreier coset graph ${\Gamma}(G,S,Stab_G(m))$ of the group with respect to the stabilizer $Stab_G(m)$ for any $m\in M$. Let $G$ be a self-similar group generated by a finite set $S$. The sets $X^n$ are invariant under the action of $G$, and we denote the associated Schreier graphs by ${\Gamma}_n={\Gamma}_n(G,S)$. For a point $w\in{X^\omega}$ we consider the action of the group $G$ on the $G$-orbit of $w$; the associated Schreier graph is called the *orbital Schreier graph* and is denoted by ${\Gamma}_w={\Gamma}_w(G,S)$. For every $w\in{X^\omega}$ we have $Stab_G(w)=\bigcap_{n\geq 1} Stab_G(w_n)$, where $w_n$ denotes the prefix of length $n$ of the infinite word $w$. The connected component of the rooted graph $(\Gamma_n,w_n)$ around the root $w_n$ is exactly the Schreier graph of $G$ with respect to the stabilizer of $w_n$. It follows immediately that the graphs $(\Gamma_n,w_n)$ converge to the graph $(\Gamma_{w},w)$ in the pointed Gromov-Hausdorff topology [@gromov]. Besides the Schreier graphs, we will also work with their subgraphs called tile graphs. The *tile graph* $T_n=T_n(G,S)$ is the graph with the set of vertices $X^n$, where two vertices $v$ and $u$ are adjacent if there exists $s\in S$ such that $s(v)=u$ and $s|_v=1$. The tile graph $T_n$ is thus a subgraph of the Schreier graph ${\Gamma}_n$. To define a tile graph for the action on the space ${X^\omega}$, consider the same set of vertices as in ${\Gamma}_w$ and connect vertices $v$ and $u$ by an edge if there exists $s\in S$ such that $s(v)=u$ and $s|_{v'}=1$ for some finite beginning $v'\in X^{*}$ of the sequence $v$.
The connected component of this graph containing the vertex $w$ is called the *orbital tile graph* $T_w$. It is clear from the construction that we also have the convergence $(T_n,w_n)\rightarrow (T_{w},w)$ in the pointed Gromov-Hausdorff topology. The study of orbital tile graphs $T_w$ is based on the approximation by finite tile graphs $T_n$. Namely, we will frequently use the following observation. Every tile graph $T_n$ can be considered as a subgraph of $T_w$ under the inclusion $v\mapsto v\sigma^n(w)$. Indeed, if $v$ and $u$ are adjacent in $T_n$ then $v\sigma^n(w)$ and $u\sigma^n(w)$ are adjacent in $T_w$. Moreover, every edge of $T_w$ appears in the graph $T_n$ for all large enough $n$. Hence the graphs $T_n$ viewed as subgraphs of $T_w$ form a cover of $T_w$. Schreier graphs of groups generated by bounded automata {#subsection_BoundedAutomata} ------------------------------------------------------- ### Bounded automata (Sidki [@sidki:circ]) A finite invertible automaton $S$ is called *bounded* if one of the following equivalent conditions holds: 1. the number of paths of length $n$ in $S\setminus\{1\}$ is bounded independently of $n$; 2. the number of left- (equivalently, right-) infinite paths in $S\setminus\{1\}$ is finite; 3. any two nontrivial cycles in the automaton are disjoint and not connected by a path, where a cycle is called trivial if it is a loop at the trivial state; 4. the number of left- (equivalently, right-) infinite sequences, which are read along left- (respectively, right-) infinite paths in $S\setminus\{1\}$, is finite.
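Condition (1) can be checked empirically by counting directed walks in $S\setminus\{1\}$ with a standard dynamic-programming step. The sketch below does this for the adding machine, whose only nontrivial state carries a single loop; the data and names are illustrative assumptions, not part of the paper.

```python
# Sketch of criterion (1) for boundedness: count directed paths of
# length n in the graph S \ {1} given by an edge list.

def count_paths(edges, states, n):
    """Number of directed walks of length n in the graph (states, edges)."""
    walks = {s: 1 for s in states}   # walks of length 0 ending at each state
    for _ in range(n):
        nxt = {s: 0 for s in states}
        for (a, b) in edges:
            nxt[b] += walks[a]       # extend every walk ending at a by the edge a -> b
        walks = nxt
    return sum(walks.values())

# Adding machine: the only nontrivial state 'a' has a single loop (on letter 1),
# so the counts stay bounded, in agreement with condition (1).
edges = [('a', 'a')]
print([count_paths(edges, ['a'], n) for n in range(1, 6)])  # [1, 1, 1, 1, 1]
```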
The states of a bounded automaton $S$ can be classified as follows:\ –  a state $s$ is *finitary* if there exists $n\in\mathbb{N}$ such that $s|_v=1$ for all $v\in X^n$;\ –  a state $s$ is *circuit* if there exists a nonempty word $v\in X^n$ such that $s|_v=s$,\ i.e., $s$ belongs to a cycle in $S$; in this case $s|_u$ is finitary for every $u\in X^n$, $u\neq v$;\ –  for every state $s$ there exists $n\in\mathbb{N}$ such that for every $v\in X^n$ the state $s|_v$ is\ either finitary or circuit.\ By passing to a power $X^m$ of the alphabet $X$ every bounded automaton can be brought to the *basic form* (see [@self_sim_groups Proposition 3.9.11]) in which the above items hold with $n=1$; in particular, all cycles are loops, and $s|_x=1$ for every finitary state $s$ and every $x\in X$ (here for $m$ we can take an integer which is greater than the diameter of the automaton and is a multiple of the length of every simple cycle). Every self-similar group $G$ generated by a bounded automaton is contracting (see [@self_sim_groups Theorem 3.9.12]). Its nucleus is a bounded automaton, which contains only finitary and circuit states (because every state of the nucleus must have an incoming arrow). ### Cofinality $\&$ post-critical, critical and regular sequences In this section we introduce the notions of critical and post-critical sequences, which are fundamental for our analysis. Let $G$ be a contracting self-similar group generated by an automaton $S$, and assume $S=S^{-1}$. Let us describe the vertex sets of the orbital tile graphs $T_w=T_w(G,S)$. Two right- (or left-) infinite sequences are called *cofinal* if they differ only in finitely many letters. Cofinality is an equivalence relation on ${X^\omega}$ and ${X^{-\omega}}$. The respective equivalence classes are called the cofinality classes and they are denoted by $Cof(\cdot)$. The following statement characterizes vertices of tile graphs in terms of cofinality.
\[cofinality\] Suppose that the tile graphs $T_n(G,S)$ are connected for all $n\in\mathbb{N}$. Then for every $w\in{X^\omega}$ the cofinality class $Cof(w)$ is the set of vertices of the orbital tile graph $T_w(G,S)$. If $g(v)=u$ for $v,u\in X^{\omega}$ and $g|_{v'}=1$ for a finite beginning $v'$ of $v$ then $v$ and $u$ are cofinal. Conversely, since every graph $T_n$ is connected, for every $v,u\in X^{n}$ there exists $g\in G$ such that $g(v)=u$ and $g|_v=1$. Hence $g(vw)=uw$ for all $w\in X^{\omega}$, and any two cofinal sequences belong to the same orbital tile graph. We classify infinite sequences over $X$ as follows. A left-infinite sequence $\ldots x_2x_1\in{X^{-\omega}}$ is called *post-critical* if there exists a left-infinite path $\ldots e_2e_1$ in the automaton $S\setminus \{1\}$ labeled by $\ldots x_2x_1|\ldots y_2y_1$ for some $y_i\in X$. The set ${\mathscr{P}}$ of all post-critical sequences is called the *post-critical set*. A right-infinite sequence $w=x_1x_2\ldots\in{X^\omega}$ is called *critical* if there exists a right-infinite path $e_1e_2\ldots$ in the automaton $S\setminus \{1\}$ labeled by $x_1x_2\ldots|y_1y_2\ldots$ for some $y_i\in X$. It follows that every shift $\sigma^n(w)$ of a critical sequence $w$ is again critical, and for every $n\in\mathbb{N}$ there exists $v\in X^n$ such that $vw$ is critical (here we use the assumption that every path in $S$ can be continued to the left). It is proved in [@PhDBondarenko Proposition IV.18] (see also [@self_sim_groups Proposition 3.2.7]) that the set of post-critical sequences coincides with the set of sequences that can be read along left-infinite paths in the nucleus of the group with removed trivial state. The same proof works for critical sequences. Therefore the sets of critical and post-critical sequences do not depend on the chosen generating set (as soon as it satisfies the assumption that every state of the automaton $S$ has an incoming arrow, and $S^{-1}=S$).
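For an automaton in basic form, where all cycles are loops and finitary states have trivial restrictions, a left-infinite path in $S\setminus\{1\}$ is a loop followed by at most one extra edge, so the post-critical set can be enumerated mechanically. The sketch below does this for the adding machine with $S=\{a,a^{-1}\}$ (the state $a^{-1}$ is written `'A'`); the encoding is an illustrative assumption.

```python
# Sketch: enumerating the post-critical set of a bounded automaton in
# basic form. Transitions are restricted to S \ {1}; '1' marks arrows
# into the trivial state.

t = {('a', 1): 'a', ('a', 0): '1', ('A', 0): 'A', ('A', 1): '1'}

loops = {s: x for (s, x), u in t.items() if u == s}   # loop state -> loop letter
post = {(y,) for y in loops.values()}                 # sequences y^{-w}, stored as (y,)
for (s, x), u in t.items():
    # one nontrivial edge leaving a loop yields a sequence y^{-w} x
    if s in loops and x != loops[s] and u != '1':
        post.add((loops[s], x))
print(sorted(post))  # [(0,), (1,)]: the post-critical set {0^{-w}, 1^{-w}}
```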
Finally, a sequence $w\in{X^\omega}$ is called *regular* if the cofinality class of $w$ does not contain critical sequences, or, equivalently, if the shifted sequence $\sigma^n(w)$ is not critical for every $n\geq 0$. Notice that the cofinality class of a critical sequence contains sequences which are neither regular nor critical. \[prop\_properties\_of\_sequences\] Suppose that the automaton $S$ is bounded. Then the sets of critical and post-critical sequences are finite. Every post-critical sequence is pre-periodic. Every cofinality class contains at most one critical sequence. The cofinality class of a regular sequence contains only regular sequences. If $w$ is regular, then there exists a finite beginning $v$ of $w$ such that $s|_v=1$ for every $s\in S$. The number of right- and left-infinite paths avoiding the trivial state is finite in every bounded automaton. Thus the number of critical and post-critical sequences is finite. The pre-periodicity of post-critical sequences and the periodicity of critical sequences follow from the cyclic structure of bounded automata. The statement about the cofinality class of a critical sequence follows immediately, because different periodic sequences cannot differ only in finitely many letters. Finally, if $w=x_1x_2\ldots\in{X^\omega}$ is regular, then starting from any state $s\in S$ and following the edges labeled by $x_1|*,x_2|*,\ldots$ we eventually end at the trivial state. Hence there exists $n$ such that $s|_{x_1x_2\ldots x_n}=1$ for all $s\in S$. Note that if the automaton $S$ is in the basic form, then every post-critical sequence is of the form $y^{-\omega}$ or $y^{-\omega}x$ for some letters $x,y\in X$, and every critical sequence is of the form $x^{\omega}$ for some $x\in X$. ### Inflation of graphs {#section_inflation} Let $G$ be a group generated by a bounded automaton $S$. We assume that $S=S^{-1}$ and that every state of $S$ has an incoming arrow.
In what follows we describe an inductive method (called *inflation of graphs*), developed in [@PhDBondarenko Chapter V], to construct the tile graphs $T_n=T_n(G,S)$. Let $p=\ldots x_2x_1\in{\mathscr{P}}$ be a post-critical sequence. The vertex $p_n=x_n\ldots x_2x_1$ of the graphs ${\Gamma}_n$ and $T_n$ will be called *post-critical*. Since the post-critical set ${\mathscr{P}}$ is finite, for all large enough $n$, the post-critical vertices of ${\Gamma}_n$ and $T_n$ are in one-to-one correspondence with the elements of ${\mathscr{P}}$ (just take $n$ large enough so that $p_n\neq q_n$ whenever $p\neq q$). Hence, with a slight abuse of notation, we will consider the elements of the set ${\mathscr{P}}$ as the vertices of the graphs ${\Gamma}_n$ and $T_n$. Let $E$ be the set of all pairs $\{(p,x),(q,y)\}$ for $p,q\in{\mathscr{P}}$ and $x,y\in X$ such that there exists a left-infinite path in the automaton $S$, which ends in the trivial state and is labeled by the pair $px|qy$. [@PhDBondarenko Theorem V.8]\[th\_tile\_graph\_construction\] To construct the tile graph $T_{n+1}$ take $|X|$ copies of the tile graph $T_n$, identify their sets of vertices with $X^nx$ for $x\in X$, and connect two vertices $vx$ and $uy$ by an edge if and only if $v,u\in{\mathscr{P}}$ and $\{(v,x),(u,y)\}\in E$. The procedure of inflation of graphs given in Theorem \[th\_tile\_graph\_construction\] can be described using the graph $M$ with the vertex set ${\mathscr{P}}\times X$ and the edge set $E$, which we call the *model graph* associated to the automaton $S$. The vertex $(p,x)$ of $M$ is called *post-critical* if the sequence $px$ is post-critical. Note that the post-critical vertices of $M$ are in one-to-one correspondence with the elements of the post-critical set ${\mathscr{P}}$.
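The inflation step just described can be sketched concretely. For the adding machine the post-critical set is $\{0^{-\omega},1^{-\omega}\}$ and $E$ consists of the single pair $\{(1^{-\omega},0),(0^{-\omega},1)\}$, so each tile graph is a path; the data encode this example and the names are illustrative assumptions.

```python
# Sketch of the inflation step: T_{n+1} is built from |X| copies of T_n
# plus the edges prescribed by the model-graph edge set E. A post-critical
# sequence is named by its eventual letter ('0' for 0^{-w}, '1' for 1^{-w}).

X = (0, 1)
E = [(('1', 0), ('0', 1))]                                 # edge set E for the adding machine
post = {'0': lambda n: (0,) * n, '1': lambda n: (1,) * n}  # p -> its vertex p_n

def inflate(edges_n, n):
    """edges_n: edge list of T_n on vertices in X^n (tuples); returns edges of T_{n+1}."""
    edges = [(v + (x,), u + (x,)) for (v, u) in edges_n for x in X]  # the |X| copies
    for ((p, x), (q, y)) in E:
        edges.append((post[p](n) + (x,), post[q](n) + (y,)))         # connecting edges
    return edges

T = []          # T_0 has a single vertex (the empty word) and no edges
for n in range(3):
    T = inflate(T, n)
print(len(T), sorted(T))  # 7 edges forming a path on the 8 words of X^3
```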
Now if we “place” the graph $T_n$ in the model graph in place of the vertices ${\mathscr{P}}\times \{x\}$ for each $x\in X$, so that the post-critical vertices of $T_n$ coincide with the set ${\mathscr{P}}\times \{x\}$, we get the graph $T_{n+1}$. Moreover, the post-critical vertices of $M$ will correspond to the post-critical vertices of $T_{n+1}$. In order to construct the Schreier graph ${\Gamma}_n$ we can take the tile graph $T_n$ and add an edge between post-critical vertices $p$ and $q$ if $s(p)=q$ for some $s\in S$. Indeed, if $s(v)=u$ and $s|_v\neq 1$ (an edge that does not appear in $T_n$) then $v$ and $u$ are post-critical vertices, and they must be adjacent in ${\Gamma}_n$. Notice that there are only finitely many added edges (independently of $n$) and they can be described directly in terms of the generating set $S$. Namely, define the set $E({\Gamma}\setminus T)$ as the set of all pairs $\{p,q\}$ for $p,q\in{\mathscr{P}}$ such that there exists a left-infinite path in $S$ labeled by the pair $p|q$ or $q|p$. Then if we take the tile graph $T_n$ and add an edge between $p_n$ and $q_n$ for every $\{p,q\}\in E({\Gamma}\setminus T)$, we get the Schreier graph ${\Gamma}_n$. Ends of tile graphs and Schreier graphs {#Section_Ends} ======================================= In this section we present the main results about the number of ends of Schreier graphs $\Gamma_w$ of groups generated by bounded automata. Our method passes through the study of the same problem for the tile graphs $T_w$. First, we show that the number of ends of graphs $T_w$ can be deduced from the number of connected components in tile graphs with a vertex removed (see Proposition \[prop\_ends\_limit\]). We use the inflation procedure to construct a finite deterministic automaton ${\mathsf{A}}_{ic}$, which, given a sequence $w\in{X^\omega}$, determines the number of infinite connected components in the graph $T_w$ with the vertex $w$ removed (see Proposition \[prop\_infcomp\_A\_ic\]).
Then we describe all sequences $w\in{X^\omega}$ such that $T_w$ has a given number of ends in terms of sofic subshifts associated to strongly connected components of the automaton ${\mathsf{A}}_{ic}$. Furthermore, we deduce the number of ends of the Schreier graph $\Gamma_w$ (see Corollary \[remarktile\]). After this, we pass to the study of the number of ends of a random Schreier graph $\Gamma_w$. We show that, picking an element $w\in X^{\omega}$ at random, the graph $\Gamma_w$ has almost surely one or two ends (see Corollary \[cor\_one\_or\_two\_ends\_a\_s\]). The latter case is completely described (see Theorem \[th\_classification\_two\_ends\]). Technical assumptions --------------------- In what follows, except for a few special cases directly indicated, we make the following assumptions about the studied self-similar groups $G$ and their generating sets $S$:\ \ 1. [*The group $G$ is generated by a bounded automaton $S$.*]{}\ \ 2. [*The tile graphs $T_n=T_n(G,S)$ are connected.*]{}\ \ 3. [*Every state of the automaton $S$ has an incoming arrow, and $S^{-1}=S$.*]{}\ \ Instead of assumption 2 it is enough to require that the group acts transitively on $X^n$ for every $n\geq 1$, i.e., that the Schreier graphs ${\Gamma}_n(G,S)$ are connected. Then, even if the tile graphs $T_n(G,S)$ are not connected, there is a uniform bound on the number of connected components in $T_n(G,S)$ (see how the Schreier graphs are constructed from the tile graphs after Theorem \[th\_tile\_graph\_construction\]), and one can apply the developed methods to each component. Assumption 3 is technical: it guarantees that every directed path in the automaton $S$ can be continued to the left. If the generating set $S$ contains a state $s'$ which has no incoming edges, then $s|_x\in S\setminus\{s'\}$ for every $s\in S$ and $x\in X$, and hence the state $s'$ does not affect the asymptotic properties of the tile or Schreier graphs.
Moreover, if the group is self-replicating, then property 3 is always satisfied when we take its nucleus ${\mathcal{N}}$ as the generating set $S$. The number of ends and infinite components in tile graphs with a vertex removed ------------------------------------------------------------------------------- For a graph ${\Gamma}$ and its vertex $v$ we denote by ${\Gamma}\setminus v$ the graph obtained from ${\Gamma}$ by removing the vertex $v$ together with all edges adjacent to $v$. Let $\Gamma$ be an infinite, locally finite graph. A *ray* in $\Gamma$ is an infinite sequence of adjacent vertices $v_1, v_2, \ldots $ in $\Gamma$ such that $v_i\neq v_j$ for $i\neq j$. Two rays $r$ and $r'$ are equivalent if for every finite subset $F\subset\Gamma$ infinitely many vertices of $r$ and $r'$ belong to the same connected component of $\Gamma\setminus F$. An *end* of $\Gamma$ is an equivalence class of rays. In what follows we use the notation: - $\#Ends({\Gamma})$ is the number of ends of ${\Gamma}$; - ${c}({\Gamma})$ is the number of connected components in the graph ${\Gamma}$; - ${ic}({\Gamma})$ is the number of infinite connected components in ${\Gamma}$. We will show later that the number ${ic}({\Gamma})$ can be computed for $\Gamma=T_w\setminus w$. The following proposition relates this value to the number of ends of $T_w$. \[prop\_ends\_limit\] Every tile graph $T_w$ for $w\in{X^\omega}$ has finitely many ends, and their number equals $$\#Ends(T_w)=\lim_{n\rightarrow\infty} {ic}(T_{\sigma^n(w)}\setminus \sigma^n(w)),$$ where $\sigma$ is the shift map on the space ${X^\omega}$. Let us show that the number of infinite connected components of the graphs $T_{\sigma^n(w)}\setminus \sigma^n(w)$ and $T_w\setminus X^n\sigma^n(w)$ is the same for every $n$.
Consider the natural partition of the set of vertices of $T_w$ given by $$Cof(w)=\bigsqcup_{w'\in Cof(\sigma^n(w))} X^nw'.$$ Using the graph $T_w$ we construct a new graph $\mathscr{G}$ with the set of vertices $Cof(\sigma^n(w))$, where two vertices $v$ and $u$ are adjacent if there exist $v',u'\in X^n$ such that $v'v$ and $u'u$ are adjacent in $T_w$. The graph $\mathscr{G}$ is isomorphic to the tile graph $T_{\sigma^n(w)}$ under the identity map on $Cof(\sigma^n(w))$. Indeed, let $v$ and $u$ be adjacent in $\mathscr{G}$. Then there exist $v',u'\in X^n$ and $s\in S$ such that $s(v'v)=u'u$ and $s|_{v'v''}=1$ for a finite beginning $v''$ of $v$. It follows that $s|_{v'}(v)=u$, $s|_{v'}|_{v''}=1$, and $s|_{v'}\in S$, because of self-similarity. Therefore $v$ and $u$ are adjacent in $T_{\sigma^n(w)}$. Conversely, suppose $s(v)=u$ and $s|_{v''}=1$ for some $s\in S$ and a finite beginning $v''$ of $v$. Since each element of $S$ has an incoming edge, there exist $s'\in S$ and $v',u'\in X^n$ such that $s'(v'v)=u'u$ and $s'|_{v'v''}=1$. Hence $v$ and $u$ are adjacent in the graph $\mathscr{G}$. The subgraph of $T_{w}$ spanned by every set of vertices $X^nw'$ for $w'\in Cof(\sigma^n(w))$ is connected, because, by assumption, the tile graphs $T_n$ are connected. Hence, the number of infinite connected components in $T_w\setminus X^n\sigma^n(w)$ is equal to the number of infinite connected components in $T_{\sigma^n(w)}\setminus \sigma^n(w)$. In particular, this number is bounded by the size of the generating set $S$. Every infinite component of $T_w\setminus X^n \sigma^n(w)$ contains at least one end. Hence the estimate $$\#Ends(T_w)\geq {ic}(T_w\setminus X^n \sigma^n(w))={ic}(T_{\sigma^n(w)}\setminus \sigma^n(w))$$ holds for all $n$. In particular $$\#Ends(T_w)\geq \lim_{n\rightarrow\infty} {ic}(T_{\sigma^n(w)}\setminus \sigma^n(w))$$ For the converse consider the ends $\gamma_1,\ldots,\gamma_k$ of the graph $T_{w}$. 
They can be pairwise separated by removing finitely many vertices. Take $n$ large enough so that the set $X^n\sigma^n(w)$ separates the ends $\gamma_i$. Since every end belongs to an infinite component, we get at least $k$ infinite components of $T_{\sigma^n(w)}\setminus \sigma^n(w)$. In particular, the number of ends is finite and the statement follows. Note that the number of ends of every tile graph $T_w$ is not greater than the maximal degree of its vertices, i.e., $\# Ends(T_w)\leq |S|$. Now let us show how to compute the number ${ic}(T_w\setminus w)$ in terms of the components that contain post-critical vertices. In other words, only components of $T_n\setminus w_n$ with post-critical vertices give a positive contribution, in the limit, to the number of infinite components. In order to do that, we denote by ${pc}(T_n\setminus w_n)$ the number of connected components of $T_n\setminus w_n$ that contain a post-critical vertex. \[prop\_infcomp\_limit\] Let $w=x_1x_2\ldots\in{X^\omega}$ be a regular or a critical sequence. Then ${pc}(T_n\setminus w_n)$ is an eventually non-increasing sequence and $$\begin{aligned} {ic}(T_w\setminus w)=\lim_{n\rightarrow\infty} {pc}(T_n\setminus w_n).\end{aligned}$$ Choose $n$ large enough so that the subgraph $T_n$ of $T_w$ contains all edges of $T_w$ adjacent to the vertex $w$. Notice that if a vertex $v$ of $T_n$ is adjacent to some vertex $s(v)$ in $T_w\setminus T_n$, then $s|_v\neq 1$ and thus $v$ is post-critical. It follows that if $C$ is a connected component of $T_n\setminus w_n$ without post-critical vertices, then all the edges of the graph $T_w\setminus w$ adjacent to the component $C$ are contained in the graph $T_n\setminus w_n$. Hence $C$ is a finite component of $T_w\setminus w$. Therefore the number of infinite components of $T_w\setminus w$ is not greater than the number of components of $T_n\setminus w_n$ that contain a post-critical vertex.
It follows that ${ic}(T_w\setminus w)\leq {pc}(T_{n+k}\setminus w_{n+k})\leq {pc}(T_n\setminus w_n)$ for all $k\geq 1$. In fact, if $v, v'$ are post-critical and belong to the same connected component of $T_n\setminus w_n$, then there exists a path given by generators $s_1,\ldots, s_m$ connecting them such that $s_i|_{s_1\cdots s_{i-1}(v_n)}=1$; in particular, we also have $s_i|_{s_1\cdots s_{i-1}(v_{n+k})}=1$ for $k\geq 1$. This implies the monotonicity of ${pc}(T_{n+k}\setminus w_{n+k})$ for $k\geq 1$. Suppose now that $w$ is regular. Let $C$ be a finite component of $T_w\setminus w$. Since $C$ is finite, every edge inside $C$ appears in the graph $T_n$ for all large enough $n$, and thus $C$ is a connected component of $T_n\setminus w_n$. Since $C$ contains only regular sequences, the last statement in Proposition \[prop\_properties\_of\_sequences\] implies that for all $s\in S$ and every vertex $v$ in $C$ we have $s|_{v_n}=1$ for all large enough $n$. In other words, the vertex $v_n$ of $T_n$ is not post-critical for every vertex $v$ in $C$. Therefore the component $C$ is not counted in the number ${pc}(T_n\setminus w_n)$. Hence ${ic}(T_w\setminus w)={pc}(T_n\setminus w_n)$ for all large enough $n$. The same arguments work if $w$ is critical, because every cofinality class contains at most one critical sequence, and hence the graph $T_w\setminus w$ has no critical sequences. With a slight modification the last proposition also works for a sequence $w$ which is not critical but is cofinal to some critical sequence $u$. In this case, we can count the number of connected components of $T_n\setminus w_n$ that contain post-critical vertices other than $u_n$, and then pass to the limit to get the number of infinite components in $T_w\setminus w$. Indeed, it is enough to notice that if the graph $T_n\setminus w_n$ contains a connected component $C$ with precisely one post-critical vertex $u_n$ for large enough $n$, then $C$ is a finite component in the graph $T_w\setminus w$.
Under this modification the proposition may be applied to any sequence. Also to find the number of ends it is enough to know that the limit in Proposition \[prop\_infcomp\_limit\] is valid for regular and critical sequences. For any sequence $w$ cofinal to a critical sequence $u$ we just consider the graph $T_w=T_u$ centered at the vertex $u$ and apply the proposition. It is not difficult to observe that one can use the same method to obtain the number ${c}(T_w\setminus w)$ of all connected components of $T_w\setminus w$. In particular, one has $${c}(T_w\setminus w)=\lim_{n\rightarrow\infty} {c}(T_n\setminus w_n).$$ Finite automaton to determine the number of components in tile graphs with a vertex removed {#finite automaton section} ------------------------------------------------------------------------------------------- Using the iterative construction of tile graphs given in Theorem \[th\_tile\_graph\_construction\] we can provide a recursive procedure to compute the numbers ${pc}(T_n\setminus w_n)$. We will construct a finite deterministic (acceptor) automaton ${\mathsf{A}}_{ic}$ with the following structure: it has a unique initial state, each arrow in ${\mathsf{A}}_{ic}$ is labeled by a letter $x\in X$, each state of ${\mathsf{A}}_{ic}$ is labeled by a partition of a subset of the post-critical set ${\mathscr{P}}$. The automaton ${\mathsf{A}}_{ic}$ will have the property that, given a word $v\in X^n$, the final state of ${\mathsf{A}}_{ic}$ after reading $v$ corresponds to the partition of the post-critical vertices of the graph $T_n$ induced by the connected components of $T_n\setminus v$. Then ${pc}(T_n\setminus v)$ is just the number of parts in this partition. We start with the following crucial consideration for the construction of the automaton ${\mathsf{A}}_{ic}$. Let $v$ be a vertex of the tile graph $T_n$. The components of $T_n\setminus v$ partition the set of post-critical vertices of $T_n$. 
Let us consider only those components that contain at least one post-critical vertex. Let ${\mathscr{P}}_i\subset {\mathscr{P}}$ be the set of all post-critical sequences which represent post-critical vertices in the $i$-th component. If the vertex $v$ is not post-critical, then $\sqcup_{i} {\mathscr{P}}_i={\mathscr{P}}$. Otherwise, $\sqcup_{i} {\mathscr{P}}_i$ is a proper subset of ${\mathscr{P}}$; every sequence $p$ in ${\mathscr{P}}\setminus \sqcup_{i} {\mathscr{P}}_i$ represents $v$, i.e., $v=p_n$ (for all large enough $n$ the set ${\mathscr{P}}\setminus \sqcup_{i} {\mathscr{P}}_i$ consists of just one post-critical sequence, while for small values of $n$ the same vertex may be represented by several post-critical sequences). In any case, we say that $\{ {\mathscr{P}}_i \}_i$ is the *partition* (of a subset of ${\mathscr{P}}$) *induced by the vertex* $v$. If $T_n\setminus v$ does not contain post-critical vertices (this happens when $v=p_n$ for every $p\in{\mathscr{P}}$), then we say that $v$ induces the empty partition $\{\emptyset\}$. The set of all partitions induced by the vertices of tile graphs is denoted by $\Pi$. The set $\Pi$ can be computed algorithmically. To see this, let us show how, given the partition $P=\{ {\mathscr{P}}_i \}_i$ induced by a vertex $v$ and a letter $x\in X$, one can find the partition $F=\{\mathscr{F}_j\}_j$ induced by the vertex $vx$. We will use the model graph $M$ associated to the automaton $S$ defined in Section \[section\_inflation\], which has the vertex set ${\mathscr{P}}\times X$ and edges $E$. Recall that the set ${\mathscr{P}}$ is identified with the set of post-critical vertices of $M$. Let us construct the auxiliary graph $M_{P,x}$ as follows: take the model graph $M$, add an edge between $(p,x)$ and $(q,x)$ for $p,q\in{\mathscr{P}}_i$ and every $i$, and add an edge between $(p,y)$ and $(q,y)$ for every $p,q\in{\mathscr{P}}$ and $y\in X, y\neq x$. Put $K=\{ (p,x) : p\in{\mathscr{P}}\setminus\sqcup_{i} {\mathscr{P}}_i \}$.
If the graph $M_{P,x}\setminus K$ contains no post-critical vertices, then we define $\{\mathscr{F}_j\}_j$ as the empty partition $\{\emptyset\}$. Otherwise, we consider the components of $M_{P,x}\setminus K$ with at least one post-critical vertex, and let $\mathscr{F}_j\subset{\mathscr{P}}$ be the set of all post-critical vertices/sequences in the $j$-th component. \[lemma construction Aic\] $\{\mathscr{F}_j\}_j$ is exactly the partition induced by the vertex $vx$. Let us consider the map $\varphi:M_{P,x}\rightarrow T_{n+1}$ given by $\varphi((p,y))=p_ny$, $p\in{\mathscr{P}}$ and $y\in X$. In general this map is neither surjective nor injective, nor is it a graph homomorphism. However, it preserves the inflation construction of the graph $T_{n+1}$ from the graph $T_n$. Namely, $\varphi$ maps each subset ${\mathscr{P}}\times\{y\}$ into the subset $X^ny$, and the edges in $E$ onto the edges of $T_{n+1}$ obtained in the construction (see Theorem \[th\_tile\_graph\_construction\]). Also, by definition of post-critical vertices, the map $\varphi$ sends the post-critical vertices of $M_{P,x}$ onto the post-critical vertices of $T_{n+1}$. Note that the set $K$ is exactly the preimage of the vertex $vx$ under $\varphi$. Therefore we can consider the restriction $\varphi:M_{P,x}\setminus K\rightarrow T_{n+1}\setminus vx$. For every $y\in X$, $y\neq x$, all vertices in ${\mathscr{P}}\times\{y\}$ belong to the same component of $M_{P,x}\setminus K$, and $\varphi$ maps these vertices to the same component of $T_{n+1}\setminus vx$, because the subgraph of $T_{n+1}$ induced by the set of vertices $X^ny$ contains the graph $T_n$, which is connected. For each $i$ the vertices in ${\mathscr{P}}_i\times\{x\}$ are mapped to the same component of the graph $T_{n+1}\setminus vx$, because its subgraph induced by the vertices $X^nx\setminus vx$ contains the graph $T_{n}\setminus v$, and the vertices corresponding to ${\mathscr{P}}_i$ lie in a single connected component of it. 
It follows that, if two post-critical vertices $p$ and $q$ can be connected by a path $p=v_0,v_1,\ldots,v_m=q$ in $M_{P,x}\setminus K$, then for every $i$ the vertices $\varphi(v_i)$ and $\varphi(v_{i+1})$ belong to the same component of $T_{n+1}\setminus vx$, and therefore the post-critical vertices $\varphi(p)$ and $\varphi(q)$ lie in the same component. Conversely, suppose $\varphi(p)$ and $\varphi(q)$ can be connected by a path $\gamma$ in $T_{n+1}\setminus vx$. We can subdivide $\gamma$ as $\gamma_1e_1\gamma_2e_2\ldots e_m\gamma_{m+1}$, where $e_i\in \varphi(E)$ and each subpath $\gamma_i$ is a path in a copy of $T_n$ inside $T_{n+1}$. The preimages of the end points of each $e_i$ belong to the same component in $M_{P,x}\setminus K$. Therefore $p$ and $q$ lie in the same component in $M_{P,x}\setminus K$. The statement follows. It follows that we can find the set $\Pi$ algorithmically as follows. Note that the empty partition $\{\emptyset\}$ is always an element of $\Pi$ (it is induced by the unique vertex of the tile $T_0$ of level zero). We start with $\{\emptyset\}$ and for each letter $x\in X$ construct a new partition $\{\mathscr{F}_j\}_j$ as above. We repeat this process for each new partition until no new partition is obtained. Since the set ${\mathscr{P}}$ is finite, the process stops in finite time. Then $\Pi$ is exactly the set of all obtained partitions. We construct the (acceptor) automaton ${\mathsf{A}}_{ic}$ over the alphabet $X$ on the set of states $\Pi$ with the unique initial state $\{\emptyset\}\in\Pi$. The transition function is given by the rule: for $\{{\mathscr{P}}_i\}_i\in\Pi$ and $x\in X$ we put $\{{\mathscr{P}}_i\}_i \xrightarrow{\ x\ } \{\mathscr{F}_j\}_j$, where $\{\mathscr{F}_j\}_j$ is defined as above. 
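The closure computation of $\Pi$ described above is straightforward to implement. The following Python sketch is our own illustration, not part of the paper: it hard-codes the model graph of the binary adding machine, whose post-critical set is ${\mathscr{P}}=\{p,q\}$ with $p=1^{-\omega}$, $q=0^{-\omega}$, post-critical vertices $(p,1)$ and $(q,0)$, and a single model edge $\{(p,0),(q,1)\}$; the names `step` and `compute_Pi` are hypothetical.

```python
from itertools import combinations

# Toy data (our illustration): model graph of the binary adding machine.
X = ["0", "1"]
P = ["p", "q"]                     # p = 1^{-w}, q = 0^{-w}
LAST = {"p": "1", "q": "0"}        # post-critical vertex of r is (r, LAST[r])
MODEL_EDGES = [(("p", "0"), ("q", "1"))]

def components(vertices, edges):
    """Connected components; edges with a deleted endpoint are ignored."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def step(partition, x):
    """Partition induced by vx, given the partition induced by v."""
    covered = {r for part in partition for r in part}
    K = {(r, x) for r in P if r not in covered}        # preimage of vx
    vertices = [(r, y) for r in P for y in X if (r, y) not in K]
    edges = list(MODEL_EDGES)
    for part in partition:                             # glue each part in level x
        edges += [((r, x), (s, x)) for r, s in combinations(sorted(part), 2)]
    for y in X:                                        # the other levels are connected
        if y != x:
            edges += [((r, y), (s, y)) for r, s in combinations(P, 2)]
    post = {(r, LAST[r]): r for r in P}                # post-critical vertices of M
    parts = [frozenset(post[v] for v in comp if v in post)
             for comp in components(vertices, edges)]
    return frozenset(p for p in parts if p)

def compute_Pi():
    """All induced partitions, by closure from the empty partition."""
    start = frozenset()
    Pi, queue = {start}, [start]
    while queue:
        p = queue.pop()
        for x in X:
            f = step(p, x)
            if f not in Pi:
                Pi.add(f)
                queue.append(f)
    return Pi

Pi = compute_Pi()
```

For this toy automaton $\Pi$ consists of four states: the empty partition, $\{\{p\}\}$, $\{\{q\}\}$, and $\{\{p\},\{q\}\}$; the last one is absorbing and carries the label $2$, matching the fact that the Schreier graphs of the adding machine are two-ended lines.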
The automaton ${\mathsf{A}}_{ic}$ has all the properties we described at the beginning of this subsection: given a word $v\in X^{*}$, the final state of ${\mathsf{A}}_{ic}$ after accepting $v$ is exactly the partition induced by the vertex $v$. Since we are interested in the number ${pc}(T_n\setminus v)$ of components in $T_n\setminus v$ containing post-critical vertices, we can label every state of ${\mathsf{A}}_{ic}$ by the number of components in the corresponding partition. We get the following statement. \[thm\_number\_of\_con\_comp\] The graph $T_{|v|}\setminus v$ has $k$ components containing a post-critical vertex if and only if the final state of the automaton ${\mathsf{A}}_{ic}$ after accepting the word $v$ is labeled by the number $k$. In particular, for every $k$ the set $C(k)$ of all words $v\in X^{*}$ such that the graph $T_{|v|}\setminus v$ has $k$ connected components containing a post-critical vertex is a regular language recognized by the automaton ${\mathsf{A}}_{ic}$. Similarly, we construct a finite deterministic acceptor automaton ${\mathsf{A}}_c$ for computing the number of all components in tile graphs with a vertex removed. The states of the automaton ${\mathsf{A}}_c$ will be pairs of the form $(\{{\mathscr{P}}_i\}_i,m)$, where $\{{\mathscr{P}}_i\}_i\in\Pi$ and $m$ is a non-negative integer, which will count the number of connected components without post-critical vertices. We start with the state $(\{\emptyset\},0)$, which is the unique initial state of ${\mathsf{A}}_c$, and successively construct new states and arrows as follows. Let $(P=\{{\mathscr{P}}_i\}_i,m)$ be a state already constructed. For each $x\in X$ we take the graph $M_{P,x}\setminus K$ constructed above, define $\{\mathscr{F}_j\}_j$ as above, and put $m_x$ to be equal to the number of connected components in $M_{P,x}\setminus K$ without post-critical vertices. 
If $(\{\mathscr{F}_j\}_j,m+m_x)$ is already a state of ${\mathsf{A}}_{c}$, then we add an arrow labeled by $x$ from the state $(\{{\mathscr{P}}_i\}_i,m)$ to the state $(\{\mathscr{F}_j\}_j,m+m_x)$. Otherwise, we introduce $(\{\mathscr{F}_j\}_j,m+m_x)$ as a new state and add this arrow. We repeat this process for each new state until no new state is obtained. Since the number of all components in $T_n\setminus v$ is not greater than $|S|$, the number $m$ cannot exceed $|S|$, and the construction stops in finite time. The automaton ${\mathsf{A}}_c$ has the following property: given a word $v\in X^{*}$, the final state of ${\mathsf{A}}_c$ after accepting the word $v$ is exactly the pair $(\{{\mathscr{P}}_i\}_i,m)$, where $\{{\mathscr{P}}_i\}_i$ is the partition induced by $v$ and $m$ is the number of components in $T_{|v|}\setminus v$ without post-critical vertices. Since we are interested only in the number $c(T_n\setminus v)$ of all components of $T_n\setminus v$, we label every state $(\{{\mathscr{P}}_i\}_i,m)$ of the automaton ${\mathsf{A}}_{c}$ by the number $k+m$, where $k$ is the number of sets in the partition $\{{\mathscr{P}}_i\}_i$. We get the following statement. \[prop\_number\_of\_con\_comp A\_c\] The graph $T_{|v|}\setminus v$ has $k$ components if and only if the final state of the automaton ${\mathsf{A}}_{c}$ after accepting the word $v$ is labeled by the number $k$. In particular, for every $k$ the set $C(k)$ of all words $v\in X^{*}$ such that the graph $T_{|v|}\setminus v$ has $k$ connected components is a regular language recognized by the automaton ${\mathsf{A}}_{c}$. We need the following properties of the state labels in the automata ${\mathsf{A}}_{ic}$ and ${\mathsf{A}}_{c}$. \[lemma\_properties\_A\_ic\] 1. In every strongly connected component of the automata ${\mathsf{A}}_{ic}$ and ${\mathsf{A}}_c$ all states are labeled by the same number. 2. 
All strongly connected components of the automaton ${\mathsf{A}}_{ic}$ without outgoing arrows are labeled by the same number. *1.* Suppose that there is a strongly connected component with two states labeled by different numbers. It would imply that there exists an infinite word such that the corresponding path in the automaton passes through each of these states an infinite number of times. We get a contradiction with Proposition \[prop\_infcomp\_limit\], because the sequences ${pc}(T_n\setminus w_n)$ and ${c}(T_n\setminus w_n)$ are eventually monotonic (for the latter the proof is the same). *2.* Suppose there are two strongly connected components in the automaton ${\mathsf{A}}_{ic}$ without outgoing arrows which are labeled by different numbers. Let $v$ and $u$ be finite words such that starting at the initial state of ${\mathsf{A}}_{ic}$ we end in the first and the second component, respectively. Then for the infinite sequence $vuvu\ldots$ the limit in Proposition \[prop\_ends\_limit\] does not exist, and we get a contradiction. Proposition \[prop\_infcomp\_limit\] together with Theorem \[thm\_number\_of\_con\_comp\] yields the following method for finding the number of infinite components in $T_w\setminus w$ for $w\in{X^\omega}$. \[prop\_infcomp\_A\_ic\] Let $w=x_1x_2\ldots\in{X^\omega}$ be a regular or critical sequence. The number of infinite connected components in $T_w\setminus w$ is equal to the label of a strongly connected component of ${\mathsf{A}}_{ic}$ that is visited infinitely often when the automaton reads the sequence $w$. The number of ends of tile graphs --------------------------------- The characterization of the number of infinite components in the graph $T_w\setminus w$ together with Proposition \[prop\_ends\_limit\] allows us to describe the number of ends of $T_w$. 
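To illustrate Proposition \[prop\_infcomp\_A\_ic\] concretely, here is a small Python sketch (our own toy example, not from the paper). The transition table `TRANS` is the automaton ${\mathsf{A}}_{ic}$ of the binary adding machine, computed by hand, with states named `E` (empty partition), `P` $=\{\{p\}\}$, `Q` $=\{\{q\}\}$, `PQ` $=\{\{p\},\{q\}\}$ and labels equal to the number of parts.

```python
# A_ic of the binary adding machine (our hand-computed toy): the states
# are the induced partitions E = {}, P = {{p}}, Q = {{q}}, PQ = {{p},{q}}.
TRANS = {("E", "0"): "P", ("E", "1"): "Q",
         ("P", "0"): "P", ("P", "1"): "PQ",
         ("Q", "0"): "PQ", ("Q", "1"): "Q",
         ("PQ", "0"): "PQ", ("PQ", "1"): "PQ"}
LABEL = {"E": 0, "P": 1, "Q": 1, "PQ": 2}  # number of parts in the partition

def run(word):
    """Final state of A_ic after reading a finite word."""
    s = "E"
    for x in word:
        s = TRANS[(s, x)]
    return s

def ic_label(prefix, period, reps=50):
    """Label of the strongly connected component visited infinitely often
    when A_ic reads the eventually periodic sequence prefix.period^w."""
    return LABEL[run(prefix + period * reps)]
```

Reading $0^{\omega}$ the automaton stays in the state `P` of label $1$: deleting the vertex $0^{\omega}$ leaves one infinite component, and the tile graph $T_{0^{\omega}}$ is a one-ended ray. Any sequence containing both letters infinitely often is absorbed in `PQ` of label $2$, corresponding to a two-ended line.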
Since every critical sequence $w$ is periodic, we can algorithmically find the number of ends of the graph $T_w$ using Proposition \[prop\_ends\_limit\] and the automaton ${\mathsf{A}}_{ic}$. Now we introduce some notation. Fix $k\geq 1$ and let $EC_{=k}$ be the union of cofinality classes of critical sequences $w$ whose tile graph $T_w$ has $k$ ends. Similarly, we define the sets $EC_{>k}$ and $EC_{<k}$. Let - ${\mathsf{A}}_{ic}(k)$ be the subgraph of ${\mathsf{A}}_{ic}$ spanned by the strongly connected components labeled by numbers $\geq k$; - $\mathscr{R}_{\geq k}$ be the one-sided sofic subshift given by the graph ${\mathsf{A}}_{ic}(k)$, i.e., $\mathscr{R}_{\geq k}$ is the set of all sequences that can be read along right-infinite paths in ${\mathsf{A}}_{ic}(k)$ starting at any state; - $E_{\geq k}$ be the set of all sequences which are cofinal to some sequence from $\mathscr{R}_{\geq k}$. Since the set $\mathscr{R}_{\geq k}$ is shift-invariant, the set $E_{\geq k}$ coincides with $X^{*}\mathscr{R}_{\geq k}=\{vw | v\in X^{*}, w\in \mathscr{R}_{\geq k}\}$. \[thm\_number\_of\_ends\] The tile graph $T_w$ has $\geq k$ ends if and only if $w\in E_{\geq k}\setminus EC_{<k}$. Hence, the tile graph $T_w$ has $k$ ends if and only if $w\in E_{\geq k} \setminus \left(E_{\geq k+1} \cup EC_{>k}\cup EC_{<k}\right)$. We need to prove that for a regular sequence $w$ the graph $T_w$ has at least $k$ ends if and only if $w\in E_{\geq k}$. First, suppose $w\in E_{\geq k}$. Then $w$ is cofinal to a sequence $w'\in \mathscr{R}_{\geq k}$, which is also regular. The sequences $w$ and $w'$ belong to the same tile graph $T_w=T_{w'}$. There exists a finite word $v$ such that for the sequence $vw'$ the corresponding path in ${\mathsf{A}}_{ic}$ starting at the initial state eventually lies in the subgraph ${\mathsf{A}}_{ic}(k)$. Then the graph $T_{vw'}\setminus vw'$ has $\geq k$ infinite components by Proposition \[prop\_infcomp\_limit\]. 
Using the correspondence between infinite components of $T_{w'}\setminus w'$ and of $T_{vw'}\setminus X^{|v|}w'$ shown in the proof of Proposition \[prop\_ends\_limit\] we get that the graph $T_{w'}\setminus w'$ has $\geq k$ infinite components, and hence $T_{w'}=T_w$ has $\geq k$ ends. For the converse, suppose the graph $T_{w}$ has $\geq k$ ends and the sequence $w$ is regular. Then $ic(T_{\sigma^n(w)}\setminus \sigma^n(w))\geq k$ for some $n$ by Propositions \[prop\_ends\_limit\] and \[prop\_infcomp\_limit\]. Hence some shift $\sigma^n(w)$ of the sequence $w$ is in $\mathscr{R}_{\geq k}$ and thus $w\in E_{\geq k}$. The example with $IMG(z^2+i)$ in Section \[Section\_Examples\] shows that we cannot expect to get a description using subshifts of finite type, and indeed the description using sofic subshifts is the best possible in this setting. From ends of tile graphs to ends of Schreier graphs --------------------------------------------------- Now we can describe how to derive the number of ends of Schreier graphs from the number of ends of tile graphs. \[prop\_Schreier\_construction\] 1. The Schreier graph ${\Gamma}_w$ coincides with the tile graph $T_w$ for every regular sequence $w\in{X^\omega}$. 2. Let $w\in{X^\omega}$ be a critical sequence, and let $O(w)$ be the set of all critical sequences $v\in{X^\omega}$ such that $g(w)=v$ for some $g\in G$. The Schreier graph ${\Gamma}_w$ is constructed by taking the disjoint union of the orbital tile graphs $T_v$ for $v\in O(w)$ and connecting two critical sequences $v_1,v_2\in O(w)$ by an edge whenever $s(v_1)=v_2$ for some $s\in S$. *1.* If the point $w$ is regular, then the set of vertices of ${\Gamma}_w$ is the cofinality class $Cof(w)$, which is the set of vertices of $T_w$ by Proposition \[cofinality\]. Suppose there is an edge between $v$ and $u$ in the graph ${\Gamma}_w$. Then $s(v)=u$ for some $s\in S$. 
Since the sequence $w$ is regular, all the sequences in $Cof(w)$ are regular, and hence there exists a finite beginning $v'$ of $v$ such that $s|_{v'}=1$. Hence there is an edge between $v$ and $u$ in the tile graph $T_w$. *2.* If the point $w$ is critical, then the set of vertices of ${\Gamma}_w$ is the union of cofinality classes $Cof(v)$ for $v\in O(w)$. Consider an edge $s(v_1)=v_2$ in ${\Gamma}_w$. If this is not an edge of $T_v$ for $v\in O(w)$, then the restriction of $s$ to every beginning of $v_1$ is nontrivial. Hence $v_1,v_2$ are critical, and this edge was added in the construction. The following corollary summarizes the relation between the number of ends of the Schreier graphs ${\Gamma}_w$ and the number of ends of the tile graphs $T_w$. It justifies the fact that, for our purposes, it is enough to study the number of ends and connected components in the tile graphs. \[remarktile\] 1. If $w$ is a regular sequence, then $\# Ends({\Gamma}_w)=\#Ends(T_{w})$. 2. If $w$ is critical, then $$\# Ends({\Gamma}_w)=\sum_{w'\in O(w)} \#Ends(T_{w'}),$$ where the set $O(w)$ is from Proposition \[prop\_Schreier\_construction\]. Using the automata ${\mathsf{A}}_c$ and ${\mathsf{A}}_{ic}$ one can construct similar automata for the number of components in the Schreier graphs ${\Gamma}_n$ with a vertex removed. For every state of ${\mathsf{A}}_c$ or ${\mathsf{A}}_{ic}$ take the corresponding partition of the post-critical set and combine components according to the edges $E({\Gamma}\setminus T)$ described in the last paragraph in Section \[subsection\_BoundedAutomata\]. For example, if $\{{\mathscr{P}}_i\}_i$ is a state of ${\mathsf{A}}_{ic}$, then we glue any two parts ${\mathscr{P}}_s$ and ${\mathscr{P}}_t$ whenever $\{p,q\}\in E({\Gamma}\setminus T)$ for some $p\in {\mathscr{P}}_s$ and $q\in{\mathscr{P}}_t$. We get a new partition, and we label the state by the number of components in this partition. 
Essentially, we get the same automata, but the states may be labeled differently. A case of special interest is when all Schreier graphs have one end. In our setting of groups generated by bounded automata, our construction enables us to find a necessary and sufficient condition for all Schreier graphs $\Gamma_w$ to have one end. \[all-one-ended\] All orbital Schreier graphs $\Gamma_w$ for $w\in{X^\omega}$ have one end if and only if the following two conditions hold: 1. all arrows along directed cycles in the automaton $S$ are labeled by $x|x$ for some $x\in X$ (depending on the arrow); 2. all strongly connected components of the automaton ${\mathsf{A}}_{ic}$ are labeled by $1$ (partitions consisting of one part). Let us show that the first condition is equivalent to the property that for every $w\in{X^\omega}$ the Schreier graph $\Gamma_w$ and tile graph $T_w$ coincide. If there exists a directed cycle that does not satisfy condition 1, then there exist two different critical sequences $w,w'$ that are connected in the Schreier graph, i.e., $s(w)=w'$ for some $s\in S$. In this case $\Gamma_{w}\neq T_{w}$, because by Proposition \[prop\_properties\_of\_sequences\] different critical sequences are non-cofinal, and therefore belong to different tile graphs. Conversely, the existence of such critical sequences contradicts condition 1. Therefore condition 1 implies $O(w)=\{w\}$ for every critical sequence $w$, and thus $\Gamma_w=T_w$ by Proposition \[prop\_Schreier\_construction\]. Theorem \[thm\_number\_of\_ends\] implies that condition 2 is equivalent to the statement that every tile graph $T_w$ has one end. Therefore, if the conditions $1$ and $2$ hold, then any Schreier graph coincides with the corresponding tile graph, which has one end. 
Conversely, Proposition \[prop\_Schreier\_construction\] implies that if the Schreier graph $\Gamma_{w}$ for a critical sequence $w$ does not coincide with the corresponding tile graph $T_{w}$, then the number of ends of $\Gamma_w$ is greater than one. (The graph $\Gamma_w$ is a disjoint union of more than one infinite tile graph $T_v$, $v\in O(w)$, connected by a finite number of edges.) Therefore, if all Schreier graphs $\Gamma_w$ have one end, then they should coincide with tile graphs (condition 1 holds) and tile graphs have one end (condition 2 holds). \[remarkanoi\] The Hanoi Towers group $H^{(3)}$ [@gri_sunik:hanoi] is an example of a group generated by a bounded automaton for which all orbital Schreier graphs $\Gamma_w$ have one end. On the other hand, this group is not indicable (since its abelianization is finite) but can be projected onto the infinite dihedral group [@delzant_grigorchuk]. This implies that it contains a normal subgroup $N$ such that the Schreier coset graph associated with $N$ has two ends. Clearly, by what was said above, $N$ does not coincide with the stabilizer of $w$ for any $w\in X^{\omega}$. The number of infinite components of tile graphs almost surely -------------------------------------------------------------- The structure of the automaton ${\mathsf{A}}_{ic}$ allows us to obtain results about the measure of infinite sequences $w\in{X^\omega}$ for which the tile graphs $T_w\setminus w$ have a given number of infinite components. We recall that the space $X^{\omega}$ is endowed with the uniform measure. \[rem\_subword\_and\_inf\_comp\] It is useful to notice that we can construct a finite word $u\in X^{*}$ such that starting at any state of the automaton ${\mathsf{A}}_{ic}$ and following the word $u$ we end in some strongly connected component without outgoing edges. 
If these strongly connected components correspond to the partition of the post-critical set into $k$ parts, then it follows that ${pc}(T_n\setminus v_1uv_2)=k$ for all words $v_1,v_2\in X^{*}$ with $|v_1uv_2|=n$. In other words, ${pc}(T_n\setminus v)=k$ for every $v$ that contains $u$ as a subword. By Proposition \[prop\_infcomp\_limit\] we get a description of the number of infinite components using the automaton ${\mathsf{A}}_{ic}$ (but only for regular and critical sequences). \[cor\_inf\_comp\_almost\_all\_seq\] The number of infinite connected components of the graph $T_w\setminus w$ is almost surely the same for all sequences $w\in{X^\omega}$. This number coincides with the label of the strongly connected components of the automaton ${\mathsf{A}}_{ic}$ without outgoing arrows. The measure of non-regular sequences is zero. For regular sequences $w\in{X^\omega}$ we can use the automaton ${\mathsf{A}}_{ic}$ to find the number ${ic}(T_w\setminus w)$. Then the corollary follows from item 2 of Lemma \[lemma\_properties\_A\_ic\] and the standard fact that the measure of all sequences that are read along paths in a strongly connected component with an outgoing arrow is zero (for example, this fact follows from the observation that the adjacency matrix of such a component has spectral radius less than $|X|$). Another explanation comes from Remark \[rem\_subword\_and\_inf\_comp\] and the fact that the set of all sequences $w\in{X^\omega}$ that contain a fixed word as a subword is of full measure. The corollary does not hold for the number of all connected components of $T_w\setminus w$, see the examples in Section \[Section\_Examples\]. However, given any number $k$ we can use the automaton ${\mathsf{A}}_c$ to compute the measure of the set $C(k)$ of all sequences $w\in{X^\omega}$ such that the graph $T_w\setminus w$ has $k$ components. 
As shown in the previous proof, only strongly connected components without outgoing arrows contribute a set of sequences of non-zero measure. Let $\Lambda_k$ be the collection of all strongly connected components of ${\mathsf{A}}_c$ without outgoing arrows and labeled by the number $k$. Let $V_k$ be the set of finite words $v\in X^{*}$ with the property that starting at the initial state and following arrows labeled by $v$ we end at a component from $\Lambda_k$, and no proper prefix of $v$ satisfies this property. Then the measure of $C(k)$ is equal to the sum $\sum_{v\in V_k} |X|^{-|v|}$. Since the automaton ${\mathsf{A}}_c$ is finite, this measure is always a rational number and can be computed algorithmically. The number of ends almost surely {#Section_number_ends} -------------------------------- \[section\_ends\_a\_s\] Corollary \[cor\_inf\_comp\_almost\_all\_seq\] together with Proposition \[prop\_ends\_limit\] implies that the tile graphs $T_w$ (and thus the Schreier graphs ${\Gamma}_w$) have almost surely the same number of ends, and that this number is equal to the label of the strongly connected components of ${\mathsf{A}}_{ic}$ without outgoing arrows. As was mentioned in the introduction, this fact actually holds for any finitely generated self-similar group which acts transitively on the levels $X^n$ for all $n\in\mathbb{N}$ (see Proposition 6.10 in [@AL]). In our setting of bounded automata we get a stronger description of the sequences $w$ for which the tile graph $T_w$ has a non-typical number of ends. \[prop\_more\_ends\] There are only finitely many Schreier graphs ${\Gamma}_w$ and tile graphs $T_w$ with more than two ends. Let us prove that the graph $T_w\setminus w$ can have more than two infinite components only for finitely many sequences $w\in{X^\omega}$. 
Suppose not and choose sequences $w^{(1)},\ldots,w^{(m)}$ such that ${ic}(T_{w^{(i)}}\setminus w^{(i)})\geq 3$, where we take $m$ larger than the number of partitions of the post-critical set ${\mathscr{P}}$. Choose a level $n$ large enough so that all words $w^{(1)}_n,\ldots,w^{(m)}_n$ are different and ${pc}(T_n\setminus w_n^{(i)})\geq 3$ for all $i$ (this is possible by Proposition \[prop\_infcomp\_limit\]). Notice that since the graph $T_n$ is connected, the deletion of different vertices $w_n^{(i)}$ produces different partitions of ${\mathscr{P}}$. Indeed, if ${\mathscr{P}}=\sqcup_{i=1}^k {\mathscr{P}}_i$ with $k\geq 3$ is the partition obtained after removing some vertex $v$, then some $k-1$ sets ${\mathscr{P}}_i$ will be in the same component of the graph $T_n\setminus u$ for any other vertex $u$ (these $k-1$ sets will be connected through the vertex $v$). We get a contradiction with the choice of the number $m$. It follows that there are only finitely many tile graphs with more than two ends. This also holds for Schreier graphs by Proposition \[prop\_Schreier\_construction\]. \[cor\_>2ends\_pre\_periodic\] The Schreier graphs ${\Gamma}_w$ and tile graphs $T_w$ can have more than two ends only for pre-periodic sequences $w$. Since the graph $T_w\setminus w$ can have more than two infinite components only for finitely many sequences $w$, we get that, in the limit in Proposition \[prop\_ends\_limit\], the sequence $\sigma^n(w)$ attains only finitely many values. Hence $w$ is pre-periodic. The example with $IMG(z^2+i)$ in Section \[Section\_Examples\] shows that the Schreier graph ${\Gamma}_w$ and the tile graph $T_w$ may have more than two ends even for regular sequences $w$. \[cor\_one\_or\_two\_ends\_a\_s\] The tile graphs $T_w$ and Schreier graphs ${\Gamma}_w$ have the same number of ends for almost all sequences $w\in{X^\omega}$, and this number is equal to one or two. 
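The almost-sure count in Corollary \[cor\_inf\_comp\_almost\_all\_seq\] is also easy to extract mechanically. The Python sketch below (again our own toy illustration, not from the paper) takes the hand-computed automaton ${\mathsf{A}}_{ic}$ of the binary adding machine, computes its strongly connected components via mutual reachability, and reads off the labels of those without outgoing arrows.

```python
# A_ic of the binary adding machine (our hand-computed toy): the states
# E, P, Q, PQ are the induced partitions {}, {{p}}, {{q}}, {{p},{q}}.
TRANS = {("E", "0"): "P", ("E", "1"): "Q",
         ("P", "0"): "P", ("P", "1"): "PQ",
         ("Q", "0"): "PQ", ("Q", "1"): "Q",
         ("PQ", "0"): "PQ", ("PQ", "1"): "PQ"}
LABEL = {"E": 0, "P": 1, "Q": 1, "PQ": 2}  # number of parts

def reachable(s):
    """States reachable from s (including s itself)."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for x in "01":
            v = TRANS[(u, x)]
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

states = list(LABEL)
reach = {s: reachable(s) for s in states}
# strongly connected components = classes of mutual reachability
sccs = {frozenset(t for t in states if s in reach[t] and t in reach[s])
        for s in states}
# components without outgoing arrows: every transition stays inside
absorbing = [c for c in sccs if all(TRANS[(u, x)] in c for u in c for x in "01")]
labels = {LABEL[next(iter(c))] for c in absorbing}
```

Here `labels == {2}`: almost surely $T_w\setminus w$ has two infinite components, so almost every tile graph of the adding machine is a two-ended line.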
Two ends almost surely {#section_two_ends_a_s} ---------------------- In this section we describe bounded automata for which the Schreier graphs $\Gamma_w$ and tile graphs $T_w$ have almost surely two ends. Notice that in this case the post-critical set ${\mathscr{P}}$ cannot consist of one element (actually, every finitely generated self-similar group with $|{\mathscr{P}}|=1$ is finite and cannot act transitively on $X^n$ for all $n$). \[lemma\_as\_two\_ends\_|P|=2\] If the Schreier graphs $\Gamma_w$ (equivalently, the tile graphs $T_w$) have two ends for almost all $w\in{X^\omega}$, then $|{\mathscr{P}}|=2$. We pass to a power of the alphabet so that every post-critical sequence is of the form $y^{-\omega}$ or $y^{-\omega}x$ for some letters $x,y\in X$ and different post-critical sequences end with different letters. In particular, every subset ${\mathscr{P}}\times\{x\}$ for $x\in X$ of the model graph $M$ contains at most one post-critical vertex of $M$. We again pass to a power of the alphabet so that for every nontrivial element $s\in S$ there exists a letter $x\in X$ such that $s(x)\neq x$ and $s|_x=1$. Then every post-critical sequence $p\in{\mathscr{P}}$ appears in some edge $\{(p,*), (*,*)\}$ of the model graph. Indeed, if the pair $p|q$ is read along a left-infinite path in the automaton $S\setminus\{1\}$ that ends in a nontrivial state $s$, then the pair $\{(p,x),(q,s(x))\}$ belongs to the edge set $E$ of the graph $M$. Now suppose that the tile graphs have almost surely two ends. Then the strongly connected components without outgoing arrows in the automaton ${\mathsf{A}}_{ic}$ correspond to partitions of the post-critical set ${\mathscr{P}}$ into two parts (see Corollary \[cor\_inf\_comp\_almost\_all\_seq\]). In particular, there is no state corresponding to the partition of ${\mathscr{P}}$ with one part, because such a partition would form a strongly connected component without outgoing arrows (see the construction of ${\mathsf{A}}_{ic}$). 
We will use the fact that all paths in the automaton ${\mathsf{A}}_{ic}$ starting at any partition ${\mathscr{P}}={\mathscr{P}}_1\sqcup {\mathscr{P}}_2$ end in partitions of ${\mathscr{P}}$ into two parts (we cannot get more parts). Let us construct an auxiliary graph $\overline{M}$ as follows: take the model graph $M$ and for each $x\in X$ add edges between all vertices in the subset ${\mathscr{P}}\times \{x\}$. We will prove that the graph $\overline{M}$ is an “interval”, i.e., there are two vertices of degree one and the other vertices have degree two, and that the two end vertices of $\overline{M}$ are the only post-critical vertices. First, let us show that there are only two subsets ${\mathscr{P}}\times \{x\}$ for $x\in X$ such that the graph $\overline{M}\setminus {\mathscr{P}}\times \{x\}$ is connected. Suppose that there are three such subsets ${\mathscr{P}}\times \{x\}$, ${\mathscr{P}}\times \{y\}$, ${\mathscr{P}}\times \{z\}$. Fix any partition ${\mathscr{P}}={\mathscr{P}}_1\sqcup {\mathscr{P}}_2$ that corresponds to some state of the automaton ${\mathsf{A}}_{ic}$. Consider the arrow in the automaton ${\mathsf{A}}_{ic}$ starting at ${\mathscr{P}}_1\sqcup {\mathscr{P}}_2$ and labeled by $x$. This arrow ends in a partition ${\mathscr{P}}={\mathscr{P}}_1^{(x)}\sqcup {\mathscr{P}}_2^{(x)}$ with two parts. Recall how we construct the partition ${\mathscr{P}}_1^{(x)}\sqcup {\mathscr{P}}_2^{(x)}$ using the graph $M_{{\mathscr{P}}_1\sqcup {\mathscr{P}}_2,x}$, and notice that $M_{{\mathscr{P}}_1\sqcup {\mathscr{P}}_2,x}\setminus{\mathscr{P}}\times \{x\}$ coincides with $\overline{M}\setminus {\mathscr{P}}\times \{x\}$. Using the assumption that the graph $\overline{M}\setminus {\mathscr{P}}\times \{x\}$ is connected, we get that one of the sets ${\mathscr{P}}_i^{(x)}$ is a subset of ${\mathscr{P}}\times \{x\}$. 
Since ${\mathscr{P}}\times \{x\}$ contains at most one post-critical vertex, the part ${\mathscr{P}}_i^{(x)}$ consists of precisely one element (a post-critical vertex), which we denote by $a\in{\mathscr{P}}$, i.e., here ${\mathscr{P}}_i^{(x)}=\{a\}$. For the same reason the subsets ${\mathscr{P}}\times \{y\}$ and ${\mathscr{P}}\times \{z\}$ also contain some post-critical vertices $b$ and $c$. Notice that the last letters of the sequences $a, b, c$ are $x,y,z$ respectively. We can suppose that the sequences $az$ and $bz$ are different from the sequence $c$ (among the three post-critical sequences there are always two with this property). Consider the arrow in the automaton ${\mathsf{A}}_{ic}$ starting at the partition ${\mathscr{P}}_1^{(x)}\sqcup {\mathscr{P}}_2^{(x)}=\{a\}\sqcup {\mathscr{P}}\setminus \{a\}$ and labeled by $z$. This arrow should end in a partition of ${\mathscr{P}}$ into two parts. Since $az$ and $c$ are different, the post-critical vertex $c$ of $M_{{\mathscr{P}}_1\sqcup {\mathscr{P}}_2,x}$ belongs to the subset $({\mathscr{P}}\setminus \{a\})\times \{z\}$. Further, since $c$ is the unique post-critical vertex in ${\mathscr{P}}\times \{z\}$, there should be no edges connecting the subset $({\mathscr{P}}\setminus \{a\})\times \{z\}$ with its complement in the graph $M_{{\mathscr{P}}_1\sqcup {\mathscr{P}}_2,x}$ (otherwise all post-critical vertices would be in the same component). Hence the only edges of the graph $\overline{M}$ going outside the subset ${\mathscr{P}}\times \{z\}$ should be at the vertex $(a,z)$. Applying the same arguments to the partition $\{b\}\sqcup {\mathscr{P}}\setminus\{b\}$, we get that this unique vertex should be $(b,z)$. Hence $a=b$ and we get a contradiction. So let ${\mathscr{P}}\times \{x\}$ and ${\mathscr{P}}\times \{y\}$ be the two subsets such that their complements in the graph $\overline{M}$ are connected. 
Let $a$ and $b$ be the post-critical vertices in ${\mathscr{P}}\times \{x\}$ and ${\mathscr{P}}\times \{y\}$ respectively. By the same arguments as above, the subset ${\mathscr{P}}\times \{x\}$ has a unique vertex which is adjacent to a vertex from $\overline{M}\setminus {\mathscr{P}}\times \{x\}$, and this vertex is of the form $(a,x)$ or $(b,x)$. The same holds for the subset ${\mathscr{P}}\times \{y\}$. Every other component ${\mathscr{P}}\times \{z\}$ contains precisely two vertices $(a,z)$ and $(b,z)$, which have edges going outside the component ${\mathscr{P}}\times \{z\}$. However, every post-critical sequence appears in one of these edges (see our assumption in the second paragraph of the proof). Hence the post-critical set contains precisely two elements and the structure of the graph $\overline{M}$ follows. \[cor\_one\_end\_pcset\_3\_points\] If the post-critical set ${\mathscr{P}}$ contains at least three sequences, then the Schreier graphs $\Gamma_w$ and tile graphs $T_w$ have almost surely one end. The following example shows that almost all Schreier graphs may have two ends for a contracting group generated by a non-bounded automaton, i.e., by an automaton with an infinite post-critical set. Consider the self-similar group $G$ over $X=\{0,1,2\}$ generated by the transformation $a$, which is given by the recursion $a=(a^2,1,a^{-1})(0,1,2)$ (see Example 7.6 in [@bhn:aut_til]). The group $G$ is self-replicating and contracting with nucleus ${\mathcal{N}}=\{1,a^{\pm 1}, a^{\pm 2}\}$, but the generating automaton is not bounded and the post-critical set is infinite. Every Schreier graph ${\Gamma}_w$ with respect to the generating set $\{a,a^{-1}\}$ is a line and has two ends. \[th\_classification\_two\_ends\] Almost all Schreier graphs ${\Gamma}_w$ (equivalently, tile graphs $T_w$) have two ends if and only if the automaton $S$ brought to the basic form (see Section \[subsection\_BoundedAutomata\]) is one of the following. 1. 
The automaton $S$ consists of the adding machine, its inverse, and the trivial element, where the adding machine is an element of type I with a transitive action on $X$ (see Figure \[fig\_Bounded\_PCS2\], where all edges not shown in the figure go to the identity state, and the letters $x$ and $y$ are different). 2. There exists an order on the alphabet $X=\{x=x_1, x_2, \ldots, x_m=y\}$ such that one of the following cases holds. 1. The automaton $S$ consists of elements of types II and II$'$ (see Figure \[fig\_Bounded\_PCS2\]); every pair $\{x_{2i},x_{2i+1}\}$ is an orbit of the action of some element of type II and all nontrivial orbits of such elements on $X$ are of this form; also every pair $\{x_{2i-1},x_{2i}\}$ is an orbit of the action of some element of type II$'$ and all nontrivial orbits of such elements on $X$ are of this form (in particular, $|X|$ is an odd number). 2. The automaton $S$ consists of elements of types II, III, and III$'$; every pair $\{x_{2i},x_{2i+1}\}$ is an orbit of the action of some element of type II or $III$ and all nontrivial orbits of such elements on $X$ are of this form; also every pair $\{x_{2i-1},x_{2i}\}$ is an orbit of the action of some element of type III$'$ and all nontrivial orbits of such elements on $X$ are of this form (in particular, $|X|$ is an even number). Moreover, in this case, all Schreier graphs ${\Gamma}_w$ are lines except for two Schreier graphs ${\Gamma}_{x^{\omega}}$ and ${\Gamma}_{y^{\omega}}$ in Case 2 (a), and one Schreier graph ${\Gamma}_{x^{\omega}}$ in Case 2 (b), which are rays. Recall the definition of the basic form of a bounded automaton from Section \[subsection\_BoundedAutomata\]. If a bounded automaton is in the basic form, its post-critical set has size two, and every state has an incoming arrow, then it is not hard to see that the automaton can contain only the states of six types shown in Figure \[fig\_Bounded\_PCS2\]. 
We will use the fact, proved in Lemma \[lemma\_as\_two\_ends\_|P|=2\], that the modified model graph $\overline{M}$ is an interval. There are two cases to treat slightly differently, depending on whether or not both post-critical sequences are periodic. Consider the case when both post-critical sequences are periodic, say ${\mathscr{P}}=\{x^{-\omega}, y^{-\omega}\}$. In this case the automaton $S$ can contain only the states of types I, I$'$, II, and II$'$. Suppose there is a state $a$ of type I. It contributes the edges $\{ (x^{-\omega},z),(y^{-\omega},a(z)) \}$ to the graph $\overline{M}$ for every $z\in X$. If there exists a nontrivial orbit of the action of $a$ on $X$ that does not contain $x$, then it contributes a cycle to the graph $\overline{M}$. If there exists a fixed point $a(z)=z$, then, in the construction of the automaton ${\mathsf{A}}_{ic}$, starting at the partition ${\mathscr{P}}=\{x^{-\omega}\}\sqcup\{y^{-\omega}\}$ and following the arrow labeled by $z$, we get a partition with one part. Hence the element $a$ should act transitively on $X$ (it is the adding machine). Every other element of type I should have the same action on $X$, and hence coincide with $a$; otherwise we would get a vertex in the graph $\overline{M}$ of degree $\geq 3$. Every element $b$ of type I$'$ contributes the edges $\{(x^{-\omega},b(z)),(y^{-\omega},z) \}$ to the graph $\overline{M}$. It follows that the action of $b$ on $X$ is the inverse of the action of $a$ (otherwise we would get a vertex of $\overline{M}$ of degree $\geq 3$), and hence $b$ is the inverse of $a$. If the automaton $S$ additionally contains a state of type II or II$'$, then there is an edge $\{(x^{-\omega},z_1),(x^{-\omega},z_2)\}$ or $\{(y^{-\omega},z_1),(y^{-\omega},z_2)\}$ in the graph $\overline{M}$ for some different letters $z_1,z_2\in X$. We get a vertex of degree $\geq 3$, a contradiction.
Hence, in this case, the automaton $S$ consists of the adding machine, its inverse, and the identity state. Suppose $S$ does not contain states of types I and I$'$. Since the post-critical set is equal to ${\mathscr{P}}=\{x^{-\omega}, y^{-\omega}\}$, the automaton $S$ contains states $a$ and $b$ of types II and II$'$ respectively. These elements contribute edges $\{(x^{-\omega},z),(x^{-\omega},a(z))\}$ and $\{(y^{-\omega},z),(y^{-\omega},b(z))\}$ to the graph $\overline{M}$. Since the graph $\overline{M}$ must be an interval, these edges must consecutively connect all the components ${\mathscr{P}}\times \{z\}$ for $z\in X$ (see Figure \[fig\_ModelLine\_Case 2a\]). It follows that there exists an order on the alphabet such that item $(a)$ holds. Consider the case ${\mathscr{P}}=\{x^{-\omega}, x^{-\omega}y\}$. In this case the automaton $S$ can consist only of states of types II, III, and III$'$. Each state of type II or III contributes edges $\{(x^{-\omega},z),(x^{-\omega},a(z))\}$ to the graph $\overline{M}$. Each state of type III$'$ contributes edges $\{(x^{-\omega}y,z),(x^{-\omega}y,b(z))\}$. These edges must consecutively connect all the components ${\mathscr{P}}\times \{z\}$ for $z\in X$ (see Figure \[fig\_ModelLine\_Case 2b\]). It follows that there exists an order on the alphabet such that item $(b)$ holds. For the converse, one can directly check the following facts. In item 1, every Schreier graph ${\Gamma}_w(G,S)$ is a line. In Case 2 (a), the Schreier graphs ${\Gamma}_{x^{\omega}}$ and ${\Gamma}_{y^{\omega}}$ are rays, while all the other Schreier graphs are lines. In Case 2 (b), the Schreier graph ${\Gamma}_{x^{\omega}}$ is a ray, while all the other Schreier graphs are lines. The infinite dihedral group is a basic example satisfying the conditions of the theorem; the Grigorchuk group is a nontrivial one. It is generated by the automaton $S$ shown in Figure \[fig\_Grigorchuk\_Aut\].
After passing to the alphabet $\{0,1\}^3\leftrightarrow X=\{0,1,2,3,4,5,6,7\}$, the automaton $S$ consists of the trivial state and the elements $a,b,c,d$, which are given by the following recursions: $$\begin{aligned} a&=&(1,1,1,1,1,1,1,1)(0,4)(1,5)(2,6)(3,7)\\ b&=&(1,1,1,1,1,1,1,b)(0,2)(1,3)(4,5)\\ c&=&(1,1,1,1,1,1,a,c)(0,2)(1,3)\\ d&=&(1,1,1,1,1,1,a,d)(4,5)\end{aligned}$$ We see that this automaton satisfies Case 2 (b) of the theorem when we choose the order $6,2,0,4,5,1,3,7$ on $X$. The Schreier graph ${\Gamma}_{7^{\omega}}$ is a ray, while the other orbital Schreier graphs are lines. In what follows, we give an algebraic characterization of the automaton groups acting on the binary tree whose orbital Schreier graphs have two ends. It turns out that such groups are those whose nuclei are given by the automata defined by Šunić in [@Zoran]. In order to show this correspondence we sketch the construction of the groups $G_{\omega, \rho}$ as in [@Zoran]. Let $A$ and $B$ be the abelian groups $\mathbb{Z}/2\mathbb{Z}$ and $(\mathbb{Z}/2\mathbb{Z})^k$ respectively. We think of $A$ as the field of $2$ elements and of $B$ as the $k$-dimensional vector space over this field. Let $\rho:B\to B$ be an automorphism of $B$ and $\omega:B\to A$ a surjective homomorphism. We define the action of elements of $A$ and $B$ on the binary tree $\{0,1\}^{*}$ as follows: the nontrivial element $a\in A$ only changes the first letter of input words, i.e., $a=(1,1)(0,1)$; the action of $b\in B$ is given by the recursive rule $b=(\omega(b),\rho(b))$. The automorphism group generated by the action of $A$ and $B$ is denoted by $G_{\omega, \rho}$. Notice that the group $G_{\omega, \rho}$ is generated by a bounded automaton over the binary alphabet $X=\{0,1\}$, which we denote by ${\mathsf{A}}_{\omega, \rho}$. If the action of $B$ is faithful, the group $G_{\omega, \rho}$ can be described by an invertible polynomial over the field with two elements, corresponding to the action of $\rho$ on $B$.
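To make the recursion $b=(\omega(b),\rho(b))$ concrete, here is a small Python sketch of the Grigorchuk case $x^2+x+1$. The coordinates are our own assumption, not the paper's notation: $B\cong(\mathbb{Z}/2\mathbb{Z})^2$ with basis $\{b,c\}$ and $d=bc$, $\rho$ the companion matrix of $x^2+x+1$ (so $b\mapsto c\mapsto d\mapsto b$), and $\omega(\beta_1,\beta_2)=\beta_1+\beta_2$.

```python
# Sketch with our own coordinates (an assumption, not the paper's notation):
# B = (Z/2Z)^2 in the basis {b, c}, d = bc; rho is the companion matrix of
# x^2 + x + 1 (so b -> c -> d -> b) and omega(b) = omega(c) = 1, omega(d) = 0.
def act_a(w):
    # a = (1, 1)(0 1): change the first letter
    return ((w[0] ^ 1,) + w[1:]) if w else w

def rho(v):
    b1, b2 = v
    return (b2, b1 ^ b2)

def omega(v):
    return v[0] ^ v[1]

def act_B(v, w):
    # b = (omega(b), rho(b)): fix the first letter and restrict accordingly
    if not w or v == (0, 0):
        return w
    x, rest = w[0], w[1:]
    if x == 1:
        return (1,) + act_B(rho(v), rest)
    return (0,) + (act_a(rest) if omega(v) else rest)
```

For $v=(1,0)$ this reproduces the usual Grigorchuk recursion $b=(a,c)$, and one checks that every nontrivial element of $A\cup B$ acts as an involution on each level.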
Examples include the infinite dihedral group $D_{\infty}$, given by the polynomial $x+1$, and the Grigorchuk group, given by the polynomial $x^2+x+1$. The following result relates the groups $G_{\omega, \rho}$ to the groups whose Schreier graphs almost surely have two ends. \[th\_Sch\_2ended\_binary\] Let $G$ be a group generated by a bounded automaton over the binary alphabet $X=\{0,1\}$. Almost all Schreier graphs $\Gamma_w(G)$ have two ends if and only if the nucleus of $G$ either consists of the adding machine, its inverse and the identity element, or is equal to one of the automata ${\mathsf{A}}_{\omega, \rho}$ up to switching the letters $0\leftrightarrow 1$ of the alphabet. Suppose that the Schreier graphs $\Gamma_w(G)$ have two ends for almost all sequences $w\in{X^\omega}$ and let ${\mathcal{N}}$ be the nucleus of the group $G$. By Lemma \[lemma\_as\_two\_ends\_|P|=2\] the post-critical set ${\mathscr{P}}$ of the group contains exactly two elements. If both post-critical elements are periodic, then the nucleus ${\mathcal{N}}$ consists of the adding machine, its inverse and the identity element (see the proof of Theorem \[th\_classification\_two\_ends\]). Let us consider the case ${\mathscr{P}}=\{x^{-\omega}, x^{-\omega}y\}$. We can assume that $x=1$ and $y=0$ so that ${\mathscr{P}}=\{1^{-\omega},1^{-\omega}0\}$. It follows that every arrow along a cycle in ${\mathcal{N}}$ is labeled by $1|1$ except for a loop at the trivial state labeled by $0|0$. The nucleus ${\mathcal{N}}$ contains only one nontrivial finitary element, namely $a=(1,1)(0,1)$, since otherwise there would be a post-critical sequence with preperiod of length two. Put $A=\{1,a\}$ and $B={\mathcal{N}}\setminus\{a\}$. The set $B$ consists exactly of those elements of ${\mathcal{N}}$ that belong to cycles. For every $b\in B$ we have $b(1)=1$ and $b|_1\in B$, $b|_0\in A$. It follows that all nontrivial elements of $B$ have order two. Let us show that $B$ is a subgroup of $G$.
For any $b_1,b_2\in B$ there exists $k$ such that $b_i|_{1^k}=b_i$ for $i=1,2$. Hence $b_1b_2|_{1^{k}}=b_1b_2$. Therefore $b_1b_2$ belongs to the nucleus ${\mathcal{N}}$ and thus $b_1b_2\in B$. It follows that $B$ is a group, which is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^m$ for some $m$. The map $\rho: b\mapsto b|_1$ is a homomorphism from $B$ to $B$ and is bijective, because elements of $B$ form cycles in the nucleus. The map $\omega: b\mapsto b|_0$ is a surjective homomorphism from $B$ to $A$. We have proved that the nucleus ${\mathcal{N}}$ is exactly the automaton ${\mathsf{A}}_{\omega, \rho}$. On the other hand, let us consider one of the groups $G_{\omega, \rho}$. Each element $b\in B$ has a cyclic $\rho$-orbit. In the language of automata this means that $b$ belongs to a cycle in the automaton ${\mathsf{A}}_{\omega, \rho}$. We have $b|_1\in B$ and $b|_0\in A$ for every $b\in B$. It follows that if $b(u)=v$ and $u\neq v$, then $u$ and $v$ are of the form $u=1^l00w$, $v=1^l01w$ or $u=1^l01w$, $v=1^l00w$, $l\in\mathbb{N}\cup\{0\}$. Therefore the Schreier graphs $\Gamma_n(G)$ are intervals and the orbital Schreier graphs $\Gamma_w(G)$ have two ends for almost all sequences $w\in{X^\omega}$. Cut-points of tiles and limit spaces {#Section_Cut-points} ==================================== In this section we first recall the construction of the limit space and tiles of a self-similar group (see [@self_sim_groups; @ssgroups_geom] for more details). Then we show how to describe the cut-points of limit spaces and tiles of self-similar groups generated by bounded automata. Limit spaces and tiles of self-similar groups --------------------------------------------- Let $G$ be a contracting self-similar group with nucleus ${\mathcal{N}}$.
\[limitspacedefi\] The *limit space* ${\mathscr{J}_{}}$ of the group $G$ is the quotient of the space ${X^{-\omega}}$ by the equivalence relation where two sequences $\ldots x_2x_1$ and $\ldots y_2y_1$ are equivalent if there exists a left-infinite path in the nucleus ${\mathcal{N}}$ labeled by the pair $\ldots x_2x_1|\ldots y_2y_1$. The limit space ${\mathscr{J}_{}}$ is a compact, metrizable, finite-dimensional space. If the group $G$ is finitely generated and self-replicating, then the space ${\mathscr{J}_{}}$ is path-connected and locally path-connected (see [@self_sim_groups Corollary 3.5.3]). The shift map on the space $X^{-\omega}$ induces a continuous surjective map ${\mathsf{s}}:{\mathscr{J}_{}}\rightarrow{\mathscr{J}_{}}$. The limit space ${\mathscr{J}_{}}$ comes together with a natural Borel measure $\mu$ defined as the push-forward of the uniform Bernoulli measure on ${X^{-\omega}}$. The dynamical system $({\mathscr{J}_{}},{\mathsf{s}},\mu)$ is conjugate to the one-sided Bernoulli $|X|$-shift (see [@bk:meas_limsp]). The *limit $G$-space* ${\mathcal{X}_{}}$ of the group $G$ is the quotient of the space ${X^{-\omega}}\times G$, equipped with the product topology of discrete sets, by the equivalence relation where two sequences $\ldots x_2x_1\cdot g$ and $\ldots y_2y_1\cdot h$ of ${X^{-\omega}}\times G$ are equivalent if there exists a left-infinite path in the nucleus ${\mathcal{N}}$ that ends in the state $hg^{-1}$ and is labeled by the pair $\ldots x_2x_1|\ldots y_2y_1$. The space ${\mathcal{X}_{}}$ is metrizable and locally compact. The group $G$ acts properly and cocompactly on the space ${\mathcal{X}_{}}$ by multiplication from the right. The quotient of ${\mathcal{X}_{}}$ by the action of $G$ is the space ${\mathscr{J}_{}}$. The image of ${X^{-\omega}}\times \{1\}$ in the space ${\mathcal{X}_{}}$ is called the *tile* ${\mathscr{T}}$ of the group $G$.
The image of ${X^{-\omega}}v\times \{1\}$ for $v\in X^n$ is called the *tile* ${\mathscr{T}}_v$ of the $n$-th level. Alternatively, the tile ${\mathscr{T}}$ can be described as the quotient of ${X^{-\omega}}$ by the equivalence relation where two sequences $\ldots x_2x_1$ and $\ldots y_2y_1$ are equivalent if and only if there exists a path in the nucleus ${\mathcal{N}}$ that ends in the trivial state and is labeled by the pair $\ldots x_2x_1|\ldots y_2y_1$. The push-forward of the uniform measure on ${X^{-\omega}}$ defines a measure on ${\mathscr{T}}$. The tile ${\mathscr{T}}$ covers the limit $G$-space ${\mathcal{X}_{}}$ under the action of $G$. The tile ${\mathscr{T}}$ decomposes into the union $\cup_{v\in X^n} {\mathscr{T}}_v$ of the tiles of the $n$-th level for every $n$. All tiles ${\mathscr{T}}_v$ are compact and homeomorphic to ${\mathscr{T}}$. Two tiles ${\mathscr{T}}_v$ and ${\mathscr{T}}_u$ of the same level $v,u\in X^{n}$ have nonempty intersection if and only if there exists $h\in{\mathcal{N}}$ such that $h(v)=u$ and $h|_v=1$ (see [@self_sim_groups Proposition 3.3.5]). This is precisely how we connect vertices in the tile graph $T_n(G, {\mathcal{N}})$ with respect to the nucleus. Hence the graphs $T_n(G,{\mathcal{N}})$ can be used to approximate the tile ${\mathscr{T}}$, which justifies the term “tile” graph. The tile ${\mathscr{T}}$ is connected if and only if all the tile graphs $T_n=T_n(G,{\mathcal{N}})$ are connected (see [@self_sim_groups Proposition 3.3.10]); in this case ${\mathscr{T}}$ is also path-connected and locally path-connected. A contracting self-similar group $G$ satisfies the *open set condition* if for any element $g$ of the nucleus ${\mathcal{N}}$ there exists a word $v\in X^*$ such that $g|_v=1$, i.e., in the nucleus ${\mathcal{N}}$ there is a path from any state to the trivial state.
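The open set condition is a reachability property of the nucleus and can be tested mechanically. A short Python sketch (the dict encoding of restrictions is our own hypothetical convention; the example data is the nucleus $\{1,a,a^{-1}\}$ of the binary adding machine):

```python
# Hypothetical encoding (not from the paper): rest[g][x] = g|_x for every
# state g of the nucleus.  Example data: the nucleus {1, a, a^-1} of the
# binary adding machine a = (1, a)(0 1).
rest = {
    "1":     {"0": "1",     "1": "1"},
    "a":     {"0": "1",     "1": "a"},
    "a_inv": {"0": "a_inv", "1": "1"},
}

def open_set_condition(rest, trivial="1"):
    # OSC: from every state there is a path of restrictions to the trivial
    # state; compute the set of states that can reach it
    reach = {trivial}
    changed = True
    while changed:
        changed = False
        for g, row in rest.items():
            if g not in reach and any(h in reach for h in row.values()):
                reach.add(g)
                changed = True
    return set(rest) <= reach

assert open_set_condition(rest)   # this nucleus satisfies the OSC
```

Adding an extra state whose restrictions never reach the trivial state makes the function return `False`.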
If a group satisfies the open set condition, then the tile ${\mathscr{T}}$ is the closure of its interior, and any two different tiles of the same level have disjoint interiors; otherwise for large enough $n$ there exists a tile ${\mathscr{T}}_v$ for $v\in X^n$ which is covered by other tiles of $n$-th level (see [@self_sim_groups Proposition 3.3.7]). Recall that the post-critical set ${\mathscr{P}}$ of the group is defined as the set of all sequences that can be read along left-infinite paths in ${\mathcal{N}}\setminus\{1\}$. Therefore, under the open set condition, the boundary $\partial{\mathscr{T}}$ of the tile ${\mathscr{T}}$ consists precisely of points represented by the post-critical sequences. Under the open set condition, the limit space ${\mathscr{J}_{}}$ can be obtained from the tile ${\mathscr{T}}$ by gluing some of its boundary points. Namely, we need to glue two points represented by (post-critical) sequences $\ldots x_2x_1$ and $\ldots y_2y_1$ for every path in ${\mathcal{N}}\setminus\{1\}$ labeled by $\ldots x_2x_1|\ldots y_2y_1$. Every self-similar group generated by a bounded automaton is contracting as shown in [@bondnek:pcf], and we can consider the associated limit spaces and tiles. Note that every bounded automaton satisfies the open set condition. The limit spaces of groups generated by bounded automata are related to important classes of fractals: post-critically finite and finitely-ramified self-similar sets (see [@PhDBondarenko Chapter IV]). Namely, for a contracting self-similar group $G$ with nucleus ${\mathcal{N}}$ the following statements are equivalent: every two tiles of the same level have finite intersection (the limit space ${\mathscr{J}_{}}$ is finitely-ramified); the post-critical set ${\mathscr{P}}$ is finite (the limit space ${\mathscr{J}_{}}$ is post-critically finite); the nucleus ${\mathcal{N}}$ is a bounded automaton (or the generating automaton of the group is bounded). 
Under the open set condition, the above statements are also equivalent to the finiteness of the tile boundary $\partial{\mathscr{T}}$. **Iterated monodromy groups.** Let $f\in \mathbb{C}(z)$ be a complex rational function of degree $d\geq 2$ with finite post-critical set $P_f$. Then $f$ defines a $d$-fold partial self-covering $f: f^{-1}({\mathcal{M}})\rightarrow{\mathcal{M}}$ of the space ${\mathcal{M}}={\hat{\mathbb{C}}}\setminus P_f$. Take a base point $t\in{\mathcal{M}}$ and let $T_t$ be the tree of preimages $f^{-n}(t)$, $n\geq 0$, where every vertex $z\in f^{-n}(t)$ is connected by an edge to $f(z)\in f^{-n+1}(t)$. The fundamental group $\pi_1({\mathcal{M}},t)$ acts by automorphisms on $T_t$ through the monodromy action on every level $f^{-n}(t)$. The quotient of $\pi_1({\mathcal{M}},t)$ by the kernel of its action on $T_t$ is called the *iterated monodromy group* $IMG(f)$ of the map $f$. The group $IMG(f)$ is a contracting self-similar group and the limit space ${\mathscr{J}_{}}$ of the group $IMG(f)$ is homeomorphic to the Julia set $J(f)$ of the function $f$ (see [@self_sim_groups Section 6.4] for more details). Moreover, the limit dynamical system $({\mathscr{J}_{IMG(f)}},{\mathsf{s}},{\mathsf{m}})$ is conjugate to the dynamical system $(J(f),f,\mu_f)$, where $\mu_f$ is the unique $f$-invariant probability measure of maximal entropy on the Julia set $J(f)$ (see [@bk:meas_limsp]). Cut-points of tiles and limit spaces {#cut-points section} ------------------------------------ In this section we show how the number of connected components in the orbital Schreier and tile graphs with a vertex removed is related to the number of connected components in the limit space and tile with a point removed. This allows us to get a description of cut-points using a finite acceptor automaton, as in Proposition \[prop\_infcomp\_A\_ic\]. Let $G$ be a self-similar group generated by a bounded automaton. We assume that the tile ${\mathscr{T}}$ is connected.
Then the nucleus ${\mathcal{N}}$ of the group is a bounded automaton, every state of ${\mathcal{N}}$ has an incoming arrow, and all the tile graphs $T_n=T_n(G,{\mathcal{N}})$ are connected. Hence we are in the setting of Section \[Section\_Ends\], and we can apply its results to the tile graphs $T_n$. Since the limit space ${\mathscr{J}_{}}$ is obtained from the tile ${\mathscr{T}}$ by gluing finitely many specific boundary points (the post-critical set ${\mathscr{P}}$ is finite), it is sufficient to consider the problem for the tile ${\mathscr{T}}$, in analogy to what we did before for the Schreier and tile graphs. ### Boundary, critical and regular points The tile ${\mathscr{T}}$ decomposes into the union $\cup_{x\in X}{\mathscr{T}}_x$, where each tile ${\mathscr{T}}_x$ is homeomorphic to ${\mathscr{T}}$ under the shift map. It follows that, if we take a copy $({\mathscr{T}},x)$ of the tile ${\mathscr{T}}$ for each $x\in X$ and glue every two points $(t_1,x)$ and $(t_2,y)$ with the property that there exists a path in the nucleus ${\mathcal{N}}$ that ends in the trivial state and is labeled by $\ldots x_2x_1x|\ldots y_2y_1y$, where the sequences $\ldots x_2x_1$ and $\ldots y_2y_1$ represent the points $t_1$ and $t_2$ respectively, then we get a space homeomorphic to the tile ${\mathscr{T}}$. This is an analog of the construction of the tile graphs $T_n$ given in Theorem \[th\_tile\_graph\_construction\]. The edges of the model graph $M$ now indicate which points of the copies $({\mathscr{T}},x)$ should be glued. We consider the tile ${\mathscr{T}}$ as its own topological space (with the topology induced from the space ${\mathcal{X}_{}}$), and every tile ${\mathscr{T}}_v$ for $v\in X^{*}$ as a subset of ${\mathscr{T}}$ with the induced topology. Hence the boundary of ${\mathscr{T}}$ is empty, but we still call the points represented by post-critical sequences the *boundary points* of the tile.
Every point in the intersection ${\mathscr{T}}_v\cap{\mathscr{T}}_u$ of two different tiles of the same level $|v|=|u|$ is called *critical*. These points are precisely the boundary points of the tiles ${\mathscr{T}}_v$ for $v\in X^{*}$, and they are represented by sequences of the form $pv$ for $p\in{\mathscr{P}}$ and $v\in X^{*}$. In particular, the set of critical points is countable and hence has measure zero. All other points of ${\mathscr{T}}$ are called *regular*. Note that if a regular point $t$ is represented by a sequence $\ldots x_2x_1$, then $t$ is an interior point of ${\mathscr{T}}_{x_n\ldots x_2x_1}$ for all $n$. Since each tile ${\mathscr{T}}_{x_n\ldots x_2x_1}$ is homeomorphic to ${\mathscr{T}}$, the cut-points of ${\mathscr{T}}$ also provide information about its local cut-points. ### Components in the tile with a point removed {#section_component} In what follows, let $c({\mathscr{T}}\setminus t)$ denote the number of connected components in ${\mathscr{T}}\setminus t$ for a point $t\in{\mathscr{T}}$ and $bc({\mathscr{T}}\setminus t)$ be the number of components in ${\mathscr{T}}\setminus t$ that contain a boundary point of ${\mathscr{T}}$. We will show that the numbers $c({\mathscr{T}}\setminus t)$ and $bc({\mathscr{T}}\setminus t)$ can be computed using a method similar to the one developed in Section \[Section\_Ends\] to find the number of components in the tile graphs with a vertex removed. We start with the following result, which is an analog of Proposition \[prop\_infcomp\_limit\]. \[prop\_tile\_regular\_point\] Let $t\in{\mathscr{T}}$ be a regular point represented by a sequence $\ldots x_2x_1\in{X^{-\omega}}$.
Then $$bc({\mathscr{T}}\setminus t)=\lim_{n\rightarrow\infty} bc({\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_{x_n\ldots x_2x_1}))= \lim_{n\rightarrow\infty} {pc}(T_n\setminus x_n\ldots x_2x_1).$$ The interior $\textrm{int}({\mathscr{T}}_v)$ of the tile ${\mathscr{T}}_v$ is the complement of the finite set of points that also belong to other tiles of the same level. Therefore ${\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_{v})$ is the union of all tiles ${\mathscr{T}}_u$ for $u\in X^{|v|}, u\neq v$. Since the point $t$ is regular, we can choose $n$ large enough so that the tile ${\mathscr{T}}_{x_n\ldots x_2x_1}$ does not contain any boundary point of ${\mathscr{T}}$, and every tile ${\mathscr{T}}_v$ for $v\in X^n$ contains at most one boundary point of ${\mathscr{T}}$. Since $t$ belongs to the interior $\textrm{int}({\mathscr{T}}_{x_n\ldots x_2x_1})$ of the tile ${\mathscr{T}}_{x_n\ldots x_2x_1}$, if two boundary points of ${\mathscr{T}}$ lie in the same connected component of ${\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_{x_n\ldots x_2x_1})$, they lie in the same connected component of ${\mathscr{T}}\setminus t$. Therefore the value of the first limit is not less than $bc({\mathscr{T}}\setminus t)$. Conversely, if two boundary points of ${\mathscr{T}}$ lie in the same component of ${\mathscr{T}}\setminus t$, then for sufficiently large $n$ these two points lie in the same component of ${\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_{x_n\ldots x_2x_1})$. Since the number of boundary points is finite, the first equality follows. For the second equality, recall that two tiles ${\mathscr{T}}_v$ and ${\mathscr{T}}_u$ for $v,u\in X^n$ have nonempty intersection if and only if the vertices $v$ and $u$ are connected by an edge in the graph $T_n$.
It follows that if two vertices $v$ and $u$ belong to the same component in $T_n\setminus x_n\ldots x_2x_1$, then the tiles ${\mathscr{T}}_v$ and ${\mathscr{T}}_u$ belong to the same component in ${\mathscr{T}}\setminus t$. Therefore the value of the second limit is not less than $bc({\mathscr{T}}\setminus t)$. Conversely, since the point $t$ is regular, one can choose $n$ large enough so that for any pair of boundary points of ${\mathscr{T}}$ that belong to the same connected component in ${\mathscr{T}}\setminus t$, these points also belong to the same component in ${\mathscr{T}}\setminus {\mathscr{T}}_{x_n\ldots x_2x_1}$. The second equality follows. Propositions \[prop\_infcomp\_limit\] and \[prop\_tile\_regular\_point\] establish the connection between the number of components in a punctured tile and the number of infinite components in a punctured tile graph. To describe the limit in Proposition \[prop\_infcomp\_limit\] we constructed the automaton ${\mathsf{A}}_{ic}$, which returns the number ${pc}(T_n\setminus x_n\ldots x_2x_1)$ by reading the word $x_n\ldots x_2x_1$ from left to right, so that we can apply it to right-infinite sequences. Similarly, one can construct a finite automaton ${\mathsf{B}}_{bc}$, which returns ${pc}(T_n\setminus x_n\ldots x_2x_1)$ by reading the word $x_n\ldots x_2x_1$ from right to left (the reversal of a regular language is a regular language), so that we can apply it to left-infinite sequences. Then we can describe the limit in Proposition \[prop\_tile\_regular\_point\] in the same way as Proposition \[prop\_infcomp\_A\_ic\] describes the limit in Proposition \[prop\_infcomp\_limit\]. We can also construct a finite deterministic (acceptor) automaton ${\mathsf{B}}_{ic}$ with the following property. The states of ${\mathsf{B}}_{ic}$ are labeled by tuples of the form $$(\{{\mathscr{P}}_i\}_i,\{\mathscr{F}_j\}_j,\varphi:\{{\mathscr{P}}_i\}_i\rightarrow\{\mathscr{F}_j\}_j, n).$$ This tuple indicates the following.
For any word $v\in X^{*}$ let us consider the partition of the set of boundary points of ${\mathscr{T}}$ induced by the components of ${\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_v)$, and let ${\mathscr{P}}_i$ be the set of all post-critical sequences representing the points from the same component. Let $n$ be the number of components in ${\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_v)$ without a boundary point of ${\mathscr{T}}$. Similarly, we consider the boundary points of ${\mathscr{T}}_v$ and let $\mathscr{F}_j$ be the set of all post-critical sequences $p_i$ such that the points represented by $p_iv$ belong to the same component in ${\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_v)$. Further, we define $\varphi({\mathscr{P}}_i)=\mathscr{F}_i$ if the points represented by sequences from ${\mathscr{P}}_i$ and $\mathscr{F}_i$ belong to the same connected component. Then the final state of ${\mathsf{B}}_{ic}$ after accepting $v$ is labeled exactly by the constructed tuple. If we are interested only in the number of all components in ${\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_v)$, we replace the label of each state by the number of sets ${\mathscr{P}}_i$ plus $n$. This yields the following statement. For every integer $k$ the set of all words $v\in X^{*}$ with the property that ${\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_v)$ has $k$ connected components is a regular language recognized by the automaton ${\mathsf{B}}_{ic}$. This result enables us to provide, in analogy to Theorem \[thm\_number\_of\_ends\], a constructive method which, given a representation $\ldots x_2x_1$ of a point $t\in{\mathscr{T}}$, determines the number of components in ${\mathscr{T}}\setminus t$. In particular, we get a description of the cut-points of ${\mathscr{T}}$. We distinguish two cases. *Regular points.* If a sequence $\ldots x_2x_1\in{X^{-\omega}}$ represents a regular point $t\in{\mathscr{T}}$, then $t$ is an interior point of the tile ${\mathscr{T}}_{x_n\ldots x_2x_1}$ for all $n\geq 1$.
Hence, for a regular point $t$, every connected component of ${\mathscr{T}}\setminus t$ intersects the tile ${\mathscr{T}}_{x_n\ldots x_2x_1}$. Since the number of boundary points of each tile ${\mathscr{T}}_v$ is not greater than $|{\mathscr{P}}|$, it follows that $c({\mathscr{T}}\setminus t)$ coincides with the number of components in the partition of the boundary of the tile ${\mathscr{T}}_{x_n\ldots x_2x_1}$ in ${\mathscr{T}}\setminus t$ for large enough $n$ (in particular, $c({\mathscr{T}}\setminus t)\leq |{\mathscr{P}}|$). The latter problem can be subdivided into two subproblems. First, the “outside” subproblem: find how the boundary of the tile ${\mathscr{T}}_{x_n\ldots x_2x_1}$ decomposes in ${\mathscr{T}}\setminus \textrm{int}({\mathscr{T}}_{x_n\ldots x_2x_1})$. The automaton ${\mathsf{B}}_{ic}$ provides an answer to this problem when we trace the second label $\{\mathscr{F}_j\}_j$ of states. Second, the “inside” subproblem: find how the boundary $\partial{\mathscr{T}}_{x_n\ldots x_2x_1}$ decomposes in ${\mathscr{T}}_{x_n\ldots x_2x_1}\setminus t$. Since each tile ${\mathscr{T}}_v$ is homeomorphic to the tile ${\mathscr{T}}$ under the shift map, the automaton ${\mathsf{B}}_{ic}$ provides an answer to this problem when we trace the first label $\{{\mathscr{P}}_i\}_i$ of states (as well as the automaton ${\mathsf{B}}_{bc}$). By combining the two partitions of $\partial{\mathscr{T}}_{x_n\ldots x_2x_1}$ given by the two subproblems, we get the number of connected components in ${\mathscr{T}}\setminus t$. *Critical points.* Let $t$ be a critical point. Suppose $t$ is represented by a post-critical sequence $p_1x_1\in{\mathscr{P}}$ with periodic $p_1\in{\mathscr{P}}$ and $x_1\in X$. Note that the structure of bounded automata implies that periodic post-critical sequences represent regular points of the tile.
Therefore we can apply the previous case to find the number $c({\mathscr{T}}\setminus \overline{p_1})$ and the partition of the boundary of ${\mathscr{T}}$ in ${\mathscr{T}}\setminus \overline{p_1}$, where $\overline{p_1}$ stands for the point represented by $p_1$. Now consider the components of ${\mathscr{T}}\setminus t$. The only difference from the regular case is that $t$ may fail to be an interior point of ${\mathscr{T}}_v$ for all sufficiently long words $v$. However, it is an interior point of a union of finitely many tiles, and we will be able to apply the same arguments as above. Consider the vertices $(p_i,x_i)$, $i=2,\ldots,k$, of the model graph $M$ adjacent to the vertex $(p_1,x_1)$. Notice that since $p_1$ is periodic, all $p_i$ are periodic. The sequences $p_ix_i$ are precisely all the sequences that represent the point $t$. Hence the point $t$ is an interior point of the set $U=\cup_{i=1}^k {\mathscr{T}}_{x_i}$. We can find how the boundary of the set $U$ decomposes in ${\mathscr{T}}\setminus \textrm{int}\, U$. Since every tile ${\mathscr{T}}_{x_i}$ is homeomorphic to the tile ${\mathscr{T}}$ via the shift map, we can find $c({\mathscr{T}}_{x_i}\setminus t)=c({\mathscr{T}}\setminus \overline{p_i})$, and deduce the decomposition of the boundary of ${\mathscr{T}}_{x_i}$ in ${\mathscr{T}}_{x_i}\setminus t$. Combining these partitions, we can find $c({\mathscr{T}}\setminus t)$ and the corresponding decomposition of $\partial{\mathscr{T}}$ (in particular, $c({\mathscr{T}}\setminus t)\leq k|{\mathscr{P}}|$). In this way we can find $c({\mathscr{T}}\setminus t)$ for every point $t$ represented by a post-critical sequence. Now let $t$ be a critical point represented by a sequence $pyu$ with $p\in{\mathscr{P}}$, $y\in X$, $u\in X^{*}$, and $py\not\in{\mathscr{P}}$. Recall that there is a finite automaton which reads the word $u$ and returns the decomposition of the boundary of ${\mathscr{T}}_u$ in ${\mathscr{T}}\setminus \textrm{int}\,{\mathscr{T}}_u$.
Using the fact that ${\mathscr{T}}_u\setminus t$ and ${\mathscr{T}}\setminus \overline{py}$ are homeomorphic, we can find the number $c({\mathscr{T}}_u\setminus t)$ in the same way as we did above for the sequence $p_1x_1$. Since the post-critical set is finite, one can construct a finite automaton, which given a finite word $v$ and a post-critical sequence $p\in{\mathscr{P}}$ returns the number $c({\mathscr{T}}\setminus \overline{pv})$. ### The number of components in punctured limit space and tile almost surely We can use the results from Sections \[section\_ends\_a\_s\] and \[section\_two\_ends\_a\_s\] to get information about cut-points of the limit space and tile up to measure zero. \[thm\_punctured\_tile\_ends\] 1. The number of connected components in ${\mathscr{T}}\setminus t$ is the same for almost all points $t$, and is equal to one or two. Moreover, $c({\mathscr{T}}\setminus t)=2$ almost surely if and only if the Schreier graphs $\Gamma_w$ (equivalently, the tile graphs $T_w$) have two ends for almost all $w\in{X^\omega}$, and in this case the tile ${\mathscr{T}}$ is homeomorphic to an interval. 2. The number of connected components in ${\mathscr{J}_{}}\setminus t$ is the same for almost all points $t$, and is equal to one or two. Moreover, $c({\mathscr{J}_{}}\setminus t)=2$ almost surely if and only if the nucleus ${\mathcal{N}}$ of the group satisfies Case 2 of Theorem \[th\_classification\_two\_ends\]. By Corollary \[cor\_one\_or\_two\_ends\_a\_s\] we have to consider only two cases. If the tile graphs $T_w$ have one end for almost all $w\in{X^\omega}$, then by Remark \[rem\_subword\_and\_inf\_comp\] there exists a word $v\in X^{*}$ such that ${pc}(T_n\setminus u_1vu_2)=1$ for all $u_1,u_2\in X^{*}$ with $n=|u_1vu_2|$. The set of all sequences of the form $u_1vu_2$ for $u_1\in{X^{-\omega}}$ and $u_2\in X^{*}$ is of full measure. 
Then by Proposition \[prop\_tile\_regular\_point\] we get that all boundary points of the tile ${\mathscr{T}}$ belong to the same component in ${\mathscr{T}}\setminus t$ for almost all points $t$. Since every tile ${\mathscr{T}}_v$ is homeomorphic to ${\mathscr{T}}$, we get that the boundary of every tile ${\mathscr{T}}_v$ belongs to the same component in ${\mathscr{T}}_v\setminus t$ for almost all points $t$. It follows that $c({\mathscr{T}}\setminus t)=1$ almost surely. Since the limit space ${\mathscr{J}_{}}$ can be constructed from ${\mathscr{T}}$ by gluing finitely many of its points, in this case we get $c({\mathscr{J}_{}}\setminus t)=1$ almost surely. If the tile graphs $T_w$ have two ends almost surely, then we are in the setting of Theorem \[th\_classification\_two\_ends\]. It is straightforward to check that in both cases of this theorem the tile ${\mathscr{T}}$ is homeomorphic to an interval (because the tile graphs $T_n$ are intervals), and hence $c({\mathscr{T}}\setminus t)=2$ for almost all points $t$. In Case 1 the limit space is homeomorphic to a circle and therefore $c({\mathscr{J}_{}}\setminus t)=1$ almost surely. In Case 2 the limit space is homeomorphic to an interval and therefore $c({\mathscr{J}_{}}\setminus t)=2$ almost surely. \[cor\_limsp\_interval\_circle\] 1. The tile ${\mathscr{T}}$ of a contracting self-similar group with open set condition is homeomorphic to an interval if and only if the nucleus of the group satisfies Theorem \[th\_classification\_two\_ends\]. 2. The limit space ${\mathscr{J}_{}}$ of a contracting self-similar group with connected tiles and open set condition is homeomorphic to a circle if and only if the nucleus of the group satisfies Case 1 of Theorem \[th\_classification\_two\_ends\], i.e., it consists of the adding machine, its inverse, and the identity state. 3. 
The limit space ${\mathscr{J}_{}}$ of a contracting self-similar group with connected tiles and open set condition is homeomorphic to an interval if and only if the nucleus of the group satisfies Case 2 of Theorem \[th\_classification\_two\_ends\]. Let $G$ be a contracting self-similar group with open set condition, and let the tile ${\mathscr{T}}$ of the group $G$ be homeomorphic to an interval. Then the group $G$ has connected tiles and all tiles ${\mathscr{T}}_v$ are homeomorphic to an interval. The boundary of each tile is finite; hence the group $G$ is generated by a bounded automaton (here we use the open set condition), and we are in the setting of this section. Since $c({\mathscr{T}}\setminus t)=2$ almost surely, the Schreier graphs $\Gamma_w$ with respect to the nucleus ${\mathcal{N}}$ almost surely have two ends, and hence ${\mathcal{N}}$ satisfies Theorem \[th\_classification\_two\_ends\]. It remains to prove the statements about limit spaces. A small connected neighborhood of any point of a circle or of an interval is homeomorphic to an interval. Hence, if the limit space ${\mathscr{J}_{}}$ is a circle or an interval, the tile ${\mathscr{T}}$ is homeomorphic to an interval. Therefore we are in the setting of Theorem \[th\_classification\_two\_ends\]. As was mentioned above, in Case 1 of Theorem \[th\_classification\_two\_ends\] the limit space ${\mathscr{J}_{}}$ is homeomorphic to a circle, and in Case 2 it is homeomorphic to an interval. The last corollary together with Theorem \[th\_Sch\_2ended\_binary\] agrees with the following result of Nekrashevych and Šunić: The limit dynamical system $({\mathscr{J}_{}},{\mathsf{s}})$ of a contracting self-similar group $G$ is topologically conjugate to the tent map if and only if $G$ is equivalent as a self-similar group to one of the automata ${\mathsf{A}}_{\omega, \rho}$. 
Examples {#Section_Examples} ======== Basilica group -------------- The Basilica group $G$ is generated by the automaton shown in Figure \[fig\_BasilicaAutomaton\]. This group is the iterated monodromy group of $z^2-1$. It is torsion-free, has exponential growth, and is the first example of an amenable but not subexponentially amenable group (see [@gri_zuk:spect_pro]). The orbital Schreier graphs ${\Gamma}_w$ of this group have polynomial growth of degree $2$ (see [@PhDBondarenko Chapter VI]). The structure of the Schreier graphs ${\Gamma}_w$ was investigated in [@ddmn:GraphsBasilica]. In particular, it was shown that there are uncountably many pairwise non-isomorphic graphs ${\Gamma}_w$, and the number of ends was described. Let us show how to obtain the result about ends using the method developed above. The alphabet is $X=\{0,1\}$ and the post-critical set ${\mathscr{P}}$ consists of three elements $a=0^{-\omega}$, $b=(01)^{-\omega}$, $c=(10)^{-\omega}$. The model graph is shown in Figure \[fig\_BasilicaAutomaton\]. The automata ${\mathsf{A}}_{c}$ and ${\mathsf{A}}_{ic}$ are shown in Figure \[fig\_Basilica\_Ac\_Aic\]. We get that each tile graph $T_w$ has one or two ends, and we denote by $E_1$ and $E_2$ the corresponding sets of sequences $w$. For the critical sequence $w=0^{\omega}$ the tile graph $T_w$ has two ends, while for the other critical sequences $(01)^{\omega}$ and $(10)^{\omega}$ the tile graph $T_w$ has one end. Using the automaton ${\mathsf{A}}_{ic}$ the sets $E_1$ and $E_2$ can be described by Theorem \[thm\_number\_of\_ends\] as follows: $$\begin{aligned} E_2&=& X^{*}(0X)^{\omega}\setminus\left( Cof((01)^{\omega}\cup (10)^{\omega}) \right),\\ E_1&=&{X^\omega}\setminus E_2.\end{aligned}$$ Almost every tile graph $T_w$ has one end; the set $E_2$ is uncountable but of measure zero. Every graph $T_w\setminus w$ has one, two, or three connected components, and we denote by $C_1$, $C_2$, and $C_3$ the corresponding sets of sequences. 
Using the automaton ${\mathsf{A}}_c$ these sets can be described precisely as follows: $$\begin{aligned} C_3&=&\bigcup_{k\geq 0} \left( 010(10)^k0 (0X)^{\omega} \cup 000^k1 (0X)^{\omega} \right),\\ C_2&=&\bigcup_{k\geq 1} (10)^k0(0X)^{\omega} \bigcup \left(00{X^\omega}\cup 01{X^\omega}\right) \setminus C_3, \\ C_1&=&{X^\omega}\setminus \left( C_2\cup C_3\right).\end{aligned}$$ The set $C_3$ is uncountable but of measure zero, while the sets $C_1$ and $C_2$ are each of measure $1/2$. Each graph $T_w\setminus w$ has one or two infinite components. The corresponding sets $IC_1$ and $IC_2$ can be described using the automaton ${\mathsf{A}}_{ic}$ as follows: $$\begin{aligned} IC_2&=&\bigcup_{k\geq 1} \left( (10)^k 0 (0X)^{\omega} \cup 0(10)^k0 (0X)^{\omega} \cup 00^k1 (0X)^{\omega} \right) \setminus \left(Cof((01)^{\omega}\cup (10)^{\omega}) \right),\\ IC_1&=& {X^\omega}\setminus IC_2.\end{aligned}$$ The set $IC_2$ is uncountable but of measure zero. The finite Schreier graph ${\Gamma}_n$ differs from the finite tile graph $T_n$ by two edges $\{a_n,b_n\}$ and $\{a_n,c_n\}$. Taking these two edges into account, one can relabel the states of the automaton ${\mathsf{A}}_c$ so that it returns the number of components in ${\Gamma}_n\setminus v$. In this way we get that $c({\Gamma}_n\setminus v)=1$ if the word $v$ starts with $10$ or $11$; in the other cases $c({\Gamma}_n\setminus v)=2$. In particular, the Schreier graph ${\Gamma}_n$ has $2^{n-1}$ cut-vertices. The orbital Schreier graph ${\Gamma}_w$ coincides with the tile graph $T_w$ except when $w$ is critical. The critical sequences $0^{\omega}$, $(01)^{\omega}$, and $(10)^{\omega}$ lie in the same orbit and the corresponding Schreier graph consists of three tile graphs $T_{0^{\omega}}$, $T_{(01)^{\omega}}$, $T_{(10)^{\omega}}$ with two new edges $(0^{\omega},(01)^{\omega})$ and $(0^{\omega},(10)^{\omega})$. It follows that this graph has four ends. 
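The count of cut-vertices of ${\Gamma}_n$ stated above can be sanity-checked by brute force: for $n\geq 2$ a word of length $n$ starts with $10$ or $11$ exactly when its first letter is $1$, so the words $v$ with $c({\Gamma}_n\setminus v)=2$ are precisely the $2^{n-1}$ words beginning with $0$. An illustrative check (not part of the paper's development):

```python
# Enumerate all binary words of length n and count those whose removal
# disconnects Gamma_n, i.e. (by the description above) those that do not
# start with 10 or 11.
from itertools import product

def cut_vertex_count(n):
    words = ("".join(w) for w in product("01", repeat=n))
    return sum(1 for v in words
               if not (v.startswith("10") or v.startswith("11")))

for n in range(2, 11):
    assert cut_vertex_count(n) == 2 ** (n - 1)
print("Gamma_n has 2^(n-1) cut-vertices for n = 2, ..., 10")
```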
The limit space ${\mathscr{J}_{}}$ of the group $G$ is homeomorphic to the Julia set of $z^2-1$ shown in Figure \[fig\_Basilica\_Graph LimitSpace\]. The tile ${\mathscr{T}}$ can be obtained from the limit space by cutting the limit space in the way shown in the figure, or, vice versa, the limit space can be obtained from the tile by gluing points represented by post-critical sequences $0^{-\omega}$, $(01)^{-\omega}$, $(10)^{-\omega}$. Every point $t\in{\mathscr{T}}$ divides the tile ${\mathscr{T}}$ into one, two, or three connected components. Put $\mathscr{C}=\{0^{-\omega}1, (01)^{-\omega}1, (10)^{-\omega}0\}$. Then the sets $\mathscr{C}_1$, $\mathscr{C}_2$, and $\mathscr{C}_3$ of sequences from ${X^{-\omega}}$, which represent the corresponding cut-points, can be described as follows: $$\begin{aligned} \mathscr{C}_3&=& \bigcup_{n\geq 0} \mathscr{C}(0X)^n \cup \mathscr{C}(0X)^n0,\\ \mathscr{C}_2&=& \bigcup_{n\geq 0} \left(\mathscr{C}(X0)^n \cup \mathscr{C}(X0)^nX\right)\bigcup \left( (0X)^{-\omega}\cup (X0)^{-\omega}\right)\setminus \left(\mathscr{C}_3\cup \left\{(10)^{-\omega}, (01)^{-\omega}\right\}\right),\\ \mathscr{C}_1&=&{X^{-\omega}}\setminus \left(\mathscr{C}_2\cup\mathscr{C}_3\right).\end{aligned}$$ The set $\mathscr{C}_3$ of three-section points is countable, the set $\mathscr{C}_2$ of bisection points is uncountable and of measure zero, and the tile ${\mathscr{T}}\setminus t$ is connected for almost all points $t$. Every point $t\in{\mathscr{J}_{}}$ divides the limit space ${\mathscr{J}_{}}$ into one or two connected components. 
The corresponding sets $\mathscr{C}'_1$ and $\mathscr{C}'_2$ can be described as follows: $$\begin{aligned} \mathscr{C}'_2&=& \bigcup_{n\geq 0} \left(\mathscr{C}(X0)^n \cup \mathscr{C}(0X)^n \cup \mathscr{C}(0X)^n0 \cup \mathscr{C}(X0)^nX\right)\bigcup\\ && \bigcup \left( (0X)^{-\omega}\cup (X0)^{-\omega}\right)\setminus \left\{(10)^{-\omega}, (01)^{-\omega}\right\},\\ \mathscr{C}'_1&=&{X^{-\omega}}\setminus \mathscr{C}'_2.\end{aligned}$$ The set $\mathscr{C}'_2$ of bisection points is uncountable and of measure zero, and the limit space ${\mathscr{J}_{}}\setminus t$ is connected for almost all points $t$. Gupta-Fabrykowski group ----------------------- The Gupta-Fabrykowski group $G$ is generated by the automaton shown in Figure \[fig\_GuptaFabrAutomaton\]. It was constructed in [@gupta_fabr2] as an example of a group of intermediate growth. This group is also the iterated monodromy group of $z^3(-\frac{3}{2}+i\frac{\sqrt{3}}{2})+1$ (see [@self_sim_groups Example 6.12.4]). The Schreier graphs ${\Gamma}_w$ of this group were studied in [@barth_gri:spectr_Hecke], where their spectrum and growth were computed (they have polynomial growth of degree $\frac{\log 3}{\log 2}$). The alphabet is $X=\{0,1,2\}$ and the post-critical set ${\mathscr{P}}$ consists of two elements $a=2^{-\omega}$ and $b=2^{-\omega}0$. The model graph is shown in Figure \[fig\_GuptaFabrAutomaton\]. The automata ${\mathsf{A}}_{c}$ and ${\mathsf{A}}_{ic}$ are shown in Figure \[fig\_GuptaFabr\_Ac\_Aic\]. Every Schreier graph ${\Gamma}_w$ coincides with the tile graph $T_w$. We get that every tile graph $T_w$ has one or two ends, and we denote by $E_1$ and $E_2$ the corresponding sets of sequences. For the only critical sequence $2^{\omega}$ the tile graph $T_w$ has one end. 
Using the automaton ${\mathsf{A}}_{ic}$ the sets $E_1$ and $E_2$ can be described by Theorem \[thm\_number\_of\_ends\] as follows: $$\begin{aligned} E_2=X^{*} \{0,2\}^{\omega}\setminus Cof(2^{\omega}),\qquad E_1={X^\omega}\setminus E_2.\end{aligned}$$ Almost every tile graph has one end; the set $E_2$ is uncountable but of measure zero. Every graph $T_w\setminus w$ has one or two connected components, and we denote by $C_1$ and $C_2$ the corresponding sets of sequences. Using the automaton ${\mathsf{A}}_c$ these sets can be described precisely as follows: $$\begin{aligned} C_2= \bigcup_{k\geq 0} \left(2^k01{X^\omega}\cup 2^k0\{0,2\}^{\omega}\right), \qquad C_1={X^\omega}\setminus C_2.\end{aligned}$$ The sets $C_1$ and $C_2$ have measures $\frac{5}{6}$ and $\frac{1}{6}$, respectively. Every graph $T_w\setminus w$ has one or two infinite components. The corresponding sets $IC_1$ and $IC_2$ can be described using the automaton ${\mathsf{A}}_{ic}$ as follows: $$\begin{aligned} IC_2=\bigcup_{k\geq 0} 2^k0\{0,2\}^{\omega} \setminus Cof(2^{\omega}),\qquad IC_1={X^\omega}\setminus IC_2.\end{aligned}$$ The set $IC_2$ is uncountable but of measure zero. The limit space ${\mathscr{J}_{}}$ and the tile ${\mathscr{T}}$ of the group $G$ are homeomorphic to the Julia set of the map $z^3(-\frac{3}{2}+i\frac{\sqrt{3}}{2})+1$ shown in Figure \[fig\_GuptaFabr\_Graph LimitSpace\]. Every point $t\in{\mathscr{J}_{}}$ divides the limit space into one, two, or three connected components. 
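The measure of $C_2$ stated above can be computed exactly: under the uniform Bernoulli measure on $\{0,1,2\}^{\omega}$, each set $2^k0\{0,2\}^{\omega}$ has measure zero, while the cylinders $2^k01{X^\omega}$ are pairwise disjoint with measure $(1/3)^{k+2}$, so $\mu(C_2)$ is a geometric series. A short check with exact rationals (illustrative, not from the paper):

```python
# mu(C_2) = sum_{k>=0} (1/3)^(k+2): the cylinder of the word 2^k 0 1 has
# measure (1/3)^(k+2), and the remaining pieces of C_2 have measure zero.
from fractions import Fraction

def mu_C2_partial(terms):
    return sum(Fraction(1, 3) ** (k + 2) for k in range(terms))

# Closed form of the geometric series: (1/9) / (1 - 1/3) = 1/6.
limit = Fraction(1, 9) / (1 - Fraction(1, 3))
assert limit == Fraction(1, 6)
# Partial sums approach 1/6 from below.
assert Fraction(1, 6) - mu_C2_partial(40) < Fraction(1, 3) ** 40
print("mu(C_2) =", limit, "and mu(C_1) =", 1 - limit)
```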
The sets $\mathscr{C}_1$, $\mathscr{C}_2$, and $\mathscr{C}_3$ of sequences from ${X^{-\omega}}$, which represent the corresponding points, can be described as follows: $$\begin{aligned} \mathscr{C}_3&=&2^{-\omega}0X^{*}\setminus \{2^{-\omega}0\},\\ \mathscr{C}_2&=&\{0,2\}^{-\omega}\setminus \left(\mathscr{C}_3\cup \{2^{-\omega}, 2^{-\omega}0\}\right),\\ \mathscr{C}_1&=&{X^{-\omega}}\setminus \left(\mathscr{C}_2\cup\mathscr{C}_3\right).\end{aligned}$$ The set $\mathscr{C}_3$ of three-section points is countable, the set $\mathscr{C}_2$ of bisection points is uncountable and of measure zero, and the limit space ${\mathscr{J}_{}}\setminus t$ is connected for almost all points $t$. Iterated monodromy group of $z^2+i$ ----------------------------------- The iterated monodromy group of $z^2+i$ is generated by the automaton shown in Figure \[fig\_IMGz2iAutomaton\]. This group is another example of a group of intermediate growth (see [@bux_perez:img]). The algebraic properties of $IMG(z^2+i)$ were studied in [@img_z2_i]. The Schreier graphs ${\Gamma}_w$ of this group have polynomial growth of degree $\frac{\log 2}{\log \lambda}$, where $\lambda$ is the real root of $x^3-x-2$ (see [@PhDBondarenko Chapter VI]). The alphabet is $X=\{0,1\}$ and the post-critical set ${\mathscr{P}}$ consists of three elements $a=(10)^{-\omega}0$, $b=(10)^{-\omega}$, and $c=(01)^{-\omega}$. The model graph is shown in Figure \[fig\_IMGz2iAutomaton\]. The automata ${\mathsf{A}}_{c}$ and ${\mathsf{A}}_{ic}$ are shown in Figure \[fig\_IMGz2i\_Aic\]. Every Schreier graph ${\Gamma}_w$ coincides with the tile graph $T_w$ and is a tree. We get that every tile graph $T_w$ has one, two, or three ends, and we denote by $E_1$, $E_2$, and $E_3$ the corresponding sets of sequences. Using the automaton ${\mathsf{A}}_{ic}$ the sets $E_1$, $E_2$, $E_3$ can be described by Theorem \[thm\_number\_of\_ends\] as follows. 
For both critical sequences $(10)^{\omega}$ and $(01)^{\omega}$ the tile graph $T_w$ has one end. Denote by $\mathscr{R}$ the right one-sided sofic subshift given by the subgraph emphasized in Figure \[fig\_IMGz2i\_Aic\]. Then $$\begin{aligned} E_3= Cof(0^{\omega}), \ \ E_2=X^{*}\mathscr{R} \setminus Cof(0^{\omega}\cup (10)^{\omega}\cup (01)^{\omega}), \ \ E_1={X^\omega}\setminus (E_2\cup E_3).\end{aligned}$$ (We cannot describe these sets as we did in the previous examples, because the subshift $\mathscr{R}$ is not of finite type.) Almost every tile graph has one end, the set $E_2$ is uncountable but of measure zero, and there is one graph, namely $T_{0^{\omega}}$, with three ends. This example shows that Corollary \[cor\_>2ends\_pre\_periodic\] may hold for regular sequences (here $0^{\omega}$ is regular). Every graph $T_w\setminus w$ has one, two, or three connected components, and we denote by $C_1$, $C_2$, and $C_3$ the corresponding sets of sequences. Using the automaton ${\mathsf{A}}_c$ these sets can be described precisely as follows: $$\begin{aligned} C_3=\bigcup_{k\geq 0} 0(10)^k0X\mathscr{R}\bigcup_{k\geq 2} 0^k1\mathscr{R}\bigcup \{0^{\omega}\},\quad C_2={X^\omega}\setminus \left(C_3\cup C_1\right),\quad C_1=\bigcup_{k\geq 0} 1(01)^k1{X^\omega}.\end{aligned}$$ The set $C_3$ is of measure zero, and the sets $C_1$ and $C_2$ have measures $\frac{1}{3}$ and $\frac{2}{3}$, respectively. Every graph $T_w\setminus w$ has one, two, or three infinite components. The corresponding sets $IC_1$, $IC_2$, and $IC_3$ can be described using the automaton ${\mathsf{A}}_{ic}$ as follows: $$\begin{aligned} IC_2&=&\bigcup_{k\geq 1} \left(0^k01\mathscr{R}\cup (10)^k0X\mathscr{R}\cup 0(10)^k0X\mathscr{R}\right),\\ IC_3&=&\{0^{\omega}\},\quad IC_1={X^\omega}\setminus \left(IC_2\cup IC_3\right).\end{aligned}$$ The set $IC_2$ is uncountable but of measure zero. 
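As in the previous example, the measure of $C_1$ stated above is a geometric series: the cylinders $1(01)^k1{X^\omega}$ are pairwise disjoint, and the word $1(01)^k1$ has length $2k+2$, so its cylinder has measure $(1/2)^{2k+2}$ under the uniform Bernoulli measure on $\{0,1\}^{\omega}$. An illustrative exact check (not from the paper):

```python
# mu(C_1) = sum_{k>=0} (1/2)^(2k+2) = sum_{k>=0} (1/4)^(k+1) = 1/3.
from fractions import Fraction

mu_C1_partial = sum(Fraction(1, 4) ** (k + 1) for k in range(100))
closed_form = Fraction(1, 4) / (1 - Fraction(1, 4))
assert closed_form == Fraction(1, 3)
assert Fraction(1, 3) - mu_C1_partial < Fraction(1, 4) ** 99
print("mu(C_1) =", closed_form)  # so mu(C_2) = 2/3, up to the null set C_3
```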
The limit space ${\mathscr{J}_{}}$ and the tile ${\mathscr{T}}$ of the group $IMG(z^2+i)$ are homeomorphic to the Julia set of the map $z^2+i$ shown in Figure \[fig\_IMG\_Graph LimitSpace\]. Every point $t\in{\mathscr{J}_{}}$ divides the limit space into one, two, or three connected components. The sets $\mathscr{C}_1$, $\mathscr{C}_2$, and $\mathscr{C}_3$ of sequences from ${X^{-\omega}}$, which represent the corresponding points, can be described as follows: $$\begin{aligned} \mathscr{C}_3= Cof(0^{-\omega}),\quad \mathscr{C}_2= \mathscr{L} X^{*}\setminus \mathscr{C}_3,\quad \mathscr{C}_1= {X^{-\omega}}\setminus \left(\mathscr{C}_2\cup \mathscr{C}_3\right),\end{aligned}$$ where $\mathscr{L}$ is the left one-sided sofic subshift given by the subgraph emphasized in Figure \[fig\_IMGz2i\_Aic\]. The set $\mathscr{C}_3$ of three-section points is countable, the set $\mathscr{C}_2$ of bisection points is uncountable and of measure zero, and the limit space ${\mathscr{J}_{}}\setminus t$ is connected for almost all points $t$. [10]{} D. Aldous and R. Lyons, Processes on unimodular random networks, **12** (2007), 1454–1508. L. Bartholdi and R. Grigorchuk, On the spectrum of [H]{}ecke type operators related to some fractal groups, **231** (2000), 5–45. L. Bartholdi, R. Grigorchuk, and V. Nekrashevych, From fractal groups to fractal sets, in [*Fractals in Graz 2001*]{}, Trends Math., Birkhäuser, Basel, 2003, 25–118. L. Bartholdi, A. G. Henriques, and V. Nekrashevych, , **305** (2006), 629–663. I. Bondarenko and V. Nekrashevych, Post-critically finite self-similar groups, **4** (2003), 21–32. I. Bondarenko, Groups generated by bounded automata and their [S]{}chreier graphs. Ph.D. Dissertation, Texas A&M University, 2007. I. Bondarenko, **354** (2012), 765–785. I. Bondarenko, **434** (2015), 1–11. I. Bondarenko and R. Kravchenko, **226** (2011), 2169–2191. K.-U. Bux and R. 
P[é]{}rez, On the growth of iterated monodromy groups, in: [*Topological and asymptotic aspects of group theory*]{}, vol. 394, Contemp. Math., Amer. Math. Soc., Providence, RI, 2006, 61–76. D. D’Angeli, A. Donno, M. Matter, and T. Nagnibeda, **4** (2010), 167–205. D. D’Angeli, A. Donno, and T. Nagnibeda, in: [*Random Walks, Boundaries and Spectra*]{} (D. Lenz, F. Sobieczky and W. Woess Eds.), Progress in Prob. Vol. 64, Birkhäuser, Springer, Basel, 2011, 277–304. D. D’Angeli, A. Donno, and T. Nagnibeda, **33** (2012), 1484–1513. T. Delzant and R. Grigorchuk, Homomorphic images of branch groups, and Serre’s property (FA), in: [*Geometry and dynamics of groups and spaces*]{}, Progr. Math., 265, Birkhäuser, Basel, 2008, 353–375. J. Fabrykowski and N. Gupta, On groups with sub-exponential growth functions [II]{}, **56** (1991), 217–228. R. Grigorchuk, V. Nekrashevych, and V. Sushchanskii, Automata, dynamical systems and groups, **231** (2000), 134–214. R. Grigorchuk, D. Savchuk, and Z. [Š]{}unić, **42** (2007), 225–248. R. Grigorchuk and Z. [Š]{}uni[ć]{}, , **342** (2006), 545–550. R. Grigorchuk and A. [Ż]{}uk, , **87** (2001), 209–244. R. Grigorchuk and A. [Ż]{}uk, in: [*Computational and statistical group theory (Las Vegas, NV/Hoboken, NJ, 2001)*]{}, vol. 298, Contemp. Math., Amer. Math. Soc., Providence, RI, 2002, 57–82. M. Gromov, , , 1. CEDIC, Paris, 1981. iv+152 pp. ISBN: 2-7124-0714-8. J. Kigami, , Volume 143 of Cambridge Tracts in Mathematics, University Press, Cambridge, 2001. M. Matter and T. Nagnibeda, **199** (2014), 363–420. J. P. Previte, **18** (1998), 661–685. V. Nekrashevych, , Volume 117 of [*Mathematical Surveys and Monographs*]{}, Amer. Math. Soc., Providence, RI, 2005. V. Nekrashevych, **1** (2007), 41–96. V. Nekrashevych, in: [*Groups St Andrews 2009 in Bath*]{}, vol. 1, London Math. Soc. Lecture Note Ser. 387, Cambridge University Press, Cambridge, 2011, 41–93. Z. Šunić, **124** (2007), 213–236. S. Sidki, **100** (2000), 1925–1943. S. K. 
Smirnov, *Colloq. Math.* **87** (2001), 287–295. A. Zdunik, *Fund. Math.* **163** (2000), 277–286. [^1]: The second author was supported by Austrian Science Fund project FWF P24028-N18. The third author acknowledges the support of the Swiss National Science Foundation Grant PP0022-118946.
--- abstract: 'We combine two recent ideas: cartesian differential categories, and restriction categories. The result is a new structure which axiomatizes the category of smooth maps defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$ in a way that is completely algebraic. We also give other models for the resulting structure, discuss what it means for a partial map to be additive or linear, and show that differential restriction structure can be lifted through various completion operations.' author: - | J.R.B. Cockett[^1], G.S.H. Cruttwell[^2], and J. D. Gallagher\ Department of Computer Science, University of Calgary,\ Alberta, Canada title: Differential Restriction Categories --- Introduction ============ In [@cartDiff], the authors proposed an alternative way to view differential calculus. The derivative was seen as an operator on maps, with many of its typical properties (such as the chain rule) axioms on this operation. The resulting categories were called cartesian differential categories, and the standard model is smooth maps between the spaces ${\ensuremath{\mathbb R}\xspace}^n$. One interesting aspect of this project was the algebraic feel it gave to differential calculus. The seven axioms of a cartesian differential category described all the necessary properties that the standard Jacobian has. Thus, instead of reasoning with epsilon arguments, one could reason about differential calculus by manipulating algebraic axioms. Moreover, as shown in [@resourceCalc], cartesian (closed) differential categories provide a semantic basis for modeling the simply typed differential lambda-calculus described in [@diffLambda]. This latter calculus is linked to various resource calculi which, as their name suggests, are useful in understanding the resource requirements of programs. Thus, models of computation in settings with a differential operator are of interest in the semantics of computation when resource requirements are being considered. 
Fundamental to computation is the possibility of non-termination. Thus, an obvious extension of cartesian differential categories is to allow partiality of maps. Of course, this has a natural analogue in the standard model: smooth maps defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$ are a notion of partial smooth map which is ubiquitous in analysis. To axiomatize these ideas, we combine cartesian differential categories with the restriction categories of [@restrictionI]. Again, the axiomatization is completely algebraic: there are two operations (differentiation and restriction) that satisfy seven axioms for the derivative, four for the restriction, and two for the interaction of differentiation and restriction. Our goal in this paper is not only to give the definitions and examples of these “differential restriction categories”, but also to show how natural the structure is. There are a number of points of evidence for this claim. In a differential restriction category, one can define what it means for a partial map such as $$f(x) = \left\{ \begin{array}{ll} 2x & \mbox{if $x \ne 5$};\\ \uparrow & \mbox{if $x = 5$}.\end{array} \right.$$ to be “linear”. One can give a similar description for the notion of “additive”. The differential interacts so well with the restriction that not only does it preserve the order and compatibility relations, it also preserves joins of maps, should they exist. Moreover, differential restriction structure is surprisingly robust[^3]. In the final two sections of the paper, we show that differential structure lifts through two completion operations on restriction categories. The first completion is the join completion, which freely adds joins of compatible maps to a restriction category. We show that if differential structure is present on the original restriction category, then one can lift this differential structure to the join completion. 
The second completion operation is much more drastic: it adds “classical” structure to the restriction category, allowing one to classically reason about the restriction category’s maps. Again, we show that if the original restriction category has differential structure, then this differential structure lifts to the classical setting. This is perhaps the most surprising result of the paper, as one typically thinks of differential structure as being highly non-classical. In particular, it is not obvious how differentials of functions defined at a single point should work. We show that what the classical completion is doing is adding germs of functions, so that a function defined on a point (or a closed set) is defined by how it works on any open set around that point (or closed set). It is these germs of functions on which one can define differential restriction structure. The paper is laid out as follows. In Section \[sectionRes\], we review the theory of restriction categories. This includes reviewing the notions of joins of compatible maps, as well as the notion of a cartesian restriction category. In Section \[sectionDiff\], we define differential restriction categories. We must begin, however, by defining left additive restriction categories. Left additive categories are categories in which it is possible to add two maps, but the maps themselves need not preserve the addition (for example, the set of smooth maps between ${\ensuremath{\mathbb R}\xspace}^n$). Such categories were an essential base for defining cartesian differential categories, as the axioms need to discuss what happens when maps are added. Here, we describe left additive restriction categories, in which the maps being added may only be partial. One interesting aspect of this section is the definition of additive maps (those maps which do preserve the addition), which is slightly more subtle than its total counterpart. 
With the theories of cartesian restriction categories and left additive restriction categories described, we are finally able to define differential restriction categories. One surprise is that the differential automatically preserves joins. Again, as with additive maps, the definition of linear is slightly more subtle than its total counterpart. In Section \[rat\], we develop a family of examples of differential restriction categories: rational functions over a commutative ring. Rational functions (even over rigs), because of their “poles”, provide a natural source of restriction structure. We show that the formal derivative on these functions, together with this restriction, naturally forms a differential restriction category. The construction of rational functions presented here is, we believe, novel: it involves the use of weak and rational rigs (described in Section \[rat:frac\]). While one can describe restriction categories of rational functions directly, the description of the restriction requires some justification. Thus, we first characterize the desired categories abstractly, by showing they occur as a subcategory of a particular, more general, partial map category. This then makes the derivation of the concrete description straightforward. Moreover, the theory we develop to support this abstract characterization appears to be interesting in its own right. While many of the ideas of this section are implicit in algebraic geometry, the packaging of differential restriction categories makes both the partial aspects of these settings and their differential structure explicit. In the next two sections, we describe what happens when we join or classically complete the underlying restriction category of a differential restriction category, and show that the differential structure lifts in both cases. Again, this is important, as it shows how robust differential restriction structure is, as well as allowing one to differentiate in a classical setting. 
Finally, in Section \[sectionConclusion\], we discuss further developments. An obvious step, given a differential restriction category with joins, is to use the manifold completion process of [@manifolds] to obtain a category of smooth manifolds. While the construction does not yield a differential restriction category, it is clearly central to developing the differential geometry of such settings. This is the subject of continuing work. On that note, we would like to compare our approach to other categorical theories of smooth maps. Lawvere’s synthetic differential geometry (carried out in [@dubuc], [@kock], and [@reyes]) is one such example. The notion of smooth topos is central to Lawvere’s program. A smooth topos is a topos which contains an object of “infinitesimals”. One thinks of this object as the set $D = \{x: x^2 = 0 \}$. Smooth toposes give an extremely elegant approach to differential geometry. For example, one defines the tangent space of an object $X$ to be the exponential $X^D$. This essentially makes the tangent space the space of all infinitesimal paths in $X$, which is precisely the intuitive notion of what the tangent space is. The essential difference between the synthetic differential geometry approach and ours is the level of power of the respective settings. A smooth topos is, in particular, a topos, and so enjoys a great number of powerful properties. The differential restriction categories we describe here have fewer assumptions: we only ask for finite products, and assume no closed structure or subobject classifier. Thus, our approach begins at a much more basic level. While the standard model of a differential restriction category is smooth maps defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$, the standard model of a smooth topos is a certain completion of smooth maps between all smooth manifolds. 
In contrast to the synthetic differential geometry approach, our goal is thus to see at what minimal level differential calculus can be described, and only then move to more complicated objects such as smooth manifolds. A number of authors have described other notions of smooth space: see, for example, [@chen], [@fro], [@sik]. All have a similar approach, and the similarity is summed up in [@comparativeSmooth]: > “...we know what it means for a map to be smooth between certain subsets of Euclidean space and so in general we declare a function smooth if whenever, we examine it using those subsets, it is smooth. This is a rather vague statement - what do we mean by ‘examine’? - and the various definitions can all be seen as ways of making this precise.” Thus, in each of these approaches, the author assumes an existing knowledge of smooth maps defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$. Again, our approach is more basic: we are seeking to understand the nature of these smooth maps between ${\ensuremath{\mathbb R}\xspace}^n$. In particular, one could define Chen spaces, or Frölicher spaces, based on a differential restriction category other than the standard model, and get new notions of generalized smooth space. Finally, it is important to note that none of these other approaches work with *partial* maps. Our approach, in addition to starting at a more primitive level, gives us the ability to reason about the partiality of maps which is so central to differential calculus, geometry, and computation. Restriction categories review {#sectionRes} ============================= In this section, we begin by reviewing the theory of restriction categories. Restriction categories were first described in [@restrictionI] as an alternative to the notion of a “partial map category”. In a partial map category, one thinks of a partial map from $A$ to $B$ as a span $$\xymatrix{& A' \ar[dl]_m \ar[dr]^f &\\A & & B\\}$$ where the arrow $m$ is a monic. 
Thus, $A'$ describes the domain of definition of the partial map. By contrast, a restriction category is a category which assigns to each arrow $f: A \to B$ a “restriction” ${\ensuremath{\overline{f}\,}}: A \to A$. One thinks of this ${\ensuremath{\overline{f}\,}}$ as giving the domain of definition: in the case of sets and partial functions, the map ${\ensuremath{\overline{f}\,}}$ is given by $$\overline{f}x= \begin{cases} x & \mbox{ if } f(x) \text{ is defined}\\ \text{undefined} & \text{ otherwise.} \end{cases}$$ There are then four axioms which axiomatize the behavior of these restrictions (see below). There are two advantages of restriction categories when compared to partial map categories. The first is that they are more general than partial map categories. In a partial map category, one needs to have as objects each of the possible domains of definition of the partial functions. In a restriction category, this is not the case, as the domains of definition are expressed by the restriction maps. This is important for the examples considered below. In particular, the canonical example of a differential restriction category will have objects the spaces ${\ensuremath{\mathbb R}\xspace}^n$, and maps the smooth maps defined on open subsets of these spaces. This is not an example of a partial map category, as the open subsets are not objects, but it is naturally a restriction category, with the same restriction as for sets and partial functions. The second advantage is that the theory is completely algebraic. In partial map categories, one deals with equivalence classes of spans and their pullbacks. As a result, they are often difficult to work with directly. In a restriction category, one simply manipulates equations involving the restriction operator, using the four given axioms.
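To make this concrete, the restriction structure on sets and partial functions is easy to model computationally. The following Python sketch is ours, not from the paper: a partial function is represented as a dictionary, `compose` is composition in diagrammatic order, and `restriction` produces the partial identity ${\ensuremath{\overline{f}\,}}$ on the domain of $f$; the sample maps `f`, `g`, `h` are arbitrary choices.

```python
# Illustrative model (not from the paper): sets and partial functions,
# with a partial function as a Python dict.  Composition is diagrammatic,
# matching the paper: fg is "f, followed by g".

def compose(f, g):
    """The composite fg: defined at x when f is, and g is at f(x)."""
    return {x: g[f[x]] for x in f if f[x] in g}

def restriction(f):
    """The restriction of f: the partial identity on f's domain."""
    return {x: x for x in f}

f = {1: 10, 2: 20}
g = {2: 5, 3: 7}
h = {10: 'a', 5: 'b'}

# [R.1]: restriction(f) followed by f is f.
assert compose(restriction(f), f) == f
# [R.2]: restrictions on a common source commute.
assert compose(restriction(f), restriction(g)) == compose(restriction(g), restriction(f))
# [R.3]: the restriction of (restriction(g) followed by f) is
#        restriction(g) followed by restriction(f).
assert restriction(compose(restriction(g), f)) == compose(restriction(g), restriction(f))
# [R.4]: f followed by restriction(h) equals restriction(fh) followed by f.
assert compose(f, restriction(h)) == compose(restriction(compose(f, h)), f)
```

The four asserts spell out, on this particular example, the four axioms of a restriction structure stated in the next definition; note in the [R.4] check that the point $2$, where $fh$ is undefined, is dropped from both sides.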
As cartesian differential categories give a completely algebraic description of the derivatives of smooth maps, bringing these two algebraic theories together is a natural approach to capturing smooth maps which are partially defined. Definition and examples ----------------------- Restriction categories are axiomatized as follows. Note that throughout this paper, we are using diagrammatic order of composition, so that “$f$, followed by $g$”, is written $fg$. Given a category ${\ensuremath{\mathbb X}\xspace}$, a [**restriction structure**]{} on ${\ensuremath{\mathbb X}\xspace}$ gives, for each $A \stackrel{f}{\longrightarrow} B$, a restriction arrow $A \stackrel{{\ensuremath{\overline{f}\,}}}{\longrightarrow} A$ satisfying four axioms: 1. ${\ensuremath{\overline{f}\,}} f = f$; 2. If $dom(f) = dom(g)$ then ${\ensuremath{\overline{g}\,}} {\ensuremath{\overline{f}\,}} = {\ensuremath{\overline{f}\,}} {\ensuremath{\overline{g}\,}}$; 3. If $dom(f) = dom(g)$ then ${\ensuremath{\overline{{\ensuremath{\overline{g}\,}}f}\,}} = {\ensuremath{\overline{g}\,}} {\ensuremath{\overline{f}\,}}$; 4. If $dom(h) = cod(f)$ then $f {\ensuremath{\overline{h}\,}} = {\ensuremath{\overline{fh}\,}} f$. A category with a specified restriction structure is a [**restriction category**]{}. We have already seen two examples of restriction categories: sets and partial functions, and smooth functions defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$. For more examples see [@restrictionI], as well as [@turing], where restriction categories are used to describe categories of partial computable maps. A rather basic fact is that each restriction ${\ensuremath{\overline{f}}}$ is idempotent: we will call such idempotents [**restriction idempotents**]{}. We record this together with some other basic consequences of the definition: \[lemma:restriction\] If [$\mathbb X$]{}is a restriction category then: 1. ${\ensuremath{\overline{f}\,}}$ is idempotent; 2.
${\ensuremath{\overline{f}\,}}{\ensuremath{\overline{fg}\,}} = {\ensuremath{\overline{fg}\,}}$; 3. ${\ensuremath{\overline{f{\ensuremath{\overline{g}\,}}}\,}}={\ensuremath{\overline{fg}\,}}$; \[oldR3\] 4. ${\ensuremath{\overline{{\ensuremath{\overline{f}\,}}}\,}} = {\ensuremath{\overline{f}\,}}$; 5. ${\ensuremath{\overline{{\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}}\,}} = {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}$; 6. If $f$ is monic then ${\ensuremath{\overline{f}\,}} = 1$ (and so in particular ${\ensuremath{\overline{1}\,}} = 1$); \[monics-total\] 7. ${\ensuremath{\overline{f}\,}}g = g$ implies ${\ensuremath{\overline{g}\,}} = {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}$. Left as an exercise. Partial map categories ---------------------- As alluded to in the introduction to this section, an alternative way of axiomatizing categories of partial maps is via spans where one leg is a monic. We recall this notion here. These will be important, as we shall see that rational functions over a commutative rig naturally embed in a larger partial map category. \[stablemonics\] Let ${\ensuremath{\mathbb X}\xspace}$ be a category, and $\mathcal{M}$ a class of monics in ${\ensuremath{\mathbb X}\xspace}$. $\mathcal{M}$ is a [**stable system of monics**]{} in case 1. all isomorphisms are in $\mathcal{M}$; 2. $\mathcal{M}$ is closed under composition; 3. for any $m : B' \rightarrow B \in \mathcal{M}$ and $f : A \rightarrow B$ in ${\ensuremath{\mathbb X}\xspace}$, the following pullback, called an $\mathcal{M}$-pullback, exists and $m' \in \mathcal{M}$: $$\xymatrix{ A' \ar[r]^{f'} \ar[d]_{m'} & B' \ar[d]^m\\ A \ar[r]_f & B\\ }$$ An [**$\mathcal{M}$-Category**]{} is a pair $({\ensuremath{\mathbb X}\xspace},\mathcal{M})$ where ${\ensuremath{\mathbb X}\xspace}$ is a category with a specified stable system of monics $\mathcal{M}$. Given an $\mathcal{M}$-Category, we can define a category of partial maps.
Let $({\ensuremath{\mathbb X}\xspace},\mathcal{M})$ be an $\mathcal{M}$-Category. Define ${{\sf Par}\xspace}({\ensuremath{\mathbb X}\xspace},\mathcal{M})$ to be the category where Obj: : The objects of ${\ensuremath{\mathbb X}\xspace}$ Arr: : $A \stackrel{(m,f)}{\longrightarrow} B$ are classes of spans $(m,f)$, $$\xymatrix{& A' \ar[dl]_m \ar[dr]^f &\\A & & B\\}$$ where $m \in \mathcal{M}$. The classes of spans are quotiented by the equivalence relation $(m,f) \sim (m',f')$ if there is an isomorphism, $\phi$, such that both triangles in the following diagram commute. $$\xymatrix{ & A' \ar@/_1pc/[dl]_m \ar@/_/[drr]^f \ar@{.>}[r]^\phi & A'' \ar@/^/[dll]^{m'} \ar@/^1pc/[dr]^{f'} &\\ A & & & B\\ }$$ Id: : $A \stackrel{(1_A,1_A)}{\longrightarrow} A$ Comp: : By pullback; i.e. given $A \stackrel{(m,f)}{\longrightarrow} B, B \stackrel{(m',f')}{\longrightarrow} C$, the pullback $$\xymatrix{ & & A'' \ar[dl]_{m''} \ar[dr]^{f''}& &\\ & A' \ar[dl]_m \ar[dr]^f & & B' \ar[dl]_{m'} \ar[dr]^{f'} &\\ A & & B & & C\\ }$$ gives a composite $A \stackrel{(m''m,f''f')}{\longrightarrow} C$. (Note that without the equivalence relation on the arrows, the associative law would not hold.) Moreover, this has restriction structure: given an arrow $(m,f)$, we can define its restriction to be $(m,m)$. From [@restrictionI], we have the following completeness result: Every restriction category is a full subcategory of a category of partial maps. However, it is not true that every full subcategory of a category of partial maps is a category of partial maps, so the restriction notion is more general. Joins of compatible maps {#subsecJoins} ------------------------ An important aspect of the theory of restriction categories is the idea of the join of two compatible maps. We first describe what it means for two maps to be compatible, that is, equal where they are both defined.
Two parallel maps $f,g$ in a restriction category are *compatible*, written $f \smile g$, if ${\ensuremath{\overline{f}\,}}g = {\ensuremath{\overline{g}\,}}f$. Note that compatibility is *not* transitive. Recall also the notion of when a map $f$ is less than or equal to a map $g$: $f \leq g$ if ${\ensuremath{\overline{f}\,}}g = f$. This captures the notion of $f$ having the same values as $g$, but a possibly smaller domain of definition. Note that this inequality is in fact anti-symmetric. An important alternative characterization of compatibility is the following: \[lemmaAltComp\] In a restriction category, $$f \smile g \Leftrightarrow {\ensuremath{\overline{f}\,}}g \leq f \Leftrightarrow {\ensuremath{\overline{g}\,}}f \leq g.$$ If $f \smile g$, then ${\ensuremath{\overline{f}\,}}g = {\ensuremath{\overline{g}\,}}f \leq f$. Conversely, if ${\ensuremath{\overline{f}\,}}g \leq f$, then by definition ${\ensuremath{\overline{{\ensuremath{\overline{f}\,}}g}\,}}f = {\ensuremath{\overline{f}\,}}g$, and so ${\ensuremath{\overline{g}\,}}f = {\ensuremath{\overline{g}\,}}{\ensuremath{\overline{f}\,}}f = {\ensuremath{\overline{{\ensuremath{\overline{f}\,}}g}\,}}f = {\ensuremath{\overline{f}\,}}g$. We can now describe what it means to take the join of compatible maps. Intuitively, the join of two compatible maps $f$ and $g$ will be a map which is defined everywhere $f$ and $g$ are, while taking the value of $f$ where $f$ is defined, and the value of $g$ where $g$ is defined. There is no ambiguity, since the maps are compatible. Let [$\mathbb X$]{}be a restriction category.
We say that [$\mathbb X$]{}is a [**join restriction category**]{} if for any family of pairwise compatible maps $(f_i : X \to Y)_{i \in I}$, there is a map $\bigvee_{i \in I} f_i : X \to Y$ such that - for all $i \in I$, $f_i \leq \bigvee_{i \in I} f_i$; - if there exists a map $g$ such that $f_i \leq g$ for all $i \in I$, then $\bigvee f_i \leq g$; (that is, it is the join under the partial ordering of maps in a restriction category) and these joins are compatible with composition: that is, for any $h: Z \to X$, - $h(\bigvee_{i \in I} f_i) = \bigvee_{i \in I}hf_i$. Note that by taking an empty family of compatible maps between objects $X$ and $Y$, we get a “nowhere-defined” map $\emptyset_{X,Y}: X \to Y$ which is the bottom element of the partially ordered set of maps from $X$ to $Y$. Obviously, sets and partial functions have all joins - simply take the union of the domains of the compatible maps. Similarly, continuous functions on open subsets also have joins. Note that the definition only asks for compatibility of joins with composition on the left. In the following proposition, we show that this implies compatibility with composition on the right. \[propJoins\] Let ${\ensuremath{\mathbb X}\xspace}$ be a join restriction category, and $(f_i)_{i \in I}: X \to Y$ a compatible family of arrows. (i) for any $j \in I$, ${\ensuremath{\overline{f_j}\,}} (\bigvee_{i \in I} f_i) = f_j$; (ii) ${\ensuremath{\overline{\bigvee_{i \in I} f_i}\,}} = \bigvee_{i \in I} {\ensuremath{\overline{f_i}\,}}$; (iii) for any $h: Y \to Z$, $(\bigvee_{i \in I} f_i)h = \bigvee_{i \in I} f_ih$. <!-- --> (i) This is simply a reformulation of $f_j \leq \bigvee_i f_i$. (ii) By the universal property of joins, we always have $\bigvee {\ensuremath{\overline{f_i}\,}} \leq {\ensuremath{\overline{\bigvee f_i}\,}}$. Note that this also implies that $\bigvee {\ensuremath{\overline{f_i}\,}}$ is a restriction idempotent, since it is less than or equal to a restriction idempotent.
Now, to show the reverse inequality, consider: $$\begin{aligned} & & {\ensuremath{\overline{\bigvee_{i \in I} f_i}\,}} \bigvee_{j \in I} {\ensuremath{\overline{f_j}\,}} \\ & = & {\ensuremath{\overline{ \bigvee_{j \in I} {\ensuremath{\overline{f_j}\,}} \bigvee_{i \in I} f_i}\,}} \mbox{ since $\bigvee {\ensuremath{\overline{f_j}\,}}$ is a restriction idempotent,} \\ & = & {\ensuremath{\overline{ \bigvee_{j \in I} f_j}\,}} \mbox{ by (i),}\end{aligned}$$ as required. (iii) Again, by the universal property of joins, we automatically have $\bigvee (f_ih) \leq (\bigvee f_i)h$. In this case, rather than show the reverse inequality, we will instead show that their restrictions are equal: if one map is less than or equal to another, and their restrictions agree, then they must be equal. To show that their restrictions are equal, we first show $\bigvee (f_i{\ensuremath{\overline{h}\,}}) = (\bigvee f_i){\ensuremath{\overline{h}\,}}$: $$\begin{aligned} & & \left(\bigvee_{i \in I}f_i\right){\ensuremath{\overline{h}\,}} \\ & = & {\ensuremath{\overline{\left(\bigvee_{j \in I} f_j\right)h}\,}} \left(\bigvee_{i \in I} f_i\right) \mbox{ by {{\bf [R.4]}},} \\ & = & \bigvee_{i \in I} {\ensuremath{\overline{\left(\bigvee_{j \in I}f_j\right)h}\,}} f_i \\ & = & \bigvee_{i \in I} {\ensuremath{\overline{{\ensuremath{\overline{f_i}\,}}\left(\bigvee_{j \in I}f_j\right)h}\,}} f_i \\ & = & \bigvee_{i \in I} {\ensuremath{\overline{f_ih}\,}} f_i \mbox{ by (i),} \\ & = & \bigvee_{i \in I} f_i{\ensuremath{\overline{h}\,}} \mbox{ by {{\bf [R.4]}}.}\end{aligned}$$ Now, we can show that the restrictions of $\bigvee (f_ih)$ and $(\bigvee f_i)h$ are equal: $$\begin{aligned} & & {\ensuremath{\overline{ \bigvee_{i \in I}f_ih}\,}} \\ & = & \bigvee_{i \in I} {\ensuremath{\overline{f_ih}\,}} \mbox{ by (ii),} \\ & = & \bigvee_{i \in I} {\ensuremath{\overline{f_i {\ensuremath{\overline{h}\,}}}\,}} \\ & = & {\ensuremath{\overline{ \bigvee_{i \in I} f_i {\ensuremath{\overline{h}\,}}}\,}} \\ & = &
{\ensuremath{\overline{ \left(\bigvee f_i \right) {\ensuremath{\overline{h}\,}}}\,}} \mbox{ by the result above,}\end{aligned}$$ as required. Cartesian restriction categories -------------------------------- Not surprisingly, cartesian differential categories involve cartesian structure. Thus, to develop the theory which combines cartesian differential categories with restriction categories, it will be important to recall how cartesian structure interacts with restrictions. This was described in [@restrictionIII] where it was noted that the resulting structure was equivalent to the P-categories introduced in [@pCategories]. We recall the basic idea here: Let [$\mathbb X$]{}be a restriction category. A [**restriction terminal object**]{} is an object $T$ in [$\mathbb X$]{}such that for any object $A$, there is a unique total map $!_A : A \longrightarrow T$ which satisfies $!_T = 1_T$. Further, these maps $!$ must satisfy the property that for any map $f : A \longrightarrow B$, $f!_B \leq !_A$, i.e. $f!_B = \overline{f!_B} \ !_A = \overline{f \overline{!_B}} \ !_A = \overline{f} \ !_A$. A [**restriction product**]{} of objects $A,B$ in [$\mathbb X$]{}is defined by total projections $$\begin{aligned} \pi_0 : A \times B \longrightarrow A & & \pi_1 : A \times B \longrightarrow B\end{aligned}$$ satisfying the property that for any object $C$ and maps $f : C \longrightarrow A,g:C \longrightarrow B$ there is a unique pairing map, ${\langle}f,g{\rangle}: C \longrightarrow A \times B$ such that both triangles below exhibit lax commutativity $$\xymatrix{ & C \ar[dl]_{f} \ar[dr]^g \ar@{.>}[d]|{{\langle}f,g{\rangle}} \ar@{}@<2ex>[dl]|{\geq} \ar@{}@<-2ex>[dr]|{\leq}& \\ A & A \times B \ar[l]^{\pi_0} \ar[r]_{\pi_1} & B\\ }$$ that is, $$\begin{aligned} {\langle}f,g{\rangle}\pi_0 = \overline{{\langle}f,g{\rangle}}f & \text{and} & {\langle}f,g{\rangle}\pi_1 = \overline{{\langle}f,g{\rangle}} g. 
\end{aligned}$$ In addition, we ask that ${\ensuremath{\overline{{\langle}f,g{\rangle}}\,}} = {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}$. We require *lax* commutativity as a pairing ${\langle}f,g{\rangle}$ should only be defined as much as both $f$ and $g$ are. A restriction category [$\mathbb X$]{}is a [**cartesian restriction category**]{} if [$\mathbb X$]{}has a restriction terminal object and all restriction products. Clearly, both sets and partial functions, and smooth functions defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$ are cartesian restriction categories. The following contains a number of useful results. \[propCart\] In any cartesian restriction category, (i) ${\langle}f,g{\rangle}\pi_0 = {\ensuremath{\overline{g}\,}}f$ and ${\langle}f,g{\rangle}\pi_1 = {\ensuremath{\overline{f}\,}}g$; (ii) if $e = {\ensuremath{\overline{e}\,}}$, then $e{\langle}f,g{\rangle}= {\langle}ef,g{\rangle}= {\langle}f,eg{\rangle}$; (iii) $f{\langle}g,h{\rangle}= {\langle}fg,fh{\rangle}$; (iv) if $f \leq f'$ and $g \leq g'$, then ${\langle}f,g{\rangle}\leq {\langle}f',g'{\rangle}$; (v) if $f \smile f'$ and $g \smile g'$, then ${\langle}f,g{\rangle}\smile {\langle}f',g'{\rangle}$; (vi) if $f$ is total, then $(f \times g)\pi_1 = \pi_1g$. If $g$ is total, $(f \times g)\pi_0 = \pi_0f$. <!-- --> (i) By the lax commutativity, ${\langle}f,g{\rangle}\pi_0 = {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}} f = {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}f = {\ensuremath{\overline{g}\,}}f$ and similarly with $\pi_1$. (ii) Note that $$e{\langle}f,g{\rangle}\pi_0 = e{\ensuremath{\overline{g}\,}}f = {\ensuremath{\overline{e}\,}}{\ensuremath{\overline{g}\,}}f = {\ensuremath{\overline{{\ensuremath{\overline{e}\,}}g}\,}}f = {\ensuremath{\overline{eg}\,}}f = {\langle}f,eg{\rangle}\pi_0$$ A similar result holds with $\pi_1$, and so by universality of pairing, $e{\langle}f,g{\rangle}= {\langle}f,eg{\rangle}$.
By symmetry, it also equals ${\langle}ef,g{\rangle}$. (iii) Note that $$f{\langle}g,h{\rangle}\pi_0 = f\bar{h}g = {\ensuremath{\overline{fh}\,}}fg = {\langle}fg,fh{\rangle}\pi_0$$ where the second equality is by [[**\[R.4\]**]{}]{}. A similar result holds for $\pi_1$, and so the result follows by universality of pairing. (iv) Consider $$\begin{aligned} & & {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}}{\langle}f',g'{\rangle}\\ & = & {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}{\langle}f',g'{\rangle}\mbox{ by definition}\\ & = & {\langle}f',{\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}g'{\rangle}\mbox{ by (ii)}\\ & = & {\langle}f',{\ensuremath{\overline{f}\,}}g{\rangle}\mbox{ since $g \leq g'$} \\ & = & {\langle}{\ensuremath{\overline{f}\,}}f',g{\rangle}\mbox{ by (ii)} \\ & = & {\langle}f,g{\rangle}\mbox{ since $f \leq f'$}. \end{aligned}$$ Thus ${\langle}f,g{\rangle}\leq {\langle}f',g'{\rangle}$. (v) By Lemma \[lemmaAltComp\], we only need to show that ${\ensuremath{\overline{{\langle}f,g{\rangle}}\,}}{\langle}f',g'{\rangle}\leq {\langle}f,g{\rangle}$. But, again by Lemma \[lemmaAltComp\], we have ${\ensuremath{\overline{f}\,}}f' \leq f$ and ${\ensuremath{\overline{g}\,}}g' \leq g$, so by (iv) we get ${\langle}\bar{f}f',\bar{g}g'{\rangle}\leq {\langle}f,g{\rangle}$ and thus by (ii) and (i), we get ${\ensuremath{\overline{{\langle}f,g{\rangle}}\,}}{\langle}f',g'{\rangle}\leq {\langle}f,g{\rangle}$.
(vi) $$(f \times g)\pi_1 = {\langle}\pi_0f,\pi_1g{\rangle}\pi_1 = {\ensuremath{\overline{\pi_0f}\,}}\pi_1g = \pi_1g$$ If ${\ensuremath{\mathbb X}\xspace}$ is a cartesian restriction category which also has joins, then the two structures are automatically compatible: In any cartesian restriction category with joins, (i) ${\langle}f \vee g, h{\rangle}= {\langle}f,h{\rangle}\vee {\langle}g,h{\rangle}$ and ${\langle}f,\emptyset{\rangle}= {\langle}\emptyset,f{\rangle}= \emptyset$; (ii) $(f \vee g) \times h = (f \times h) \vee (g \times h)$ and $f \times \emptyset = \emptyset \times f = \emptyset$. <!-- --> (i) Since ${\ensuremath{\overline{{\langle}f,\emptyset{\rangle}}\,}} = {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{\emptyset}\,}} = {\ensuremath{\overline{f}\,}}\emptyset = \emptyset$, by Proposition \[propJoins\], we have ${\langle}f,\emptyset{\rangle}= \emptyset$. For pairing, $$\begin{aligned} {\langle}f \vee g, h {\rangle}& = & {\ensuremath{\overline{{\langle}f \vee g, h {\rangle}}}} {\langle}f \vee g, h {\rangle}\\ & = & {\ensuremath{\overline{f \vee g}}} {\ensuremath{\overline{h}}} {\langle}f \vee g, h {\rangle}\\ & = & ({\ensuremath{\overline{f}}} \vee {\ensuremath{\overline{g}}}){\langle}f \vee g, h {\rangle}\\ & = & ({\ensuremath{\overline{f}}}{\langle}f \vee g, h{\rangle}) \vee ({\ensuremath{\overline{g}}}{\langle}f \vee g, h {\rangle}) \\ & = & {\langle}{\ensuremath{\overline{f}}}(f \vee g), h {\rangle}\vee {\langle}{\ensuremath{\overline{g}}}(f \vee g), h {\rangle}\\ & = & {\langle}f,h{\rangle}\vee {\langle}g,h{\rangle}\end{aligned}$$ as required. 
(ii) Using part (i), $f \times \emptyset = {\langle}\pi_0f, \pi_1\emptyset{\rangle}= {\langle}\pi_0f, \emptyset{\rangle}= \emptyset$ and $$\begin{aligned} (f \vee g) \times h & = & {\langle}\pi_0(f \vee g), \pi_1 h {\rangle}\\ & = & {\langle}(\pi_0f) \vee (\pi_0g), \pi_1h {\rangle}\\ & = & {\langle}\pi_0f, \pi_1h {\rangle}\vee {\langle}\pi_0g, \pi_1h {\rangle}\\ & = & (f \times h) \vee (g \times h) \end{aligned}$$ We shall see that this pattern continues with left additive and differential restriction categories: if the restriction category has joins, then it is automatically compatible with left additive or differential structure. Differential restriction categories {#sectionDiff} =================================== Before we define differential restriction categories, we need to define left additive restriction categories. Left additive categories were introduced in [@cartDiff] as a precursor to differential structure. To axiomatize how the differential interacts with addition, one must define categories in which it is possible to add maps, but not have these maps necessarily preserve the addition (as is the case with smooth maps defined on real numbers). The canonical example of one of these left additive categories is the category of commutative monoids with *arbitrary* functions between them. These functions have a natural additive structure given pointwise: $(f+g)(x) := f(x) + g(x)$, as well as $0$ maps: $0(x) := 0$. Moreover, while this additive structure does not interact well with postcomposition by a function, it does with precomposition: $h(f+g) = hf + hg$, and $f0 = 0$. This is essentially the definition of a left additive category. Left additive restriction categories ------------------------------------ To define left additive *restriction* categories, we need to understand what happens when we add two partial maps, as well as the nature of the $0$ maps. Intuitively, the maps in a left additive category are added pointwise.
Thus, the result of adding two partial maps should only be defined where the original two maps were both defined. Moreover, the $0$ maps should be defined everywhere. Thus, the most natural requirement for the interaction of additive and restriction structure is that ${\ensuremath{\overline{f+g}}} = {\ensuremath{\overline{f}}}{\ensuremath{\overline{g}}}$, and that the $0$ maps be total. ${\ensuremath{\mathbb X}\xspace}$ is a [**left additive restriction category**]{} if each ${\ensuremath{\mathbb X}\xspace}(A,B)$ is a commutative monoid with ${\ensuremath{\overline{f+g}}} = {\ensuremath{\overline{f}}}{\ensuremath{\overline{g}}}$, ${\ensuremath{\overline{0}}} = 1$, and furthermore is left additive: $f(g+h) = fg + fh$ and $f0 = {\ensuremath{\overline{f}\,}}0$. It is important to note the difference between the last axiom ($f0 = {\ensuremath{\overline{f}\,}}0$) and its form for left additive categories ($f0 = 0$). $f0$ need not be total, so rather than ask that this be equal to $0$ (which is total), we must instead ask that $f0 = {\ensuremath{\overline{f}\,}}0$. This phenomenon will return when we define differential restriction categories. In general, any time an equational axiom has a variable which occurs on only one side, we must modify the axiom to ensure the variable occurs on both sides, by including the restriction of the variable on the other side. 
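As an illustration, the left additive restriction structure on partial functions into the additive monoid of integers can be modeled as follows. This is our own Python sketch, not from the paper: addition of partial maps is pointwise on the intersection of domains, and the zero map on a set of points is total.

```python
# Illustrative sketch (our own names): partial functions into (int, +),
# modeled as dicts, with the left additive restriction structure.

def compose(f, g):
    """Diagrammatic composite fg."""
    return {x: g[f[x]] for x in f if f[x] in g}

def restriction(f):
    """Partial identity on the domain of f."""
    return {x: x for x in f}

def add(f, g):
    """Pointwise sum, defined only where both f and g are."""
    return {x: f[x] + g[x] for x in f if x in g}

def zero(domain):
    """The total zero map on a given set of points."""
    return {x: 0 for x in domain}

f = {1: 10, 2: 20}
g = {2: 5, 3: 7}
X = {1, 2, 3}

# The restriction of a sum is the composite of the restrictions.
assert restriction(add(f, g)) == compose(restriction(f), restriction(g))
# The zero map is total: its restriction is the identity on X.
assert restriction(zero(X)) == {x: x for x in X}
# Left additivity: h(f+g) = hf + hg.
h = {0: 1, 5: 2}
assert compose(h, add(f, g)) == add(compose(h, f), compose(h, g))
# f0 = restriction(f) 0: the composite f0 need not be total.
assert compose(f, zero({10, 20})) == compose(restriction(f), zero(X))
```

Note in the last assert that $f0$ is only defined on the domain of $f$, which is exactly why the axiom reads $f0 = {\ensuremath{\overline{f}\,}}0$ rather than $f0 = 0$.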
There are two obvious examples of left additive restriction categories: commutative monoids with arbitrary partial functions between them, and the subcategory of these consisting of continuous or smooth functions defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$. Some results about left additive structure: \[propLA\] In any left additive restriction category: (i) $f + g = {\ensuremath{\overline{g}\,}}f + {\ensuremath{\overline{f}\,}}g$; (ii) if $e = {\ensuremath{\overline{e}\,}}$, then $e(f+g) = ef + g = f + eg$; (iii) if $f \leq f', g \leq g'$, then $f + g \leq f' + g'$; (iv) if $f \smile f', g \smile g'$, then $(f+g) \smile (f' + g')$. <!-- --> (i) $$f + g = {\ensuremath{\overline{f + g}\,}}(f+g) = {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}(f + g) = {\ensuremath{\overline{g}\,}}{\ensuremath{\overline{f}\,}}f + {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}g = {\ensuremath{\overline{g}\,}}f + {\ensuremath{\overline{f}\,}}g$$ (ii) $$\begin{aligned} & & f + eg \\ & = & {\ensuremath{\overline{eg}\,}}f + {\ensuremath{\overline{f}\,}}eg \mbox{ by (i)} \\ & = & {\ensuremath{\overline{e}\,}}\, {\ensuremath{\overline{g}\,}}f + {\ensuremath{\overline{e}\,}}{\ensuremath{\overline{f}\,}}g \\ & = & {\ensuremath{\overline{e}\,}}({\ensuremath{\overline{g}\,}}f + {\ensuremath{\overline{f}\,}}g) \\ & = & e(f + g) \mbox{ by (i)}\\ \end{aligned}$$ (iii) Suppose $f \leq f'$, $g \leq g'$. Then: $$\begin{aligned} & & {\ensuremath{\overline{f+g}\,}}(f' + g') \\ & = & {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}(f' + g') \\ & = & {\ensuremath{\overline{g}\,}}{\ensuremath{\overline{f}\,}}f' + {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}g' \\ & = & {\ensuremath{\overline{g}\,}}f + {\ensuremath{\overline{f}\,}}g \mbox{ since $f \leq f', g \leq g'$} \\ & = & f + g \mbox{ by (i)}. \end{aligned}$$ so $(f+g) \leq (f' + g')$. (iv) Suppose $f \smile f'$, $g \smile g'$.
By Lemma \[lemmaAltComp\], it suffices to show that ${\ensuremath{\overline{f+g}\,}}(f' + g') \leq f + g$. By Lemma \[lemmaAltComp\], we have ${\ensuremath{\overline{f}\,}}f' \leq f$ and ${\ensuremath{\overline{g}\,}}g' \leq g$, so by (iii), we can start with $$\begin{aligned} {\ensuremath{\overline{f}\,}}f' + {\ensuremath{\overline{g}\,}}g' & \leq & f + g \\ {\ensuremath{\overline{{\ensuremath{\overline{g}\,}}g'}\,}}{\ensuremath{\overline{f}\,}} f' + {\ensuremath{\overline{{\ensuremath{\overline{f}\,}}f'}\,}}{\ensuremath{\overline{g}\,}}g' & \leq & f + g \\ {\ensuremath{\overline{g}\,}}{\ensuremath{\overline{g'}\,}}{\ensuremath{\overline{f}\,}}f' + {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{g}\,}}g' & \leq & f+ g \mbox{ by {{\bf [R.3]}}} \\ {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}({\ensuremath{\overline{g'}\,}}f' + {\ensuremath{\overline{f'}\,}}g') & \leq & f + g \mbox{ by left additivity} \\ {\ensuremath{\overline{f + g}\,}}(f' + g') & \leq & f + g \mbox{ by (i)} \end{aligned}$$ If ${\ensuremath{\mathbb X}\xspace}$ has joins and left additive structure, then they are automatically compatible: If ${\ensuremath{\mathbb X}\xspace}$ is a left additive restriction category with joins, then: (i) $f + \emptyset = \emptyset$; (ii) $(\bigvee_i f_i) + (\bigvee_j g_j) = \bigvee_{i,j} (f_i+g_j)$. <!-- --> (i) ${\ensuremath{\overline{f + \emptyset}\,}} = {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{\emptyset}\,}} = {\ensuremath{\overline{f}\,}}\emptyset = \emptyset$, so by Proposition \[propJoins\], $f + \emptyset = \emptyset$.
(ii) Consider: $$\begin{aligned} (\bigvee_i f_i)+ (\bigvee_j g_j) & = & {\ensuremath{\overline{(\bigvee_i f_i)+ (\bigvee_j g_j)}}} \left((\bigvee_i f_i)+ (\bigvee_j g_j)\right) \\ & = & (\bigvee_i {\ensuremath{\overline{f_i}}})(\bigvee_j {\ensuremath{\overline{g_j}}})((\bigvee_i f_i)+ (\bigvee_j g_j)) \\ & = & (\bigvee_{i,j} {\ensuremath{\overline{f_i}}}~{\ensuremath{\overline{g_j}}}) ((\bigvee_i f_i)+ (\bigvee_j g_j)) \\ & = & \bigvee_{i,j} \left({\ensuremath{\overline{g_j}}}~{\ensuremath{\overline{f_i}}} (\bigvee_i f_i)+ {\ensuremath{\overline{f_i}}}~{\ensuremath{\overline{g_j}}} (\bigvee_j g_j)\right) \\ & = & \bigvee_{i,j} \left({\ensuremath{\overline{g_j}}}~f_i + {\ensuremath{\overline{f_i}}}~g_j\right) \mbox{ by Proposition \[propJoins\],}\\ & = & \bigvee_{i,j} (f_i + g_j) \mbox{ by Proposition \[propLA\] (i),} \end{aligned}$$ as required. Additive and strongly additive maps ----------------------------------- Before we get to the definition of a differential restriction category, it will be useful to have a slight detour, and investigate the nature of the additive maps in a left additive restriction category. In a left additive category, arbitrary maps need not preserve the addition, in the sense that the equations $$(x+y)f = xf + yf \mbox{ and } 0f = 0$$ are not taken as axioms. Those maps which do preserve the addition (in the above sense) form an important subcategory, and such maps are called additive. Similarly, it will be important to identify which maps in a left additive restriction category are additive. Here, however, we must be a bit more careful in our definition. Suppose we took the above axioms as our definition of additive in a left additive restriction category. In particular, asking for that equality would be asking for the restrictions to be equal, so that $${\ensuremath{\overline{(x+y)f}\,}} = {\ensuremath{\overline{xf + yf}\,}} = {\ensuremath{\overline{xf}\,}}{\ensuremath{\overline{yf}\,}}.$$ That is, $xf$ and $yf$ are defined exactly when $(x+y)f$ is.
Obviously, this is a problem in one direction: it would be nonsensical to ask that $f$ being defined on $x+y$ imply that $f$ is defined on both $x$ and $y$. The other direction seems more logical: asking that if $f$ is defined on $x$ and $y$, then it is defined on $x+y$. That is, in addition to being additive as a function, its domain is also additively closed. Even this, however, is often too strong for general functions. A standard example of a smooth partial function would be something like $2x$, defined everywhere but $x=5$. This map does preserve addition wherever it is defined. But it is not additive in the sense that its domain is not additively closed. Thus, we need a weaker notion of additivity: we merely ask that $(x+y)f$ be *compatible* with $xf + yf$. Of course, the stronger notion, where the domain is additively closed, is also important, and will be discussed further below. Say that a map $f$ in a left additive restriction category is **additive** if for any $x,y$, $$(x+y)f \smile xf + yf \mbox{ and } 0f \smile 0.$$ We shall see below that for total maps, this agrees with the usual definition. We also have the following alternate characterizations of additivity: A map $f$ is additive if and only if for any $x,y$, $${\ensuremath{\overline{xf}\,}}{\ensuremath{\overline{yf}\,}}(x + y)f \leq xf + yf \mbox{ and } 0f \leq 0$$ or $$(x{\ensuremath{\overline{f}\,}} + y{\ensuremath{\overline{f}\,}})f \leq xf + yf \mbox{ and } 0f \leq 0.$$ Use the alternate form of compatibility (Lemma \[lemmaAltComp\]) for the first part, and then [[**\[R.4\]**]{}]{} for the second. In any left additive restriction category, (i) total maps are additive if and only if $(x+y)f = xf + yf$; (ii) restriction idempotents are additive; (iii) additive maps are closed under composition; (iv) if $g \leq f$ and $f$ is additive, then $g$ is additive; (v) 0 maps are additive, and additive maps are closed under addition.
In each case, the 0 axiom is straightforward, so we only show the addition axiom. (i) It suffices to show that if $f$ is total, then ${\ensuremath{\overline{(x+y)f}\,}} = {\ensuremath{\overline{xf + yf}\,}}$. Indeed, if $f$ is total, $${\ensuremath{\overline{(x+y)f}\,}} = {\ensuremath{\overline{x+y}\,}} = {\ensuremath{\overline{x}\,}}{\ensuremath{\overline{y}\,}} = {\ensuremath{\overline{xf}\,}}{\ensuremath{\overline{yf}\,}} = {\ensuremath{\overline{xf + yf}\,}}.$$ (ii) Suppose $e = {\ensuremath{\overline{e}\,}}$. Then by [[**\[R.4\]**]{}]{}, $$(xe + ye){\ensuremath{\overline{e}\,}} = {\ensuremath{\overline{{\ensuremath{\overline{xe + ye}\,}} {\ensuremath{\overline{e}\,}}}\,}} (xe + ye) \leq xe + ye$$ so that $e$ is additive. (iii) Suppose $f$ and $g$ are additive. Then $$\begin{aligned} & & {\ensuremath{\overline{xfg}\,}}{\ensuremath{\overline{yfg}\,}}(x+ y)fg \\ & = & {\ensuremath{\overline{xfg}\,}}{\ensuremath{\overline{yfg}\,}}{\ensuremath{\overline{xf}\,}}{\ensuremath{\overline{yf}\,}}(x + y)fg \\ & \leq & {\ensuremath{\overline{xfg}\,}}{\ensuremath{\overline{yfg}\,}}(xf + yf)g \mbox{ since $f$ is additive,} \\ & \leq & xfg + yfg \mbox{ since $g$ is additive,} \end{aligned}$$ as required. (iv) If $g \leq f$, then $g = {\ensuremath{\overline{g}\,}}f$, and since restriction idempotents are additive, and the composites of additive maps are additive, $g$ is additive. (v) For any $0$ map, $(x+y)0 = {\ensuremath{\overline{x+y}\,}}0 = {\ensuremath{\overline{x}\,}}{\ensuremath{\overline{y}\,}}0 = {\ensuremath{\overline{x}\,}}0 + {\ensuremath{\overline{y}\,}}0 = x0 + y0$ (using the axiom $f0 = {\ensuremath{\overline{f}\,}}0$), so it is additive. For addition, suppose $f$ and $g$ are additive. Then we have $$(x+y)f \smile xf + yf \mbox{ and } (x+y)g \smile xg + yg.$$ Since adding preserves compatibility, this gives $$(x+y)f + (x+y)g \smile xf + yf + xg + yg.$$ Then using *left* additivity of $x, y$, and $x+y$, we get $$(x+y)(f+g) \smile x(f+g) + y(f+g)$$ so that $f+g$ is additive. The one property we do not have is that if $f$ is additive and has a partial inverse $g$, then $g$ is additive.
Indeed, consider the left additive restriction category of arbitrary partial maps from ${\ensuremath{\mathbb Z}\xspace}$ to ${\ensuremath{\mathbb Z}\xspace}$. In particular, consider the partial map $f$ which is only defined on $\{p,q,r\}$, where $r \ne p + q$ (and no two of $p,q,r$ sum to an element of $\{p,q,r\}$), and maps those points to $\{n, m, n + m \}$. In this case, $f$ is additive, since $(p+q)f$ is undefined. However, $f$’s partial inverse $g$, which sends $\{n, m, n+m\}$ to $\{p,q,r\}$, is not additive, since $ng + mg \ne (n+m)g$. The problem is that $f$’s domain is not additively closed, and this leads us to the following definition. Say that a map $f$ in a left additive restriction category is **strongly additive** if for any $x,y$, $$xf + yf \leq (x+y)f \mbox{ and } 0f = 0.$$ An alternate description, which can be useful for some proofs, is the following: \[strongadd\] $f$ is strongly additive if and only if $(x{\ensuremath{\overline{f}\,}} + y{\ensuremath{\overline{f}\,}})f = xf + yf$ and $0f = 0$. $$\begin{aligned} & & xf + yf \leq (x+y)f \\ & \Leftrightarrow & {\ensuremath{\overline{xf + yf}\,}}(x+y)f = xf + yf \\ & \Leftrightarrow & {\ensuremath{\overline{xf}\,}}{\ensuremath{\overline{yf}\,}}(x+y)f = xf + yf \\ & \Leftrightarrow & ({\ensuremath{\overline{xf}\,}}x + {\ensuremath{\overline{yf}\,}}y)f = xf + yf \\ & \Leftrightarrow & (x{\ensuremath{\overline{f}\,}} + y{\ensuremath{\overline{f}\,}})f = xf + yf \mbox{ by {{\bf [R.4]}}.}\end{aligned}$$ Intuitively, the strongly additive maps are the ones which are additive in the previous sense, but whose domains are also closed under addition and contain $0$. Note then that not all restriction idempotents will be strongly additive, and a map less than or equal to a strongly additive map need not be strongly additive. Excepting this, all of the previous results about additive maps hold true for strongly additive ones, and in addition, a partial inverse of a strongly additive map is strongly additive.
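The counterexample above, and the contrast with strong additivity, can be checked mechanically. In the sketch below (illustrative only; the specific values $p=1$, $q=5$, $r=9$ and $n=10$, $m=20$ are chosen so that no two domain points of $f$ sum to a third), $f$ passes the compatibility test while its partial inverse $g$ fails it, and $f$ fails the strong test because its domain is not additively closed:

```python
f = {1: 10, 5: 20, 9: 30}            # r = 9 != 1 + 5; images n, m, n + m
g = {v: k for k, v in f.items()}     # the partial inverse of f

def compatible(a, b):
    return a is None or b is None or a == b

def is_additive(h, points):
    # weak additivity: (x+y)h compatible with xh + yh, and 0h with 0
    for x in points:
        for y in points:
            hx, hy = h.get(x), h.get(y)
            s = None if (hx is None or hy is None) else hx + hy
            if not compatible(h.get(x + y), s):
                return False
    return compatible(h.get(0), 0)

def is_strongly_additive(h, points):
    # xh + yh <= (x+y)h: wherever xh + yh is defined, (x+y)h is too and agrees
    for x in points:
        for y in points:
            hx, hy = h.get(x), h.get(y)
            if hx is not None and hy is not None and h.get(x + y) != hx + hy:
                return False
    return h.get(0) == 0

pts = range(0, 41)
print(is_additive(f, pts))            # True: (1+5)f is undefined, so no clash
print(is_additive(g, pts))            # False: 10g + 20g = 6 but 30g = 9
print(is_strongly_additive(f, pts))   # False: dom(f) not additively closed
```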
In a left additive restriction category, (i) strongly additive maps are additive, and if $f$ is total, then $f$ is additive if and only if it is strongly additive; (ii) $f$ is strongly additive if and only if ${\ensuremath{\overline{f}\,}}$ is strongly additive and $f$ is additive; (iii) identities are strongly additive, and if $f$ and $g$ are strongly additive, then so is $fg$; (iv) $0$ maps are strongly additive, and if $f$ and $g$ are strongly additive, then so is $f+g$; (v) if $f$ is strongly additive and has a partial inverse $g$, then $g$ is also strongly additive. In most of the following proofs, we omit the proof of the $0$ axiom, as it is straightforward. (i) Since $\leq$ implies $\smile$, strongly additive maps are additive, and by previous discussion, if $f$ is total, the restrictions of $xf + yf$ and $(x+y)f$ are equal, so $\smile$ implies $\leq$. (ii) If $f$ is strongly additive, then $f$ is additive by (i). To show that ${\ensuremath{\overline{f}\,}}$ is strongly additive we have: $$\begin{aligned} & & (x{\ensuremath{\overline{f}\,}} + y{\ensuremath{\overline{f}\,}}){\ensuremath{\overline{f}\,}} \\ & = & {\ensuremath{\overline{(x{\ensuremath{\overline{f}\,}} + y{\ensuremath{\overline{f}\,}})f}\,}}(x{\ensuremath{\overline{f}\,}} + y{\ensuremath{\overline{f}\,}}) \mbox{ by {{\bf [R.4]}},} \\ & = & {\ensuremath{\overline{xf + yf}\,}}(x{\ensuremath{\overline{f}\,}} + y{\ensuremath{\overline{f}\,}}) \mbox{ by \ref{strongadd}, as $f$ is strongly additive,} \\ & = & {\ensuremath{\overline{x{\ensuremath{\overline{f}\,}}}\,}}{\ensuremath{\overline{y{\ensuremath{\overline{f}\,}}}\,}}(x{\ensuremath{\overline{f}\,}} + y{\ensuremath{\overline{f}\,}}) \\ & = & x{\ensuremath{\overline{f}\,}} + y{\ensuremath{\overline{f}\,}} \end{aligned}$$ Together with $0{\ensuremath{\overline{f}\,}} = {\ensuremath{\overline{0f}\,}}0 = {\ensuremath{\overline{0}\,}}0 = 0$, this implies, using Lemma \[strongadd\], that ${\ensuremath{\overline{f}\,}}$ is strongly additive.
Conversely, suppose ${\ensuremath{\overline{f}\,}}$ is strongly additive and $f$ is additive. First, observe: $$\begin{aligned} {\ensuremath{\overline{xf + yf}\,}} &=& {\ensuremath{\overline{xf}\,}}{\ensuremath{\overline{yf}\,}}\\ &=& {\ensuremath{\overline{x{\ensuremath{\overline{f}\,}}}\,}}{\ensuremath{\overline{y{\ensuremath{\overline{f}\,}}}\,}}\\ &=& {\ensuremath{\overline{x{\ensuremath{\overline{f}\,}}+y{\ensuremath{\overline{f}\,}}}\,}}\\ &=& {\ensuremath{\overline{(x{\ensuremath{\overline{f}\,}}+y{\ensuremath{\overline{f}\,}}){\ensuremath{\overline{f}\,}}}\,}} \mbox{ by \ref{strongadd}, as ${\ensuremath{\overline{f}\,}}$ is strongly additive}\\ &=& {\ensuremath{\overline{(x{\ensuremath{\overline{f}\,}}+y{\ensuremath{\overline{f}\,}})f}\,}} \end{aligned}$$ This can be used to show: $$\begin{aligned} xf+yf &=& {\ensuremath{\overline{xf+yf}\,}}(xf+yf)\\ &=& {\ensuremath{\overline{(x{\ensuremath{\overline{f}\,}}+y{\ensuremath{\overline{f}\,}})f}\,}}(xf+yf) \mbox{ by the above}\\ &=& {\ensuremath{\overline{xf+yf}\,}}(x{\ensuremath{\overline{f}\,}}+y{\ensuremath{\overline{f}\,}})f \mbox{ as $f$ is additive}\\ &=& {\ensuremath{\overline{(x{\ensuremath{\overline{f}\,}}+y{\ensuremath{\overline{f}\,}})f}\,}}(x{\ensuremath{\overline{f}\,}}+y{\ensuremath{\overline{f}\,}})f \mbox{ by the above}\\ &=& (x{\ensuremath{\overline{f}\,}}+y{\ensuremath{\overline{f}\,}})f. \end{aligned}$$ For the zero case we have: $$\begin{aligned} 0f &=& {\ensuremath{\overline{0f}\,}}0 \mbox{ since $f$ is additive}\\ &=& {\ensuremath{\overline{0{\ensuremath{\overline{f}\,}}}\,}}0\\ &=& {\ensuremath{\overline{0}\,}}0 \mbox{ since ${\ensuremath{\overline{f}\,}}$ is strongly additive}\\ &=& 0 \end{aligned}$$ Thus, by Lemma \[strongadd\], $f$ is strongly additive. (iii) Identities are total and additive, so are strongly additive. Suppose $f$ and $g$ are strongly additive.
Then $$\begin{aligned} & & xfg + yfg \\ & \leq & (xf + yf)g \mbox{ since $g$ strongly additive,} \\ & \leq & (x+y)fg \mbox{ since $f$ strongly additive,} \end{aligned}$$ so $fg$ is strongly additive. (iv) Since any $0$ is total and additive, $0$’s are strongly additive. Suppose $f$ and $g$ are strongly additive. Then $$\begin{aligned} & & x(f+g) + y(f+g) \\ & = & xf + xg + yf + yg \mbox{ by left additivity,} \\ & \leq & (x+y)f + (x+y)g \mbox{ since $f$ and $g$ are strongly additive,} \\ & = & (x+y)(f+g) \mbox{ by left additivity,} \end{aligned}$$ so $f+g$ is strongly additive. (v) Suppose $f$ is strongly additive and has a partial inverse $g$. Using the alternate form of strong additivity, $$\begin{aligned} & & (x{\ensuremath{\overline{g}\,}} + y{\ensuremath{\overline{g}\,}})g \\ & = & (xgf + ygf)g \\ & = & (xg{\ensuremath{\overline{f}\,}} + yg{\ensuremath{\overline{f}\,}})fg \mbox{ since $f$ is strongly additive,} \\ & = & (xg{\ensuremath{\overline{f}\,}} + yg{\ensuremath{\overline{f}\,}}){\ensuremath{\overline{f}\,}} \\ & = & xg{\ensuremath{\overline{f}\,}} + yg{\ensuremath{\overline{f}\,}} \mbox{ since ${\ensuremath{\overline{f}\,}}$ strongly additive,} \\ & = & xg + yg \end{aligned}$$ and $0g = 0fg = 0{\ensuremath{\overline{f}\,}} = 0$, so $g$ is strongly additive. Finally, note that neither additive nor strongly additive maps are closed under joins. For additive, the join of the additive maps $f: \{n,m\} \to \{p,q\}$ and $g: \{n+m\} \to \{r\}$, where $p+q \ne r$, is not additive. For strongly additive, if $f$ is defined on multiples of $2$ and $g$ on multiples of $3$, the domain of their join is not closed under addition, so the join is not strongly additive. Cartesian left additive restriction categories ---------------------------------------------- In a differential restriction category, we will need both cartesian and left additive structure. Thus, we describe here how cartesian and additive restriction structures must interact.
${\ensuremath{\mathbb X}\xspace}$ is a [**cartesian left additive restriction category**]{} if it is both a left additive and cartesian restriction category such that the product functor preserves addition (that is $(f+g) {\times}(h+k) = (f {\times}h) + (g{\times}k)$ and $0 = 0 {\times}0$) and the maps $\pi_0$, $\pi_1$, and $\Delta$ are additive. If ${\ensuremath{\mathbb X}\xspace}$ is a cartesian left additive restriction category, then each object becomes canonically a (total) commutative monoid by $+_X = \pi_0+\pi_1: X {\times}X \to X$ and $0: 1 \to X$. Surprisingly, assuming these total commutative monoids are coherent with the cartesian structure, one can then recapture the additive structure, as the following theorem shows. Thus, in the presence of cartesian restriction structure, it suffices to give additive structure on the total maps to get a cartesian left additive restriction category. \[thmAddCart\] ${\ensuremath{\mathbb X}\xspace}$ is a cartesian left additive restriction category if and only if ${\ensuremath{\mathbb X}\xspace}$ is a cartesian restriction category in which each object is canonically a total commutative monoid, that is, for each object $A$, there are given maps $A \times A \to^{+_A} A$ and $1 \to^{0_A} A$ making $A$ a total commutative monoid, such that the following exchange[^4] axiom holds: $$+_{X {\times}Y} = (X {\times}Y) {\times}(X {\times}Y)\to^{\sf ex} (X {\times}X) {\times}(Y {\times}Y) \to^{+_X {\times}+_Y} X {\times}Y.$$ Given a canonical commutative monoid structure on each object, the left additive structure on ${\ensuremath{\mathbb X}\xspace}$ is defined by: $$\infer[\mbox{add}]{~~A \to_{f + g := {\langle}f,g{\rangle}+_B} B~~}{A \to^f B & A \to^g B} ~~~~~ \infer[\mbox{zero}]{~~A \to_{0_{AB} := !_A 0_B} B~~}{}$$ That this gives a commutative monoid on each ${\ensuremath{\mathbb X}\xspace}(A,B)$ follows directly from the commutative monoid axioms on $B$ and the cartesian structure.
For example, to show $f + 0 = f$, we need to show ${\langle}f,!_A 0_B {\rangle}= f$. Indeed, we have $$\bfig \node a(0,0)[A] \node b(2100,0)[B \times B] \node c(700,-400)[A \times 1] \node d(1400,-800)[B \times 1] \node e(2100,-1200)[B] \arrow[a`b;{\langle}f,!_A0_B{\rangle}] \arrow[a`c;\cong] \arrow[c`b;f \times 0_B] \arrow[c`d;f \times 1] \arrow[d`e;\cong] \arrow[b`e;+_B] \arrow[d`b;1 \times 0_B] \arrow|l|/{@{>}@/^-40pt/}/[a`e;f] \efig$$ Here the right-most shape commutes by one of the commutative monoid axioms for $B$, and the other shapes commute by coherences of the cartesian structure. The other commutative monoid axioms are similar. For the interaction with restriction, $${\ensuremath{\overline{f+g}\,}} = {\ensuremath{\overline{{\langle}f,g{\rangle}+_B}\,}} = {\ensuremath{\overline{{\langle}f,g{\rangle}{\ensuremath{\overline{+_B}\,}}}\,}} = {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}} = {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}},$$ and ${\ensuremath{\overline{0_{AB}}\,}} = {\ensuremath{\overline{!_A 0_B}\,}} = 1$ since $!$ and $0$ are themselves total. For the interaction with composition, $$f(g+h) = f{\langle}g,h{\rangle}+_C = {\langle}fg,fh{\rangle}+_C = fg + fh$$ and $$f0_{BC} = f!_B0_C = {\ensuremath{\overline{f}\,}}!_A0_C = {\ensuremath{\overline{f}\,}}0_{AC}$$ as required. The requirement that $(f + g) \times (h + k) = (f \times h) + (g \times k)$ follows from the exchange axiom: $$\xymatrix@C=4pc @R=2pc{A {\times}C \ar@/_5pc/[rrdd]_{{\langle}f,g{\rangle}+_B {\times}{\langle}h,k{\rangle}+_D} \ar[rr]^{{\langle}f {\times}h, g {\times}k {\rangle}} \ar[dr]^{{\langle}f, g{\rangle}{\times}{\langle}h,k{\rangle}} & & (B {\times}D) {\times}(B {\times}D) \ar[dl]_{\mbox{ex}} \ar[dd]^{+_{B {\times}D}} \\ & (B {\times}B) {\times}(D {\times}D) \ar[dr]^{+_B {\times}+_D} \\ & & B {\times}D}$$ Here the right triangle is the exchange axiom, and the other two shapes commute by the cartesian coherences.
Since $\pi_0$ is total, $\pi_0$ is additive precisely when, for all $f,g : A \to B {\times}C$, $(f+g)\pi_0 = f\pi_0 + g\pi_0$, which is shown by the following diagram: $$\xymatrix@C=4pc @R=2pc{ A \ar@/_5pc/[drr]_{{\langle}f\pi_0,g\pi_0{\rangle}}\ar[r]^{{\langle}f,g{\rangle}} \ar[dr]|{{\langle}{\langle}f\pi_0,g\pi_0{\rangle},{\langle}f\pi_1,g\pi_1{\rangle}{\rangle}} & (B{\times}C) {\times}(B{\times}C) \ar[r]^{+_{(B{\times}C)}} \ar[d]|{\mbox{ex}}& B{\times}C \ar[r]^{\pi_0}& B\\ & (B{\times}B) {\times}(C{\times}C) \ar[ur]_{+_B{\times}+_C} \ar[r]_{\pi_0} & B{\times}B \ar[ur]_{+_B} & }$$ A similar argument shows that $\pi_1$ is additive. Since $\Delta$ is total, $\Delta$ is additive precisely when, for all $f,g: A \to B$, $f\Delta + g\Delta = (f+g)\Delta$. This is shown by the following diagram: $$\xymatrix@C=4pc @R=2pc{ A \ar[r]^{{\langle}f\Delta,g\Delta{\rangle}~~} \ar[d]_{{\langle}f,g{\rangle}} \ar@/_5pc/[dd]_{{\langle}f,g{\rangle}+_{B}} & (B{\times}B){\times}(B{\times}B) \ar[d]|{\mbox{ex}} \ar@/^5pc/@<3ex>[dd]^{+_{B{\times}B}} \\ B{\times}B \ar[d]_{+_{B}} \ar[ur]^{\Delta {\times}\Delta} \ar[dr]_{{\langle}+_{B},+_{B}{\rangle}} \ar[r]^{\Delta} & (B{\times}B){\times}(B{\times}B)\ar[d]|{+_{B}{\times}+_{B}} \\ B \ar[r]_{\Delta} & B{\times}B }$$ \[propCLA\] In a cartesian left additive restriction category: (i) ${\langle}f,g{\rangle}+ {\langle}f',g'{\rangle}= {\langle}f + f',g+g'{\rangle}$ and ${\langle}0,0{\rangle}= 0$; (ii) if $f$ and $g$ are additive, then so is ${\langle}f,g{\rangle}$; (iii) the projections are strongly additive, and if $f$ and $g$ are strongly additive, then so is ${\langle}f,g{\rangle}$; (iv) $f$ is additive if and only if $$(\pi_0 + \pi_1)f \smile \pi_0f + \pi_1f \mbox{ and } \, 0f \smile 0;$$ (that is, in terms of the monoid structure on objects, $(+)(f) \smile (f \times f)(+)$ and $0f \smile 0$), (v) $f$ is strongly additive if and only if $$(\pi_0 + \pi_1)f \geq \pi_0f + \pi_1f \mbox{ and } \, 0f = 0;$$ (that is, $(+)(f) \geq (f \times f)(+)$ and $0f \geq 0$).
Note that $f$ being strongly additive only implies that $+$ and $0$ are lax natural transformations. (i) Since the second term is a pairing, it suffices to show they are equal when post-composed with projections. Post-composing with $\pi_0$, we get $$\begin{aligned} & & ({\langle}f,g{\rangle}+ {\langle}f',g'{\rangle})\pi_0 \\ & = & {\langle}f,g{\rangle}\pi_0 + {\langle}f',g'{\rangle}\pi_0 \mbox{ since $\pi_0$ is additive,} \\ & = & {\ensuremath{\overline{g}\,}}f + {\ensuremath{\overline{g'}\,}}f' \\ & = & {\ensuremath{\overline{g}\,}}{\ensuremath{\overline{g'}\,}}(f + f') \\ & = & {\ensuremath{\overline{g + g'}\,}}(f + f') \\ & = & {\langle}f + f',g + g'{\rangle}\pi_0 \end{aligned}$$ as required. The 0 result is direct. (ii) We need to show $$(x+y){\langle}f,g{\rangle}\smile x{\langle}f,g{\rangle}+ y{\langle}f,g{\rangle};$$ however, since the first term is a pairing, it suffices to show they are compatible when post-composed by the projections. Indeed, $$(x+y){\langle}f,g{\rangle}\pi_0 = (x+y){\ensuremath{\overline{g}\,}}f \smile x{\ensuremath{\overline{g}\,}}f + y{\ensuremath{\overline{g}\,}}f$$ while since $\pi_0$ is additive, $$(x{\langle}f,g{\rangle}+ y{\langle}f,g{\rangle})\pi_0 = x{\langle}f,g{\rangle}\pi_0 + y{\langle}f,g{\rangle}\pi_0 = x{\ensuremath{\overline{g}\,}}f + y{\ensuremath{\overline{g}\,}}f$$ so the two are compatible, as required. Post-composing with $\pi_1$ is similar. (iii) Since projections are additive and total, they are strongly additive. If $f$ and $g$ are strongly additive, $$\begin{aligned} & & x{\langle}f,g{\rangle}+ y{\langle}f,g{\rangle}\\ & = & {\langle}xf,xg{\rangle}+ {\langle}yf,yg{\rangle}\\ & = & {\langle}xf + yf, xg + yg{\rangle}\mbox{ by (i)} \\ & \leq & {\langle}(x+y)f, (x+y)g{\rangle}\mbox{ since $f$ and $g$ are strongly additive,} \\ & = & (x+y){\langle}f,g{\rangle}\end{aligned}$$ so ${\langle}f,g{\rangle}$ is strongly additive. (iv) If $f$ is additive, the condition obviously holds. 
Conversely, if we have the condition, then $f$ is additive, since $$(x+y)f = {\langle}x,y{\rangle}(\pi_0 + \pi_1)f \smile {\langle}x,y{\rangle}(\pi_0f + \pi_1f) = xf + yf$$ as required. (v) Similar to the previous proof. Differential restriction categories {#differential-restriction-categories} ----------------------------------- With cartesian left additive restriction categories defined, we turn to defining differential restriction categories. To do this, we begin by recalling the notion of a cartesian differential category. The idea is to axiomatize the Jacobian of smooth maps. Normally, the Jacobian of a map $f: X \to Y$ gives, for each point of $X$, a linear map $X \to Y$. That is, $D[f]: X \to [X,Y]$. However, we don’t want to assume that our category has closed structure. Thus, uncurrying, we get that the derivative should be of the type $D[f]: X \times X \to Y$. The second coordinate is simply the point at which the derivative is being taken, while the first coordinate is the direction in which this derivative is being evaluated. With this understanding, the first five axioms of a cartesian differential category should be relatively clear. Axioms 6 and 7 are slightly more tricky, but in essence they say that the derivative is linear in its first variable, and that the order of partial differentiation does not matter. For more discussion of these axioms, see [@cartDiff]. A **cartesian differential category** is a cartesian left additive category with a differentiation operation $$\infer{X {\times}X \to_{D[f]} Y}{X \to^f Y}$$ such that 1. $D[f+g] = D[f]+D[g]$ and $D[0]=0$ (additivity of differentiation); 2. ${\langle}g+h,k{\rangle}D[f] = {\langle}g,k{\rangle}D[f] + {\langle}h,k{\rangle}D[f]$ and ${\langle}0,g{\rangle}D[f] = 0$ (additivity of a derivative in its first variable); 3. $D[1] = \pi_0, D[\pi_0] = \pi_0\pi_0$, and $D[\pi_1] = \pi_0\pi_1$ (derivatives of projections); 4. 
$D[{\langle}f,g{\rangle}] = {\langle}D[f],D[g]{\rangle}$ (derivatives of pairings); 5. $D[fg] = {\langle}D[f],\pi_1f {\rangle}D[g]$ (chain rule); 6. ${\langle}{\langle}g,0{\rangle},{\langle}h,k{\rangle}{\rangle}D[D[f]] = {\langle}g,k{\rangle}D[f]$ (linearity of the derivative in the first variable); 7. ${\langle}{\langle}0,h{\rangle},{\langle}g,k{\rangle}{\rangle}D[D[f]] = {\langle}{\langle}0,g{\rangle},{\langle}h,k{\rangle}{\rangle}D[D[f]]$ (independence of partial differentiation). We now give the definition of a differential restriction category. Axioms 8 and 9 are the additions to the above. Axiom 8 says that the differential of a restriction is similar to the derivative of an identity, with the partiality of $f$ now included. Axiom 9 says that the restriction of a differential is nothing more than $1 \times {\ensuremath{\overline{f}}}$: the first component, which is simply the direction in which the derivative is taken, is always total. In addition to these new axioms, one must also modify axioms 2 and 6 to take into account the partiality of the maps involved, and remove the first part of axiom 3 ($D[1] = \pi_0$), since axiom 8 makes it redundant. A **differential restriction category** is a cartesian left additive restriction category with a differentiation operation $$\infer{X {\times}X \to_{D[f]} Y}{X \to^f Y}$$ such that 1. $D[f+g] = D[f]+D[g]$ and $D[0]=0$; 2. ${\langle}g+h,k{\rangle}D[f] = {\langle}g,k{\rangle}D[f] + {\langle}h,k{\rangle}D[f]$ and ${\langle}0,g{\rangle}D[f] = {\ensuremath{\overline{gf}}}0$; 3. $D[\pi_0] = \pi_0\pi_0$, and $D[\pi_1] = \pi_0\pi_1$; 4. $D[{\langle}f,g{\rangle}] = {\langle}D[f],D[g]{\rangle}$; 5. $D[fg] = {\langle}D[f],\pi_1f {\rangle}D[g]$; 6. ${\langle}{\langle}g,0{\rangle},{\langle}h,k{\rangle}{\rangle}D[D[f]] = {\ensuremath{\overline{h}}} {\langle}g,k{\rangle}D[f]$; 7. ${\langle}{\langle}0,h{\rangle},{\langle}g,k{\rangle}{\rangle}D[D[f]] = {\langle}{\langle}0,g{\rangle},{\langle}h,k{\rangle}{\rangle}D[D[f]]$; 8.
$D[{\ensuremath{\overline{f}}}] = (1 \times {\ensuremath{\overline{f}}})\pi_0$; 9. ${\ensuremath{\overline{D[f]}}} = 1 {\times}{\ensuremath{\overline{f}}}$. Of course, any cartesian differential category is a differential restriction category, when equipped with the trivial restriction structure (${\ensuremath{\overline{f}\,}} = 1$ for all $f$). The standard example with a non-trivial restriction is smooth functions defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$; that this is a differential restriction category is readily verified. In the next section, we will present a more sophisticated example (rational functions over a commutative ring). There is an obvious notion of differential restriction functor: If ${\ensuremath{\mathbb X}\xspace}$ and ${\ensuremath{\mathbb Y}\xspace}$ are differential restriction categories, a **differential restriction functor** ${\ensuremath{\mathbb X}\xspace}\to^F {\ensuremath{\mathbb Y}\xspace}$ is a restriction functor such that - $F$ preserves the addition and zeroes of the homsets; - $F$ preserves products strictly: $F(A \times B) = FA \times FB, F1 = 1$, as well as pairings and projections, - $F$ preserves the differential: $F(D[f]) = D[F(f)]$. The differential itself automatically preserves both the restriction ordering and the compatibility relation: \[propDiff\] In a differential restriction category: (i) $D[{\ensuremath{\overline{f}}}g] = (1 {\times}{\ensuremath{\overline{f}}}) D[g]$; (ii) If $f \leq g$ then $D[f] \leq D[g]$; (iii) If $f \smile g$ then $D[f] \smile D[g]$. 
<!-- --> (i) Consider: $$\begin{aligned} & & D[{\ensuremath{\overline{f}}}g] \\ & = & {\langle}D[{\ensuremath{\overline{f}\,}}], \pi_1 {\ensuremath{\overline{f}\,}} {\rangle}D[g] \mbox{ by {{\bf [DR.{5}]}}} \\ & = & {\langle}(1 \times {\ensuremath{\overline{f}\,}}) \pi_0 , \pi_1 {\ensuremath{\overline{f}\,}} {\rangle}D[g] \mbox{ by {{\bf [DR.{8}]}}} \\ & = & {\langle}(1 \times {\ensuremath{\overline{f}\,}}) \pi_0 , (1 \times {\ensuremath{\overline{f}\,}}) \pi_1 {\rangle}D[g] \mbox{ by naturality} \\ & = & (1 \times {\ensuremath{\overline{f}\,}})D[g] \mbox{ by Lemma \ref{propCart}.}\end{aligned}$$ as required. (ii) If $f \leq g$, then $${\ensuremath{\overline{D[f]}\,}} D[g] = (1 \times {\ensuremath{\overline{f}\,}}) D[g] = D[{\ensuremath{\overline{f}\,}}g] = D[f],$$ so $D[f] \leq D[g]$. (iii) If $f \smile g$, then $${\ensuremath{\overline{D[f]}\,}} D[g] = (1 \times {\ensuremath{\overline{f}\,}}) D[g] = D[{\ensuremath{\overline{f}\,}}g] = D[{\ensuremath{\overline{g}\,}}f] = (1 \times {\ensuremath{\overline{g}\,}})D[f] = {\ensuremath{\overline{D[g]}\,}}D[f],$$ so $D[f] \smile D[g]$. Moreover, just as for cartesian and left additive structure, if ${\ensuremath{\mathbb X}\xspace}$ has joins and differential structure, then they are automatically compatible: \[propjoindiffcompat\] In a differential restriction category with joins, (i) $D[\emptyset] = \emptyset$, (ii) $D \left[ \bigvee_i f_i \right] = \bigvee_i D[f_i]$. <!-- --> (i) ${\ensuremath{\overline{D[\emptyset]}\,}} = 1 \times {\ensuremath{\overline{\emptyset}\,}} = \emptyset$, so by Lemma \[propJoins\], $D[\emptyset] = \emptyset$.
(ii) Consider: $$\begin{aligned} & & \bigvee_{i \in I} D[f_i] \\ & = & \bigvee_{i \in I} D\left[{\ensuremath{\overline{f_i}\,}} \bigvee_{j \in I} f_j \right] \mbox{ by Lemma \ref{propJoins}} \\ & = & \bigvee_{i \in I} (1 \times {\ensuremath{\overline{f_i}\,}}) D\left[ \bigvee_{j \in I} f_j \right] \mbox{ by Lemma \ref{propDiff}} \\ & = & \left( 1 \times \bigvee_{i \in I} {\ensuremath{\overline{f_i}\,}} \right) D \left[ \bigvee_{j \in I} f_j \right] \\ & = & \left( 1 \times {\ensuremath{\overline{\bigvee_{i \in I} f_i}\,}} \right) D \left[ \bigvee_{j \in I} f_j \right] \\ & = & {\ensuremath{\overline{D \left[ \bigvee_{i \in I} f_i \right]}\,}} D \left[ \bigvee_{j \in I} f_j \right] \mbox{ by {{\bf [DR.{9}]}}} \\ & = & D \left[ \bigvee_{i \in I} f_i \right] \end{aligned}$$ as required. Linear maps ----------- Just as we had to modify the definition of additive maps for left additive restriction categories, so too do we have to modify linear maps when dealing with differential restriction categories. Recall that in a cartesian differential category, a map is linear if $D[f] = \pi_0f$. If we asked for this in a differential restriction category, we would have $${\ensuremath{\overline{\pi_0f}\,}} = {\ensuremath{\overline{D[f]}\,}} = 1 \times {\ensuremath{\overline{f}\,}} = {\ensuremath{\overline{\pi_1f}\,}},$$ which is never true unless $f$ is total. In contrast to the additive situation, however, there is no obvious preference for one side to be more defined than the other. Thus, a map will be linear when $D[f]$ and $\pi_0f$ are compatible. A map $f$ in a differential restriction category is [**linear**]{} if $$D[f] \smile \pi_0 f.$$ We shall see below that for total $f$, this agrees with the usual definition.
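To see why compatibility, rather than an inequality in either direction, is the appropriate requirement, the domains can be compared concretely. In the following sketch (illustrative only, not from the formal development), $f(x) = 3x$ is restricted to the open interval $(0,10)$: the derivative $D[f](v,x) = 3v$ is defined whenever the *point* $x$ lies in the domain, while $\pi_0 f$ is defined whenever the *direction* $v$ does, so neither domain contains the other, yet the two maps agree on the overlap:

```python
f = lambda x: 3 * x if 0 < x < 10 else None   # a linear partial map

def Df(v, x):
    # D[f]: linear in the direction v, defined where f is defined at x
    return 3 * v if f(x) is not None else None

def pi0f(v, x):
    # pi_0 f: apply f to the direction coordinate
    return f(v)

def compatible(a, b):
    return a is None or b is None or a == b

print(Df(20, 5), pi0f(20, 5))   # 60 None : D[f] defined, pi_0 f not
print(Df(5, 20), pi0f(5, 20))   # None 15 : pi_0 f defined, D[f] not
print(all(compatible(Df(v, x), pi0f(v, x))
          for v in range(-5, 15) for x in range(-5, 15)))   # True: f is linear
```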
We also have the following alternate characterizations of linearity: In a differential restriction category, $$\begin{aligned} & & f \mbox{ is linear} \\ & \Leftrightarrow & {\ensuremath{\overline{\pi_1f}}}\pi_0f \leq D[f] \\ & \Leftrightarrow & {\ensuremath{\overline{\pi_0f}}}D[f] \leq \pi_0f\end{aligned}$$ Use the alternate form of compatibility (Lemma \[lemmaAltComp\]). Linear maps then have a number of important properties. Note one surprise: while additive maps were not closed under partial inverses, linear maps are. \[propAdd\] In a differential restriction category: (i) if $f$ is total, $f$ is linear if and only if $D[f] = \pi_0f$; (ii) if $f$ is linear, then $f$ is additive; (iii) restriction idempotents are linear; (iv) if $f$ and $g$ are linear, so is $fg$; (v) if $g \leq f$ and $f$ is linear, then $g$ is linear; (vi) $0$ maps are linear, and if $f$ and $g$ are linear, so is $f+g$; (vii) projections are linear, and if $f$ and $g$ are linear, so is ${\langle}f,g{\rangle}$; (viii) ${\langle}1,0{\rangle}D[f]$ is linear for any $f$; (ix) if $f$ is linear and has a partial inverse $g$, then $g$ is also linear. <!-- --> (i) It suffices to show that if $f$ is total, ${\ensuremath{\overline{D[f]}\,}} = {\ensuremath{\overline{\pi_0f}\,}}$. 
Indeed, if $f$ is total, $${\ensuremath{\overline{D[f]}\,}} = 1 \times {\ensuremath{\overline{f}\,}} = {\ensuremath{\overline{f}\,}} \times 1 = {\ensuremath{\overline{\pi_0f}\,}}.$$ (ii) For the 0 axiom: $$\begin{aligned} 0f & = & {\ensuremath{\overline{0f}\,}}0f \\ & = & {\ensuremath{\overline{{\langle}0,0{\rangle}\pi_1f}\,}}{\langle}0,0{\rangle}\pi_0f \\ & = & {\langle}0,0{\rangle}{\ensuremath{\overline{\pi_1f}\,}}\pi_0f \mbox{ by {{\bf [R.4]}},} \\ & \leq & {\langle}0,0{\rangle}D[f] \mbox{ since $f$ linear,} \\ & = & {\ensuremath{\overline{0f}\,}}0 \mbox{ by {{\bf [DR.{2}]}},} \\ & \leq & 0 \end{aligned}$$ and for the addition axiom: $$\begin{aligned} {\ensuremath{\overline{(x+y)f}}}(xf + yf) & = & {\ensuremath{\overline{(x+y)f}}} ({\ensuremath{\overline{xf}}}xf + {\ensuremath{\overline{y}}}{\ensuremath{\overline{xf}}}{\ensuremath{\overline{x}}}yf) \\ & = & {\ensuremath{\overline{(x+y)f}}} ({\ensuremath{\overline{xf}}}xf + {\ensuremath{\overline{{\ensuremath{\overline{y}}}xf}}}{\ensuremath{\overline{x}}}yf) \\ & = & {\ensuremath{\overline{(x+y)f}}} ({\ensuremath{\overline{{\langle}x,x{\rangle}\pi_1f}}}{\langle}x,x{\rangle}\pi_0f + {\ensuremath{\overline{{\langle}y,x{\rangle}\pi_1f}}}{\langle}y,x{\rangle}\pi_0f) \\ & = & {\ensuremath{\overline{(x+y)f}}} ({\langle}x,x{\rangle}{\ensuremath{\overline{\pi_1f}}} \pi_0f + {\langle}y,x{\rangle}{\ensuremath{\overline{\pi_1f}}} \pi_0f) \\ & \leq & {\ensuremath{\overline{(x+y)f}}} ({\langle}x,x{\rangle}D[f] + {\langle}y,x{\rangle}D[f]) \mbox{ since $f$ is linear} \\ & = & {\ensuremath{\overline{{\langle}x+y,x{\rangle}\pi_0f}}} {\langle}x+y,x{\rangle}D[f] \mbox{ by {{\bf [DR.{2}]}}} \\ & = & {\langle}x+y,x{\rangle}{\ensuremath{\overline{\pi_0f}}} D[f] \\ & = & {\langle}x+y,x{\rangle}{\ensuremath{\overline{\pi_1f}}} \pi_0f \mbox{ since $f$ is linear} \\ & = & {\ensuremath{\overline{{\langle}x+y,x{\rangle}\pi_1f}}}{\langle}x+y,x{\rangle}\pi_0f \\ & = &
{\ensuremath{\overline{{\ensuremath{\overline{x+y}}}xf}}}{\ensuremath{\overline{x}}}(x+y)f \\ & \leq & (x+y)f \end{aligned}$$ as required. (iii) Suppose $e = {\ensuremath{\overline{e}\,}}$. Then consider $$\begin{aligned} & & {\ensuremath{\overline{\pi_1e}\,}}\pi_0 {\ensuremath{\overline{e}\,}} \\ & = & {\ensuremath{\overline{\pi_1e}\,}}{\ensuremath{\overline{\pi_0 e}\,}} \pi_0 \\ & \leq & {\ensuremath{\overline{\pi_1e}\,}}\pi_0 \\ & = & {\langle}\pi_0e,\pi_1e{\rangle}\pi_0 \\ & = & (1 \times e)\pi_0 \\ & = & D[e] \end{aligned}$$ so that $e$ is linear. (iv) Suppose $f$ and $g$ are linear; then consider $$\begin{aligned} D[fg] & = & {\langle}D[f],\pi_1f{\rangle}D[g] \\ & \geq & {\langle}{\ensuremath{\overline{\pi_1f}}}\pi_0f, \pi_1f{\rangle}{\ensuremath{\overline{\pi_1g}}} \pi_0g \mbox{ since $f$ and $g$ are linear} \\ & = & {\ensuremath{\overline{{\langle}{\ensuremath{\overline{\pi_1f}}}\pi_0f, \pi_1f{\rangle}\pi_1g}}} {\langle}{\ensuremath{\overline{\pi_1f}}}\pi_0f, \pi_1f{\rangle}\pi_0g \mbox{ by {{\bf [R.4]}}} \\ & = & {\ensuremath{\overline{{\ensuremath{\overline{\pi_1f}}} {\ensuremath{\overline{\pi_0f}}} \pi_1 fg}}} {\ensuremath{\overline{\pi_1f}}} {\ensuremath{\overline{\pi_1f}}} \pi_0fg \\ & = & {\ensuremath{\overline{\pi_1f}}}{\ensuremath{\overline{\pi_1fg}}}{\ensuremath{\overline{\pi_0f}}}\pi_0 fg \\ & = & {\ensuremath{\overline{\pi_1 fg}}} \pi_0 fg \end{aligned}$$ (v) If $g \leq f$, then $g = {\ensuremath{\overline{g}\,}}f$; since restriction idempotents are linear and the composite of linear maps is linear, $g$ is linear. (vi) Since $D[0] = 0 = \pi_00$, $0$ is linear.
Suppose $f$ and $g$ are linear; then consider $$\begin{aligned} {\ensuremath{\overline{\pi_0(f+g)}}} D[f+g] & = & {\ensuremath{\overline{\pi_0f + \pi_0g}}}(D[f] + D[g]) \\ & = & {\ensuremath{\overline{\pi_0f}}}{\ensuremath{\overline{\pi_0g}}} (D[f] + D[g]) \\ & = & {\ensuremath{\overline{\pi_0f}}}D[f] + {\ensuremath{\overline{\pi_0g}}}D[g] \\ & = & {\ensuremath{\overline{\pi_1f}}}\pi_0f + {\ensuremath{\overline{\pi_1g}}}\pi_0g \mbox{ since $f$ and $g$ are linear} \\ & = & {\ensuremath{\overline{\pi_1f}}}{\ensuremath{\overline{\pi_1g}}}\pi_0(f + g) \\ & \leq & \pi_0(f+g) \end{aligned}$$ as required. (vii) By [[**\[DR.[3]{}\]**]{}]{}, projections are linear. Suppose $f$ and $g$ are linear; then consider $$\begin{aligned} D[{\langle}f,g{\rangle}] & = & {\langle}D[f],D[g] {\rangle}\\ & \geq & {\langle}{\ensuremath{\overline{\pi_1f}}}\pi_0f, {\ensuremath{\overline{\pi_1g}}}\pi_0g {\rangle}\mbox{ since $f$ and $g$ are linear} \\ & = & {\ensuremath{\overline{\pi_1f}}}{\ensuremath{\overline{\pi_1g}}} \pi_0{\langle}f,g{\rangle}\\ & = & {\ensuremath{\overline{{\ensuremath{\overline{\pi_1f}}}\pi_1g}}} \pi_0{\langle}f,g{\rangle}\\ & = & {\ensuremath{\overline{{\ensuremath{\overline{\pi_1{\ensuremath{\overline{f}}}}}}\pi_1{\ensuremath{\overline{g}}}}}} \pi_0{\langle}f,g{\rangle}\\ & = & {\ensuremath{\overline{\pi_1{\ensuremath{\overline{f}}}{\ensuremath{\overline{g}}}}}} \pi_0{\langle}f,g{\rangle}\mbox{ by {{\bf [R.4]}}} \\ & = & {\ensuremath{\overline{\pi_1 {\langle}f,g{\rangle}}}} \pi_0{\langle}f,g{\rangle}\end{aligned}$$ as required. (viii) The proof is identical to that for total differential categories: $$\begin{aligned} D[{\langle}1,0{\rangle}D[f]] & = & {\langle}D[{\langle}1,0{\rangle}], \pi_1{\langle}1,0{\rangle}{\rangle}D[D[f]] \\ & = & {\langle}{\langle}\pi_0,0{\rangle}, {\langle}\pi_1, 0{\rangle}{\rangle}D[D[f]] \\ & = & {\langle}\pi_0,0{\rangle}D[f] \mbox{ by {{\bf [DR.{6}]}}} \\ & = & \pi_0{\langle}1,0{\rangle}D[f] \end{aligned}$$ as required. 
(ix) If $g$ is the partial inverse of a linear map $f$, then $$\begin{aligned} D[g] & \geq & ({\ensuremath{\overline{g}\,}} \times {\ensuremath{\overline{g}\,}})D[g] \\ & = & (gf \times gf)D[g] \\ & = & (g \times g)(f \times f)D[g] \\ & = & (g \times g){\langle}\pi_0f, \pi_1f{\rangle}D[g] \\ & = & (g \times g){\langle}{\ensuremath{\overline{\pi_1f}\,}}\pi_0f, \pi_1f{\rangle}D[g] \\ & = & (g \times g){\langle}{\ensuremath{\overline{\pi_0f}\,}}D[f], \pi_1f{\rangle}D[g] \mbox{ since $f$ is linear,} \\ & = & (g \times g){\ensuremath{\overline{\pi_0f}\,}}{\langle}D[f],\pi_1f{\rangle}D[g] \\ & = & (g \times g){\ensuremath{\overline{\pi_0f}\,}}D[fg] \mbox{ by {{\bf [DR.{5}]}},} \\ & = & (g \times g){\ensuremath{\overline{\pi_0f}\,}}D[{\ensuremath{\overline{f}\,}}] \\ & = & (g \times g){\ensuremath{\overline{\pi_0f}\,}}(1 \times {\ensuremath{\overline{f}\,}})\pi_0 \mbox{ by {{\bf [DR.{8}]}},} \\ & = & {\ensuremath{\overline{(g \times g)\pi_0f}\,}} (g \times g)(1 \times {\ensuremath{\overline{f}\,}})\pi_0 \mbox{ by {{\bf [R.4]}},} \\ & = & {\ensuremath{\overline{{\ensuremath{\overline{\pi_1g}\,}}\pi_0gf}\,}} (g \times g)\pi_0 \\ & = & {\ensuremath{\overline{\pi_1g}\,}}{\ensuremath{\overline{\pi_0{\ensuremath{\overline{g}\,}}}\,}}{\ensuremath{\overline{\pi_1g}\,}}\pi_0g \\ & = & {\ensuremath{\overline{\pi_1g}\,}}\pi_0 g \end{aligned}$$ as required. Note that the join of linear maps need not be linear. Indeed, consider the linear partial maps $2x: (0,2) \to (0,4)$ and $3x: (3,5) \to (9,15)$. If their join were linear, it would be additive; but this fails, since $2(1.75) + 2(1.75) = 7$, while the join sends $1.75 + 1.75 = 3.5$ to $3(3.5) = 10.5$. However, the join of linear maps is a standard concept of analysis: If $f$ is a finite join of linear maps, say that $f$ is **piecewise linear**. An interesting result from [@cartDiff] is the nature of the differential of additive maps.
We get a similar result in our context: If $f$ is additive, then $D[f]$ is additive and $$D[f] \smile \pi_0{\langle}1,0{\rangle}D[f];$$ if $f$ is strongly additive, then $D[f]$ is strongly additive and $$D[f] \leq \pi_0{\langle}1,0{\rangle}D[f].$$ The proof that $f$ being (strongly) additive implies $D[f]$ (strongly) additive is the same as for total differential categories ([@cartDiff], pg. 19) with $\smile$ or $\leq$ replacing $=$ when one invokes the additivity of $f$. The form of $D[f]$ in each case, however, takes a bit more work. We begin with a short calculation: $${\langle}0,\pi_1{\rangle}{\ensuremath{\overline{\pi_1f}\,}} = {\ensuremath{\overline{{\langle}0,\pi_1{\rangle}\pi_1f}\,}} {\langle}0,\pi_1{\rangle}= {\ensuremath{\overline{\pi_1f}\,}} {\langle}0,\pi_1{\rangle}$$ and $${\langle}\pi_0,0{\rangle}{\ensuremath{\overline{\pi_1f}\,}} = {\ensuremath{\overline{{\langle}\pi_0,0{\rangle}\pi_1 f }\,}} {\langle}\pi_0,0{\rangle}= {\ensuremath{\overline{0f}\,}}{\langle}\pi_0,0{\rangle}.$$ Now, if $f$ is additive, we have: $$\begin{aligned} & & {\ensuremath{\overline{\pi_0 {\langle}1,0{\rangle}D[f]}\,}} D[f] \\ & = & {\ensuremath{\overline{{\langle}\pi_0,0{\rangle}\pi_1f}\,}} D[f] \\ & = & {\ensuremath{\overline{0f}\,}}{\ensuremath{\overline{{\langle}\pi_0,0{\rangle}}\,}} D[f] \mbox{ by the second calculation above,} \\ & = & {\ensuremath{\overline{0f}\,}}D[f] \\ & = & {\ensuremath{\overline{0f}\,}}{\ensuremath{\overline{\pi_1f}\,}}D[f] \\ & = & {\ensuremath{\overline{0f}\,}}{\ensuremath{\overline{\pi_1f}\,}}({\langle}0,\pi_1{\rangle}+ {\langle}\pi_0,0{\rangle})D[f] \\ & = & ({\ensuremath{\overline{\pi_1f}\,}}{\langle}0,\pi_1{\rangle}+ {\ensuremath{\overline{0f}\,}}{\langle}\pi_0,0{\rangle})D[f] \\ & = & ({\langle}0,\pi_1{\rangle}{\ensuremath{\overline{\pi_1f}\,}} + {\langle}\pi_0,0{\rangle}{\ensuremath{\overline{\pi_1f}\,}})D[f] \mbox{ by both calculations above,} \\ & \leq & {\langle}0,\pi_1{\rangle}D[f] + {\langle}\pi_0,0{\rangle}D[f] \mbox{ since $D[f]$ is
additive,} \\ & = & {\ensuremath{\overline{\pi_1f}\,}}0 + {\langle}\pi_0,0{\rangle}D[f] \mbox{ by {{\bf [DR.{2}]}},} \\ & \leq & 0 + {\langle}\pi_0,0{\rangle}D[f] \\ & = & \pi_0{\langle}1,0{\rangle}D[f] \end{aligned}$$ so that $D[f] \smile \pi_0 {\langle}1,0{\rangle}D[f]$, as required. If $f$ is strongly additive, consider $$\begin{aligned} & & {\ensuremath{\overline{D[f]}\,}}{\langle}\pi_0,0{\rangle}D[f] \\ & = & {\ensuremath{\overline{\pi_1f}\,}}{\langle}\pi_0,0{\rangle}D[f] \\ & = & {\ensuremath{\overline{\pi_1f}\,}}0 + {\langle}\pi_0,0{\rangle}D[f] \\ & = & {\langle}0,\pi_1{\rangle}D[f] + {\langle}\pi_0,0{\rangle}D[f] \\ & = & ({\langle}0,\pi_1{\rangle}{\ensuremath{\overline{\pi_1f}\,}} + {\langle}\pi_0,0{\rangle}{\ensuremath{\overline{\pi_1f}\,}}) D[f] \mbox{ since $D[f]$ is strongly additive,} \\ & = & {\ensuremath{\overline{\pi_1f}\,}}{\ensuremath{\overline{0f}\,}}({\langle}0,\pi_1{\rangle}+ {\langle}\pi_0,0{\rangle})D[f] \mbox{ by the calculations above,} \\ & = & {\ensuremath{\overline{\pi_1f}\,}}{\ensuremath{\overline{0}\,}}(1)D[f] \mbox{ since $f$ strongly additive,} \\ & = & {\ensuremath{\overline{\pi_1f}\,}}D[f] \\ & = & D[f] \end{aligned}$$ so that $ D[f] \leq \pi_0{\langle}1,0{\rangle}D[f]$, as required. Any differential restriction category has the following differential restriction subcategory: If ${\ensuremath{\mathbb X}\xspace}$ is a differential restriction category, then ${\ensuremath{\mathbb X}\xspace}_0$, consisting of the maps which preserve 0 if it is in their domain (i.e., satisfying $0f \leq 0$), is a differential restriction subcategory. 
The result is immediate, since the differential has this property: $${\langle}0,0{\rangle}D[f] = {\ensuremath{\overline{0f}\,}}0 \leq 0.$$ Finally, note that any differential restriction functor preserves additive, strongly additive, and linear maps: \[diffFunctors\] If $F$ is a differential restriction functor, then (i) $F$ preserves additive maps; (ii) $F$ preserves strongly additive maps; (iii) $F$ preserves linear maps. Since any restriction functor preserves $\leq$ and $\smile$, the result follows automatically. Rational functions {#rat} ================== Thus far, we have only seen a single, analytic example of a differential restriction category. This section rectifies this situation by presenting a class of examples of differential restriction categories with a more algebraic flavour. Rational functions over a commutative ring have an obvious formal derivative. Thus, rational functions are a natural candidate for differential structure. Moreover, rational functions have an aspect of partiality: one thinks of a rational function as being undefined at its poles – that is, wherever the denominator is zero. To capture this partiality, we provide a very general construction of rational functions from which we extract a (partial) Lawvere theory of rational functions for any commutative rig and hence, in particular, for any commutative ring. We will then show that, for each commutative ring $R$, this category of rational functions over $R$ is a differential restriction category. Moreover, we will also show that these categories of rational functions embed into the partial map category of affine schemes with respect to localizations. Thus, we relate these categories of rational functions to categories which are of traditional interest in algebraic geometry. The fractional monad {#rat:frac} -------------------- In order to provide a general categorical account of rational functions, it is useful to first have a monadic construction for fractions.
When a construction is given by a monad, not only can one recover substitution – as composition in the Kleisli category – but also one has the whole category of algebras in which to interpret structures. The main difficulty with the construction of fractions is that, to start with, one has to find both an algebraic interpretation of the construction, and a setting where it becomes a monad. A formal fraction is a pair $(a,b)$, which one thinks of as $\frac{a}{b}$, with addition and multiplication defined as expected for fractions. If one starts with a commutative ring and builds these formal fractions, the very first peculiarity one encounters is that, to remain algebraic, one must allow the pair $(a,0)$ into the construction: that is, one must allow division by zero. Allowing division by zero, $\frac{a}{0}$, introduces a number of problems. For example, because $\frac{a}{0} + \frac{-a}{0} = \frac{0}{0}$ and not, as one would like, $\frac{0}{1}$, one loses negatives. One can, of course, simply abandon negatives and settle for working with commutative rigs. However, this does not resolve all the problems. Without cancellation, fractions under the usual addition and multiplication will not be a rig: binary distributivity of multiplication over addition will fail – as will the nullary distribution (that is $0 \cdot x=0$). Significantly, recovering the binary distributive law requires only a limited ability to perform cancellation: one needs precisely the equality $\frac{a}{a^2} = \frac{1}{a}$. By imposing this equality, one can recover, from the fraction construction applied to a rig, a [*weak*]{} rig – weak because the nullary distributive law has been lost (although the equalities $0 \cdot 0 = 0$ and $0 \cdot x = 0 \cdot x^2$ are retained). As we shall show below, this construction of fractions does then produce a monad on the category of weak rigs.
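The loss of negatives described above can be seen directly over the integers. The following small sketch (illustration only, using raw formal pairs) computes $\frac{a}{0} + \frac{-a}{0}$:

```python
# A tiny sketch over the integers of the problem described above:
# with formal fractions, a/0 + (-a)/0 comes out as 0/0 rather than
# the additive unit 0/1, so negatives are lost once division by
# zero is admitted.

def f_add(x, y):
    """Formal fraction addition: (r,s) + (p,q) = (rq + sp, sq)."""
    (r, s), (p, q) = x, y
    return (r * q + s * p, s * q)

a = (5, 0)        # the formal fraction "5/0"
neg_a = (-5, 0)   # the formal fraction "-5/0"

assert f_add(a, neg_a) == (0, 0)   # 0/0, not the additive unit (0, 1)
```

Since $(0,0)$ is not the additive unit $(0,1)$, the element $(5,0)$ has no additive inverse among formal fractions.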
Furthermore, the algebras for this monad, [*fractional rigs*]{}, can be used to provide a general description of rational functions. An algebraic structure, closely related to our notion of a fractional rig, which was proposed in order to solve very much the same sort of problems, is that of a [*wheel*]{} [@wheel]. Wheels also arise from formal fraction constructions, but the equalities imposed on these fractions are formulated differently. In particular, this means that the monadic properties over weak rigs – which are central to the development below – do not have a counterpart for wheels. Nonetheless, the theory developed here has many parallels in the theory of wheels. Certainly the theory of wheels illustrates the rich possibilities for algebraic structures which can result from allowing division by zero, and there is a nice discussion of the motivation for studying such structures in [@wheel]. Technically a wheel, as proposed in [@wheel], does not satisfy the binary distributive law (instead, it satisfies $(x+y)z +0z = xz + yz$ – where notably $0z \not= 0$ in general) and in this regard it is a weaker notion than a fractional rig. A wheel also has an involution with $x^{**} = x$, while fractional rigs have a star operation satisfying the weaker requirement $x^{***} = x^{*}$. Thus, the structures are actually incomparable, although they certainly have many common features. A [**weak commutative rig**]{} $R = (U(R),\cdot,+,1,0)$ (where $U(R)$ is the underlying set) is a set with two commutative associative operations, $\_\cdot\_$ with unit $1$, and $\_ + \_$ with unit $0$ which satisfies the binary distributive law $x \cdot (y + z) = x \cdot y + x \cdot z$, has $0 \cdot 0 = 0$, and $0 \cdot x = 0 \cdot x \cdot x$ (but in which the nullary distributive law is not required, so in general $x \cdot 0 \not= 0$). Weak rigs with evident homomorphisms form a category ${\bf wCRig}$.
For convenience, when manipulating the terms of a (weak) rig, we shall tend to drop the multiplication symbol, writing $x \cdot y$ simply by juxtaposition as $xy$. Notice that there is a significant difference between a weak rig and a rig: a weak rig $R$ can have a non-trivial “zero ideal”, $R_0 = \{ 0r |r \in R \}$. Clearly $0 \in R_0$, and it is closed to addition and multiplication. In fact, $R_0$ itself is a weak rig with the peculiar property that $0=1$. To convince the reader that weak rigs with $0=1$ are a plausible notion, consider the natural numbers with the addition [*and*]{} multiplication given by maximum: this is a weak rig in which necessarily the additive and multiplicative units coincide. The fact that, in this example, the addition and multiplication are the same is not a coincidence: \[zero-ideal\] In a weak commutative rig $R$, in which $0=1$, we have: (i) Addition and multiplication are equal: $x+y = x y$; (ii) Addition – and so multiplication – is idempotent, making $R$ into a join semilattice (where $x \leq y$ if $x+y=y$). [[Proof:]{}]{} When $1=0$, to show that $x+y = x y$ it is useful to first observe that both addition and multiplication are idempotent: $$x+x = x0+x0 = x(0+0) = x0 = x ~~~\mbox{and}~~~ xx = 0x0x = 0xx=0x=x.$$ Now we have the following calculation: $$\begin{aligned} x+y & = & (x+y)(x+y) = x^2 +xy +yx +y^2 = x + xy +y \\ & = & x(1 +y) +y = x(0+y) + y = xy + y \\ & = & (x+1)y = (x+0)y = xy \end{aligned}$$ Thus, one now has a join semilattice determined by this operation. Both rigs and rings, of course, are weak rigs in which $R_0 = \{ 0 \}$.
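The max-plus style example above is easy to verify mechanically. Here is a small brute-force check (illustration only) of the claim that the natural numbers with `max` as both operations form a weak commutative rig in which $0=1$, addition equals multiplication, and both are idempotent:

```python
# The natural numbers with max as BOTH addition and multiplication:
# a weak commutative rig with 0 = 1, checked on a small sample.

add = mul = max          # both operations are max
zero = one = 0           # the additive and multiplicative units coincide

sample = range(6)
for x in sample:
    assert add(x, zero) == x and mul(x, one) == x          # units
    assert add(x, x) == x                                  # idempotent
    assert mul(zero, mul(x, x)) == mul(zero, x)            # 0x = 0xx
    for y in sample:
        assert add(x, y) == mul(x, y)                      # x + y = xy
        for z in sample:
            # binary distributivity: x(y + z) = xy + xz
            assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
```

Of course a finite sample is not a proof; the lemma above gives the general argument.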
Define the [**fractions**]{}, ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(R)$, of a weak commutative rig $R$ as the set of pairs $U(R) {\times}U(R)$ modulo the equivalence relation generated by $(r,as) \sim (ar,a^2s)$, with the following “fraction” operations: $$(r,s) + (r',s') := (r s' + s r',s s'), \qquad 0 := (0,1),$$ $$(r,s) \cdot (r',s') := (r r',s s'), \qquad 1 := (1,1).$$ Now it is not at all obvious that this structure is, with this equivalence, a weak commutative rig. To establish this, it is useful to analyze the equivalence relation more carefully. We shall, as is standard, write $a | s$ to mean $a$ divides $s$, in the sense that there is an $s'$ with $s = a s'$. We may then write the generating relations for the equivalence above as $(r,s) \rightarrowtriangle_a (r',s')$ where $r' =a r$, $s' = a s$ and $a | s$. Furthermore, we shall say $a$ [**iteratively divides**]{} $r$, written $a |^{*} r$, in case there is a decomposition $a = a_1 \dots a_n$ such that $a_1 | a_2 \dots a_n r$, and $a_2 | a_3 \dots a_n \cdot r$, and ... , and $a_n | r$. Then define $(r,s) \rightarrowtriangle^{*}_a (r',s')$ to mean $r' = a r$, $s' = a s$ and $a |^{*} s$. Observe that to say $(r,s) \rightarrowtriangle^*_a (r',s')$ is precisely to say there is a decomposition $a=a_1 \dots a_n$ such that $$(r,s) \rightarrowtriangle_{a_n} (a_n r, a_n s) \rightarrowtriangle_{a_{n-1}} \dots \rightarrowtriangle_{a_1} (a_1 \dots a_n r,a_1 \dots a_n s) = (r',s').$$ Thus $\_ \rightarrowtriangle^{*} \_$ is just the transitive reflexive closure of $\_ \rightarrowtriangle \_$, the generating relation of the equivalence.
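The fraction operations, and the role of the generating expansion $(r,s) \rightarrowtriangle_b (br,bs)$, can be exercised concretely. The sketch below (raw representative pairs over the natural numbers, not equivalence classes) checks that binary distributivity holds up to exactly one expansion step by the denominator:

```python
# Fraction operations on raw pairs over the natural numbers.

def f_add(x, y):
    (r, s), (p, q) = x, y
    return (r * q + s * p, s * q)

def f_mul(x, y):
    (r, s), (p, q) = x, y
    return (r * p, s * q)

ZERO, ONE = (0, 1), (1, 1)

a, c, e = (1, 2), (3, 4), (5, 6)

# (a,b)((c,d)+(e,f)) and (a,b)(c,d)+(a,b)(e,f) are NOT equal on the
# nose; they differ by a single expansion by the denominator b = 2:
lhs = f_mul(a, f_add(c, e))
rhs = f_add(f_mul(a, c), f_mul(a, e))
b = a[1]
assert rhs == (b * lhs[0], b * lhs[1])   # rhs is lhs expanded by b
assert lhs[1] % b == 0                   # b divides the denominator
```

This is precisely the calculation given later for binary distributivity, where the two sides are related by $\rightarrowtriangle_b$.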
Next, say that $(r_1,s_1) \sim (r_2,s_2)$ if and only if there is a $(r_0,s_0)$ and $a,b \in R$ such that $$\xymatrix@=10pt{ & (r_0,s_0) & \\ (r_1,s_1) \ar[ur]_{*}^a & & (r_2,s_2) \ar[ul]^{*}_b }.$$ Then we have: For any weak commutative rig, the relation $(r_1,s_1) \sim (r_2,s_2)$ on $R {\times}R$ is the equivalence relation generated by $\rightarrowtriangle$. Furthermore, it is a congruence with respect to fraction addition and multiplication, turning $R {\times}R/\sim$ into a weak commutative rig ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(R)$. [[Proof:]{}]{}That $\sim$ contains the generating relations and is contained in the equivalence relation generated by the generating relations is clear. That it is symmetric and reflexive is also clear. What is less clear is that it is transitive: for that we need the transitivity of $\rightarrowtriangle^*$ – which is immediate – and the ability to pushout the generating relations with generating relations: $$\xymatrix@=10pt{ & (r,s) \ar[dl]_a \ar[dr]^b \\ (a r,a s) \ar[dr]_b & & (b r,b s) \ar[dl]^a \\ & (a b r, a b s) }$$ This shows that it is an equivalence relation. To see that this is a congruence with respect to the fraction operations it suffices (given symmetries) to show that if $(r,s) \rightarrowtriangle_a (a r,a s) = (r',s')$ then $(r,s) + (p,q) \rightarrowtriangle_a (r',s') + (p,q)$, and similarly for multiplication. That this works for multiplication is straightforward. For addition we have: $$(r,s) + (p,q) = (r q + s p, s q) \rightarrowtriangle_a (a (r q + s p),a s q) = (a r,a s) + (p,q)= (r',s') + (p,q)$$ where $a | s q$ as $a | s$. Finally, we must show that this is a weak commutative rig. It is clear that the multiplication has unit $(1,1)$, and is commutative and associative.
Similarly for addition it is clearly commutative, the unit is $(0,1)$ as: $$\begin{aligned} (0,1)+(a,b) & = & (0b+a,b) \rightarrowtriangle_b ((0b+a)b,b^2) = (0b^2 + ab,b^2) = (0b+ab,b^2) \\ & = & ((0+a)b,b^2) = (ab,b^2) \leftarrowtriangle_b (a,b).\end{aligned}$$ Furthermore, $(0,1) (0,1) = (0,1)$ and $(0,1)(a,b)= (0,1)(a,b)^2$ as: $$(0,1)(a,b) = (0a,b) \rightarrowtriangle_{b^2} (0ab^2,b^3) = (0a^2b,b^3) \leftarrowtriangle_b (0a^2,b^2) = (0,1)(a,b)^2.$$ That addition is associative is a standard calculation. The only other non-standard aspect is binary distributivity: $$\begin{aligned} (a,b) ((c,d)+(e,f)) & = & (a,b) (c f + d e,d f) \\ & = & (a c f + a d e,b d f) \\ & \rightarrowtriangle_b & (a b c f + a b d e,b^2 d f) \\ & = & (a c,b d) + (a e,b f) \\ & = & (a,b) (c,d) + (a,b) (e,f).\end{aligned}$$ Notice that forcing binary distributivity to hold implies $$(1,b) = (1,b) ((1,1)+(0,1)) \equiv (1,b) (1,1) + (1,b) (0,1) = (1,b)+(0,b) = (b+0b,b^2)=(b,b^2)$$ so that the generating equivalences above [*must*]{} hold, when distributivity is present. This means we are precisely forcing binary distributivity of multiplication over fraction addition with these generating equivalences. Note also that nullary distributivity, even when one starts with a rig $R$, will not hold in ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(R)$, as $(0,0) = (0,0)(0,1)$ and $(0,1)$ are distinct unless $0=1$. It is worth briefly considering some examples: (1) Any lattice $L$ is a rig. ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(L)$ has as its underlying set pairs $\{ (x,y) | x,y \in L, x \leq y \}$ as, in this case, $(x,y)\sim (x \wedge y,y)$. These are the set of intervals of the lattice. The resulting addition and multiplication are both idempotent and are, respectively, the join and meet for two different ways of ordering intervals. For the multiplication the ordering is $(x,y) \leq (x',y')$ if and only if $x \leq x'$ and $y \leq y'$. 
For the addition the ordering[^5] is $(x,y) \leq (x',y')$ if and only if $x' \wedge y \leq x$ and $y \leq y'$. Notice that the zero ideal consists of all intervals $(\bot,a)$. (2) In any unique factorization domain, $R$, such as the integers or any polynomial ring over a unique factorization domain, the equality in ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(R)$ may be expressed by reduction (as opposed to the expansion given above). This reduction to a canonical form cancels a factor only so long as the factor is not thereby eliminated from the denominator. Thus, in ${\ensuremath{{\mathfrak f}{\mathfrak r}}}({\ensuremath{\mathbb Z}\xspace})$ we have that $(18,36)$ reduces to $(3,6)$, but no further reduction is allowed as this would eliminate a factor (in this case $3$) from the denominator. In any rig $R$, as zero divides zero, we have $(r,0) = (0,0)$ for every $r \in R$. The zero ideal will, in general, be quite large as it is ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(R)_0 = \{ (0,r)| r \in R\}$. (3) A special case of the above is when $R$ is a field. In this case $(x,y) = (xy^{-1},1)$ when $y \not= 0$ and when $y=0$ then, as above, $(x,0) = (0,0)$. Thus, in this case the construction adds a single point “at infinity”, $\infty = (0,0)$. Note that the zero ideal is $\{ 0,\infty \}$. (4) The initial weak commutative rig is just the natural numbers, ${\ensuremath{\mathbb N}\xspace}$. Thus, it is of some interest to know what ${\ensuremath{{\mathfrak f}{\mathfrak r}}}({\ensuremath{\mathbb N}\xspace})$ looks like as this will be the initial algebra of the monad. The canonical form of the elements is, as for unique factorization domains, determined by canceling factors from the fractions so long as the denominator remains divisible by that factor. Addition and multiplication are performed as usual for fractions and then reduced by canceling in this manner to the canonical form.
The zero ideal consists of $(0,0)$ and elements of the form $(0,p_1p_2...p_n)$ where the denominator is a (possibly empty) product of distinct primes. Clearly we always have a weak rig homomorphism: $$\eta: {\cal R} \to {\ensuremath{{\mathfrak f}{\mathfrak r}}}({\cal R}); r \mapsto (r,1)$$ Furthermore this is always a faithful embedding: if $(r,1) \sim (s,1)$, then we have $(u,v) \rightarrowtriangle^*_a (r,1)$ and $(u,v) \rightarrowtriangle^*_b (s,1)$. This means $a$ iteratively divides $1$, so that $a = a_1 \cdot \dots \cdot a_n$ where $a_n | 1$ which, in turn, means $a_n$ is a unit (i.e. has an inverse). But now we may argue similarly for $a_{n-1}$ and this eventually gives that $a$ itself is a unit. Similarly $b$ is a unit and as $a \cdot v = 1 = b \cdot v$ it follows that $a=b$ and hence that $r=s$. In order to show that ${\ensuremath{{\mathfrak f}{\mathfrak r}}}$ is a monad, we will use the “Kleisli triple” presentation of a monad. For this we need a combinator $$\infer{\#(f): {\ensuremath{{\mathfrak f}{\mathfrak r}}}(R) \to {\ensuremath{{\mathfrak f}{\mathfrak r}}}(S)}{f : R \to {\ensuremath{{\mathfrak f}{\mathfrak r}}}(S)}$$ such that $\#(\eta) = 1$, $\eta \#(f) = f$ and $\#(f)\#(g) = \#(f\#(g))$. Recall that given this, the functor is defined by ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(f) := \#(f \eta)$ and the multiplication is defined by $\mu_X := \#(1_{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(X)})$. We define this combinator as $\#(f)(x,y) := [(x_1 y_2^2,x_2 y_1 y_2)]$, where $[(x_1,x_2)] = f(x)$ and $[(y_1,y_2)] = f(y)$. To simplify notation we shall write $(x_1,x_2) \in f(x)$, rather than $[(x_1,x_2)] = f(x)$, to mean $(x_1,x_2)$ is in the equivalence class determined by $f(x)$. Our very first problem is to prove that this is well-defined. That is, if $(x,y) \sim (x',y')$ then $\#(f)(x,y) \sim \#(f)(x',y')$; as this is a little tricky we shall give an explicit proof.
First note that it suffices to prove this for a generating equivalence: so we may assume that $(x,y) \rightarrowtriangle_a (x',y')=(ax,ay)$ (where this also means $a | y$) and we must prove that $\#(f)(x,y) \sim \#(f)(ax,ay)$. Now $\#(f)(ax,ay) = (x'_1(y'_2)^2,x'_2y'_1y'_2)$ where $(x'_1,x'_2) \in f(ax)$ and $(y'_1,y'_2) \in f(ay)$. But we have $f(ax) \sim f(a)f(x)$ and $f(ay) \sim f(a)f(y)=f(a)f(a)f(z)$ where $y=az$; thus, letting $(a_1,a_2) \in f(a)$ and $(z_1,z_2) \in f(z)$, there are $\alpha$,$\beta$,$\gamma$, and $\delta$ such that $$\xymatrix@=10pt{ & (\beta a_1 x_1,\beta a_2 x_2) & & & (\delta a_1 a_1 z_1, \delta a_2 a_2 z_2) \\ (x'_1,x'_2) \ar[ur]^{\alpha} & & (a_1 x_1,a_2 x_2) \ar[ul]_{\beta} & (y'_1,y'_2) \ar[ur]^{\gamma} & & (a_1 a_1 z_1,a_2 a_2 z_2) \ar[ul]_{\delta} }$$ we may now calculate: $$\begin{aligned} (x'_1 {y'_2}^2,y'_1y'_2x'_2) & \rightarrowtriangle_\alpha & (\beta a_1 x_1 {y'_2}^2,y'_1y'_2 \beta a_2 x_2) \\ & \rightarrowtriangle_\gamma & (\beta a_1 x_1 \delta a_2 a_2 z_2 y'_2,\delta a_1 a_1 z_1 y'_2 \beta a_2 x_2) \\ & \rightarrowtriangle_\gamma & (\beta a_1 x_1 \delta a_2 a_2 z_2 \delta a_2 a_2 z_2,\delta a_1 a_1 z_1 \delta a_2 a_2 z_2 \beta a_2 x_2) \\ & = & (\beta \delta^2 a_1 x_1 a_2^4 z_2^2,\beta \delta^2 a_1^2 a_2^3 z_1 z_2 x_2) \\ & \leftarrowtriangle_{\beta\delta^2a_2^2a_1} & (x_1 (a_2 z_2)^2,(a_1 z_1)(a_2 z_2) x_2) \\ & = & (x_1 y_2^2,y_1 y_2 x_2) \in \#(f)(x,y) \end{aligned}$$ This is the first step in proving: $({\ensuremath{{\mathfrak f}{\mathfrak r}}},\eta, \mu)$ is a monad, called the [**fractional monad**]{}, on ${\bf wCRig}$. [[Proof:]{}]{}It remains to show that $\#(f)$ is a weak rig homomorphism and satisfies the Kleisli triple requirements. It is straightforward to check that $\#(f)$ preserves the units and multiplication. The argument for addition is a little more tricky.
First note: $$\begin{aligned} f(rq+sp,sq) & = & [ ( (r_1,r_2)(q_1,q_2) + (s_1,s_2)(p_1,p_2),(s_1,s_2)(q_1,q_2) )] \\ & & \mbox{where}~ (r_1,r_2) \in f(r), (s_1,s_2) \in f(s),(p_1,p_2) \in f(p),(q_1,q_2) \in f(q) \\ & = & [( (r_1q_1,r_2q_2) + (s_1p_1,s_2p_2),(s_1q_1,s_2q_2))] \\ & = & [(r_1q_1s_2p_2 +r_2q_2s_1p_1,r_2q_2s_2p_2)] \\ f(sq) & = & [(s_1q_1,s_2q_2)] \end{aligned}$$ so that we now have: $$\begin{aligned} \#(f)([(r,s)]+[(p,q)]) & = & \#(f)([(rq+sp,sq)]) \\ & = &[ ((r_1q_1s_2p_2 +r_2q_2s_1p_1)(s_2q_2)^2,s_1p_2q_1r_2(s_2q_2)^2)] \\ & \sim & [((r_1q_1s_2p_2+ r_2q_2s_1p_1)s_2q_2,s_1p_2q_1r_2q_2s_2)] \\ & = & [(r_1s_2^2p_2q_1q_2+ r_2s_1s_2p_1q_2^2,r_2s_1s_2p_2q_1q_2)] \\ & = & [(r_1s_2^2,r_2s_1s_2)] + [(p_1q_2^2,p_2q_1q_2)] \\ & = & \#(f)([(r,s)]) + \#(f)([(p,q)])\end{aligned}$$ It remains to check the monad identities for the Kleisli triple. The first two are straightforward; we shall illustrate the last identity: $$\begin{aligned} \#(g)(\#(f)([(x,y)])) & = & \#(g)(x_1y_2^2,x_2y_1y_2) ~~~\mbox{where}~~(x_1,x_2) \in f(x), (y_1,y_2) \in f(y) \\ & = & [(x_{11}y_{12}^2(x_{22}y_{21}y_{22})^2,x_{21}y_{22}^2x_{12} y_{11}y_{12}x_{22}y_{21}y_{22})] \\ & & \mbox{where}~~ (x_{1i},x_{2i}) \in g(x_i), (y_{1j},y_{2j}) \in g(y_j) \\ & = & [( x_{11}x_{22}^2(y_{21}y_{22}y_{12})^2,x_{21}x_{22}x_{12}y_{11}y_{22}^2y_{21}y_{22}y_{12})] \\ & = & [(x_1'{y'_2}^2,x'_2y_1'y_2')] ~~~\mbox{where}~~~ (x_1',x_2') \in \#(g)(f(x)), (y'_1,y'_2) \in \#(g)(f(y)) \\ & & \mbox{and}~~( x_{11}x_{22}^2,x_{21}x_{22}x_{12}) \in \#(g)([(x_1,x_2)]) =\#(g)(f(x)) \\ & & ~~~~~~~ (y_{11}y_{22}^2,y_{21}y_{22}y_{12}) \in \#(g)([(y_1,y_2)]) =\#(g)(f(y)) \\ & = & \#(f\#(g))([(x,y)]).\end{aligned}$$ The algebras for this monad are “fractional rigs” as we will now show.
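Before turning to those algebras, the Kleisli data assembled above can be sanity-checked on small representatives. The following sketch works with raw pairs rather than equivalence classes (so it checks the formulas on the nose, which suffices for these two identities):

```python
# Representative-level check of the Kleisli combinator
#   #(f)(x, y) = (x1*y2^2, x2*y1*y2)  where f(x)=(x1,x2), f(y)=(y1,y2).

def sharp(f):
    def sharp_f(pair):
        x, y = pair
        x1, x2 = f(x)
        y1, y2 = f(y)
        return (x1 * y2 * y2, x2 * y1 * y2)
    return sharp_f

eta = lambda r: (r, 1)

# #(eta) is the identity on representatives:
assert sharp(eta)((3, 5)) == (3, 5)

# mu = #(identity); on representatives this gives the formula
# mu((r,s),(p,q)) = (r*q^2, s*p*q) used below for the algebras:
mu = sharp(lambda z: z)
assert mu(((7, 2), (3, 4))) == (7 * 16, 2 * 3 * 4)
```

The second assertion matches the computation $\nu(\mu((r,s),(p,q))) = \nu(rq^2,spq)$ appearing in the proof that algebras are fractional rigs.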
A [**fractional rig**]{} is a weak commutative rig with an operation $(\_)^{*}$ such that - $1^{*} = 1$, $x^{***} = x^{*}$, $(xy)^{*} = y^{*}x^{*}$; - $x^{*}xx^{*}= x^{*}$ (that is, $x^{*}$ is regular); - $x^{*}x(y+z) = x^{*}xy+z$ (linear distributivity for idempotents). The last axiom is equivalent to demanding $x^{*}xy = x^{*}x0 + y$. In particular, setting $y=1$, this means that $x^{*}x = x^{*}x0 + 1$. Fractional rigs are of interest in their own right. Here are some simple observations: In any fractional rig: (i) $xx^{*}$ is idempotent; (ii) If $x$ is a unit, with $xy=1$, then $x^{*}=y$; (iii) $x^{*}x^{**}x^{*} = x^{*}$; (iv) $xx^{*} =(xx^{*})^{*}$; (v) $e$ is idempotent with $e^{*}=e$ if and only if there is an $x$ with $e=xx^{*}$; (vi) $xx^{*}x = x^{**}$; (vii) An element $x$ is regular (that is, $xx^{*}x=x$) if and only if $x=x^{**}$. (viii) if $0=0^{*}$ then $0=1$, addition equals multiplication, and both operations are idempotent. We shall call an element $e$ a [**${*}$-idempotent**]{} when $e$ is idempotent and $e^{*}=e$. [[Proof:]{}]{}  1. $(xx^{*})(xx^{*}) = x(x^{*}xx^{*}) = xx^{*}$. 2. As $xx^{*}$ is idempotent if it has an inverse it is the identity. However, $yy^{*}$ is its inverse: this means $y=x^{*}$. 3. $x^{*}x^{**}x^{*} = x^{***}x^{**}x^{***} = x^{***} = x^{*}$; 4. $xx^{*} =xx^{*}x^{**}x^{*} = x^{*}xx^{*}x^{**} = x^{*}x^{**} = (xx^{*})^{*}$; 5. If $e$ is idempotent with $e^{*}=e$ then $ee^{*}= ee = e$ and the converse follows from the above. 6. $xx^{*}x = xx^{*}x^{**}x^{*}x = x^{*}xx^{*}x^{**}x = x^{**}x^{*}x = x^{**}x^{*}x^{**}x^{*}x = x^{**}x^{*}x^{**} = x^{**}$; 7. If $x$ is regular in this sense then $x^{**}=xx^{*}x=x$ so $x=x^{**}$ and the converse follows from above. 8. If $0=0^{*}$ then $0=00=00^{*}$ so $0$ is a $*$-idempotent. This means $1=1+0=0(1+0)=01+ 0 =0$! 
In particular, as a consequence of the last observation, it follows from Lemma \[zero-ideal\] that for any fractional rig $R$, the fractional rig $R_{00^{*}}=\{ 00^{*}r | r \in R\}$, which we discuss further in the next section, is a semilattice. In fact, we may say more: In any fractional rig in which $0=1$: (i) The addition and multiplication are equal and idempotent, producing a join semilattice; (ii) $0=0^{*}$ and $(\_)^{*}$ is a closure operator (that is, it is monotone with $x \leq x^{*}$ and $x^{**}=x^{*}$). [[Proof:]{}]{}The first part follows from Lemma \[zero-ideal\]. For the second part: as $0=1$ and $1^{*}=1$, the first observation is immediate. Now $x \leq y$ if and only if $y = xy$ but then, as $(\_)^{*}$ preserves multiplication, $y^{*} = x^{*}y^{*}$ so that $x^{*} \leq y^{*}$. Thus, $(\_)^{*}$ is monotone. $x \leq x^{*}$ if and only if $x^{*}x = x^{*}$ but $x^{*}x = xx^{*}x = x^{**}$, so we are done if we can show $x^{*} = x^{**}$. But $x^{*} = x^{*}x^{**}x^{*} = x^{*}x^{**} = x^{**}x^{*}x^{**} = x^{**}x^{***}x^{**} = x^{**}$. We observe next that for any weak commutative rig $R$, ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(R)$ is a fractional rig with $(\_)^{*}$ defined by $(x,y)^{*} := (y^2,yx)$. Notice first that it is straightforward to check that this is a well-defined operation which is multiplicative and that $x^{***} = x^{*}$. Furthermore, $(x,y)^{*}$ is regular in the sense that $$(x,y)^{*}(x,y)(x,y)^{*} = (y^2,xy)(x,y)(y^2,xy) = (y^4x,y^3x^2) = (y^2,xy)= (x,y)^{*}.$$ For the linear distribution observe that $$(z,z)((p,q) + (r,s)) = (z,z)(ps+qr,qs) = (zps+zqr,zqs) = (z,z)(p,q)+(r,s).$$ Note that, for example, in ${\ensuremath{{\mathfrak f}{\mathfrak r}}}({\ensuremath{\mathbb N}\xspace})$ we have, for any two primes $p \not= q$, $(p,q)^{*} = (q^2,pq)$ and $(p,q)^{**}=(p^2q^2,q^3p) = (p^2,pq)$ and $(p,q)^{***} = (p^2q^2,p^3q) = (q^2,pq) = (p,q)^{*}$. So here $x^{**} \not= x$ and $x^{**} \not= x^{*}$. 
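The star computations just carried out in ${\ensuremath{{\mathfrak f}{\mathfrak r}}}({\ensuremath{\mathbb N}\xspace})$ can be replayed mechanically, testing equality via the canonical form described earlier (cancel a prime from both components only while the denominator still carries its square). This is an illustrative sketch, not a general decision procedure:

```python
# (_)^* in fr(N): (x,y)^* = (y*y, y*x), with equality tested via the
# canonical form: repeatedly cancel a prime p from both components
# while p | num and p*p | den.

def canon(pair):
    num, den = pair
    if den == 0:                 # (r,0) ~ (0,0)
        return (0, 0)
    p = 2
    while p * p <= den:
        while num % p == 0 and den % (p * p) == 0:
            num //= p
            den //= p
        p += 1
    return (num, den)

def star(pair):
    x, y = pair
    return (y * y, y * x)

p, q = 2, 3
s1 = canon(star((p, q)))
s2 = canon(star(star((p, q))))
s3 = canon(star(star(star((p, q)))))
assert s1 == (q * q, p * q)      # (p,q)^*   = (q^2, pq)
assert s2 == (p * p, p * q)      # (p,q)^**  = (p^2, pq)
assert s3 == s1                  # x^*** = x^*, while x^** != x here
assert canon((18, 36)) == (3, 6) # the reduction example from the text
```

Note that $s_2 \not= (p,q)$, confirming that $(\_)^{*}$ is genuinely weaker than an involution here.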
On the other hand, $(0,p)^* = (p^2,0p) = (0,0) = (0,0)^{*} = (0,p)^{**}$, thus the “closure” of everything in the zero ideal is its top element, $(0,0)$. To show that fractional weak rigs are exactly the algebras for the fractional monad, we need to show that for any fractional rig $R$, there is a structure map $\nu: {\ensuremath{{\mathfrak f}{\mathfrak r}}}(R) \to R$ such that $$\xymatrix{{\ensuremath{{\mathfrak f}{\mathfrak r}}}^2(R) \ar[d]_{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(\nu)} \ar[r]^\mu & {\ensuremath{{\mathfrak f}{\mathfrak r}}}(R) \ar[d]^{\nu} \\ {\ensuremath{{\mathfrak f}{\mathfrak r}}}(R) \ar[r]_{\nu} & R}$$ commutes. Define $\nu: {\ensuremath{{\mathfrak f}{\mathfrak r}}}(R) \to R; (r,s) \mapsto rs^{*}$, then: For every fractional weak rig, $R$, $\nu$ as defined above is a fractional rig homomorphism. [[Proof:]{}]{}We must check that $\nu$ is well-defined and is a fractional rig homomorphism. To establish the former it suffices to prove $\nu(x,ay) = \nu(ax,a^2y)$ which is so as $$\nu(ax,a^2y) = axa^2y({a^{*}}^2y^{*})^2 = xay{a^{*}}^2{y^{*}}^2 = \nu(x,ay).$$ It is straightforward to check that multiplication and the units are preserved by $\nu$. This leaves addition: $$\begin{aligned} \nu((r,s)+(p,q)) & = & \nu(rq+sp,sq) = (rq+sp)sq{s^{*}}^2{q^{*}}^2 \\ & = & s^{*}q^{*}(qq^{*}(rq+sp)ss^{*}) = s^{*}q^{*}(q^2q^{*}r+s^2s^{*}p) \\ & = & q^2{q^{*}}^2rs^{*}+s^2{s^{*}}^2pq^{*} = qq^{*}rs^{*}+ss^{*}pq^{*}\\ & = & qq^{*}(rs^{*}+pq^{*})ss^{*} = ss^{*}(rs^{*}+pq^{*})qq^{*} \\ & = & sr{s^{*}}^2+pq{q^{*}}^2 = \nu(r,s) + \nu(p,q).\end{aligned}$$ Finally, $\nu$ preserves the $(\_)^*$ as $$\nu((x,y)^{*}) = \nu(y^2,xy) = y^2x^{*}y^{*} = x^{*}(yy^{*}y) = x^{*}y^{**} = (xy^{*})^{*} = \nu(x,y)^{*}.$$ Now we can complete the story by showing not only that this definition of $\nu$ makes every fractional rig an algebra, but also that such an algebra inherits the structure of a fractional rig. An algebra for the fractional monad is exactly a fractional rig. 
[[Proof:]{}]{}Every fractional rig is an algebra, that is the diagram above commutes: $$\begin{aligned} \nu(\mu((r,s),(p,q))) & = & \nu(rq^2,spq) = \\ & = & rq^2s^{*}p^{*}q^{*}\\ & = & rs^{*}p^{*}qq^{*}q \\ & = & rs^{*}p^{*}q^{**} \\ & = & \nu(rs^{*},pq^{*}) \\ & = & \nu({\ensuremath{{\mathfrak f}{\mathfrak r}}}(\nu)((r,s),(p,q))\end{aligned}$$ Conversely an algebra $\nu: {\ensuremath{{\mathfrak f}{\mathfrak r}}}(R) \to R$ has $r^* = \nu(\eta(r)^*)$. It remains to check that this definition turns $R$ into a fractional rig. The identities which do not involve nested uses of $(\_)^{*}$ are straightforward. For example to show $r^{*}rr^{*} = r^{*}$ we have: $$r^{*}rr^{*} = \nu(\eta(r)^*)r\nu(\eta(r)^*) = \nu(\eta(r)^*)\nu(\eta(r))\nu(\eta(r)^*) = \nu(\eta(r)^*\eta(r)\eta(r)^*) =\nu(\eta(r)^*) = r^{*}$$ where we use the fact that $\nu$ is a weak rig homomorphism and ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(R)$ satisfies the identity. More difficult is to prove that $r^{***} = r^{*}$: we shall use two facts $$\xymatrix{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(X) \ar@{}[dr]|{(1)} \ar[d]_{*} \ar[r]^{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(f)} & {\ensuremath{{\mathfrak f}{\mathfrak r}}}(Y) \ar[d]^{*} \\ {\ensuremath{{\mathfrak f}{\mathfrak r}}}(X) \ar[r]_{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(f)} & {\ensuremath{{\mathfrak f}{\mathfrak r}}}(Y)} ~~~ \xymatrix{{\ensuremath{{\mathfrak f}{\mathfrak r}}}^2(X) \ar@{}[dr]|{(2)} \ar[d]_{*} \ar[r]^{\mu} & {\ensuremath{{\mathfrak f}{\mathfrak r}}}(X) \ar[d]^{*} \\ {\ensuremath{{\mathfrak f}{\mathfrak r}}}^2(X) \ar[r]_{\mu} & {\ensuremath{{\mathfrak f}{\mathfrak r}}}(X)}$$ namely (1) the $(\_)^{*}$ on ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(X)$ is natural and (2) that $\mu$ preserves the $(\_)^{*}$. 
We start by establishing for any $z \in {\ensuremath{{\mathfrak f}{\mathfrak r}}}(R)$ that $\nu(\eta(\nu(z))^*) = \nu(z^{*})$ as: $$\nu(\eta(\nu(z))^*) = \nu({\ensuremath{{\mathfrak f}{\mathfrak r}}}(\nu)(\eta(z))^{*}) =_{(1)} \nu({\ensuremath{{\mathfrak f}{\mathfrak r}}}(\nu)(\eta(z)^{*})) = \nu(\mu(\eta(z)^{*})) =_{(2)} \nu(\mu(\eta(z))^{*}) = \nu(z^{*})$$ This allows the calculation: $$r^{***} = \nu(\eta(\nu(\eta(\nu(\eta(r)^{*}))^{*}))^{*}) = \nu(\eta(\nu(\eta(r)^{**}))^{*}) = \nu(\eta(r)^{***}) = \nu(\eta(r)^{*}) = r^{*}.$$ Let us denote the category of fractional rigs and their homomorphisms by $\text{\bf fwCRig}$. Because this is a category of algebras over sets, this is a complete and cocomplete category. Furthermore, we have established: The underlying functor $V: \text{\bf fwCRig} \to \text{\bf wCRig}$ has a left adjoint which generates the fractional monad on $\text{\bf wCRig}$. This observation suggests an alternative, more abstract, approach to these results: proving that the adjunction between these categories generates the fractional monad, in fact, suffices to prove that $\text{\bf fwCRig}$ is monadic over $\text{\bf wCRig}$. The approach we have followed reflects our focus on the fractional monad itself and on its concrete development. Rational functions {#rational-functions} ------------------ In any fractional rig the $*$-idempotents, $e=e^{*}=ee$, have a special role. If we force the identity $e=1$, this forces all $e'$ with $ee'=e$ – this is the up-set generated by $e$ under the order $e \leq e' \Leftrightarrow ee'=e$ – to be the identity. For fractional rigs this is an expression of localization. A [**localization**]{} in fractional rigs is any map which is universal with respect to an identity of the form $e=1$, where $e$ is a ${*}$-idempotent of the domain.
Thus the map $\ell_e: R \to R/{\langle}e=1{\rangle}$ is a localization at $e$ in case, whenever $f: R \to S$ has $f(e)=f(1)$, there is a unique map $f'$ such that: $$\xymatrix{R \ar[rr]^{\ell_e} \ar[rrd]_f & & R/{\langle}e=1{\rangle}\ar@{..>}[d]^{f'} \\ & & S}$$ Here $R/{\langle}e=1{\rangle}$ is determined only up to isomorphism; however, there is a particular realization of $R/{\langle}e=1{\rangle}$ as the fractional rig $R_e = \{ re | r \in R \}$, with the evident addition, multiplication, and definition of $(\_)^{*}$. This gives a canonical way of representing the localization at any $e$ by the map $$\ell_e: R \to R_e; r \mapsto re.$$ In particular, $\ell_{00^{*}}: R \to R_{00^{*}}$ gives a localization of any fractional rig to one in which $0=1$. In $\text{\bf fwCRig}$ the class of localizations, [loc]{}, contains all isomorphisms and is closed to composition and pushouts along any map. [[Proof:]{}]{} All isomorphisms are localizations as all isomorphisms are universal solutions to the equation $1=1$. For composition, observe that in the canonical representation of localizations $\ell_{e_1}\ell_{e_2} \simeq \ell_{e_1e_2}$. Finally, the pushout of $\ell_e: R \to R_e$ along $f: R \to S$ is given by $\ell_{f(e)}: S \to S_{f(e)}$: $$\xymatrix{R \ar[d]_{\ell_e} \ar[r]^f & S \ar[d]_{\ell_{f(e)}} \ar[ddr]^{k_1} \\ R_e \ar[r]^{f'} \ar[rrd]_{k_2} & S_{f(e)} \ar@{..>}[dr]|{\hat{k}} \\ & & K}$$ First note that $f'$ is defined by $f'(er) = f(e)f(r)$, which is clearly a fractional rig homomorphism. Now suppose the outer square commutes. Define $\hat{k}(f(e)s) := k_1(f(e)s)$. Since $e$ is the identity of $R_e$, commutativity of the outer square gives $k_1(f(e)) = k_2(\ell_e(e)) = k_2(e) = 1$, so the right triangle commutes: $\hat{k}(\ell_{f(e)}(s)) = k_1(f(e)s) = k_1(f(e))k_1(s) = k_1(s)$. Moreover, $$\hat{k}(f'(er)) = \hat{k}(f(e)f(r)) = k_1(f(e)f(r)) = k_1(f(er)) = k_2(\ell_e(er)) = k_2(er),$$ showing that the left triangle commutes. Furthermore, $\hat{k}$ is unique as $\ell_{f(e)}$ is epic, showing that the inner square is a pushout.
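The identity $\ell_{e_1}\ell_{e_2} \simeq \ell_{e_1e_2}$ used in the proof above is a purely multiplicative fact about the canonical representation $\ell_e: r \mapsto re$. As a rough executable sketch (our own illustration, not a construction from the paper: ordinary multiplicative idempotents of $\mathbb{Z}/6\mathbb{Z}$ stand in for $*$-idempotents), it can be checked directly:

```python
# Sketch (ours): the canonical localization l_e : R -> R_e = { r*e | r in R },
# using multiplicative idempotents of Z/6Z as stand-ins for *-idempotents.
R = range(6)

def mul(a, b):
    return (a * b) % 6

idempotents = [e for e in R if mul(e, e) == e]   # 0, 1, 3, 4 in Z/6Z

def R_loc(e):
    """The canonical representation R_e = { r*e | r in R }."""
    return sorted({mul(r, e) for r in R})

def l(e, r):
    """The localization map l_e : R -> R_e, r |-> r*e."""
    return mul(r, e)

# l_{e1} followed by l_{e2} agrees with l_{e1*e2}, as in the proof above.
for e1 in idempotents:
    for e2 in idempotents:
        for r in R:
            assert l(e2, l(e1, r)) == l(mul(e1, e2), r)
```

Here $R_3 = \{0,3\}$ and $R_4 = \{0,2,4\}$, and composing the two localizations collapses everything to $R_0 = \{0\}$, since $3 \cdot 4 = 0$ in $\mathbb{Z}/6\mathbb{Z}$.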
This means immediately: [loc]{} is a stable system of monics in $\text{\bf fwCRig}^{\rm op}$, so that [*([**fwCRig**]{}$^{\rm op}$,[loc]{})*]{} is an ${\cal M}$-category and, thus [*[Par]{}([**fwCRig**]{}$^{\rm op}$,[loc]{})*]{} is a cartesian restriction category. We shall denote this partial map category $\text{\bf RAT}$ and refer to it as the category of [**rational functions**]{}. Recall that a map $R \to S$ in this category, as defined above, is a cospan in $\text{\bf fwCRig}$ of the form: $$\xymatrix@=12pt{& R_e & \\ R \ar[ur]^{\ell_e} & & & S \ar[ull]_h}$$ where we use the representation $R_e = \{er | r \in R \}$. This means, in fact, that a map in this category is equivalently a map $h: S \to R$ which preserves addition, multiplication, and $(\_)^{*}$ – but does [*not*]{} preserve the unit of multiplication, and has $h(0) = h(1) \cdot 0$. These we shall refer to as [**corational**]{} morphisms. Thus, $\text{\bf RAT}$ can be alternately presented as: $\text{\bf RAT}$ is precisely the opposite of the category of fractional rigs with corational morphisms. The advantage of this presentation is that one does not have to contend with cospans or pushouts: one can work directly with corational maps. In particular, the corestriction of a corational map $f: S \to R$ is just $f(1) \cdot \_: R \to R$. This category is certainly not obviously recognizable as a category of rational functions as used in algebraic geometry. Our next objective is to close this gap. In order to do this we start by briefly reviewing localization in commutative rigs. The definition of a localization for commutative rigs is a direct generalization of the usual notion of localization for commutative rings, as in [@alggeomeis].
A [**localization**]{} is a rig homomorphism $\phi: R \rightarrow S$ such that there exists a multiplicative set $U$ with $\phi(U) \subseteq \text{units}(S)$, having the property that for any map $f : R \to T$ with $f(U) \subseteq \text{units}(T)$ there is a unique map $k : S \to T$ such that $f = \phi k$: $$\xymatrix{R \ar[r]^{\phi} \ar[dr]_f & S \ar@{..>}[d]^k \\ & T}$$ A localization is said to be [**finitely generated**]{} if there is a finitely generated multiplicative set $U$ for which the map is universal. Denote the class of finitely generated localizations by [Loc]{}. We next show that [Loc]{} is a stable system of monics in ${\bf CRig}^{{\mbox{\scriptsize op}}}$, so one may form a partial map category on the opposite category of commutative rigs with respect to localizations. If $R$ is a commutative rig and $U$ is a multiplicatively closed set, then $R[U^{-1}]$ is the universal rig obtained by turning all the elements of $U$ into units. This is called the rig of fractions with respect to the multiplicative set $U$, as the operations in the rig are defined as for fractions with denominators chosen from $U$; see for example [@dumfooteabsalg]. This is exactly the fractional construction described above except with denominators restricted to $U$ and with the additional ability that one may quotient out by arbitrary factors. There is a canonical localization, $l_U : R \to R[U^{-1}]$; $l_U(r) = \frac{r}{1}$. It is clear that (finitely generated) localizations in [**CRig**]{} are epic, contain all isomorphisms, and are closed to composition. Furthermore, we have the following: In $\text{\bf CRig}$, the pushout along any map of a (finitely generated) localization exists and is a (finitely generated) localization. [[Proof:]{}]{}Let $R,A,S$ be rigs. Let $\phi : R \to S$ be a localization, let $f : R \to A$ be a rig homomorphism, and let $W \subseteq R$ be the factor closed multiplicative set that $\phi$ inverts.
Then $f(W)$ is also a multiplicative set, which is finitely generated if $W$ is, so we can form the canonical localization $l_{f(W)} : A \to A[(f(W))^{-1}]$. This means that $l_{f(W)}(f(W)) \subseteq \text{units}(A[(f(W))^{-1}])$, and so we get a unique $k : S \to A[(f(W))^{-1}]$ such that the following diagram commutes $$\xymatrix{ R \ar[r]^{\phi} \ar[d]_{f} & S \ar[d]^{k} \\ A \ar[r]_{l_{f(W)}} & A[(f(W))^{-1}]}$$ To show that this square is a pushout, suppose the outer square commutes in: $$\xymatrix{ R \ar[r]^{\phi} \ar[d]_{f} & S \ar[d]^{k} \ar@/^1pc/[ddr]^{q_1} & \\ A \ar@/_1pc/[drr]_{q_0} \ar[r]_{l_{f(W)}} & A[(f(W))^{-1}] \ar@{..>}[dr]^{\hat{k}}& \\ & & Q }$$ If we can show that $q_0$ sends $f(W)$ to units, then we get a unique map $\hat{k}: A[(f(W))^{-1}] \to Q$. Now, $q_0(f(W)) = q_1(\phi (W))$ by commutativity, and $q_1 (\phi (W)) \subseteq \text{units}(Q)$ as homomorphisms preserve units; so $q_0 (f(W)) \subseteq \text{units}(Q)$, giving $\hat{k}$. Next, we must show that $k \hat{k} = q_1$. However, $\phi q_1 = f q_0 = f l_{f(W)} \hat{k} = \phi k \hat{k}$, and since $\phi$ is epic, $q_1 = k \hat{k}$. Moreover, $\hat{k}$ is the unique map making the bottom triangle commute, so the square is a pushout. Thus [Loc]{} is a stable system of monics in [**CRig**]{}$^{op}$, and so we can form a partial map category: $({\bf CRig}^{{\mbox{\scriptsize op}}},\text{\sc Loc})$ is an ${\mathcal M}$-category, and ${{\sf Par}\xspace}({\bf CRig}^{{\mbox{\scriptsize op}}},\text{\sc Loc})$ is a cartesian restriction category. We shall call this category $\text{\bf RAT}_{\sf rig}$. Our next objective is to prove: \[rationals-on-rigs\] $\text{\bf RAT}_{\sf rig}$ is the full subcategory of $\text{\bf RAT}$ determined by the objects ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))$ for $R \in \text{\bf CRig}$, where $W$ is the inclusion of commutative rigs into weak commutative rigs.
To prove that the induced comparison between the cospan categories is full and faithful it suffices to show that the composite $W {\ensuremath{{\mathfrak f}{\mathfrak r}}}$ is a full and faithful left adjoint which preserves and reflects localizations. This is because it will then fully represent the maps in the cospan category and preserve their composition – as this is given by a colimit. That it is a left adjoint follows from \[Wleftadj\], the fullness and faithfulness follow from \[extracting-rigs\](i), and the preservation and reflection of localizations from \[extracting-rigs\](iii), (iv), and (v). We start with: \[Wleftadj\] The inclusion functor $W: \text{\bf CRig} \to \text{\bf wCRig}$ has both a left and right adjoint. The left adjoint arises from simply forcing the nullary distributive law to hold. It is the form of the right adjoint which is of more immediate interest to us. Given any weak rig $R$, the set of [**rig elements**]{} of $R$ is ${\sf rig}(R) = \{ r | r \cdot 0 = 0 \}$. Clearly rig elements include $0$ and $1$, and are closed under multiplication and addition. Thus, they form a subrig of any weak rig, and it is this rig which is easily seen to give the right adjoint to the inclusion $W$ above. This leads to the following series of observations: \[extracting-rigs\] For any rig $R$: (i) ${\sf rig}({\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))) \cong R$; (ii) If $e$ is a ${*}$-idempotent of ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))$ then $e \sim (r,r)$ for some $r \in R$; (iii) The up-sets of $*$-idempotents $e \in {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))$, $\uparrow\! e = \{ e' | ee'=e\}$, correspond precisely to finitely generated multiplicatively closed subsets of $R$ which are also factor closed, $\Sigma_e= \{ r \in R | (r,r) \geq e\}$; (iv) ${\sf rig}({\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e) \cong R[\Sigma_e^{-1}]$ (where $R[\Sigma_e^{-1}]$ is the rig with $\Sigma_e$ universally inverted); (v) ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e \cong {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[\Sigma_e^{-1}]))$. [[Proof:]{}]{}  1. Suppose $(r,s)(0,1) \sim (0,1)$; then $(0,1) \rightarrowtriangle_\alpha (p,q) \leftarrowtriangle_\beta (0,s)$. It follows that $\alpha$ is a unit (as it must iteratively divide $1$) and so $p=0$ and $q=\alpha$. But $q = \beta s$ so that $\beta$ and $s$ are units. However, then $(r,s) \sim (s^{-1}r,1)$, showing each rig element is (up to equivalence) an original rig element. 2. We must have $(r,s) \rightarrowtriangle_\alpha (p,q) \leftarrowtriangle_\beta (r^2,s^2)$ and $(r,s) \rightarrowtriangle_\gamma (p',q') \leftarrowtriangle_\delta (s^2,rs)$ from which we have: $(r,s) \rightarrowtriangle_{\alpha} (\beta r^2,\beta s^2) \rightarrowtriangle_{\gamma} (\beta\delta rs^2,\beta\delta srs).$ 3. $\Sigma_e$ is multiplicatively closed as the $*$-idempotents are closed under multiplication; it is factor closed (that is, $rs \in \Sigma_e$ implies $r,s \in \Sigma_e$) provided $e_1e_2 \geq e$ implies $e_1 \geq e$ and $e_2 \geq e$, which is immediate. Finally, any representative $(r,r)$ for $e$ itself will clearly generate the multiplicative set $\Sigma_e$, so it is finitely generated. A factor closed multiplicative set, $U$, which is finitely generated by $\{ r_1,\ldots,r_n\}$ is generated by a single element, namely the product of the generators, $\prod r_i$, as each generator is a factor of this. However, it is then easy to see that $U = \Sigma_{\prod r_i}$. 4. First observe that forcing $e$ to be a unit forces each $e'\geq e$ to be a unit.
But forcing $e'=(r,r)$ to have $(r,r) \sim (1,1)$ forces $r$ to become a unit in ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e$ as $(r,1)(1,r) = (r,r) = (1,1)$. Thus, the evident map $R \to {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e$ certainly inverts every element in $\Sigma_e$. However, the rig elements of ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e$ must, by an argument similar to [*(i)*]{} above, have their denominators invertible, so that they are of the form $(r,s)$ where $s \in \Sigma_e$. But these elements give the rig of fractions with respect to $\Sigma_e$ as discussed above. 5. If $l: R \to R[\Sigma_e^{-1}]$ is the universal map then we have ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(l)): {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R)) \to {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[\Sigma_e^{-1}]))$ where this sends $e=(r,r)$ to the identity as $r$ becomes a unit. So this map certainly factors as ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(l)) = \ell_e h$ where $h: {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e \to {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[\Sigma_e^{-1}]))$. However, there is also a map (at the level of the underlying sets) in the reverse direction given by $(pu_1^{-1},qu_2^{-1}) \mapsto (pu_1^{-1}r^n,qu_2^{-1}r^n)$ where $e=(r,r)$ and a high enough power $n$ is chosen so that $u_1^{-1}$ and $u_2^{-1}$ can be eliminated. This is certainly a section of $h$ as a set map, which is enough to show that $h$ is bijective and so an isomorphism.
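Point (iii) of the proposition, and the remark closing its proof, say that a finitely generated factor closed multiplicative set is generated by the single element $\prod r_i$. Over $\mathbb{Z}$ this is easy to test numerically; the following sketch is our own illustration (the function `in_closure` and its ad hoc search bound `max_pow` are not from the paper):

```python
# Sketch (ours): membership in the factor closed multiplicative set
# generated by `gens` in Z. An element u lies in the closure iff it
# divides some product of powers of the generators, i.e. iff it divides
# (prod gens)**k for some k; `max_pow` is an ad hoc search bound.
from math import prod

def in_closure(u, gens, max_pow=10):
    g = prod(gens)
    return any((g ** k) % u == 0 for k in range(1, max_pow + 1))

# The set generated by {4, 3} coincides with the one generated by the
# single product 12, as claimed in point (iii):
assert all(in_closure(u, [4, 3]) == in_closure(u, [12])
           for u in range(1, 50))

# Factor closedness: 8 and 9 divide 144 = 12**2, so both lie in <12>;
# 5 divides no power of 12.
assert in_closure(8, [12]) and in_closure(9, [12])
assert not in_closure(5, [12])
```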
We can now complete the proof of Theorem \[rationals-on-rigs\]: [[Proof:]{}]{}As $W$ and ${\ensuremath{{\mathfrak f}{\mathfrak r}}}$ are both left adjoints they preserve colimits and thus there is a functor $\text{\bf RAT}_{\sf rig} \to \text{\bf RAT}$ which carries an object $R$ to ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))$ and a map $R \to^l R[\Sigma_e^{-1}] \from^h S$ to ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R)) \to^{\ell_e} {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e \cong {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[\Sigma_e^{-1}])) \from^h {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(S))$. The preservation of colimits ensures composition is preserved. It remains to show that this functor is full. For this we have to show that each cospan $${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R)) \to^{\ell_e} {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e \from^h {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(S))$$ arises bijectively from a cospan in $\text{\bf RAT}_{\sf rig}$. For this we note the correspondences: $$\infer={{\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R)) \to^{\ell_e} {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e ~~~~~~ \text{\bf fwCRig} }{\infer={ W(R) \to^{(\ell_e)^\flat} {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e ~~~~~~ \text{\bf wCRig} }{R \to^{l_{\Sigma_e}} R[\Sigma_e^{-1}] = {\sf rig}({\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e) ~~~~~~ \text{\bf CRig}}}$$ and also $$\infer={{\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(S)) \to^h {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e ~~~~~~ \text{\bf fwCRig} }{\infer={W(S) \to^{\eta h} ({\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e) ~~~~~~ \text{\bf wCRig} }{\infer={S \to^{(\eta h)^\flat} {\sf rig}({\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/e) ~~~~~~ \text{\bf CRig} }{S \to^{(\eta h)^\flat} R[\Sigma_e^{-1}] ~~~~~~ \text{\bf CRig}}}}$$ so that there is a bijective correspondence between the cospans of $\text{\bf RAT}$ from ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))$ to ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(S))$ and
the cospans in $\text{\bf RAT}_{\sf rig}$. We indicated that we had restricted $\text{\bf RAT}$ to rigs by writing $\text{\bf RAT}_{\sf rig}$. Commutative rings sit inside rigs and, fortuitously, when one localizes a ring $R$ in the category of rigs one obtains a ring. Thus, in specializing this result further to $\text{\bf RAT}_{\sf ring}$ there is nothing further to do! $\text{\bf RAT}_{\sf ring}$ is the full subcategory of $\text{\bf RAT}$ determined by the objects ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))$ where $R \in \text{\bf CRing}$. Rational polynomials -------------------- Recall that for any commutative rig $R$, there is an adjunction between [**Sets**]{} and $R/\text{\bf CRig}$. The left adjoint takes a set $B$ to the free commutative $R$-algebra on $B$, giving the correspondence $$\infer={R[x_1,\ldots,x_n] \to_{s^\sharp} S ~~~~ R/\text{\bf CRig} }{\{x_1,\ldots,x_n\} \to^s U(S) ~~~~ \text{\bf Sets}}$$ This correspondence gives the morphism, $s^\sharp$, which is obtained by substituting $s_i \in S$ for $x_i$, and which we may present as: $$s^\sharp: R[x_1,\ldots,x_n] \to S; \sum r_i x_1^{\alpha_{1,i}} \cdots x_n^{\alpha_{n,i}} \mapsto [s_i/x_i](\sum r_i x_1^{\alpha_{1,i}} \ldots x_n^{\alpha_{n,i}}) = \sum r_i s_1^{\alpha_{1,i}} \ldots s_n^{\alpha_{n,i}}$$ (Note that here we identify $r_i$, as is conventional, with its image in an $R$-algebra: strictly speaking we should always write $u(r_i)$ as an $R$-algebra is a map $u: R \to S$.) The category of finitely generated free commutative $R$-algebras opposite is just the Lawvere theory for $R$-algebras: one may think of it as the category of polynomials over $R$. It may be presented concretely as follows: its objects are natural numbers and a map from $n$ to $m$ is an $m$-tuple of polynomials $(p_1,\ldots,p_m)$ where each $p_i \in R[x_1,\ldots,x_n]$. Clearly the object $n$ is the $n$-fold product of the object $1$ (e.g.
the projections $2 \to 1$ are $(x_1)$ and $(x_2)$ and there is only one map $()$ to $0$ making it the final object). Composition is then given by substituting these tuples: $$n \to^{(p_1,\ldots, p_m)} m \to^{(q_1,\ldots,q_k)} k = n \to^{([p_1/x_1,\ldots,p_m/x_m]q_1,\ldots,[p_1/x_1,\ldots,p_m/x_m]q_k)} k.$$ The aim of this section is to derive a similar concrete description of the category of rational polynomials over a rig (or ring) $R$, which we shall call [Rat$_R$]{}. This category will again have natural numbers as objects and its maps will involve fractions of the polynomial rigs. However, before we derive this concrete description, we shall provide an abstract description of this category using our understanding of rational functions developed above. The category of rational polynomials over a commutative rig $R$ may be described in terms of the partial map category obtained from using localizations in $R/\text{\bf CRig}$. Recall that objects in this coslice category are maps $u: R \to S$, and maps are triangles: $$\xymatrix@=15pt{ & R \ar[dl]_{u_1} \ar[dr]^{u_2} \\ S_1 \ar[rr]_f & & S_2}$$ A (finitely generated) localization is just a map whose bottom arrow is a localization in $\text{\bf CRig}$. This allows us to form the category of cospans whose left leg is a localization: the composition is given as before by pushing out, where the pushouts are computed as in $\text{\bf CRig}$. We may call this category $\text{\bf RAT}_{R/{\sf rig}}$ and, as above, we shall now argue that it is a full subcategory of a larger category of rational functions which we shall call $\text{\bf RAT}_{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))}$. This latter category is formed by taking the cospan category of localizations in the coslice category ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/\text{\bf fwCRig}$.
Thus a typical map in this category has the form: $$\xymatrix{ & {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R)) \ar@/_/[ddl]_{u_1} \ar[d]|{u'} \ar@/^/[ddrr]^{u_2} \\ & S_1/e \\ S_1 \ar[ur]_{\ell_e} & & & S_2 \ar[llu]^{h} }$$ It is now a straightforward observation that: \[Rat\_R\] $\text{\bf RAT}_{R/{\sf rig}}$ is the full subcategory of $\text{\bf RAT}_{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))}$ determined by the objects ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(u)): {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R)) \to {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(S))$ for $u \in R/\text{\bf CRig}$. The category of rational polynomials over $R$, [Rat$_R$]{}, may then be described as the full subcategory of $\text{\bf RAT}_{R/{\sf rig}}$ determined by the objects under $R$ given by the canonical (rig) embeddings $u_n: R \to R[x_1,\ldots, x_n]$ for each $n \in {\ensuremath{\mathbb N}\xspace}$. Thus, the objects correspond to natural numbers. In $\text{\bf RAT}_{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))}$, this is the full subcategory determined by the objects ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(u_n))$ and the maps are the opposite of the corational functions which fix ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))$.
Unwinding this has the maps as cospans: $$\xymatrix{ & {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R)) \ar@/_/[ddl]_{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(u_1))} \ar[d]|{u'} \ar@/^/[ddrr]^{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(u_2))} \\ & {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n]))/e \\ {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n])) \ar[ur]_{\ell_e} & & & {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_m])) \ar[llu]^{h} }$$ where we have: $$\infer={\{ x_1,\ldots,x_m \} \to^{{\sf sub}((\eta h)^\flat)} U({\sf rig}({\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n]))/e)) ~~~~~~ \text{\bf Set} }{\infer={R[x_1,\ldots,x_m] \to^{(\eta h)^\flat} {\sf rig}({\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n]))/e) ~~~~~~ R/\text{\bf CRig} }{\infer={W(R[x_1,\ldots,x_m]) \to^{\eta h} {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n]))/e ~~~~~~ W(R)/\text{\bf wCRig} }{{\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_m])) \to^h {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n]))/e ~~~~~~ {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R))/\text{\bf fwCRig}}}}$$ Thus, such a map devolves into selecting, for each variable $x_i$, an element from the underlying set of ${\sf rig}({\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n]))/e)$. To select such elements amounts to selecting $m$ fractions from ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n]))$ whose denominator is in the multiplicative set $\Sigma_e$. Now $\Sigma_e$ is a finitely generated multiplicative set, so it can be written as $\Sigma_e := {\langle}p_1,\ldots,p_k {\rangle}$, where the $p_i \in R[x_1,\ldots,x_n]$ are the generators. This allows us to concretely define a category of rational polynomials over a commutative rig $R$.
The objects are natural numbers: the maps $n \to m$ are $m$-tuples of rational polynomials in $n$ variables accompanied by a finite set of polynomials called the [**restriction set**]{} such that each denominator is in the factor closed multiplicative set generated by the restriction set. For brevity we will write $x_1,\ldots,x_n$ as $\overrightarrow{x}^n$. Let $R$ be a commutative rig. Define [Rat$_R$]{} to be the following category: Objects: : $n \in \mathbb{N}$ Arrows: : $n \rightarrow m$ given by a pair ${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({f}_i,{g}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}}$ where - $(f_i,g_i) \in {\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n]))$ for each $i$; - ${\mathcal U} = {\langle}p_1,\ldots,p_k{\rangle}\subseteq R[x_1,\ldots,x_n]$ is a finitely generated factor closed and multiplicatively closed set of polynomials; - Each $(f_i,g_i)$ is subject to fractional equality, every denominator $g_i$ is in ${\mathcal U}$, and any $u \in {\cal U}$ can be completely eliminated from the fraction (as these are inverted). Identity: : ${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({x_i},{1}\right)_{i=1}^{n},{\mathcal {{\langle}{\rangle}}}\right)}} : n \longrightarrow n$ Composition: : Given ${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({f}_i,{g}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}} : n \longrightarrow m$ and ${\ensuremath{\left(\overrightarrow{x}^{m} \mapsto \left({f'}_i,{g'}_i\right)_{i=1}^{k},{\mathcal {U'}}\right)}} : m \longrightarrow k$, the composite is ${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({a}_j,{b}_j\right)_{j=1}^{k},{\mathcal {U''}}\right)}} : n \longrightarrow k$, given by substitution, where - $(a_j,b_j) = \left[(f_i,g_i)/x_i\right](f_j',g_j')$, - $(\alpha_j,\alpha_j) = \left[(f_i,g_i)/x_i\right] (u_j',u_j')$ where ${\langle}u_1',\ldots,u_w'{\rangle}= {\mathcal U}'$, - and ${\mathcal U}'' = \left{\langle}u_1,\ldots,u_l,\alpha_1,\ldots,\alpha_w\right{\rangle}$, where ${\langle}u_1,\ldots,u_l{\rangle}= {\mathcal U}$.
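To see what this composition computes pointwise, the following sketch (our own illustration, not a construction from the paper) interprets a map of [Rat$_R$]{} over $\mathbb{Q}$-points as a partial function: it is defined at a point exactly when every member of the restriction set is nonzero there, and composition is Kleisli-style.

```python
from fractions import Fraction

def rat_map(components, restriction):
    """Model a map of Rat_R as a partial function on tuples of Fractions:
    defined at x iff every restriction-set polynomial is nonzero at x,
    in which case it returns the tuple of component values."""
    def apply(x):
        if any(u(*x) == 0 for u in restriction):
            return None                  # outside the domain of definition
        return tuple(c(*x) for c in components)
    return apply

def compose(f, g):
    """Kleisli-style composition: defined where f is defined and g is
    defined at the image point."""
    def apply(x):
        y = f(x)
        return None if y is None else g(y)
    return apply

# f : 1 -> 1 is x |-> 1/x with restriction set <x>;
# g : 1 -> 1 is y |-> y/(1-y) with restriction set <1-y>.
f = rat_map([lambda x: 1 / x], [lambda x: x])
g = rat_map([lambda y: y / (1 - y)], [lambda y: 1 - y])
h = compose(f, g)   # x |-> 1/(x-1), undefined at x = 0 and x = 1
```

After substituting and clearing the fraction, the composite is $x \mapsto \frac{1}{x-1}$ with restriction set $\langle x, x-1\rangle$, playing the role of the ${\mathcal U}''$ in the definition: the partial function `h` is undefined exactly where one of those polynomials vanishes.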
Perhaps the one part of this concrete definition of [Rat$_R$]{} which requires some explanation is the manner in which ${\mathcal U}''$ is obtained. To understand what is happening, recall that the restriction is determined by a ${*}$-idempotent which for ${\mathcal U}'$ is $e' = (u_1' \ldots u_w',u_1' \ldots u_w')$. To obtain the new ${*}$-idempotent we must multiply the ${*}$-idempotent, $e$, obtained from ${\mathcal U}$, with the result of mapping (i.e. substituting) $e'$. Here is an example of a composition in [Rat$_\mathbb{Z}$]{}. Take the maps $$\left(x_1,x_2 \mapsto \left( \frac{5x_1x_2}{x_1},\frac{x_1x_2^2}{x_1+x_2},\frac{(x_1+x_2)^2}{3x_2}\right),{\langle}x_1,x_1+x_2,x_2{\rangle}\right) : 2 \to 3,$$ and $$\left(x_1,x_2,x_3 \mapsto \left(\frac{7(x_1+x_3)}{x_1x_2},\frac{x_1}{1}\right),{\langle}4+x_3+x_1,x_1,x_2{\rangle}\right) : 3 \to 2.$$ The composite of the above maps – without cleaning up any factors – is: $$\left(x_1,x_2 \mapsto\!\! \left(\frac{(105x_1x_2^2+7x_1(x_1+x_2)^2)(x_1(x_1+x_2))^2}{15x_1^4x_2^4(x_1+x_2)},\frac{5x_1x_2}{x_1}\right),{\langle}\begin{smallmatrix}x_1,x_1+x_2,x_2,\\ 5x_1^3x_2,x_1x_2^2(x_1+x_2)^2, \\ \left(\begin{smallmatrix} 15x_1x_2^2+12x_1x_2 \\ +x_1(x_1+x_2)^2\end{smallmatrix} \right) (3x_2x_1)^2 \end{smallmatrix}{\rangle}\right)\!\!: 2 \to 2.$$ Recall that here we have used the Kleisli composition of the fractional monad. This can be cleaned up somewhat by using properties of fractional rigs: $$\left(x_1,x_2 \mapsto\!\! \left(\frac{(105x_2^2+7x_1(x_1+x_2)^2)(x_1+x_2)^2}{15x_1^2x_2^4(x_1+x_2)},\frac{5x_1x_2}{x_1}\right),{\langle}\begin{smallmatrix}x_1,x_1+x_2,x_2,\\ 5, 3, \\ \left(\begin{smallmatrix} 15x_2^2+12x_2 \\ +(x_1+x_2)^2\end{smallmatrix} \right) \end{smallmatrix}{\rangle}\right)\!\!: 2 \to 2.$$ Finally, we can actually eliminate factors which are in the multiplicative set: $$\left(x_1,x_2 \mapsto\!\! 
\left(\frac{(105x_2^2+7x_1(x_1+x_2)^2)(x_1+x_2)}{15x_1^2x_2^4},\frac{5x_2}{1}\right),{\langle}\begin{smallmatrix}x_1,x_1+x_2,x_2,\\ 5, 3, \\ \left(\begin{smallmatrix} 15x_2^2+12x_2 \\ +(x_1+x_2)^2\end{smallmatrix} \right) \end{smallmatrix}{\rangle}\right)\!\!: 2 \to 2.$$ The hard work we have done with fractional rigs (in particular, Proposition \[Rat\_R\]) can now be reaped to give: For each commutative rig $R$, [Rat$_R$]{} is a cartesian restriction category. The restrictions are, in this presentation, given by the multiplicative sets. A final remark which will be useful in the next section. If $R$ is a ring, then the rig of polynomials $R[x_1,\ldots,x_n]$ is also a ring. Thus, as before, there is nothing extra to be done to define [Rat$_R$]{} for a commutative ring. Differential structure on rational polynomials ---------------------------------------------- To be a differential restriction category, [Rat$_R$]{} must have cartesian left additive structure. For each commutative rig, $R$, [Rat$_R$]{} is a cartesian left additive restriction category. Each object is canonically a total commutative monoid by the map: $$( (x_i)_{i= 1,\dots,2n} \mapsto (x_i+x_{n+i})_{i=1,\dots,n}): 2n \to n$$ and this clearly satisfies the required exchange coherence (see \[thmAddCart\]). 
If ${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({p}_i,{q}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}} , {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({p'}_i,{q'}_i\right)_{i=1}^{m},{\mathcal {V}}\right)}} : n \longrightarrow m$ are arbitrary parallel maps then $$\begin{aligned} & &{\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({p}_i,{q}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}} + {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({p'}_i,{q'}_i\right)_{i=1}^{m},{\mathcal {V}}\right)}} \\ &=& {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({p_i{q'}_i+{p'}_iq_i},{q_i{q'}_i}\right)_{i=1}^{m},\left\langle {{\mathcal U}\cup{\mathcal V}}\right\rangle\right)}}\end{aligned}$$ so we are using the addition defined in ${\ensuremath{{\mathfrak f}{\mathfrak r}}}(W(R[x_1,\ldots,x_n]))$. It remains to define the differential structure of [Rat$_R$]{}. We will use formal partial derivatives to define this structure. Formal partial derivatives are used in many places: in Galois theory the formal derivative is used to determine if a polynomial has repeated roots [@galoisstewart], and in algebraic geometry the rank of the formal Jacobian matrix is used to determine if a local ring is regular [@alggeomeis]. Finally, it is also important to note that here we must assume we start with a commutative ring, rather than a rig: negatives are required to define the formal derivative of a rational function. If $R$ is a commutative ring, then [Rat$_R$]{} is a differential restriction category. Given a ring, $R$, there is a formal partial derivative for elements of $R[x_1,\ldots,x_n]$. Let $f = \displaystyle\sum_l a_l x_1^{l_1} \cdots x_n^{l_n}$ be a polynomial.
Then the formal partial derivative of $f$ with respect to the variable $x_k$ is $$\frac{\partial f}{\partial x_k} = \displaystyle\sum_l l_ka_l x_1^{l_1} \cdots x_{k-1}^{l_{k-1}} x_k^{l_k-1} x_{k+1}^{l_{k+1}} \cdots x_n^{l_n}.$$ Extend the above definition to rational functions $g = \frac{p}{q}$ by $$\frac{\partial g}{\partial x_k} = \frac{\frac{\partial p}{\partial x_k} q - p \frac{\partial q}{\partial x_k}}{q^2}.$$ From the above formula, one can see that the unit must have an additive inverse and, thus, every element must have an additive inverse. This means we need a ring to define the differential structure on rational functions. Now, if we have $f = \left( f_1,\ldots,f_m\right) =\left( \frac{p_1}{q_1},\ldots,\frac{p_m}{q_m} \right)$, an $m$-tuple of rational functions in $n$ variables over $R$, then we can define the formal Jacobian at a point of $R^n$ as the $m \times n$ matrix $$J_f (y_1,\ldots,y_n) = \left[ \begin{matrix} \frac{\partial f_1}{\partial x_1} (y_1,\ldots,y_n) & \ldots & \frac{\partial f_1}{\partial x_n} (y_1,\ldots,y_n)\\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} (y_1,\ldots,y_n)& \ldots & \frac{\partial f_m}{\partial x_n} (y_1,\ldots,y_n) \end{matrix} \right]$$ Finally, consider [Rat$_R$]{} where $R$ is a commutative ring. Then, for a map $f = {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({p}_i,{q}_i\right)_{i=1}^{m},{\mathcal {V}}\right)}}$, define the differential structure to be $$D[f] := \left(\overrightarrow{x}^{2n} \mapsto \left(J_{f}\left(\overrightarrow{x}_{n+1}^{2n}\right)\right)\cdot \overrightarrow{x}_{1}^{n},\; [x_{n+i}/x_i]{\mathcal V} \right).$$ For example, consider [Rat$_{\mathbb{Z}}$]{} and the map $\left(x_1,x_2 \mapsto \left(\frac{1}{x_1},\frac{x_1^2}{1+x_2}\right),{\langle}x_1,1+x_2{\rangle}\right)$. Then the differential of this map is $$\left(x_1,x_2,x_3,x_4 \mapsto \left(\frac{-x_1}{x^2_3},\frac{2x_3x_1(x_4+1)-x_3^2x_2}{(x_4+1)^2}\right),{\langle}x_3,1+x_4{\rangle}\right)$$ In [@cartDiff], the category of smooth functions between finite dimensional $\mathbb{R}$ vector spaces is established as an example of a cartesian differential category using the Jacobian as the differential structure.
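The Jacobian form of the differential can be checked against the closed form displayed above by evaluating at rational points. The following sketch is our own numeric check of the worked example $f = (1/x_1,\, x_1^2/(1+x_2))$, with the partials computed by hand via the quotient rule:

```python
from fractions import Fraction as F

# Quotient rule applied to f = (1/x1, x1^2/(1+x2)):
#   df1/dx1 = -1/x1^2        df1/dx2 = 0
#   df2/dx1 = 2*x1/(1+x2)    df2/dx2 = -x1^2/(1+x2)^2
def D_f(x1, x2, x3, x4):
    """J_f evaluated at (x3,x4), applied linearly to (x1,x2)."""
    x1, x2, x3, x4 = map(F, (x1, x2, x3, x4))
    J = [[-1 / x3**2,        F(0)],
         [2 * x3 / (1 + x4), -x3**2 / (1 + x4)**2]]
    return (J[0][0]*x1 + J[0][1]*x2, J[1][0]*x1 + J[1][1]*x2)

def D_f_text(x1, x2, x3, x4):
    """The closed form displayed in the example above."""
    x1, x2, x3, x4 = map(F, (x1, x2, x3, x4))
    return (-x1 / x3**2,
            (2*x3*x1*(x4 + 1) - x3**2 * x2) / (x4 + 1)**2)

# Both are defined on the restriction set <x3, 1+x4> and agree there.
assert D_f(2, 3, 5, 7) == D_f_text(2, 3, 5, 7) == (F(-2, 25), F(85, 64))
```

Note how the restriction set of $D[f]$ involves only the shifted variables $x_3$, $x_4$: the differential is linear (total) in its first $n$ arguments, as axiom [**\[DR.2\]**]{} below requires.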
The proof for showing that [Rat$_R$]{} is a differential restriction category is much the same, so we will highlight the places where the axioms have changed and new axioms have been added. [**\[DR.2\]**]{} Consider the second part of [**\[DR.2\]**]{}, ${\langle}0,g{\rangle}D[f] = {\ensuremath{\overline{gf}\,}}0$: it has been modified by the addition of the restriction constraint. Let $f = (\overrightarrow{x}^n \mapsto (f_i,f'_i)_m,{\mathcal V})$ and $g = (\overrightarrow{x}^k \mapsto (g_i,g'_i)_n,{\mathcal U})$ then it is clear that we must show $$\begin{aligned} \left[0 /x_i,(g_i,g_i')/x_{n+i} \right] {\mathcal V}' &=& \left[ (g_i,g_i') /x_i \right] {\mathcal V}\end{aligned}$$ where ${\mathcal V}' = [x_{n+i}/x_i] {\mathcal V}$ so that ${\mathcal V}'$ is just ${\mathcal V}$ with variable indices shifted by $n$. Thus, these substitutions are clearly equal. [**\[DR.6\]**]{} Consider the maps $$g = {\ensuremath{\left(\overrightarrow{x}^{k} \mapsto \left({g}_i,{g'}_i\right)_{i=1}^{n},{\mathcal {U}}\right)}}, h = {\ensuremath{\left(\overrightarrow{x}^{k} \mapsto \left({h}_i,{h'}_i\right)_{i=1}^{n},{\mathcal {T}}\right)}}, \mbox{ and } k = {\ensuremath{\left(\overrightarrow{x}^{k} \mapsto \left({k}_i,{k'}_i\right)_{i=1}^{n},{\mathcal {W}}\right)}}.$$ The restriction set for $D[f]$ is ${\mathcal V}' = [x_{n+i}/x_i]{\mathcal V}$, and the restriction set for $D[D[f]]$ is ${\mathcal V}'' = [x_{2n+j}/x_j] {\mathcal V'} = [x_{3n+i}/x_i]{\mathcal V}$.
We must prove ${\langle}{\langle}g,0{\rangle},{\langle}h,k{\rangle}{\rangle}D[D[f]] = {\ensuremath{\overline{h}}}{\langle}g,k{\rangle}D[f]$ which translates to: $$\begin{aligned} \lefteqn{\hspace{-65pt} \left(\overrightarrow{x}^k \mapsto \left( (g_1,g_1'),\ldots,(g_n,g_n'),0, \ldots ,0,(h_1,h_1'),\ldots,(h_n,h_n'),(k_1,k_1'),\ldots,(k_n,k_n') \right), \left{\langle}{\mathcal U} \cup {\mathcal W} \cup {\mathcal T}\right{\rangle}\right)} \\ D[D[f]] & = & {\ensuremath{\overline{{\ensuremath{\left(\overrightarrow{x}^{k} \mapsto \left({h}_i,{h'}_i\right)_{i=1}^{n},{\mathcal {T}}\right)}}}\,}} \\ & & ~~~~~~ \left(\overrightarrow{x}^k \mapsto \left( (g_1,g_1') , \ldots , (g_n,g_n'),(k_1,k_1'),\ldots, (k_n,k_n') \right), \left{\langle}{\mathcal U} \cup {\mathcal W}\right{\rangle}\right) D[f].\end{aligned}$$ The rational functions of the maps are easily seen to be the same. It remains to prove that the restriction sets are the same, that is: $$\begin{aligned} &\left{\langle}\left(U \cup W \cup T \right) \cup \left[(g_i,g_i')/x_i , 0_i/x_{n+i} , (h_i,h_i')/x_{2n+i},(k_i,k_i')/x_{3n+i}\right] {\mathcal V}''\right{\rangle}\\ &= \left{\langle}U \cup \left( W \cup T \cup \left[(g_i,g_i')/x_i , (k_i,k_i')/x_{n+i}\right]{\mathcal V}'\right)\right{\rangle}\end{aligned}$$ This amounts to showing: $$\begin{aligned} \left[(g_i,g_i')/x_i , 0_i/x_{n+i} , (h_i,h_i')/x_{2n+i},(k_i,k_i')/x_{3n+i}\right] {\mathcal V}'' &=& \left[(g_i,g_i')/x_i , (k_i,k_i')/x_{n+i}\right]{\mathcal V}'.\end{aligned}$$ which is immediate from the variable shifts which are involved. 
[**\[DR.8\]**]{} Let $f = \left(\overrightarrow{x}^n \mapsto (f_i)_{i=1}^m,{\mathcal V}\right) : n \to m$; then $$\begin{aligned} (1 \times {\ensuremath{\overline{f}\,}}) \pi_0 &=& \left(\overrightarrow{x}^{2n} \mapsto (x_i)_{i=1}^{2n} , [x_{n+i}/x_i]{\mathcal V} \right) \pi_0 \\ &=& \left(\overrightarrow{x}^{2n} \mapsto (x_i)_{i=1}^{n} , [x_{n+i}/x_i]{\mathcal V} \right) \\ &=& \left(\overrightarrow{x}^{2n} \mapsto I_{n\times n} \vec{x} , [x_{n+i}/x_i]{\mathcal V} \right) \\ &=& \left(\overrightarrow{x}^{2n} \mapsto \left(J_{(x_i)_i}(\overrightarrow{x}_{n+1}^{2n})\right)\cdot \overrightarrow{x}_{1}^{n},[x_{n+i}/x_i]{\mathcal V} \right) \\ &=& D[{\ensuremath{\overline{f}\,}}].\end{aligned}$$ [**\[DR.9\]**]{} Considering $f = (\overrightarrow{x}^n \mapsto (f_i,f'_i)_{i=1}^m,{\mathcal V})$, we have $$\begin{aligned} 1 \times {\ensuremath{\overline{f}\,}} &=& {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({x_i},{1}\right)_{i=1}^{n},{\mathcal {\{\}}}\right)}} \times {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({x_i},{1}\right)_{i=1}^{n},{\mathcal {V}}\right)}} \\ &=& {\ensuremath{\left(\overrightarrow{x}^{2n} \mapsto \left({x_i},{1}\right)_{i=1}^{2n},\left\langle {\left[x_{n+i}/x_i\right]{\mathcal V}}\right\rangle\right)}} \\ &=& {\ensuremath{\overline{D[f]}\,}}.\end{aligned}$$ Further properties of [Rat$_R$]{} --------------------------------- In this section we will describe three aspects of [Rat$_R$]{}. First, we will prove that [Rat$_R$]{} has nowhere defined maps for each $R$. Next, after briefly introducing the definition of $0$-unitariness for restriction categories, we will show that if $R$ is an integral domain, then [Rat$_R$]{} is a $0$-unitary restriction category. Finally, we will show that [Rat$_R$]{} does not in general have joins.
Recall from section \[subsecJoins\] that a restriction category ${\ensuremath{\mathbb X}\xspace}$ has [**nowhere defined maps**]{}, if for each ${\ensuremath{\mathbb X}\xspace}(A,B)$ there is a map $\emptyset_{AB}$ which is a bottom element for ${\ensuremath{\mathbb X}\xspace}(A,B)$, and these are preserved by precomposition. We will show that [Rat$_R$]{} always has nowhere defined maps. Intuitively, a nowhere defined rational function should be one whose restriction set ${\mathcal U}$ is the entire rig $R[x_1, \ldots, x_n]$. This can be achieved with a finitely generated set by simply considering the set generated by $0$, since any such polynomial is in the factor closure of $0$. \[rathaszeros\] For any commutative rig $R$, [Rat$_R$]{} has nowhere defined maps given by $${\ensuremath{{\ensuremath{\left(\overrightarrow{x}^{{n}} \mapsto \left({1},{1}\right)_{i=1}^{{m}},\left\langle {0}\right\rangle\right)}}}}.$$ First, note $${\ensuremath{\overline{{\ensuremath{{\ensuremath{\left(\overrightarrow{x}^{{n}} \mapsto \left({1},{1}\right)_{i=1}^{{m}},\left\langle {0}\right\rangle\right)}}}}}\,}} = {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({x_i},{1}\right)_{i=1}^{n},\left\langle {0}\right\rangle\right)}} = {\ensuremath{{\ensuremath{\left(\overrightarrow{x}^{{n}} \mapsto \left({1},{1}\right)_{i=1}^{{n}},\left\langle {0}\right\rangle\right)}}}}$$ since $0x_i=0$. Next, note that $R[x_1,\ldots,x_n] = \left{\langle}0\right{\rangle}= \left{\langle}\left{\langle}0\right{\rangle}\cup {\mathcal U}\right{\rangle}$. 
Let $(a_i,b_i) = \left[(1,1)/x_i\right](p_i,q_i)$; clearly for each $i$, $$0 = 0a_i = 0b_i.$$ Thus, the following equalities are clear: $$\begin{aligned} &{\ensuremath{{\ensuremath{\left(\overrightarrow{x}^{{n}} \mapsto \left({1},{1}\right)_{i=1}^{{n}},\left\langle {0}\right\rangle\right)}}}}{\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({p}_i,{q}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}} \\ &= {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({a}_i,{b}_i\right)_{i=1}^{m},\left\langle {\left{\langle}0\right{\rangle}\cup {\mathcal U}}\right\rangle\right)}}\\ &= {\ensuremath{{\ensuremath{\left(\overrightarrow{x}^{{n}} \mapsto \left({1},{1}\right)_{i=1}^{{m}},\left\langle {0}\right\rangle\right)}}}},\end{aligned}$$ so that this map is the bottom element. Now consider $$\begin{aligned} & {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({p}_i,{q}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}}{\ensuremath{{\ensuremath{\left(\overrightarrow{x}^{{m}} \mapsto \left({1},{1}\right)_{i=1}^{{k}},\left\langle {0}\right\rangle\right)}}}} \\ &= {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({1},{1}\right)_{i=1}^{k},\left\langle {{\mathcal U}\cup \left{\langle}0\right{\rangle}}\right\rangle\right)}}\\ &= {\ensuremath{{\ensuremath{\left(\overrightarrow{x}^{{n}} \mapsto \left({1},{1}\right)_{i=1}^{{k}},\left\langle {0}\right\rangle\right)}}}},\end{aligned}$$ so that these maps are preserved by precomposition, which completes the proof that [Rat$_R$]{} has nowhere defined maps. Now, if $R$ is an integral domain, we would expect that whenever two rational functions agree on some common restriction idempotent, then they should be equal wherever they are both defined. To make this idea explicit, we will introduce the concept of $0$-unitary for restriction categories[^6]. Let ${\ensuremath{\mathbb X}\xspace}$ be a restriction category with nowhere defined maps. 
To define $0$-unitariness, we first define a relation $\leq_0$ on parallel arrows, called the [**$0$-density relation**]{}, as follows: $$f \leq_0 g \mbox{ if } f \leq g \text{ and } hf = \emptyset \text{ implies } hg = \emptyset.$$ ${\ensuremath{\mathbb X}\xspace}$ is a [**$0$-unitary**]{} restriction category when for any $f,g,h$: $$f \geq_0 h \leq_0 g \text{ implies } f \smile g .$$ \[zerounitaryzeros\] Let ${\ensuremath{\mathbb X}\xspace}$ be a restriction category with nowhere defined maps, and assume $h \leq_0 f$. If $f$ or $h$ equals $\emptyset$, then both $f$ and $h$ equal $\emptyset$. Since $h \leq_0 f$, we have $h={\ensuremath{\overline{h}\,}}f$, and whenever $kh = \emptyset$, $kf=\emptyset$. First assume that $f = \emptyset$. Then $h=\emptyset$ since $$h = {\ensuremath{\overline{h}\,}}f = {\ensuremath{\overline{h}\,}} \emptyset = \emptyset.$$ Next, assume that $h = \emptyset$. Then by the definition of $\leq_0$ (taking $k=1$), $$1h = h = \emptyset \text{ implies } 1f = \emptyset,$$ which completes the proof. Now we prove that [Rat$_R$]{} is a $0$-unitary restriction category when $R$ is an integral domain. \[ratzerounitary\] Let $R$ be an integral domain. Then [Rat$_R$]{} is a $0$-unitary restriction category.
Consider the maps: $${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({f}_i,{f'}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}}, {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({g}_i,{g'}_i\right)_{i=1}^{m},{\mathcal {V}}\right)}},~\mbox{ and}~ {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({h}_i,{h'}_i\right)_{i=1}^{m},{\mathcal {W}}\right)}}$$ Assume: $$\begin{aligned} {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({h}_i,{h'}_i\right)_{i=1}^{m},{\mathcal {W}}\right)}} & \leq_0 & {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({f}_i,{f'}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}} \\ {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({h}_i,{h'}_i\right)_{i=1}^{m},{\mathcal {W}}\right)}} & \leq_0 & {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({g}_i,{g'}_i\right)_{i=1}^{m},{\mathcal {V}}\right)}}.\end{aligned}$$ Now if any of the above maps are $\emptyset$, then lemma (\[zerounitaryzeros\]) says that all three of the above equal $\emptyset$; therefore, $${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({f}_i,{f'}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}} \smile {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({g}_i,{g'}_i\right)_{i=1}^{m},{\mathcal {V}}\right)}}.$$ Thus, suppose all three are not $\emptyset$. Then $0 \not \in {\mathcal U},{\mathcal V},\text{ or }{\mathcal W}$. 
Then we have $$\begin{aligned} \lefteqn{{\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({f}_i,{f'}_i\right)_{i=1}^{m},\left\langle {{\mathcal W} \cup {\mathcal U}}\right\rangle\right)}} } \\ &= & {\ensuremath{\overline{{\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({h}_i,{h'}_i\right)_{i=1}^{m},{\mathcal {W}}\right)}}}\,}}{\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({f}_i,{f'}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}} \\ &= & {\ensuremath{\overline{{\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({h}_i,{h'}_i\right)_{i=1}^{m},{\mathcal {W}}\right)}}}\,}}{\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({g}_i,{g'}_i\right)_{i=1}^{m},{\mathcal {V}}\right)}} \mbox{ since ${\ensuremath{\overline{h}\,}}f = h = {\ensuremath{\overline{h}\,}}g$,} \\ &= & {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({g}_i,{g'}_i\right)_{i=1}^{m},\left\langle {{\mathcal W} \cup {\mathcal V}}\right\rangle\right)}} .\end{aligned}$$ Now, since $R$ is an integral domain, the product of two nonzero elements is nonzero. Thus, $0 \not \in \left{\langle}{\mathcal W} \cup {\mathcal U}\right{\rangle}$. Thus, for each $i$, there is a nonzero $W_i \in \left{\langle}{\mathcal W} \cup {\mathcal U}\right{\rangle}$ such that $W_if_ig_i' = W_if_i'g_i$. Moreover, the fact that $R$ is an integral domain also gives the cancellation property: if $a \ne 0$, then $ac=ab$ implies $c=b$. Thus, we have that $f_ig_i' = f_i'g_i$, which proves $${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({f}_i,{f'}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}} \smile {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({g}_i,{g'}_i\right)_{i=1}^{m},{\mathcal {V}}\right)}}.$$ Thus, when $R$ is an integral domain, [Rat$_R$]{} is a $0$-unitary restriction category. It may seem natural to ask if [Rat$_R$]{} has finite joins, especially if $R$ has unique factorization.
If $R$ is a unique factorization domain, it is easy to show that any two compatible maps in [Rat$_R$]{} will have the form $${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({P}_i,{Q}_i\right)_{i=1}^{m},{\mathcal {U}}\right)}} \smile {\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({P}_i,{Q}_i\right)_{i=1}^{m},{\mathcal {V}}\right)}},$$ where $\gcd(P_i,Q_i)=1$. Thus $Q_i \in {\mathcal U}$ and $Q_i \in {\mathcal V}$, so $Q_i \in \left{\langle}{\mathcal U} \cap {\mathcal V} \right{\rangle}$. Thus, from the order-theoretic nature of joins, the only candidate for the join is ${\ensuremath{\left(\overrightarrow{x}^{n} \mapsto \left({P}_i,{Q}_i\right)_{i=1}^{m},\left\langle {{\mathcal U} \cap {\mathcal V}}\right\rangle\right)}}$. However, reducing the restriction sets of compatible maps by intersection does not define a join restriction structure on [Rat$_R$]{}, as stability under composition will not always hold. For a counterexample, consider the maps $$\left(1,{\langle}x-1{\rangle}\right) \smile \left(1,{\langle}y-1{\rangle}\right).$$ By the above discussion, $\left(1,{\langle}{\langle}x-1{\rangle}\cap {\langle}y-1{\rangle}{\rangle}\right)$ must be $\left(1,{\langle}1{\rangle}\right)$. Write $f = \left(1,{\langle}x-1{\rangle}\right)$ and $g = \left(1,{\langle}y-1{\rangle}\right)$, and consider the map $s = \left((x^2,x^2),\{\}\right)$; we will show that $s(f \vee g) \not = sf \vee sg$. First, $$\left((x^2,x^2),\{\}\right) \left(1,{\langle}1{\rangle}\right) = \left(1,{\langle}1{\rangle}\right).$$ However, $$\left((x^2,x^2),\{\}\right) \left(1,{\langle}x-1{\rangle}\right) = \left(1,{\langle}x+1,x-1{\rangle}\right)$$ and $$\left((x^2,x^2),\{\}\right)\left(1,{\langle}y-1{\rangle}\right)=\left(1,{\langle}x+1,x-1{\rangle}\right).$$ The “join” of the latter two maps is $\left(1,{\langle}x+1,x-1{\rangle}\right) \not = \left(1,{\langle}1{\rangle}\right)$. Thus, in general [Rat$_R$]{} does not have joins.
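The jump from ${\langle}x-1{\rangle}$ to ${\langle}x+1,x-1{\rangle}$ comes from taking the factor closure after substitution. A quick symbolic check of just that factorization step (illustrative, using sympy rather than the paper's formal construction):

```python
import sympy as sp

# Composing with ((x^2, x^2), {}) substitutes x^2 into the generator
# x - 1; the factor closure of the result contains both x - 1 and x + 1.
x = sp.symbols('x')
generator = x - 1
substituted = generator.subs(x, x**2)          # x**2 - 1
_, factors = sp.factor_list(substituted)       # irreducible factors
names = {str(f) for f, _ in factors}
print(names == {'x - 1', 'x + 1'})             # True
```

This is exactly why the composite restriction set is ${\langle}x+1,x-1{\rangle}$ rather than ${\langle}x-1{\rangle}$.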
Join completion and differential structure {#sectionJoins} ========================================== In the final two sections of the paper, our goal is to show that when one adds joins or relative complements of partial maps, differential structure is preserved. These are important results, as they show that one can add more logical operations to the maps of a differential restriction category, while retaining the differential structure. The join completion ------------------- As we have just seen, a restriction category need not have joins, but there is a universal construction which freely adds joins to any restriction category. We show in this section that if the original restriction category has differential structure, then so does its join completion. By join completing [Rat$_R$]{}, we thus get a restriction category which has both joins and differential structure, but is very different from the differential restriction category of smooth functions defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$. The join completion we describe here was first given in this form in [@boolean], but follows ideas of Grandis from [@manifolds]. Given a restriction category ${\ensuremath{\mathbb X}\xspace}$, define ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$ to have: - objects: those of ${\ensuremath{\mathbb X}\xspace}$; - an arrow $X \to^A Y$ is a subset $A \subseteq {\ensuremath{\mathbb X}\xspace}(X,Y)$ such that $A$ is down-closed (under the restriction order), and elements are pairwise compatible; - $X \to^{1_X} X$ is given by the down-closure of the identity, ${\ensuremath{\downarrow} \! \!}1_X$; - the composite of $A$ and $B$ is $\{fg: f \in A, g \in B \}$; - restriction of $A$ is $\{{\ensuremath{\overline{f}\,}}: f \in A \}$; - the join of $(A_i)_{i \in I}$ is given by the union of the $A_i$. 
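To ground the definition, here is a small sketch in the motivating restriction category of sets and partial functions, with partial maps modeled as Python dicts (the helper names are ours). An arrow of ${\ensuremath{\mathbf{Jn}}}$ is a pairwise-compatible family, and its join is simply the union:

```python
# Partial maps as dicts.  Compatibility: agree wherever both are defined.
def compatible(f, g):
    return all(f[k] == g[k] for k in f.keys() & g.keys())

def join(maps):
    """Union of a pairwise-compatible family of partial maps."""
    assert all(compatible(f, g) for f in maps for g in maps)
    out = {}
    for f in maps:
        out.update(f)
    return out

f = {1: 'a', 2: 'b'}
g = {2: 'b', 3: 'c'}
print(join([f, g]))              # {1: 'a', 2: 'b', 3: 'c'}
print(compatible(f, {2: 'x'}))   # False: incompatible maps have no join
```

The down-closure condition on arrows of ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$ corresponds here to closing a family under restriction to smaller domains; we omit it from the sketch for brevity.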
From [@boolean], we have the following result: ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$ is a join-restriction category, and is the left adjoint to the forgetful functor from join restriction categories to restriction categories. Note that this construction destroys any existing joins. This can be dealt with: for example, if one wishes to join complete a restriction category which already has empty maps (such as [Rat$_R$]{}) and one wants to preserve these empty maps, then one can modify the above construction by insisting that each down-closed set contain the empty map. Because we will frequently be dealing with the down-closures of various sets, the following lemma will be extremely helpful. \[lemmaDC\](Down-closure lemma) Suppose ${\ensuremath{\mathbb X}\xspace}$ is a restriction category, and $A, B \subseteq {\ensuremath{\mathbb X}\xspace}(X,Y)$. Then we have: (i) ${\ensuremath{\downarrow} \! \!}A {\ensuremath{\downarrow} \! \!}B = {\ensuremath{\downarrow} \! \!}(AB) $; (ii) ${\ensuremath{\overline{{\ensuremath{\downarrow} \! \!}A}\,}} = {\ensuremath{\downarrow} \! \!}({\ensuremath{\overline{A}\,}})$; (iii) if ${\ensuremath{\mathbb X}\xspace}$ is cartesian, ${\langle}{\ensuremath{\downarrow} \! \!}A, {\ensuremath{\downarrow} \! \!}B {\rangle}= {\ensuremath{\downarrow} \! \!}{\langle}A, B {\rangle}$; (iv) if ${\ensuremath{\mathbb X}\xspace}$ is left additive, ${\ensuremath{\downarrow} \! \!}A + {\ensuremath{\downarrow} \! \!}B = {\ensuremath{\downarrow} \! \!}(A + B);$ (v) if ${\ensuremath{\mathbb X}\xspace}$ has differential structure, $D[{\ensuremath{\downarrow} \! \!}A] = {\ensuremath{\downarrow} \! \!}D[A]$. (i) If $h \in {\ensuremath{\downarrow} \! \!}(AB)$, then $\exists f \in A, g \in B$ such that $h \leq fg$. So ${\ensuremath{\overline{h}\,}}fg = h$, and ${\ensuremath{\overline{h}\,}}f \in {\ensuremath{\downarrow} \! \!}A$, $g \in {\ensuremath{\downarrow} \! \!}B$, so $h \in {\ensuremath{\downarrow} \!
\!}A {\ensuremath{\downarrow} \! \!}B$. Conversely, if $mn \in {\ensuremath{\downarrow} \! \!}A {\ensuremath{\downarrow} \! \!}B$, there exists $f, g$ such that $m \leq f \in A, n \leq g \in B$. But composition preserves order, so $mn \leq fg$, so $mn \in {\ensuremath{\downarrow} \! \!}(AB)$. (ii) Suppose $h \in {\ensuremath{\overline{{\ensuremath{\downarrow} \! \!}A}\,}}$. So there exists $f \in A$ such that $h \leq f$. Since restriction preserves order, ${\ensuremath{\overline{h}\,}} \leq {\ensuremath{\overline{f}\,}}$. But since $h \in {\ensuremath{\overline{{\ensuremath{\downarrow} \! \!}A}\,}}$, $h$ is idempotent, so we have $h \leq {\ensuremath{\overline{f}\,}}$. So $h \in {\ensuremath{\downarrow} \! \!}({\ensuremath{\overline{A}\,}})$. Conversely, suppose $h \in {\ensuremath{\downarrow} \! \!}({\ensuremath{\overline{A}\,}})$, so $h \leq {\ensuremath{\overline{f}\,}}$ for some $f \in A$. Then we have $h = {\ensuremath{\overline{h}\,}}\, {\ensuremath{\overline{f}\,}} = {\ensuremath{\overline{{\ensuremath{\overline{h}\,}} f}\,}}$, so $h$ is idempotent and $h \leq f$, so $h \in {\ensuremath{\overline{{\ensuremath{\downarrow} \! \!}A}\,}}$. (iii) Suppose $h \in {\ensuremath{\downarrow} \! \!}{\langle}A, B {\rangle}$, so $h \leq {\langle}f,g{\rangle}$ for $f \in A, g \in B$. Then $h = {\ensuremath{\overline{h}\,}}{\langle}f,g{\rangle}= {\langle}{\ensuremath{\overline{h}\,}}f,g{\rangle}$, and ${\ensuremath{\overline{h}\,}}f \in {\ensuremath{\downarrow} \! \!}A$, $g \in {\ensuremath{\downarrow} \! \!}B$, so $h \in {\langle}{\ensuremath{\downarrow} \! \!}A, {\ensuremath{\downarrow} \! \!}B {\rangle}$. Conversely, suppose $h \in {\langle}{\ensuremath{\downarrow} \! \!}A, {\ensuremath{\downarrow} \! \!}B {\rangle}$, so that $h = {\langle}m,n{\rangle}$ where $m\leq f \in A, n \leq g \in B$. Since pairing preserves order, $h = {\langle}m,n{\rangle}\leq {\langle}f,g{\rangle}$, so $h \in {\ensuremath{\downarrow} \! \!}{\langle}A, B {\rangle}$. 
(iv) Suppose $h \in {\ensuremath{\downarrow} \! \!}A + {\ensuremath{\downarrow} \! \!}B$, so $h = m + n$, where $m \leq f \in A$, $n \leq g \in B$. Since addition preserves order, $h = m + n \leq f + g$, so $h \in {\ensuremath{\downarrow} \! \!}(A + B)$. Conversely, suppose $h \in {\ensuremath{\downarrow} \! \!}(A + B)$. Then there exist $f \in A, g \in B$ so that $h \leq f + g$. Then $h = {\ensuremath{\overline{h}\,}}(f + g) = {\ensuremath{\overline{h}\,}}f + {\ensuremath{\overline{h}\,}}g$ (by left additivity), so $h \in {\ensuremath{\downarrow} \! \!}A + {\ensuremath{\downarrow} \! \!}B$. (v) Suppose $h \in D[{\ensuremath{\downarrow} \! \!}A]$. Then there exists $m \leq f \in A$ so that $h \leq D[m]$. But differentiation preserves order, so $h \leq D[m] \leq D[f]$, so $h \in {\ensuremath{\downarrow} \! \!}D[A]$. Conversely, suppose $h \in {\ensuremath{\downarrow} \! \!}D[A]$. Then there exists $f \in A$ so that $h \leq D[f]$, so $h \in D[{\ensuremath{\downarrow} \! \!}A]$. Cartesian structure ------------------- We begin by showing that cartesianness is preserved by the join completion. If ${\ensuremath{\mathbb X}\xspace}$ is a cartesian restriction category, then so is ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$. We define $1$ and $X \times Y$ as for ${\ensuremath{\mathbb X}\xspace}$, the projections to be ${\ensuremath{\downarrow} \! \!}\pi_0$ and ${\ensuremath{\downarrow} \! \!}\pi_1$, the terminal maps to be ${\ensuremath{\downarrow} \! \!}(!_A)$, and $${\langle}A, B {\rangle}:= \{{\langle}f,g{\rangle}: f \in A, g \in B \}.$$ This is compatible by Proposition \[propCart\], and down-closed since if $h \leq {\langle}f,g{\rangle}$, then $$h = {\ensuremath{\overline{h}\,}}{\langle}f,g{\rangle}= {\langle}{\ensuremath{\overline{h}\,}}f,g{\rangle}$$ so since $A$ is down-closed, this is also in ${\langle}A,B{\rangle}$. The terminal maps do indeed satisfy the required property, as $${\ensuremath{\overline{A}\,}}{\ensuremath{\downarrow} \!
\!}(!_A) = {\ensuremath{\overline{A}\,}}!_A = \{ {\ensuremath{\overline{f}\,}}!_A: f \in A \} = \{f : f \in A\} = A,$$ as required. To show that ${\langle}- , - {\rangle}$ satisfies the required property, consider $${\langle}A,B{\rangle}{\ensuremath{\downarrow} \! \!}\pi_0 = \{ {\langle}f,g{\rangle}\pi_0: f \in A, g \in B \} = \{ {\ensuremath{\overline{g}\,}}f: f \in A, g \in B \} = {\ensuremath{\overline{B}\,}}A$$ and similarly for ${\ensuremath{\downarrow} \! \!}\pi_1$. We now need to show that ${\langle}- , - {\rangle}$ is universal with respect to this property. That is, suppose there exists a compatible down-closed set of arrows $C$ with the property that $C {\ensuremath{\downarrow} \! \!}\pi_0 = {\ensuremath{\overline{B}\,}}A$ and $C {\ensuremath{\downarrow} \! \!}\pi_1 = {\ensuremath{\overline{A}\,}}B$. We need to show that $C = {\langle}A,B{\rangle}$. To show that $C \subseteq {\langle}A,B{\rangle}$, let $c \in C$. Since ${\ensuremath{\downarrow} \! \!}(C \pi_0) = C {\ensuremath{\downarrow} \! \!}\pi_0 = {\ensuremath{\overline{B}\,}}A$, there exists $f \in A, g \in B$ such that $c\pi_0 = {\ensuremath{\overline{g}\,}}f$. Then, since ${\ensuremath{\downarrow} \! \!}(C \pi_1) = C {\ensuremath{\downarrow} \! \!}\pi_1 = {\ensuremath{\overline{A}\,}}B$, there exists a $c'$ such that $c'\pi_1 = {\ensuremath{\overline{f}\,}}g$. 
Then $${\ensuremath{\overline{c'}\,}}c\pi_0 = {\ensuremath{\overline{c'}\,}}{\ensuremath{\overline{c}\,}}c\pi_0 = {\ensuremath{\overline{c'}\,}}{\ensuremath{\overline{c}\,}}\, {\ensuremath{\overline{g}\,}}f$$ and since $c \smile c'$, $${\ensuremath{\overline{c'}\,}}c\pi_1 = {\ensuremath{\overline{c'}\,}}{\ensuremath{\overline{c}\,}}c'\pi_1 = {\ensuremath{\overline{c'}\,}}{\ensuremath{\overline{c}\,}}{\ensuremath{\overline{f}\,}}g$$ Thus, by the universality of ${\ensuremath{\overline{c'}\,}}{\ensuremath{\overline{c}\,}}{\langle}f,g{\rangle}$, ${\ensuremath{\overline{c'}\,}}c = {\ensuremath{\overline{c'}\,}}{\ensuremath{\overline{c}\,}}{\langle}f,g{\rangle}$. Thus $$c \leq {\ensuremath{\overline{c'}\,}}c = {\ensuremath{\overline{c'}\,}}{\ensuremath{\overline{c}\,}}{\langle}f,g{\rangle}\leq {\langle}f,g{\rangle},$$ so since ${\langle}A,B{\rangle}$ is down-closed, $c \in {\langle}A,B{\rangle}$. To show that ${\langle}A,B{\rangle}\subseteq C$, let $f \in A, g \in B$. Then there exists $c$ such that $$c\pi_0 = {\ensuremath{\overline{g}\,}}f = {\langle}f,g{\rangle}\pi_0.$$ Similarly, there exist $f' \in A, g' \in B$ such that $$c \pi_1 = {\ensuremath{\overline{f'}\,}}g' = {\langle}f',g'{\rangle}\pi_1.$$ Now, we have $${\ensuremath{\overline{{\langle}f',g'{\rangle}}\,}}{\langle}f,g{\rangle}\pi_0 = {\ensuremath{\overline{{\langle}f',g'{\rangle}}\,}}\, {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}}{\langle}f,g{\rangle}\pi_0 = {\ensuremath{\overline{{\langle}f',g'{\rangle}}\,}}\, {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}}c\pi_0$$ and since $f \smile f'$ and $g \smile g'$, ${\langle}f,g{\rangle}\smile {\langle}f',g'{\rangle}$, so we also get $${\ensuremath{\overline{{\langle}f',g'{\rangle}}\,}}{\langle}f,g{\rangle}\pi_1 = {\ensuremath{\overline{{\langle}f',g'{\rangle}}\,}}\, {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}}{\langle}f',g'{\rangle}\pi_1 = {\ensuremath{\overline{{\langle}f',g'{\rangle}}\,}}\, {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}}c \pi_1.$$ Thus,
by the universality of ${\ensuremath{\overline{{\langle}f',g'{\rangle}}\,}}{\langle}f,g{\rangle}$, $${\langle}f,g {\rangle}\leq {\ensuremath{\overline{{\langle}f',g'{\rangle}}\,}}{\langle}f,g{\rangle}= {\ensuremath{\overline{{\langle}f',g'{\rangle}}\,}}\, {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}}c \leq c\, .$$ Since $C$ is down-closed, this shows ${\langle}f,g{\rangle}\in C$, as required. Left additive structure ----------------------- Next, we show that left additive structure is preserved. If ${\ensuremath{\mathbb X}\xspace}$ is a left additive restriction category, then so is ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$, where $$0_{{\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})} := {\ensuremath{\downarrow} \! \!}0 \mbox{ and } A + B := \{f + g: f \in A, g \in B \}.$$ By Proposition \[propLA\], $A+B$ is a compatible set. To see that $A+B$ is down-closed, suppose $h \leq f + g$. Then $h = {\ensuremath{\overline{h}\,}}(f + g) = {\ensuremath{\overline{h}\,}}f + {\ensuremath{\overline{h}\,}}g$. Since $A$ and $B$ are down-closed, ${\ensuremath{\overline{h}\,}}f \in A$, ${\ensuremath{\overline{h}\,}}g \in B$, so $h \in A + B$. That this gives a commutative monoid structure on each hom-set follows directly from Lemma \[lemmaDC\], as does ${\ensuremath{\overline{0}\,}} = {\ensuremath{\downarrow} \! \!}1$. Finally, $${\ensuremath{\overline{A+B}\,}} = {\ensuremath{\overline{\{f + g: f \in A, g\in B\}}\,}} = \{{\ensuremath{\overline{f+g}\,}}: f\in A, g\in B\} = \{{\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}: f \in A, g\in B\} = {\ensuremath{\overline{A}\,}}\, {\ensuremath{\overline{B}\,}},$$ so that ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$ is a left additive restriction category. If ${\ensuremath{\mathbb X}\xspace}$ is a cartesian left additive restriction category, then so is ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$. Immediate from Theorem \[thmAddCart\].
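In the concrete case of partial functions, the pointwise sum is defined exactly on the overlap of the two domains, which is the content of the identity ${\ensuremath{\overline{A+B}\,}} = {\ensuremath{\overline{A}\,}}\,{\ensuremath{\overline{B}\,}}$ at the level of a single pair of maps. A small illustrative sketch (dict-based partial maps; our own modeling, not the paper's formalism):

```python
# Partial maps as dicts; f + g is defined only where both f and g are.
def add(f, g):
    return {k: f[k] + g[k] for k in f.keys() & g.keys()}

f = {1: 10, 2: 20}
g = {2: 5, 3: 7}
h = add(f, g)
print(h)                          # {2: 25}
# dom(f+g) = dom(f) /\ dom(g): the analogue of  overline(f+g) = overline(f) overline(g)
print(set(h) == set(f) & set(g))  # True
```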
Differential structure ---------------------- Finally, we show that differential structure is preserved. There is one small subtlety, however. To define the pairing or addition of maps in ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$, we merely needed to add or pair pointwise, as the resulting set was automatically down-closed and pairwise compatible if the original was. However, note that $A$ being down-closed does not imply that $\{D[f]: f \in A \}$ is down-closed. Axiom [[**\[DR.[9]{}\]**]{}]{} requires that differentials be total in the first component. However, this is not always true of an arbitrary $h \leq D[f]$. Thus, to define the differential in the join completion, we must take the down-closure of $\{D[f]: f \in A \}$. If ${\ensuremath{\mathbb X}\xspace}$ is a differential restriction category, then so is ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$, where $$D[A] := {\ensuremath{\downarrow} \! \!}\{ D[f]: f \in A \}.$$ Checking the differential axioms is a straightforward application of our down-closure lemma. For example, for [[**\[DR.[1]{}\]**]{}]{}, by the down-closure lemmas, $$D[0_{{\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})}] = D[{\ensuremath{\downarrow} \! \!}0] = {\ensuremath{\downarrow} \! \!}D[0] = {\ensuremath{\downarrow} \! \!}0 = 0_{{\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})}$$ and $$D[A + B] = {\ensuremath{\downarrow} \! \!}\{D[f+g]: f \in A, g \in B \} = {\ensuremath{\downarrow} \! \!}\{D[f] + D[g]: f \in A, g \in B \} = D[A] + D[B]\, .$$ Similarly, to check [[**\[DR.[5]{}\]**]{}]{}: $$D[AB] = {\ensuremath{\downarrow} \! \!}\{D[fg]: f \in A, g \in B \} = {\ensuremath{\downarrow} \! \!}\{{\langle}D[f],\pi_1 f{\rangle}D[g]: f \in A, g \in B \} = {\langle}D[A],{\ensuremath{\downarrow} \! \!}\pi_1 A {\rangle}D[B]$$ where the last equality follows from several applications of the down-closure lemmas. All other axioms similarly follow.
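In the standard example of smooth maps, these set-level identities reduce to familiar calculus facts. As an illustrative spot check (sympy, one variable; the encoding $D[f](u,x) = f'(x)\,u$ of the differential and the sample maps are ours), the single-map instance of [**\[DR.5\]**]{} is just the chain rule:

```python
import sympy as sp

# One-variable smooth maps; D[f](u, x) = f'(x) * u, with composition
# written diagrammatically: fg means "f then g".  Sample f, g are made up.
u, x = sp.symbols('u x')

def D(expr):
    return sp.diff(expr, x) * u

f = sp.sin(x)
g = x**2 + 1
fg = g.subs(x, f)                          # f then g

lhs = D(fg)                                # D[fg]
rhs = D(g).subs({u: D(f), x: f}, simultaneous=True)   # <D[f], pi_1 f> D[g]
print(sp.simplify(lhs - rhs) == 0)         # True
```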
Finally, it is easy to see the following: The unit ${\ensuremath{\mathbb X}\xspace}\to {\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$, which sends $f$ to ${\ensuremath{\downarrow} \! \!}f$, is a differential restriction functor. The result immediately follows, given the additive, cartesian, and differential structure of ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$. Thus, by Proposition \[diffFunctors\], we have the following: If ${\ensuremath{\mathbb X}\xspace}$ is a differential restriction category, and $f$ is additive/strongly additive/linear, then so is ${\ensuremath{\downarrow} \! \!}f$ in ${\ensuremath{\mathbf{Jn}}}({\ensuremath{\mathbb X}\xspace})$. Classical completion and differential structure {#sectionClassical} =============================================== In our final section, we show that differential structure is preserved when we add relative complements to a join restriction category. This process will greatly expand the possible domains of definition for differentiable maps, even in the standard example. The standard example (smooth maps on open subsets) does not have relative complements. By adding them in, we add smooth maps defined on any set which is the complement of an open subset inside some other open subset. Of course, this includes closed sets, and so by applying this construction, we have a category of smooth maps defined on all open, closed, and half-open-half-closed sets. This includes smooth functions defined on points; as we shall see below, this captures the notion of the germ of a smooth function. The classical completion ------------------------ The notion of classical restriction category was defined in [@boolean] as an intermediary between arbitrary restriction categories and the Boolean restriction categories of [@booleanManes]. A restriction category ${\ensuremath{\mathbb X}\xspace}$ with restriction zeroes is a **classical restriction category** if 1.
the homsets are locally Boolean posets (under the restriction order), and for any $W \to^f X, Y \to^g Z$, $${\ensuremath{\mathbb X}\xspace}(X,Y) \to^{f \circ (-) \circ g} {\ensuremath{\mathbb X}\xspace}(W,Z)$$ is a locally Boolean morphism; 2. for any disjoint maps $f,g$ (that is, ${\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}} = \emptyset$), $f \vee g$ exists. Sets and partial functions form a classical restriction category. For our purposes, the following alternate characterization of the definition, which describes classical restriction categories as join restriction categories with relative complements, is more useful. If $f' \leq f$, the **relative complement** of $f'$ in $f$, denoted $f \setminus f'$, is the unique map such that - $f \setminus f' \leq f$; - $f' \wedge (f \setminus f') = \emptyset$; - $f \leq f' \vee (f \setminus f')$. The following can be found in [@boolean]: A classical restriction category is a join restriction category with relative complements $f \setminus f'$ for any $f' \leq f$. Just as one can freely add joins to an arbitrary restriction category, so too can one freely add relative complements to a join restriction category. We will first describe this completion process, then show that cartesian, additive, and differential structure is preserved when classically completing. This is of great interest, as classically completing adds in a number of new maps, even to the standard examples. Let ${\ensuremath{\mathbb X}\xspace}$ be a join restriction category. A [**classical piece**]{} of ${\ensuremath{\mathbb X}\xspace}$ is a pair of maps $(f,f'): A \to B$ such that $f' \leq f$. One thinks of a classical piece as a formal relative complement. Two classical pieces $(f,f'), (g,g')$ are [**disjoint**]{}, written $(f,f') \perp (g,g')$, if ${\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}} = {\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{g}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g'}\,}}$.
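In sets and partial functions, the relative complement is just domain subtraction. A small illustrative sketch (dicts as partial maps; our own modeling) checks the three defining properties on one example:

```python
# Partial maps as dicts; for f' <= f (f' a restriction of f to a smaller
# domain), f \ f' is f restricted away from the domain of f'.
def rel_complement(f, f_sub):
    assert all(f[k] == f_sub[k] for k in f_sub)   # check f' <= f
    return {k: v for k, v in f.items() if k not in f_sub}

f = {1: 'a', 2: 'b', 3: 'c'}
f_sub = {2: 'b'}
c = rel_complement(f, f_sub)
print(c)                                   # {1: 'a', 3: 'c'}
# the three defining properties:
print(all(f[k] == c[k] for k in c))        # c <= f
print(not (c.keys() & f_sub.keys()))       # c /\ f' = empty
print({**f_sub, **c} == f)                 # f <= f' \/ (f \ f')
```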
A [**raw classical map**]{} consists of a finite set of pairwise disjoint classical pieces $(f_i, f_i')$, and is written $$\bigsqcup_{i \in I} (f_i, f_i'): A \to B.$$ One defines an equivalence relation on the set of raw classical maps by: - [**Breaking:**]{} $(f,f') \equiv (ef,ef') \sqcup (f, f' \vee ef)$ for any restriction idempotent $e = {\ensuremath{\overline{e}\,}}$, - [**Collapse:**]{} $(f,f) \equiv \emptyset$. The first part of the equivalence relation says that if we have some other domain $e$, then we can split the formal complement $(f,f')$ into two parts: the first part, $(ef, ef')$, inside $e$, and the second, $(f, f' \vee ef)$, outside $e$. The second part of the equivalence is obvious: if you formally take away all of $f$ from $f$, the result should be nowhere defined. A [**classical map**]{} is an equivalence class of raw classical maps. Given a join restriction category ${\ensuremath{\mathbb X}\xspace}$, there is a classical restriction category ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$ with - objects those of ${\ensuremath{\mathbb X}\xspace}$, - arrows classical maps, - composition by $$\bigsqcup_{i \in I}(f_i, f_i') \bigsqcup_{j \in J} (g_j, g_j') := \bigsqcup_{i,j} (f_ig_j, f_i'g_j \vee f_ig_j'),$$ - restriction by $${\ensuremath{\overline{\bigsqcup_{i \in I} (f_i, f_i')}\,}} := \bigsqcup_{i \in I} ({\ensuremath{\overline{f_i}\,}}, {\ensuremath{\overline{f_i'}\,}}) ,$$ - disjoint join is simply $\sqcup$ of classical pieces, - relative complement is $$(f,f')\setminus (g,g') := (f,f' \vee {\ensuremath{\overline{g}\,}}f) \sqcup ({\ensuremath{\overline{g'}\,}}f, {\ensuremath{\overline{g'}\,}}f').$$ In [@boolean], this process is shown to give a left adjoint to the forgetful functor from classical restriction categories to join restriction categories. We make one final point about the definition.
We defined $(f,f') \perp (f_0, f_0')$ if ${\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0}\,}} = {\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{f_0}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0'}\,}}$. Note, however, that it suffices to check that ${\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0}\,}} \leq {\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{f_0}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0'}\,}}$, since the reverse inequality $${\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{f_0}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0'}\,}} \leq {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0}\,}} = {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0}\,}}$$ always holds. We will often use this alternate form of $\perp$ when checking that the maps we give are well-defined. Cartesian structure ------------------- Our goal is to show that if ${\ensuremath{\mathbb X}\xspace}$ has differential restriction structure, then so does ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$. We begin by showing that cartesian structure is preserved; the first step is to define the pairing of two classical maps. Given a join restriction category ${\ensuremath{\mathbb X}\xspace}$ and maps $\bigsqcup (f_i,f_i')$ from $Z$ to $X$ and $\bigsqcup (g_j, g_j')$ from $Z$ to $Y$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, the following: $$\left< \bigsqcup_{i} (f_i, f_i'), \bigsqcup_j (g_j, g_j') \right> := \bigsqcup_{i,j} \left( {\langle}f_i,g_j{\rangle}, {\langle}f_i',g_j{\rangle}\vee {\langle}f_i,g_j'{\rangle}\right)$$ is a well-defined map from $Z$ to $X \times Y$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$. First, we need to check that $$( {\langle}f,g{\rangle}, {\langle}f',g{\rangle}\vee {\langle}f,g'{\rangle})$$ defines a classical piece. Indeed, since $f' \smile f$ and $g \smile g'$, the two maps being joined are compatible, so we can take the join. Also, since $f' \leq f$ and $g' \leq g$, the right component is less than or equal to the left component.
Now, we need to check that $$\bigsqcup_{i,j} \left( {\langle}f_i,g_j{\rangle}, {\langle}f_i',g_j{\rangle}\vee {\langle}f_i,g_j'{\rangle}\right)$$ defines a raw classical map; that is, we need to check that the pieces are disjoint. Thus we need to show that if $$(f,f') \perp (f_0, f_0') \mbox{ and } (g,g') \perp (g_0, g_0')$$ then $$({\langle}f,g{\rangle}, {\langle}f',g{\rangle}\vee {\langle}f,g'{\rangle}) \perp ({\langle}f_0, g_0{\rangle}, {\langle}f_0', g_0{\rangle}\vee {\langle}f_0, g_0' {\rangle}).$$ Consider: $$\begin{aligned} & & {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}} {\ensuremath{\overline{{\langle}f_0, g_0 {\rangle}}\,}} \\ & = & {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0}\,}} {\ensuremath{\overline{g}\,}} {\ensuremath{\overline{g_0}\,}} \\ & = & ({\ensuremath{\overline{f'}\,}} {\ensuremath{\overline{f_0}\,}} \vee {\ensuremath{\overline{f}\,}} {\ensuremath{\overline{f_0'}\,}})({\ensuremath{\overline{g'}\,}}{\ensuremath{\overline{g_0}\,}} \vee {\ensuremath{\overline{g}\,}} {\ensuremath{\overline{g_0'}\,}}) \\ & = & {\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{f_0}\,}}{\ensuremath{\overline{g'}\,}}{\ensuremath{\overline{g_0}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{f_0'}\,}} {\ensuremath{\overline{g'}\,}} {\ensuremath{\overline{g_0}\,}} \vee {\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{f_0}\,}} {\ensuremath{\overline{g}\,}} {\ensuremath{\overline{g_0'}\,}} \vee {\ensuremath{\overline{f}\,}} {\ensuremath{\overline{f_0'}\,}} {\ensuremath{\overline{g}\,}} {\ensuremath{\overline{g_0'}\,}} \\ & \leq & {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g'}\,}}{\ensuremath{\overline{f_0}\,}}{\ensuremath{\overline{g_0}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}} {\ensuremath{\overline{f_0'}\,}} {\ensuremath{\overline{g_0}\,}} \vee {\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{g}\,}} {\ensuremath{\overline{f_0}\,}} {\ensuremath{\overline{g_0}\,}} \vee
{\ensuremath{\overline{f}\,}} {\ensuremath{\overline{g}\,}} {\ensuremath{\overline{f_0}\,}} {\ensuremath{\overline{g_0'}\,}} \\ & = & ({\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{g}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g'}\,}})({\ensuremath{\overline{f_0}\,}}{\ensuremath{\overline{g_0}\,}}) \vee ({\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}})({\ensuremath{\overline{f_0'}\,}} {\ensuremath{\overline{g_0}\,}} \vee {\ensuremath{\overline{f_0}\,}}{\ensuremath{\overline{g_0'}\,}}) \\ & = & {\ensuremath{\overline{{\langle}f',g{\rangle}\vee {\langle}f,g'{\rangle}}\,}} {\ensuremath{\overline{{\langle}f_0,g_0{\rangle}}\,}} \vee {\ensuremath{\overline{{\langle}f,g{\rangle}}\,}} {\ensuremath{\overline{{\langle}f_0',g_0{\rangle}\vee {\langle}f_0, g_0'{\rangle}}\,}} \end{aligned}$$ so that $$({\langle}f,g{\rangle}, {\langle}f',g{\rangle}\vee {\langle}f,g'{\rangle}) \perp ({\langle}f_0, g_0{\rangle}, {\langle}f_0', g_0{\rangle}\vee {\langle}f_0, g_0' {\rangle}),$$ as required. Finally, we need to check that this is a well-defined classical map. Thus, we need to check it is well-defined with respect to collapse and breaking. For collapse, consider $${\langle}(f,f'), (g,g) {\rangle}= ({\langle}f,g{\rangle}, {\langle}f',g{\rangle}\vee {\langle}f,g{\rangle}) = ({\langle}f,g{\rangle}, {\langle}f,g{\rangle}) \equiv \emptyset$$ as required.
For breaking, suppose we have $$(g,g') \equiv (g,g' \vee eg) \sqcup (eg, eg')$$ Then $$\begin{aligned} & & {\langle}(f,f'), (g,g' \vee eg) \sqcup (eg, eg') {\rangle}\\ & = & ({\langle}f,g{\rangle}, {\langle}f',g{\rangle}\vee {\langle}f,g' \vee eg{\rangle}) \sqcup ({\langle}f,eg{\rangle}, {\langle}f',eg{\rangle}\vee {\langle}f,eg'{\rangle}) \\ & = & ({\langle}f,g{\rangle}, {\langle}f',g{\rangle}\vee {\langle}f,g'{\rangle}\vee e{\langle}f,g{\rangle}) \sqcup (e{\langle}f,g{\rangle}, e({\langle}f',g{\rangle}\vee {\langle}f,g'{\rangle})) \\ & \equiv & ({\langle}f,g{\rangle}, {\langle}f',g{\rangle}\vee {\langle}f,g'{\rangle}) \mbox{ by breaking along $e$} \\ & = & {\langle}(f,f'), (g,g') {\rangle}\end{aligned}$$ as required. Thus, the above is a well-defined classical map. We now give some lemmas about our definition. Note that once we show that this pairing does define cartesian structure on ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, these lemmas follow automatically, as they are true in any cartesian restriction category (see Lemma \[propCart\]). However, we will need these lemmas to establish that this does define cartesian structure on ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$. \[lemmaCart1\] Suppose we have maps $f: Z \to X$, $g: Z \to Y$, and $e = {\ensuremath{\overline{e}\,}}: Z \to Z$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$. Then $e{\langle}f,g{\rangle}= {\langle}ef,g{\rangle}= {\langle}f,eg{\rangle}$. It suffices to show the result for classical pieces. Thus, consider $$\begin{aligned} & & {\langle}(e,e')(f,f'), (g,g') {\rangle}\\ & = & {\langle}(ef, e'f \vee ef'), (g,g') {\rangle}\\ & = & ({\langle}ef, g{\rangle}, {\langle}e'f \vee ef', g{\rangle}\vee {\langle}ef,g'{\rangle}) \\ & = & (e{\langle}f,g{\rangle}, e'{\langle}f,g{\rangle}\vee e{\langle}f',g{\rangle}\vee e{\langle}f,g'{\rangle}) \\ & = & (e,e')({\langle}f,g{\rangle}, {\langle}f',g{\rangle}\vee {\langle}f,g'{\rangle}) \\ & = & (e,e'){\langle}(f,f'), (g,g') {\rangle}\end{aligned}$$ as required.
Putting the $e$ in the right component is similar. \[lemmaCart2\] For any $c$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, ${\langle}c\pi_0, c\pi_1{\rangle}= c$. It suffices to show the result for classical pieces. Thus, consider $$\begin{aligned} & & {\langle}(c,c')(\pi_0,\emptyset), (c,c')(\pi_1, \emptyset){\rangle}\\ & = & {\langle}(c\pi_0, c'\pi_0), (c\pi_1, c'\pi_1) {\rangle}\\ & = & ({\langle}c\pi_0, c\pi_1{\rangle}, {\langle}c'\pi_0, c\pi_1{\rangle}\vee {\langle}c\pi_0, c'\pi_1{\rangle}) \\ & = & (c, {\ensuremath{\overline{c'}\,}}{\langle}c\pi_0, c\pi_1{\rangle}\vee {\ensuremath{\overline{c'}\,}}{\langle}c\pi_0, c\pi_1{\rangle}) \\ & = & (c, {\ensuremath{\overline{c'}\,}}c \vee {\ensuremath{\overline{c'}\,}}c) \\ & = & (c,c')\end{aligned}$$ as required. It will be most helpful if we can give an alternate characterization of when two classical maps are equivalent. To that end, we prove the following result: In ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, $(f,f') \equiv (g,g')$ if and only if there exist restriction idempotents $e_1, \ldots, e_n$ such that for any $I \subseteq \{1, \ldots, n\}$, if we define $$e_I := \left( \bigcirc_{i \in I} e_i, \left(\bigcirc_{i \in I} e_i \right)\left(\bigvee_{j \not\in I} e_j \right) \right)$$ (where $\bigcirc$ denotes iterated composition) then for each such $I$, $$e_I(f,f') = e_I(g,g')$$ or they both collapse to the empty map. As discussed in [@boolean], breaking and collapse form a system of rewrites, so that if two maps are equivalent, they can be broken into a series of pieces which are either pairwise equal or both collapse to the empty map. Thus, it suffices to show that the above is what occurs after doing $n$ different breakings along the idempotents $e_1, \ldots, e_n$.
To this end, note that the two pieces left after breaking $(f,f')$ by $e$ are given by precomposing with $(e,\emptyset)$ and $(1,e)$; indeed: $$(e,\emptyset)(f,f') = (ef,ef') \mbox{ and } (1,e)(f,f') = (f,ef \vee f')$$ Thus, if $n=1$, the result holds. Now assume by induction that the result holds for $n$. Then for any subset $I \subseteq \{1, \ldots n\}$, breaking $e_I$ by $e_{n+1}$ gives the pieces $$(e_{n+1}, \emptyset)(\circ e_i, (\circ{e_i})(\vee e_j)) = (e_{n+1} \circ e_i, (e_{n+1} \circ{e_i})(\vee e_j))$$ and $$(1, e_{n+1})(\circ e_i, (\circ e_i)(\vee e_j)) = (\circ e_i, (\circ e_i)(e_{n+1}) \vee (\circ e_i)(\vee e_j)) = (\circ e_i, (\circ e_i)(e_{n+1} \vee e_j))$$ Thus, we get all possible idempotents $e_{I'}$, where $I' \subseteq \{1, \ldots, n+1 \}$, as required. If ${\ensuremath{\mathbb X}\xspace}$ is a cartesian restriction category, then so is ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$. Define the terminal object $T$ as for ${\ensuremath{\mathbb X}\xspace}$, and the unique maps by $!_A := (!_A, \emptyset)$. Then for any classical map $\bigsqcup (f_i, f_i')$, we have $$\bigsqcup (f_i, f_i') = \bigsqcup ({\ensuremath{\overline{f_i}\,}}!_A, {\ensuremath{\overline{f_i'}\,}}!_A) = \left(\bigsqcup ({\ensuremath{\overline{f_i}\,}}, {\ensuremath{\overline{f_i'}\,}})\right)(!_A, \emptyset)$$ as required. So ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$ has a partial final object. We define the product objects $A \times B$ as for ${\ensuremath{\mathbb X}\xspace}$, the projections by $(\pi_0, \emptyset)$ and $(\pi_1, \emptyset)$, and the product map as above.
To show that our putative product composes well with the projections, consider $$\begin{aligned} & & {\langle}(f,f'), (g,g'){\rangle}(\pi_0, \emptyset) \\ & = & ({\langle}f,g{\rangle}, {\langle}f',g{\rangle}\vee {\langle}f,g'{\rangle}) (\pi_0, \emptyset) \\ & = & ({\langle}f,g{\rangle}\pi_0, {\langle}f',g{\rangle}\pi_0 \vee {\langle}f,g'{\rangle}\pi_0) \\ & = & ({\ensuremath{\overline{g}\,}}f, {\ensuremath{\overline{g}\,}}f' \vee {\ensuremath{\overline{g'}\,}}{\ensuremath{\overline{f}\,}}) \\ & = & ({\ensuremath{\overline{g}\,}}, {\ensuremath{\overline{g'}\,}})(f, f') \\ & = & {\ensuremath{\overline{(g,g')}\,}}(f,f')\end{aligned}$$ as required. Composing with $\pi_1$ is similar. Finally, we need to show that the universal property holds. It suffices to show that if $c\pi_0 \leq f$ and $c\pi_1 \leq g$, then $c \leq {\langle}f,g{\rangle}$. Suppose we have the first two inequalities, so that $${\ensuremath{\overline{c}\,}}f \equiv c\pi_0 \mbox{ by breaking with idempotents $(e_1, \ldots, e_n)$}$$ and $${\ensuremath{\overline{c}\,}}g \equiv c\pi_1 \mbox{ by breaking with idempotents $(d_1, \ldots, d_m)$}.$$ We claim that ${\ensuremath{\overline{c}\,}}{\langle}f,g{\rangle}\equiv c$ by breaking with idempotents $(e_1, \ldots e_n, d_1, \ldots, d_m)$. By the previous theorem, it suffices to show they are equal (or both collapse to the empty map) when composing with an element of the form in the theorem for an arbitrary subset $K \subseteq \{1, \ldots n, n+1, \ldots n+m\}$. However, if $I = K \cap \{1, \ldots, n\}$ and $J = K \cap \{n+1, \ldots n+m\}$, then such an element can be written as $$(e_I,e_Ie_{I'})(d_J, d_Jd_{J'})$$ since that equals $$(e_Id_J, (e_Id_J)(e_{I'} \vee d_{J'}))$$ which is $e_K$. Thus, writing $e$ for $(e_I,e_Ie_{I'})$ and $d$ for $(d_J, d_Jd_{J'})$, it suffices to show that $ed{\ensuremath{\overline{c}\,}}{\langle}f,g{\rangle}= edc$ (or they both collapse to the empty map). 
However, we know that $$e {\ensuremath{\overline{c}\,}}f = ec\pi_0 \mbox{ and } d{\ensuremath{\overline{c}\,}}g = dc\pi_1$$ (or one or the other collapses to the empty map). Pairing the above equalities, we get $${\langle}e{\ensuremath{\overline{c}\,}}f,d{\ensuremath{\overline{c}\,}}g{\rangle}= {\langle}ec\pi_0, dc\pi_1 {\rangle}$$ which, by Lemmas \[lemmaCart1\] and \[lemmaCart2\], reduces to $$(ed){\ensuremath{\overline{c}\,}}{\langle}f,g{\rangle}= edc$$ as required. If either equality has both sides collapsing to the empty map, then both sides of the above collapse to the empty map, since we showed earlier that pairing is well-defined when applied to collapsed maps. Thus, we have the required universal property, and ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$ is cartesian. Left additive structure ----------------------- Next, we show that left additive structure is preserved. We begin by defining the sum of two maps. Suppose that ${\ensuremath{\mathbb X}\xspace}$ is a left additive restriction category with joins. Given maps $\bigsqcup (f_i,f_i')$ and $\bigsqcup (g_j, g_j')$ from $X$ to $Y$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, the following: $$\bigsqcup_{i,j} (f_i + g_j, (f'_i + g_j) \vee (f_i + g_j'))$$ is a well-defined map from $X$ to $Y$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$. The proof is nearly identical to that for showing that the pairing definition gives a well-defined classical map. If ${\ensuremath{\mathbb X}\xspace}$ has the structure of a left additive restriction category, then so does ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, where addition of maps is defined as above, and the zero map is given by $(0,\emptyset)$. It is easily checked that the addition and zero give each homset the structure of a commutative monoid.
For the restriction axioms, $$\begin{aligned} & & {\ensuremath{\overline{(f,f') + (g,g')}\,}} \\ & = & {\ensuremath{\overline{(f+g, (f'+g) \vee(f+ g'))}\,}} \\ & = & ({\ensuremath{\overline{f+g}\,}}, {\ensuremath{\overline{f'+g}\,}} \vee {\ensuremath{\overline{f + g'}\,}}) \\ & = & ({\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}}, {\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{g}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g'}\,}}) \\ & = & ({\ensuremath{\overline{f}\,}}, {\ensuremath{\overline{f'}\,}})({\ensuremath{\overline{g}\,}}, {\ensuremath{\overline{g'}\,}}) \\ & = & {\ensuremath{\overline{(f,f')}\,}} {\ensuremath{\overline{(g,g')}\,}}\end{aligned}$$ and clearly $(0,\emptyset)$ is total. For the left additivity, consider $$\begin{aligned} & & (f,f')(g,g') + (f,f')(h,h') \\ & = & (fg, f'g \vee fg') + (fh, f'h \vee fh') \\ & = & (fg + fh, ((f'g \vee fg') + fh) \vee (fg + (f'h \vee fh'))) \\ & = & (fg + fh, (f'g + fh) \vee (fg' + fh) \vee (fg + f'h) \vee (fg + fh')) \\ & = & (fg + fh, {\ensuremath{\overline{f'}\,}}(fg + fh) \vee {\ensuremath{\overline{f'}\,}}(fg + fh) \vee (fg' + fh) \vee (fg + fh')) \mbox{ since $f' \leq f$} \\ & = & (f(g+h), {\ensuremath{\overline{f'}\,}}f(g+h) \vee f(g'+h) \vee f(g+h')) \\ & = & (f(g+h), f'(g+h) \vee f(g'+h) \vee f(g+h')) \\ & = & (f,f')(g+h, (g'+h) \vee (g+h')) \\ & = & (f,f')((g,g') + (h,h')) \end{aligned}$$ as required. Thus ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$ is a left additive restriction category. If ${\ensuremath{\mathbb X}\xspace}$ has the structure of a cartesian left additive restriction category, then so does ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$. Immediate from Theorem \[thmAddCart\]. Differential structure ---------------------- Finally, we show that if ${\ensuremath{\mathbb X}\xspace}$ has differential restriction structure, so does ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$. 
We first need to define the differential of a map. If ${\ensuremath{\mathbb X}\xspace}$ is a differential join restriction category, and $\bigsqcup (f_i, f_i')$ is a map from $X$ to $Y$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, then the following: $$\bigsqcup (D[f_i], D[f_i'])$$ is a well-defined map in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$ from $X \times X$ to $Y$. If $f' \leq f$, then $D[f'] \leq D[f]$, so it is a well-defined classical piece. If $(f,f') \perp (g,g')$, then $$\begin{aligned} & & {\ensuremath{\overline{Df}\,}}{\ensuremath{\overline{Dg}\,}} \\ & = & (1 \times {\ensuremath{\overline{f}\,}})(1 \times {\ensuremath{\overline{g}\,}}) \\ & = & 1 \times {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g}\,}} \\ & = & 1 \times ({\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{g}\,}} \vee {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g'}\,}}) \mbox{ since $(f,f') \perp (g,g')$} \\ & = & (1 \times {\ensuremath{\overline{f'}\,}}{\ensuremath{\overline{g}\,}}) \vee (1 \times {\ensuremath{\overline{f}\,}}{\ensuremath{\overline{g'}\,}}) \\ & = & (1 \times {\ensuremath{\overline{f'}\,}})(1 \times {\ensuremath{\overline{g}\,}}) \vee (1 \times {\ensuremath{\overline{f}\,}})(1 \times {\ensuremath{\overline{g'}\,}}) \\ & = & {\ensuremath{\overline{Df'}\,}} {\ensuremath{\overline{Dg}\,}} \vee {\ensuremath{\overline{Df}\,}}{\ensuremath{\overline{Dg'}\,}} \end{aligned}$$ so $(Df, Df') \perp (Dg, Dg')$, so it is a well-defined raw classical map. That this is well-defined under collapsing is obvious. For breaking, suppose we have $$(f,f') \equiv (f,f' \vee ef) \sqcup (ef, ef')$$ for some restriction idempotent $e = {\ensuremath{\overline{e}\,}}$.
Then consider $$\begin{aligned} & & D[(f,f' \vee ef) \sqcup (ef, ef')] \\ & = & (Df, Df' \vee D(ef)) \sqcup (D(ef), D(ef')) \\ & = & (Df, Df' \vee (1 \times e)Df) \sqcup ((1 \times e)Df, (1 \times e) Df') \mbox{ by Lemma \[propDiff\]} \\ & \equiv & (Df, Df') \mbox{ by breaking along the restriction idempotent $(1 \times e)$.}\end{aligned}$$ Thus the map is well-defined under collapsing and breaking, so is a well-defined classical map. If ${\ensuremath{\mathbb X}\xspace}$ is a differential join restriction category, then so is ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, with the differential of $\bigsqcup (f_i, f_i')$ given above. Most axioms involve a straightforward calculation and use of the lemmas we have developed. We shall demonstrate the two most involved calculations: [[**\[DR.[2]{}\]**]{}]{} and [[**\[DR.[5]{}\]**]{}]{}. For [[**\[DR.[2]{}\]**]{}]{}, consider $$\begin{aligned} & & {\langle}(g,g'), (k,k'){\rangle}D(f,f') + {\langle}(h,h'),(k,k'){\rangle}D(f,f') \\ & = & ({\langle}g,k{\rangle},{\langle}g',k{\rangle}\vee {\langle}g,k'{\rangle})(Df,Df') + ({\langle}h,k{\rangle}, {\langle}h',k{\rangle}\vee {\langle}h,k'{\rangle})(Df,Df') \\ & = & ({\langle}g,k{\rangle}Df,{\langle}g',k{\rangle}Df \vee {\langle}g,k'{\rangle}Df \vee {\langle}g,k{\rangle}Df') + ({\langle}h,k{\rangle}Df, {\langle}h',k{\rangle}Df \vee {\langle}h,k'{\rangle}Df \vee {\langle}h,k{\rangle}Df') \\ & = & ({\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df, [{\langle}g',k{\rangle}Df + {\langle}h,k{\rangle}Df] \vee [{\langle}g,k'{\rangle}Df + {\langle}h,k{\rangle}Df] \vee [{\langle}g,k{\rangle}Df' + {\langle}h,k{\rangle}Df] \\ & & \vee [{\langle}g,k{\rangle}Df + {\langle}h',k{\rangle}Df] \vee [{\langle}g,k{\rangle}Df + {\langle}h,k'{\rangle}Df] \vee [{\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df']) \\\end{aligned}$$ We can simplify a term like ${\langle}g,k'{\rangle}Df$ as follows: $${\langle}g,k'{\rangle}Df = {\langle}g,{\ensuremath{\overline{k'}\,}}k{\rangle}Df =
{\ensuremath{\overline{k'}\,}}{\langle}g,k{\rangle}Df$$ And for a term like ${\langle}g,k{\rangle}Df'$, we can simplify it as follows: $${\langle}g,k{\rangle}Df' = {\langle}g,k{\rangle}D({\ensuremath{\overline{f'}\,}}f) = {\langle}g,k{\rangle}(1 \times {\ensuremath{\overline{f'}\,}})Df = {\langle}g,k{\ensuremath{\overline{f'}\,}}{\rangle}Df = {\langle}g,{\ensuremath{\overline{kf'}\,}}k{\rangle}Df = {\ensuremath{\overline{kf'}\,}}{\langle}g,k{\rangle}Df$$ Thus, continuing the calculation above, we get $$\begin{aligned} & = & ({\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df, [{\langle}g',k{\rangle}Df + {\langle}h,k{\rangle}Df] \vee {\ensuremath{\overline{k'}\,}}[{\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df] \vee {\ensuremath{\overline{kf'}\,}}[{\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df] \\ & & \vee [{\langle}g,k{\rangle}Df + {\langle}h',k{\rangle}Df] \vee {\ensuremath{\overline{k'}\,}}[{\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df] \vee {\ensuremath{\overline{kf'}\,}}[{\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df]) \\ & = & ({\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df, [{\langle}g',k{\rangle}Df + {\langle}h,k{\rangle}Df] \vee [{\langle}g,k{\rangle}Df + {\langle}h',k{\rangle}Df] \\ & & \vee {\ensuremath{\overline{k'}\,}}[{\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df] \vee {\ensuremath{\overline{kf'}\,}}[{\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df]) \\ & = & ({\langle}g,k{\rangle}Df + {\langle}h,k{\rangle}Df, [{\langle}g',k{\rangle}Df + {\langle}h,k{\rangle}Df] \vee [{\langle}g,k{\rangle}Df + {\langle}h',k{\rangle}Df] \\ & & \vee [{\langle}g,k'{\rangle}Df + {\langle}h,k'{\rangle}Df] \vee [{\langle}g,k{\rangle}Df' + {\langle}h,k{\rangle}Df']) \mbox{ using the above calculations in reverse}\\ & = & ({\langle}g+h,k{\rangle}Df, {\langle}g'+h,k{\rangle}Df \vee {\langle}g+h',k{\rangle}Df \vee {\langle}g+h,k'{\rangle}Df \vee {\langle}g+h,k{\rangle}Df') \mbox{ by {{\bf [DR.{2}]}} for ${\ensuremath{\mathbb X}\xspace}$} \\ & = &
({\langle}g+h,k{\rangle}, {\langle}g'+h,k{\rangle}\vee {\langle}g+h',k{\rangle}\vee {\langle}g+h,k'{\rangle})(Df, Df') \\ & = & {\langle}(g+h, (g'+h) \vee (g+h')),(k,k'){\rangle}(Df, Df') \\ & = & {\langle}(g,g') + (h,h'), (k,k'){\rangle}D(f,f') \\\end{aligned}$$ as required. For [[**\[DR.[5]{}\]**]{}]{}, consider $$\begin{aligned} & & {\langle}D(f,f'), (\pi_1, \emptyset)(f,f'){\rangle}D(g,g') \\ & = & {\langle}(Df, Df'), (\pi_1 f, \pi_1 f'){\rangle}(Dg,Dg') \\ & = & ({\langle}Df, \pi_1f{\rangle}, {\langle}Df',\pi_1f{\rangle}\vee {\langle}Df, \pi_1 f'{\rangle})(Dg, Dg') \\ & = & ({\langle}Df, \pi_1f{\rangle}Dg, {\langle}Df',\pi_1f{\rangle}Dg \vee {\langle}Df, \pi_1f'{\rangle}Dg \vee {\langle}Df, \pi_1f{\rangle}Dg')\end{aligned}$$ Now, we can simplify $${\langle}Df',\pi_1f{\rangle}= {\langle}D({\ensuremath{\overline{f'}\,}}f),\pi_1f{\rangle}= {\langle}(1 \times {\ensuremath{\overline{f'}\,}})Df,\pi_1f{\rangle}= (1 \times {\ensuremath{\overline{f'}\,}}){\langle}Df,\pi_1f{\rangle}$$ (where the second equality is by Lemma \[propDiff\]), and $${\langle}Df,\pi_1f'{\rangle}= {\langle}Df, \pi_1{\ensuremath{\overline{f'}\,}}f{\rangle}= {\langle}Df, (1 \times {\ensuremath{\overline{f'}\,}})\pi_1f{\rangle}= (1 \times {\ensuremath{\overline{f'}\,}}){\langle}Df, \pi_1f {\rangle}$$ where the second equality is by Lemma \[propCart\]. Thus, the above becomes $$\begin{aligned} & = & ({\langle}Df, \pi_1f{\rangle}Dg, (1 \times {\ensuremath{\overline{f'}\,}}){\langle}Df, \pi_1f{\rangle}Dg \vee {\langle}Df, \pi_1f{\rangle}Dg') \\ & = & (D(fg), (1 \times {\ensuremath{\overline{f'}\,}})D(fg) \vee D(fg')) \mbox{ by {{\bf [DR.{5}]}} for ${\ensuremath{\mathbb X}\xspace}$} \\ & = & (D(fg), D(f'g) \vee D(fg')) \mbox{ by Lemma \[propDiff\]} \\ & = & D(fg, f'g \vee fg') \\ & = & D((f,f')(g,g'))\end{aligned}$$ as required.
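As an illustrative sketch of the differential structure (not from the paper; the representation and all names are assumptions chosen for illustration), one can model partial smooth maps on the reals as pairs of a formula and a domain predicate, with $D[f](u,x) = f'(x)\cdot u$ defined on $1 \times \overline{f}$, approximating the derivative numerically. A classical piece $(f,f')$ then differentiates componentwise to $(D[f],D[f'])$, which is again a classical piece, matching the proposition above:

```python
# Sketch: the differential of a partial smooth map (f, dom) on the reals,
# D[f]: (u, x) |-> f'(x) * u, defined on 1 x dom(f).  The derivative is
# approximated by a central difference; all names are illustrative.

def D(f, dom, h=1e-6):
    df = lambda u, x: (f(x + h) - f(x - h)) / (2 * h) * u
    ddom = lambda u, x: dom(x)      # the restriction of D[f] is 1 x f-bar
    return df, ddom

# a classical piece (f, f'): f(x) = x**2 everywhere, f' its cut-down to x > 0
f, fdom = (lambda x: x * x), (lambda x: True)
fdom1 = lambda x: x > 0

Df, Dfdom = D(f, fdom)
Df1, Df1dom = D(f, fdom1)

# (D[f], D[f']) is again a classical piece: wherever D[f'] is defined,
# D[f] is defined and the two agree.
```

The design point is that the derivative acts only on the function component; the domain component is just carried along, which is exactly why breaking and collapse are preserved by $D$ in the proof above.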
Now that we know that the classical completion of a differential restriction category is again a differential restriction category, it will be interesting to see what type of maps are in the classical completion of the standard model. For example, consider two functions: $f(x) = 2x$ defined everywhere but $x=5$, and $g(x) = 2x$ defined everywhere. Taking the relative complement of these maps gives a map defined *only* at $x=5$, where it takes the value $2 \cdot 5 = 10$. But if differential structure is retained, in what sense is this map “smooth”? Of course, this map is really an equivalence class of maps. In particular, suppose we have a restriction idempotent $e = {\ensuremath{\overline{e}\,}}$ (that is, an open subset) whose domain includes $5$. Then we have $$(f,f') \equiv (ef,ef') \sqcup (f,f'\vee ef) = (ef,ef') \sqcup (f,f) \equiv (ef,ef')$$ so this map is actually equivalent to any other map defined on an open subset which includes $5$. This is precisely the definition of the *germ* of a function at $5$. Thus, the classical completion process adds germs of functions at points. Of course, it also allows us to take joins of germs and regular maps, so that for example we could take the join of the above map and something like $\frac{x-1}{x-5}$, giving a total map which has “repaired” the discontinuity of the second map at $5$. The fact that this restriction category is a differential restriction category is perhaps now much more surprising. Clearly, this is an example that will need to be explored further. Finally, given the additive, cartesian, and differential structure of ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, the following is immediate: The unit ${\ensuremath{\mathbb X}\xspace}\to {\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$, which sends $f$ to $(f,\emptyset)$, is a differential restriction functor.
And as a result, we have the following: Suppose ${\ensuremath{\mathbb X}\xspace}$ is a differential restriction category with joins, and $f' \leq f$. Then: (i) if $f$ is additive in ${\ensuremath{\mathbb X}\xspace}$, then so are $(f,\emptyset)$ and $(f,f')$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$; (ii) if $f$ is strongly additive in ${\ensuremath{\mathbb X}\xspace}$, then so is $(f,\emptyset)$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$; (iii) if $f$ is linear in ${\ensuremath{\mathbb X}\xspace}$, then so are $(f,\emptyset)$ and $(f,f')$ in ${\ensuremath{\mathbf{Cl}}}({\ensuremath{\mathbb X}\xspace})$. By Proposition \[diffFunctors\], $(f,\emptyset)$ retains being additive/strongly additive/linear, and since $(f,f')$ is a relative complement, $(f,f') \leq (f,\emptyset)$, so it is additive/linear if $f$ is. Conclusion {#sectionConclusion} ========== There are a number of different expansions of this work that are possible; here we mention the most immediate. A construction given in [@manifolds] allows one to build a new restriction category of manifolds out of any join restriction category. For example, applying this construction to continuous functions defined on open subsets of ${\ensuremath{\mathbb R}\xspace}^n$ gives a category of real manifolds. An obvious expansion of the present theory is to understand what happens when we apply this construction to a differential restriction category with joins. Clearly, this will build categories of smooth maps between smooth manifolds. In general, however, one should not expect this to again be a differential restriction category, as the derivative of a smooth manifold map $f: M \to N$ is not a map $M \times M \to N$, but instead a map $TM \to TN$, where $T$ is the tangent bundle functor. Thus, we must show that one can describe the tangent bundle of any object in the manifold completion of a differential restriction category.
This leads one to consider using the tangent space as a basis for axiomatizing this sort of differential structure. This is the subject of a future paper, and will allow for closer comparisons between the theory presented here and synthetic differential geometry. Acknowledgements {#acknowlegements .unnumbered} =============== The authors are grateful to both the referee and the editor for their handling of an error concerning the fractional monad construction. In the first version of the paper we had rather stupidly failed to observe that the fraction construction was not distributive unless one insisted on the equation $(a,a^2) = (1,a)$. The referee therefore suggested that we add the equation. This resulted in the expanded section on the fractional monad and the non-standard – but, we felt, interesting – presentation of the rational functions, where we were forced to use weak rigs. Blute, R., Cockett, J. and Seely, R. (2008) Cartesian differential categories. *Theory and Applications of Categories*, **22**, 622–672. Bucciarelli, A., Ehrhard, T. and Manzonetto, G. (2010) Categorical models for simply typed resource calculi. To appear in *26th Conference on the Mathematical Foundations of Programming Semantics (MFPS 2010)*. Carlström, J. (2004) Wheels – on division by zero. *Mathematical Structures in Computer Science*, **14** (1), 143–184. Chen, K. (1977) Iterated path integrals. *Bulletin of the American Mathematical Society*, **85** (5), 831–879. Cockett, J. and Manes, E. (2009) Boolean and classical restriction categories. *Mathematical Structures in Computer Science*, **19** (2), 357–416. Cockett, J. and Lack, S. (2002) Restriction categories I: categories of partial maps. *Theoretical Computer Science*, **270** (2), 223–259. Cockett, J. and Lack, S. (2007) Restriction categories III: colimits, partial limits, and extensivity. *Mathematical Structures in Computer Science*, **17**, 775–817. Cockett, J. and Guo, X.
(2006) Stable meet semilattice fibrations and free restriction categories. *Theory and Applications of Categories*, **16**, 307–341. Cockett, J. and Hofstra, P. (2008) Introduction to Turing categories. *Annals of Pure and Applied Logic*, **156** (2–3), 183–209. Cohn, P. (2002) Basic Algebra: Groups, Rings, and Fields. *Springer*, 352. Dubuc, E. (1979) Sur les modèles de la géométrie différentielle synthétique. *Cahiers de Topologie et Géométrie Différentielle Catégoriques*, **XX** (3). Dummit, D. and Foote, R. (2004) Abstract Algebra. *John Wiley and Sons Inc.*, 655–709. Eisenbud, D. Commutative Algebra with a View Toward Algebraic Geometry. *Springer (Graduate Texts in Mathematics)*, 57–86, 404–410. Ehrhard, T. and Regnier, L. (2003) The differential lambda-calculus. *Theoretical Computer Science*, **309** (1), 1–41. Frölicher, A. (1982) Smooth structures. In *Category Theory (Gummersbach, 1981)*, volume 962 of *Lecture Notes in Mathematics*, 69–81. Springer, Berlin. Golan, J. (1992) The Theory of Semirings with Applications in Mathematics and Theoretical Computer Science. *Longman Scientific and Technical*. Grandis, M. (1989) Manifolds as enriched categories. *Categorical Topology (Prague 1988)*, 358–368. Hartshorne, R. (1997) Algebraic Geometry. *Springer (Graduate Texts in Mathematics)*. Hungerford, T. (2000) Algebra. *Springer (Graduate Texts in Mathematics)*. Lawson, M. (1998) Inverse Semigroups: The Theory of Partial Symmetries. *World Scientific*. Lindstrum, A. (1967) Abstract Algebra. *Holden-Day*. Manes, E. (2006) Boolean restriction categories and taut monads. *Theoretical Computer Science*, **360**, 77–95. Moerdijk, I. and Reyes, G. (1991) *Models for Smooth Infinitesimal Analysis*, Springer. Kock, A. (2006) *Synthetic Differential Geometry*, Cambridge University Press (2nd ed.). Also available at http://home.imf.au.dk/kock/sdg99.pdf. Robinson, E. and Rosolini, G. (1988) Categories of partial maps. *Information and Computation*, **79**, 94–130.
Sikorski, R. (1972) Differential modules. *Colloquium Mathematicum*, **24**, 45–79. Stacey, A. (2008) Comparative smootheology. Available at arxiv.org/0802.2225. Stewart, I. (2004) Galois Theory. *Chapman and Hall*, 185–187. Stewart, J. (2003) Calculus. *Thomson*, 1077–1085. Zariski, O. and Samuel, P. (1975) Commutative Algebra. *Springer (Graduate Texts in Mathematics)*, 42–49. [^1]: Partially supported by NSERC, Canada. [^2]: Partially supported by PIMS, Calgary. [^3]: With the exception of being preserved when we take manifolds. Understanding what happens when we take manifolds of a differential restriction category will be considered in a future paper: see the concluding section of this paper for further remarks. [^4]: Recall that the exchange map is defined by $\mbox{ex} := {\langle}\pi_0 {\times}\pi_0,\pi_1 {\times}\pi_1{\rangle}$ and that it satisfies, for example, ${\langle}{\langle}f,g{\rangle},{\langle}h,k{\rangle}{\rangle}\mbox{ex} = {\langle}{\langle}f,h{\rangle},{\langle}g,k{\rangle}{\rangle}$ and $(\Delta {\times}\Delta)\mbox{ex} = \Delta$. [^5]: This order is known as the “modal interval inclusion” in the rough set literature and the meet with respect to this order is a well-known database operation related to “left outer joins”! [^6]: This is related to the concept of $0$-unitary from inverse semigroup theory (see [@lawson]); the relationship will be explored in detail in a future paper.
--- address: | School of Mathematics,\ University of Minnesota,\ Minneapolis, MN 55454,\ USA. author: - Naichung Conan Leung title: | Topological Quantum Field Theory for\ Calabi-Yau threefolds and $G_{2}$-manifolds --- Introduction ============ In the past two decades we have witnessed many fruitful interactions between mathematics and physics. One example is the Donaldson-Floer theory for oriented four manifolds. Physical considerations led to the discovery of the Seiberg-Witten theory, which has had a profound impact on our understanding of four manifolds. Another example is mirror symmetry for Calabi-Yau manifolds. This duality transformation in string theory leads to many surprising predictions in enumerative geometry. String theory in physics studies a ten dimensional space-time $X\times \mathbb{R}^{3,1}$. Here $X$ is a six dimensional Riemannian manifold with its holonomy group inside $SU\left( 3\right) $, the so-called *Calabi-Yau threefold*. Certain parts of the mirror symmetry conjecture, as studied by Vafa’s group, are specific to Calabi-Yau manifolds of complex dimension *three*. They include the Gopakumar-Vafa conjecture for the Gromov-Witten invariants of *arbitrary* genus, the Ooguri-Vafa conjecture on the relationships between knot invariants and enumerations of holomorphic disks, and so on. The key reason is that they belong to a duality theory for $G_{2}$-manifolds. $G_{2}$-manifolds can be naturally interpreted as special Octonion manifolds [@Le; @RG; @over; @A]. For any Calabi-Yau threefold $X$, the seven dimensional manifold $X\times S^{1}$ is automatically a $G_{2}$-manifold because of the natural inclusion $SU\left( 3\right) \subset G_{2}$. In recent years, there have been many studies of $G_{2}$-manifolds in M-theory, including works of Acharya, Atiyah, Gukov, Vafa, Witten, Yau, Zaslow and many others (e.g. [@Acharya], [@Atiyah; @Witten], [@GYZ], [@Mina; @Vafa]).
In the studies of the symplectic geometry of a Calabi-Yau threefold $X$, we consider unitary flat bundles over three dimensional (special) Lagrangian submanifolds $L$ in $X$. The corresponding geometry for a $G_{2}$-manifold $M$ is called the *special* $\mathbb{H}$*-Lagrangian geometry* (or *C-geometry* in [@LL]), where we consider Anti-Self-Dual (abbrev. ASD) bundles over four dimensional coassociative submanifolds, or equivalently *special* $\mathbb{H}$*-Lagrangian submanifolds of type II* [@Le; @RG; @over; @A] (abbrev. $\mathbb{H}$-SLag) $C$ in $M$. Counting ASD bundles over a fixed four manifold $C$ is the well-known theory of Donaldson differentiable invariants, $Don\left( C\right) $. Similarly, counting unitary flat bundles over a fixed three manifold $L$ is Floer’s Chern-Simons homology theory, $HF_{CS}\left( L\right) $. When $C$ is a connected sum $C_{1}\#_{L}C_{2}$ along a homology three sphere, the relative Donaldson invariants $Don\left( C_{i}\right) $’s take values in $HF_{CS}\left( L\right) $ and $Don\left( C\right) $ can be recovered from the individual pieces by a gluing theorem, $Don\left( C\right) =\left\langle Don\left( C_{1}\right) ,Don\left( C_{2}\right) \right\rangle _{HF_{CS}\left( L\right) }$ (see e.g. [@Don; @Instanton; @Book]). Similarly, when $L$ has a handlebody decomposition $L=L_{1}\#_{\Sigma}L_{2}$, each $L_{i}$ determines a Lagrangian subspace $\mathcal{L}_{i}$ in the moduli space $\mathcal{M}^{flat}\left( \Sigma\right) $ of unitary flat bundles over the Riemann surface $\Sigma$, and Atiyah conjectures that we can recover $HF_{CS}\left( L\right) $ as the Floer Lagrangian intersection homology group of $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ in $\mathcal{M}^{flat}\left( \Sigma\right) $, $HF_{CS}\left( L\right) =HF_{Lag}^{\mathcal{M}^{flat}\left( \Sigma\right) }\left( \mathcal{L}_{1},\mathcal{L}_{2}\right) $. Such algebraic structures in the Donaldson-Floer theory can be formulated as a Topological Quantum Field Theory (abbrev.
TQFT), as defined by Segal and Atiyah [@At; @3; @4]. In this paper, we propose a construction of a TQFT by counting ASD bundles over four dimensional $\mathbb{H}$-SLags $C$ in any closed (almost) $G_{2}$-manifold $M$. We call these $\mathbb{H}$*-SLag cycles*, and they can be identified as the zeros of a naturally defined closed one form on the configuration space of topological cycles. We expect to obtain a homology theory $H_{C}\left( M\right) $ by applying the construction of Witten’s Morse theory. When $M$ is non-compact with an asymptotically cylindrical end, $X\times\lbrack0,\infty)$, then the collection of boundary data of relative $\mathbb{H}$-SLag cycles determines a Lagrangian submanifold $\mathcal{L}_{M}$ in the moduli space $\mathcal{M}^{SLag}\left( X\right) $ of special Lagrangian cycles in the Calabi-Yau threefold $X$. When we decompose $M=M_{1}\#_{X}M_{2}$ along an infinite asymptotically cylindrical neck, it is reasonable to expect a gluing formula, $$H_{C}\left( M\right) =HF_{Lag}^{\mathcal{M}^{SLag}\left( X\right) }\left( \mathcal{L}_{M_{1}},\mathcal{L}_{M_{2}}\right) \text{.}$$ The main technical difficulty in defining this TQFT rigorously is the *compactness* issue for the moduli space of $\mathbb{H}$-SLag cycles in $M$. We do not know how to resolve this problem and our homology groups are only defined in the *formal* sense (and physical sense?). $G_{2}$-manifolds and $\mathbb{H}$-SLag geometry ================================================ We first review some basic definitions and properties of $G_{2}$-geometry; see [@LL] for more details. A seven dimensional Riemannian manifold $M$ is called a $G_{2}$-manifold if the holonomy group of its Levi-Civita connection is inside $G_{2}\subset SO\left( 7\right) $.
The simple Lie group $G_{2}$ can be identified as the subgroup of $SO\left( 7\right) $ consisting of isomorphisms $g:\mathbb{R}^{7}\rightarrow \mathbb{R}^{7}$ preserving the linear three form $\Omega$, $$\Omega=f^{1}f^{2}f^{3}-f^{1}\left( e^{1}e^{0}+e^{2}e^{3}\right) -f^{2}\left( e^{2}e^{0}+e^{3}e^{1}\right) -f^{3}\left( e^{3}e^{0}+e^{1}e^{2}\right) \text{,}$$ where $e^{0},e^{1},e^{2},e^{3},f^{1},f^{2},f^{3}$ is any given orthonormal frame of $\mathbb{R}^{7}$. Such a three form, as well as any of its conjugates under elements of $GL\left( 7,\mathbb{R}\right) $, is called *positive*, and it determines a unique compatible inner product on $\mathbb{R}^{7}$ [@Bryant; @Metric; @Exc]. Gray [@Gray; @VectorCrossProd] shows that the $G_{2}$-holonomy of $M$ can be characterized by the existence of a positive harmonic three form $\Omega$. A seven dimensional manifold $M$ equipped with a positive closed three form $\Omega$ is called an almost $G_{2}$-manifold. Remark: The relationship between $G_{2}$-manifolds and almost $G_{2}$-manifolds is analogous to the relationship between Kähler manifolds and symplectic manifolds. Namely, we replace a parallel non-degenerate form by a closed one. For example, suppose that $X$ is a complex three dimensional Kähler manifold with a trivial canonical line bundle, i.e. there exists a nonvanishing holomorphic three form $\Omega_{X}$. Yau’s celebrated theorem says that there is a Kähler form $\omega_{X}$ on $X$ such that the corresponding Kähler metric has holonomy in $SU\left( 3\right) $, i.e. a Calabi-Yau threefold. In particular both $\Omega_{X}$ and $\omega_{X}$ are parallel forms. Then the product $M=X\times S^{1}$ is a $G_{2}$-manifold with $$\Omega=\operatorname{Re}\Omega_{X}+\omega_{X}\wedge d\theta\text{.}$$ Conversely, one can prove, using Bochner arguments, that every $G_{2}$-metric on $X\times S^{1}$ must be of this form.
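The claim that a positive three form determines a compatible inner product can be checked numerically in the flat model. The following short script (our illustration, not part of the original text) builds the coefficient tensor of $\Omega$ in the orthonormal frame above, ordered $(e^{0},e^{1},e^{2},e^{3},f^{1},f^{2},f^{3})$, and verifies the standard contraction identity $\sum_{j,k}\Omega_{ijk}\Omega_{ljk}=6\delta_{il}$, which recovers the Euclidean inner product up to a factor, together with the fact that $\Omega$ has exactly seven independent components.

```python
import itertools
import numpy as np

# Basis indices: e^0,...,e^3 -> 0..3 and f^1,f^2,f^3 -> 4..6.
# Monomials of Omega = f1 f2 f3 - f1(e1 e0 + e2 e3) - f2(e2 e0 + e3 e1) - f3(e3 e0 + e1 e2).
terms = [((4, 5, 6), +1),
         ((4, 1, 0), -1), ((4, 2, 3), -1),
         ((5, 2, 0), -1), ((5, 3, 1), -1),
         ((6, 3, 0), -1), ((6, 1, 2), -1)]

# Totally antisymmetric coefficient tensor Omega[i,j,k].
Om = np.zeros((7, 7, 7))
for (i, j, k), c in terms:
    for p in itertools.permutations(range(3)):
        sgn = 1 if p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)] else -1
        Om[tuple((i, j, k)[q] for q in p)] = c * sgn

# Seven independent components (each unordered index triple counted once).
n_monomials = int(np.count_nonzero(Om) / 6)

# Contraction Omega_{ijk} Omega_{ljk}: equals 6 * identity, reflecting that
# Omega determines the compatible inner product in this orthonormal frame.
G = np.einsum('ijk,ljk->il', Om, Om)
print(n_monomials)                     # 7
print(np.allclose(G, 6 * np.eye(7)))   # True
```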
More generally, if $\omega_{X}$ is a general Kähler form on $X$, then $\left( X\times S^{1},\Omega\right) $ is an *almost* $G_{2}$-manifold, and the converse is also true. Next we quickly review the geometry of $\mathbb{H}$-SLag cycles in an almost $G_{2}$-manifold (see [@LL]). An orientable four dimensional submanifold $C$ in an almost $G_{2}$-manifold $\left( M,\Omega\right) $ is called a coassociative submanifold, or simply a $\mathbb{H}$-SLag, if the restriction of $\Omega$ to $C$ is identically zero, $$\Omega|_{C}=0\text{.}$$ If $M$ is a $G_{2}$-manifold, then any coassociative submanifold $C$ in $M$ is calibrated by $\ast\Omega$ in the sense of Harvey and Lawson [@HL]; in particular, it is a volume-minimizing submanifold in its homology class in $M$. The normal bundle of any $\mathbb{H}$-SLag $C$ can be naturally identified with the bundle of self-dual two forms on $C$. McLean [@McLean] shows that infinitesimal deformations of any $\mathbb{H}$-SLag are unobstructed and they are parametrized by the space of harmonic self-dual two forms on $C$, i.e. $H_{+}^{2}\left( C,\mathbb{R}\right) $. For example, if $S$ is a complex surface in a Calabi-Yau threefold $X$, then $S\times\left\{ t\right\} $ is a $\mathbb{H}$-SLag in $M=X\times S^{1}$ for any $t\in S^{1}$. Notice that $H_{+}^{2}\left( S,\mathbb{R}\right) $ is spanned by the Kähler form and the real and imaginary parts of holomorphic two forms on $S$, and the latter can be identified with holomorphic normal vector fields along $S$ because of the adjunction formula and the Calabi-Yau condition on $X$. Thus all deformations of $S\times\left\{ t\right\} $ in $M$ as $\mathbb{H}$-SLag submanifolds are of the same form. Similarly, if $L$ is a three dimensional special Lagrangian submanifold in $X$ with phase $\pi/2$, i.e. $\omega|_{L}=\operatorname{Re}\Omega_{X}|_{L}=0$, then $L\times S^{1}$ is also a $\mathbb{H}$-SLag in $M=X\times S^{1}$.
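The last claim can be checked directly from the definition (a step left implicit above): restricting the product three form to $L\times S^{1}$ and using the special Lagrangian condition $\omega_{X}|_{L}=\operatorname{Re}\Omega_{X}|_{L}=0$ gives $$\Omega|_{L\times S^{1}}=\operatorname{Re}\Omega_{X}|_{L}+\omega_{X}|_{L}\wedge d\theta=0\text{,}$$ so $L\times S^{1}$ is indeed coassociative.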
Furthermore, all deformations of $L\times S^{1}$ in $M$ as $\mathbb{H}$-SLag submanifolds are of the same form because $H_{+}^{2}\left( L\times S^{1}\right) \cong H^{1}\left( L\right) $, which parametrizes infinitesimal deformations of special Lagrangian submanifolds in $X$. A $\mathbb{H}$-SLag cycle in an almost $G_{2}$-manifold $\left( M,\Omega\right) $ is a pair $\left( C,D_{E}\right) $ with $C$ a $\mathbb{H}$-SLag in $M$ and $D_{E}$ an ASD connection over $C$. Remark: $\mathbb{H}$-SLag cycles are supersymmetric cycles in physics as studied in [@MMMS]. Their moduli space admits a natural three form and a cubic tensor [@LL], which play the roles of the correlation function and the Yukawa coupling in physics. We assume that the ASD connection $D_{E}$ over $C$ has rank one, i.e. it is a $U\left( 1\right) $ connection. This avoids the occurrence of reducible connections; thus the moduli space $\mathcal{M}^{\mathbb{H}-SLag}\left( M\right) $ of $\mathbb{H}$-SLag cycles in $M$ is a smooth manifold. It has a natural orientation and its expected dimension equals $b^{1}\left( C\right) $, the first Betti number of $C$. This is because the moduli space of $\mathbb{H}$-SLags has dimension equal to $b_{+}^{2}\left( C\right) $ [@McLean] and the existence of an ASD $U\left( 1\right) $-connection over $C$ is equivalent to $H_{-}^{2}\left( C,\mathbb{R}\right) \cap H^{2}\left( C,\mathbb{Z}\right) \neq\emptyset$. The number $b^{1}\left( C\right) $ is responsible for twisting by a flat $U\left( 1\right) $-connection. For simplicity, we assume that $b^{1}\left( C\right) =0$; otherwise, one can cut down the dimension of $\mathcal{M}^{\mathbb{H}-SLag}\left( M\right) $ to zero by requiring the ASD connections over $C$ to have trivial holonomy around loops $\gamma_{1},...,\gamma_{b^{1}\left( C\right) }$ in $C$ representing an integral basis of $H_{1}\left( C,\mathbb{Z}\right) $. We plan to count the algebraic number of points in this moduli space $\#\mathcal{M}^{\mathbb{H}-SLag}\left( M\right) $.
This number, in the case of $X\times S^{1}$, can be identified with a proposed invariant of Joyce [@Joyce; @Count; @SLag] defined by counting rigid special Lagrangian submanifolds in any Calabi-Yau threefold. To explain this, we need the following proposition on the strong rigidity of product $\mathbb{H}$-SLags. If $L\times S^{1}$ is a $\mathbb{H}$-SLag in $M=X\times S^{1}$ with $X$ a Calabi-Yau threefold, then any $\mathbb{H}$-SLag representing the same homology class must also be a product. Proof: For simplicity we assume that the volume of the $S^{1}$ factor is unity, $Vol\left( S^{1}\right) =1$. If $L\times S^{1}$ is a $\mathbb{H}$-SLag in $M$, then $L$ is a special Lagrangian submanifold in $X$ with phase $\pi/2$, i.e. $\operatorname{Re}\Omega_{X}|_{L}=\omega|_{L}=0$. Suppose $C$ is another $\mathbb{H}$-SLag in $M$ representing the same homology class; then, as both are calibrated by $\ast\Omega$, we have $Vol\left( C\right) =Vol\left( L\right) $. If we write $C_{\theta}=C\cap\left( X\times\left\{ \theta\right\} \right) $ for any $\theta\in S^{1}$, then $Vol\left( C_{\theta}\right) \geq Vol\left( L\right) $, as $L$ is a calibrated submanifold in $X$. Furthermore, equality holds only if $C_{\theta}$ is also calibrated. In general we have $$Vol\left( C\right) \geq\int_{S^{1}}Vol\left( C_{\theta}\right) d\theta,$$ with equality if and only if $C$ is a product with $S^{1}$. Combining these, we have $$Vol\left( L\right) =Vol\left( C\right) \geq\int_{S^{1}}Vol\left( C_{\theta}\right) d\theta\geq\int_{S^{1}}Vol\left( L\right) d\theta =Vol\left( L\right) \text{.}$$ Thus both inequalities are in fact equalities. Hence $C=L^{\prime}\times S^{1}$ for some special Lagrangian submanifold $L^{\prime}$ in $X$. $\blacksquare$   Suppose $M=X\times S^{1}$ is a product $G_{2}$-manifold and we consider a product $\mathbb{H}$-SLag $C=L\times S^{1}$ in $M$. From the above proposition, every $\mathbb{H}$-SLag representing $\left[ C\right] $ must also be a product.
Since $b_{+}^{2}\left( C\right) =b^{1}\left( L\right) $, the rigidity of the $\mathbb{H}$-SLag $C$ in $M$ is equivalent to the rigidity of the special Lagrangian submanifold $L$ in $X$. When this happens, i.e. $L$ is a rational homology three sphere, we have $b^{2}\left( C\right) =0$ and $$\text{No. of ASD U(1)-bdl/}C=\#H^{2}\left( C,\mathbb{Z}\right) =\#H^{2}\left( L,\mathbb{Z}\right) =\#H_{1}\left( L,\mathbb{Z}\right) \text{.}$$ Here we have used the fact that the first cohomology group is always torsion free. Thus the number of such $\mathbb{H}$-SLag cycles in $X\times S^{1}$ equals the number of special Lagrangian rational homology three spheres in a Calabi-Yau threefold $X$, weighted by $\#H_{1}\left( L,\mathbb{Z}\right) $. Joyce [@Joyce; @Count; @SLag] shows that with this particular weight, the numbers of special Lagrangians in any Calabi-Yau threefold behave well under various surgeries on $X$, and expects them to be invariants. Thus in this case, we have $$\#\mathcal{M}^{\mathbb{H}-SLag}\left( X\times S^{1}\right) =\text{Joyce's proposed invariant for }\#\text{SLag. in }X\text{.}$$ In the next section, we will propose a homology theory whose Euler characteristic gives $\#\mathcal{M}^{\mathbb{H}-SLag}\left( M\right) $. Witten’s Morse theory for $\mathbb{H}$-SLag cycles ================================================== We are going to use the parametrized version of $\mathbb{H}$-SLag cycles in any almost $G_{2}$-manifold $M$. We fix an oriented smooth four dimensional manifold $C$ and a rank $r$ Hermitian vector bundle $E$ over $C$. We consider the *configuration space* $$\mathcal{C}=Map\left( C,M\right) \times\mathcal{A}\left( E\right) ,$$ where $\mathcal{A}\left( E\right) $ is the space of Hermitian connections on $E$. An element $\left( f,D_{E}\right) $ in $\mathcal{C}$ is called a *parametrized $\mathbb{H}$-SLag cycle* in $M$ if $$f^{\ast}\Omega=F_{E}^{+}=0\text{,}$$ where the self-duality is defined using the pullback metric from $M$.
Instead of $Aut\left( E\right) $, the symmetry group $\mathcal{G}$ in our situation consists of gauge transformations of $E$ which cover *arbitrary* diffeomorphisms of $C$, $$\begin{array} [c]{ccc}E & \overset{g}{\rightarrow} & E\\ \downarrow & & \downarrow\\ C & \overset{g_{C}}{\rightarrow} & C. \end{array}$$ It fits into the following exact sequence, $$1\rightarrow Aut\left( E\right) \rightarrow\mathcal{G}\rightarrow Diff\left( C\right) \rightarrow1\text{.}$$ The natural action of $\mathcal{G}$ on $\mathcal{C}$ is given by $$g\cdot\left( f,D_{E}\right) =\left( f\circ g_{C},g^{\ast}D_{E}\right) ,$$ for any $\left( f,D_{E}\right) \in\mathcal{C}=Map\left( C,M\right) \times\mathcal{A}\left( E\right) $. Notice that $\mathcal{G}$ preserves the set of parametrized $\mathbb{H}$-SLag cycles in $M$. The configuration space $\mathcal{C}$ has a natural one form $\Phi_{0}$: at any $\left( f,D_{E}\right) \in\mathcal{C}$ we can identify the tangent space of $\mathcal{C}$ as $$T_{\left( f,D_{E}\right) }\mathcal{C}=\Gamma\left( C,f^{\ast}T_{M}\right) \times\Omega^{1}\left( C,ad\left( E\right) \right) \text{.}$$ We define $$\Phi_{0}\left( f,D_{E}\right) \left( v,B\right) =\int_{C}Tr\left[ f^{\ast}\left( \iota_{v}\Omega\right) \wedge F_{E}+f^{\ast}\Omega\wedge B\right] \text{,}$$ for any $\left( v,B\right) \in T_{\left( f,D_{E}\right) }\mathcal{C}$. The one form $\Phi_{0}$ on $\mathcal{C}$ is closed and invariant under the action of $\mathcal{G}$.
Proof: Recall that there is a universal connection $\mathbb{D}_{E}$ over $C\times\mathcal{A}\left( E\right) $ whose curvature $\mathbb{F}_{E}$ at a point $\left( x,D_{E}\right) $ equals $$\begin{aligned} \mathbb{F}_{E}|_{\left( x,D_{E}\right) } & =\left( \mathbb{F}_{E}^{2,0},\mathbb{F}_{E}^{1,1},\mathbb{F}_{E}^{0,2}\right) \\ & \in\Omega^{2}\left( C\right) \otimes\Omega^{0}\left( \mathcal{A}\right) +\Omega^{1}\left( C\right) \otimes\Omega^{1}\left( \mathcal{A}\right) +\Omega^{0}\left( C\right) \otimes\Omega^{2}\left( \mathcal{A}\right)\end{aligned}$$ with $$\mathbb{F}_{E}^{2,0}=F_{E},\,\mathbb{F}_{E}^{1,1}\left( v,B\right) =B\left( v\right) ,\,\mathbb{F}_{E}^{0,2}=0,$$ where $v\in T_{x}C$ and $B\in\Omega^{1}\left( C,ad\left( E\right) \right) =T_{D_{E}}\mathcal{A}\left( E\right) $ (see e.g. [@Le; @Sympl; @Gauge]). The Bianchi identity implies that $Tr\mathbb{F}_{E}$ is a closed form on $C\times\mathcal{A}\left( E\right) $. We also consider the evaluation map, $$\begin{gathered} ev:C\times Map\left( C,M\right) \rightarrow M\\ ev\left( x,f\right) =f\left( x\right) .\end{gathered}$$ It is not difficult to see that the pushforward of the differential form $ev^{\ast}\left( \Omega\right) \wedge Tr\mathbb{F}_{E}$ on $C\times Map\left( C,M\right) \times\mathcal{A}\left( E\right) $ to $Map\left( C,M\right) \times\mathcal{A}\left( E\right) $ equals $\Phi_{0}$, i.e. $$\Phi_{0}=\int_{C}ev^{\ast}\left( \Omega\right) \wedge Tr\mathbb{F}_{E}\text{.}$$ Therefore the closedness of $\Phi_{0}$ follows from the closedness of $\Omega $. It is also clear from this description of $\Phi_{0}$ that it is $\mathcal{G}$-invariant. $\blacksquare$ From this proposition, we know that $\Phi_{0}=d\Psi_{0}$ locally for some function $\Psi_{0}$ on $\mathcal{C}$. As in the Chern-Simons theory, this function $\Psi_{0}$ can be obtained explicitly by integrating the closed one form $\Phi_{0}$ along any path joining it to a fixed element of $\mathcal{C}$.
When $M=X\times S^{1}$ and $C=L\times S^{1}$, this is essentially the functional used by Thomas in [@Thomas]. From now on, we assume that $E$ is a rank one bundle. The zeros of $\Phi_{0}$ are the same as parametrized $\mathbb{H}$-SLag cycles in $M$. Proof: Suppose $\left( f,D_{E}\right) $ is a zero of $\Phi_{0}$. By evaluating it on various $\left( 0,B\right) $, we have $f^{\ast}\Omega=0$, i.e. $f:C\rightarrow M$ is a parametrized $\mathbb{H}$-SLag. This implies that the map $$\lrcorner\Omega:T_{f\left( x\right) }M\rightarrow\Lambda^{2}T_{x}^{\ast}C$$ has image equal to $\Lambda_{+}^{2}T_{x}^{\ast}C$, for any $x\in C$. By evaluating $\Phi_{0}$ on various $\left( v,0\right) $, we have $F_{E}^{+}=0$, i.e. $\left( f,D_{E}\right) $ is a parametrized $\mathbb{H}$-SLag cycle in $M$. The converse is obvious. $\blacksquare$ From the above results, $\Phi_{0}$ descends to a closed one form on $\mathcal{C}/\mathcal{G}$, denoted $\Phi$. Locally we can write $\Phi=d\mathcal{F}$ for some function $\mathcal{F}$ whose critical points are precisely (unparametrized) $\mathbb{H}$-SLag cycles in $M$. Using the gradient flow lines of $\mathcal{F}$, we could formally define a Witten-Morse homology group, as in Floer’s theory. Roughly speaking, one defines a complex $\left( \mathbf{C}_{\ast},\partial\right) $, where $\mathbf{C}_{\ast}$ is the free Abelian group generated by critical points of $\mathcal{F}$ and $\partial$ is defined by counting the number of gradient flow lines between two critical points of relative index one. Remark: The equations for the gradient flow are given by $$\frac{\partial f}{\partial t}=\ast\left( f^{\ast}\xi\wedge F_{E}\right) ,\,\frac{\partial D_{E}}{\partial t}=\ast\left( f^{\ast}\Omega\right) ,$$ where $\xi\in\Omega^{2}\left( M,T_{M}\right) $ is defined by $\left\langle \xi\left( u,v\right) ,w\right\rangle =\Omega\left( u,v,w\right) $.
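Before turning to the analytic difficulties, the algebraic skeleton of such a complex can be illustrated by a finite dimensional toy example (ours, not the paper's): a Morse function on $S^{2}$ with two maxima, one saddle and one minimum. The signed counts of gradient flow lines give boundary matrices satisfying $\partial^{2}=0$, and the resulting homology recovers $H_{\ast}\left( S^{2}\right) $.

```python
import numpy as np

# Toy Morse complex for S^2 with a "dented" height function:
# critical points: one minimum m (index 0), one saddle s (index 1),
# two maxima M1, M2 (index 2).
# Signed counts of gradient flow lines (our toy data):
d2 = np.array([[1, -1]])   # dM1 = s, dM2 = -s   (rows: saddles, cols: maxima)
d1 = np.array([[0]])       # ds = m - m = 0      (rows: minima, cols: saddles)

# The defining property of a Morse complex: d o d = 0.
assert np.all(d1 @ d2 == 0)

def betti(d_in, d_out, dim):
    """rank H_k = dim C_k - rank(d out of C_k) - rank(d into C_k)."""
    rk_out = np.linalg.matrix_rank(d_out) if d_out.size else 0
    rk_in = np.linalg.matrix_rank(d_in) if d_in.size else 0
    return dim - rk_out - rk_in

b0 = betti(d1, np.zeros((0, 1)), 1)   # chains: m
b1 = betti(d2, d1, 1)                 # chains: s
b2 = betti(np.zeros((0, 2)), d2, 2)   # chains: M1, M2
print(b0, b1, b2)   # 1 0 1  -- the Betti numbers of S^2
```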
The equation $$\partial^{2}=0$$ requires a good compactification of the moduli space of $\mathbb{H}$-SLag cycles in $M$, which we are lacking at this moment (see [@Tian] however). We denote this proposed homology group by $H_{C}\left( M\right) $, or $H_{C}\left( M,\alpha\right) $ when $f_{\ast}\left[ C\right] =\alpha\in H_{4}\left( M,\mathbb{Z}\right) $. This homology group should be invariant under deformations of the almost $G_{2}$-metric on $M$ and its Euler characteristic equals $$\chi\left( H_{C}\left( M\right) \right) =\#\mathcal{M}^{\mathbb{H}-SLag}\left( M\right) \text{.}$$ Like Floer homology groups, they measure the *middle dimensional* topology of the configuration space $\mathcal{C}$ divided by $\mathcal{G}$. TQFT of $\mathbb{H}$-SLag cycles ================================ In this section we study complete almost $G_{2}$-manifolds $M_{i}$ with asymptotically cylindrical ends and the behavior of $H_{C}\left( M\right) $ when a closed almost $G_{2}$-manifold $M$ decomposes into a connected sum of two pieces, each with an asymptotically cylindrical end, $$M=M_{1}\underset{X}{\#}M_{2}.$$ Nontrivial examples of compact $G_{2}$-manifolds are constructed by Kovalev [@Kovalev] using such a connected sum approach. The boundary manifold $X$ is necessarily a Calabi-Yau threefold. We plan to discuss analytic aspects of the $M_{i}$’s in a future paper [@Le; @Asym; @Cyl; @G2]. Each $M_{i}$ will define a Lagrangian subspace $\mathcal{L}_{M_{i}}$ in the moduli space of special Lagrangian cycles in $X$. Furthermore, we expect a gluing formula expressing the above homology group for $M$ in terms of the Floer Lagrangian intersection homology group for the two Lagrangian subspaces $\mathcal{L}_{M_{1}}$ and $\mathcal{L}_{M_{2}}$, $$H_{C}\left( M\right) =HF_{Lag}^{\mathcal{M}^{SLag}\left( X\right) }\left( \mathcal{L}_{M_{1}},\mathcal{L}_{M_{2}}\right) \text{.}$$ These properties can be reformulated to give us a topological quantum field theory.
To begin we have the following definition. An almost $G_{2}$-manifold $M$ is called cylindrical if $M=X\times \mathbb{R}^{1}$ and its positive three form respects this product structure, i.e. $$\Omega_{0}=\operatorname{Re}\Omega_{X}+\omega_{X}\wedge dt\text{.}$$ A complete almost $G_{2}$-manifold $M$ with one end $X\times\lbrack0,\infty)$ is called asymptotically cylindrical if the restriction of its positive three form equals the above one for large $t$, up to a possible error of order $O\left( e^{-t}\right) $. More precisely, the positive three form $\Omega$ of $M$ restricted to its end equals $$\Omega=\Omega_{0}+d\zeta$$ for some two form $\zeta$ satisfying $\left| \zeta\right| +\left| \nabla\zeta\right| +\left| \nabla^{2}\zeta\right| +\left| \nabla^{3}\zeta\right| \leq Ce^{-t}.$ Remark: If $M$ is an almost $G_{2}$-manifold with an asymptotically cylindrical end $X\times\lbrack0,\infty)$, then $\left( X,\omega_{X},\Omega_{X}\right) $ is a complex threefold with a trivial canonical line bundle, but the Kähler form $\omega_{X}$ might not be Einstein. This is the case, i.e. $X$ is a Calabi-Yau threefold, provided that $M$ is a $G_{2}$-manifold. We will simply write $\partial M=X$. We consider $\mathbb{H}$-SLags $C$ in $M$ which satisfy a *Neumann condition* at infinity. That is, away from some compact set in $M$, the immersion $f:C\rightarrow M$ can be written as $$f:L\times\lbrack0,\infty)\rightarrow X\times\lbrack0,\infty)$$ with $\partial f/\partial t$ vanishing at infinity [@Le; @Asym; @Cyl; @G2]. A relative $\mathbb{H}$-SLag itself has an asymptotically cylindrical end $L\times\lbrack0,\infty)$ with $L$ a special Lagrangian submanifold in $X$.
A *relative $\mathbb{H}$-SLag cycle* in $M$ is a pair $\left( C,D_{E}\right) $ with $C$ a relative $\mathbb{H}$-SLag in $M$ and $D_{E}$ a unitary connection over $C$ with finite energy, $$\int_{C}\left| F_{E}\right| ^{2}dv<\infty\text{.}$$ Any finite energy connection $D_{E}$ on $C$ induces a unitary flat connection $D_{E^{\prime}}$ on $L$ [@Don; @Instanton; @Book]. Such a pair $\left( L,D_{E^{\prime}}\right) $ of a unitary flat connection $D_{E^{\prime}}$ over a special Lagrangian submanifold $L$ in a Calabi-Yau threefold $X$ is called a *special Lagrangian cycle* in $X$. Their moduli space $\mathcal{M}^{SLag}\left( X\right) $ plays an important role in the Strominger-Yau-Zaslow Mirror Conjecture [@SYZ] or [@Le; @Geom; @MS]. The tangent space to $\mathcal{M}^{SLag}\left( X\right) $ is naturally identified with $H^{2}\left( L,\mathbb{R}\right) \times H^{1}\left( L,ad\left( E^{\prime}\right) \right) $. For line bundles over $L$, the cup product $$\cup:H^{2}\left( L,\mathbb{R}\right) \times H^{1}\left( L,\mathbb{R}\right) \rightarrow\mathbb{R},$$ induces a symplectic structure on $\mathcal{M}^{SLag}\left( X\right) $ [@Hitchin; @SLag]. Using analytic results from [@Le; @Asym; @Cyl; @G2] about asymptotically cylindrical manifolds, we can prove the following theorem. Suppose $M$ is an asymptotically cylindrical (almost) $G_{2}$-manifold with $\partial M=X$. Let $\mathcal{M}^{\mathbb{H}-SLag}\left( M\right) $ be the moduli space of rank one relative $\mathbb{H}$-SLag cycles in $M$. Then the map defined by the boundary values, $$b:\mathcal{M}^{\mathbb{H}-SLag}\left( M\right) \rightarrow\mathcal{M}^{SLag}\left( X\right) ,$$ is a Lagrangian immersion. Sketch of the proof ([@Le; @Asym; @Cyl; @G2]): For any closed Calabi-Yau threefold $X$ (resp. $G_{2}$-manifold $M$), the moduli space of rank one special Lagrangian submanifolds $L$ (resp. $\mathbb{H}$-SLags $C$) is smooth [@McLean] and has dimension $b^{2}\left( L\right) $ (resp. $b_{+}^{2}\left( C\right) $). 
The same holds true for a complete manifold $M$ with an asymptotically cylindrical end $X\times\lbrack0,\infty)$, where $b_{+}^{2}\left( C\right) _{L^{2}}$ denotes the dimension of $L^{2}$-harmonic self-dual two forms on a relative $\mathbb{H}$-SLag $C$ in $M$. The linearization of the boundary value map $\mathcal{M}^{\mathbb{H}-SLag}\left( M\right) \rightarrow\mathcal{M}^{SLag}\left( X\right) $ is given by $H_{+}^{2}\left( C\right) _{L^{2}}\overset{\alpha}{\rightarrow }H^{2}\left( L\right) $. Similarly for the connection part, where the boundary value map is given by $H^{1}\left( C\right) _{L^{2}}\overset{\beta }{\rightarrow}H^{1}\left( L\right) $. We consider the following diagram, where each row is a long exact sequence of $L^{2}$-cohomology groups for the pair $\left( C,L\right) $ and each column is a perfect pairing. $$\begin{array} [c]{cccccccccccc}0 & \rightarrow & H_{+}^{2}\left( C,L\right) & \rightarrow & H_{+}^{2}\left( C\right) & \overset{\alpha}{\rightarrow} & H^{2}\left( L\right) & \rightarrow & H^{3}\left( C,L\right) & \rightarrow & \cdots & \\ & & \otimes & & \otimes & & \otimes & & \otimes & & & \\ 0 & \leftarrow & H_{+}^{2}\left( C\right) & \leftarrow & H_{+}^{2}\left( C,L\right) & \leftarrow & H^{1}\left( L\right) & \overset{\beta }{\leftarrow} & H^{1}\left( C\right) & \leftarrow & \cdots & \\ & & \downarrow & & \downarrow & & \downarrow & & \downarrow & & & \\ & & \mathbb{R} & & \mathbb{R} & & \mathbb{R} & & \mathbb{R} & & & \end{array}$$ Notice that $H_{+}^{2}\left( C,L\right) $, $H_{+}^{2}\left( C\right) $ and $H^{2}\left( L\right) $ parametrize infinitesimal deformations of $C$ with fixed $\partial C$, deformations of $C$ alone, and deformations of $L$, respectively. By simple homological algebra, it is not difficult to see that $\operatorname{Im}\alpha\oplus\operatorname{Im}\beta$ is a Lagrangian subspace of $H^{2}\left( L\right) \oplus H^{1}\left( L\right) $ with the canonical symplectic structure. Hence the result.
$\blacksquare$ Remark: The deformation theory of *conical* special Lagrangian submanifolds is developed by Pacini in [@Pacini]. We denote the immersed Lagrangian submanifold $b\left( \mathcal{M}^{\mathbb{H}-SLag}\left( M\right) \right) $ in $\mathcal{M}^{SLag}\left( X\right) $ by $\mathcal{L}_{M}$. When $M$ decomposes as a connected sum $M_{1}\#_{X}M_{2}$ along a long neck, as in Atiyah’s conjecture on the Floer Chern-Simons homology group [@At; @3; @4], we expect to have an isomorphism, $$H_{C}\left( M\right) \cong HF_{Lag}^{\mathcal{M}^{SLag}\left( X\right) }\left( \mathcal{L}_{M_{1}},\mathcal{L}_{M_{2}}\right) \text{.}$$ More precisely, suppose $\Omega_{t}$, with $t\in\lbrack0,\infty)$, is a family of $G_{2}$-structures on $M_{t}=M$ such that as $t$ goes to infinity, $M$ decomposes into two components $M_{1}$ and $M_{2}$, each with an asymptotically cylindrical end $X\times\lbrack0,\infty)$. Then we expect that $\lim _{t\rightarrow\infty}H_{C}\left( M_{t}\right) \cong HF_{Lag}^{\mathcal{M}^{SLag}\left( X\right) }\left( \mathcal{L}_{M_{1}},\mathcal{L}_{M_{2}}\right) $. We summarize these structures in the following table: $$\begin{tabular} [c]{|c||c|c|c}\cline{1-3}{\footnotesize Manifold:} & {\footnotesize (almost)} $G_{2}$ {\footnotesize -manifold,} $M^{7}$ & {\footnotesize (almost) CY threefold, }$X^{6}$ & $\begin{array} [c]{l}\,\\ \, \end{array} $\\\cline{1-3}{\footnotesize SUSY Cycles:} & $\mathbb{H}$ {\footnotesize -SLag. submfds.+ ASD bdl} & {\footnotesize SLag submfds.+ flat bdl} & $\begin{array} [c]{l}\,\\ \, \end{array} $\\\cline{1-3}{\footnotesize Invariant:} & {\footnotesize Homology group,} $H_{C}\left( M\right) $ & {\footnotesize Fukaya category,} $Fuk\left( \mathcal{M}^{SLag}\left( X\right) \right) .$ & $\begin{array} [c]{l}\,\\ \, \end{array} $\\\cline{1-3}\end{tabular}$$ These associations can be formalized to form a TQFT [@At; @TQFT].
Namely, we associate an additive category $F\left( X\right) =Fuk\left( \mathcal{M}^{SLag}\left( X\right) \right) $ to a closed almost Calabi-Yau threefold $X$, and a functor $F\left( M\right) :F\left( X_{0}\right) \rightarrow F\left( X_{1}\right) $ to an almost $G_{2}$-manifold $M$ with asymptotically cylindrical ends $X_{1}-X_{0}=X_{1}\cup\bar{X}_{0}$. They satisfy $$\begin{tabular} [c]{ll}(i) & $F\left( \emptyset\right) =\text{the additive tensor category of vector spaces }\left( \left( Vec\right) \right) \text{,}$\\ (ii) & $F\left( X_{1}\amalg X_{2}\right) =F\left( X_{1}\right) \otimes F\left( X_{2}\right) .$\end{tabular}$$ For example, when $M$ is a closed $G_{2}$-manifold, that is, a cobordism between empty manifolds, we have $F\left( M\right) :\left( \left( Vec\right) \right) \rightarrow\left( \left( Vec\right) \right) $ and the image of the trivial bundle is our homology group $H_{C}\left( M\right) $. More TQFTs ========== Notice that all the TQFTs we propose in this paper are formal mathematical constructions. Besides the lack of compactness for the moduli spaces, the *obstruction* issue is also a big problem if we try to make these theories rigorous. This problem was explained to the author by a referee. There are other TQFTs naturally associated to Calabi-Yau threefolds and $G_{2}$-manifolds, but (1) they do not involve nontrivial coupling between submanifolds and bundles and (2) new difficulties arise because the corresponding moduli spaces for Calabi-Yau threefolds have virtual dimension zero and could be singular. They are essentially in the paper by Donaldson and Thomas [@DT]. **TQFT of associative cycles** We assume that $M$ is a $G_{2}$-manifold, i.e. $\Omega$ is parallel rather than closed.
Three dimensional submanifolds $A$ in $M$ calibrated by $\Omega$ are called *associative submanifolds*, and they can be characterized by $\chi|_{A}=0$ ([@HL]), where $\chi\in\Omega^{3}\left( M,T_{M}\right) $ is defined by $\left\langle w,\chi\left( x,y,z\right) \right\rangle =\ast \Omega\left( w,x,y,z\right) $. We define a *parametrized A-cycle* to be a pair $\left( f,D_{E}\right) \in\mathcal{C}_{A}=Map\left( A,M\right) \times\mathcal{A}\left( E\right) ,$ with $f:A\rightarrow M$ a parametrized A-submanifold and $D_{E}$ a unitary flat connection on a Hermitian vector bundle $E$ over $A$. There is also a natural $\mathcal{G}$-invariant closed one form $\Phi_{A}$ on $\mathcal{C}_{A}$ given by $$\Phi_{A}\left( f,D_{E}\right) \left( v,B\right) =\int_{A}TrF_{E}\wedge B+\left\langle f^{\ast}\chi,v\right\rangle _{T_{M}},$$ for any $\left( v,B\right) \in\Gamma\left( A,f^{\ast}T_{M}\right) \times\Omega^{1}\left( A,ad\left( E\right) \right) =T_{\left( f,D_{E}\right) }\mathcal{C}_{A}$. Its zero set is the moduli space of A-cycles in $M$. As before, we could formally apply arguments from Witten’s Morse theory to $\Phi_{A}$ and define a homology group $H_{A}\left( M\right) $. The corresponding category associated to a Calabi-Yau threefold $X$ would be the Fukaya-Floer category of the moduli space of unitary flat bundles over holomorphic curves in $X$, denoted $\mathcal{M}^{curve}\left( X\right) $.
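Counting flat bundles over surfaces already produces a rigorous finite toy model of such a TQFT (our illustration, not from the paper): in two dimensional Dijkgraaf-Witten theory with finite gauge group $G$, the partition function of a genus $g$ surface is the weighted count of flat $G$-bundles $Z\left( \Sigma_{g}\right) =\#Hom\left( \pi_{1}\Sigma_{g},G\right) /\#G$, and it obeys exact gluing rules. A sketch for $G=S_{3}$:

```python
import itertools
from fractions import Fraction

# Elements of S3 as permutation tuples; composition and inverse.
G = list(itertools.permutations(range(3)))
def mul(a, b):   # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))
def inv(a):
    r = [0, 0, 0]
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)
e = (0, 1, 2)

def Z(genus):
    """Dijkgraaf-Witten partition function of a genus-g surface: count
    2g-tuples with product of commutators = identity, divide by |G|."""
    count = 0
    for tup in itertools.product(G, repeat=2 * genus):
        w = e
        for a, b in zip(tup[0::2], tup[1::2]):
            w = mul(mul(mul(w, a), b), mul(inv(a), inv(b)))
        if w == e:
            count += 1
    return Fraction(count, len(G))

# Mednykh's formula: Z(g) = sum over irreps chi of (|G|/dim chi)^(2g-2);
# for S3 the irrep dimensions are 1, 1, 2.
print(Z(1))   # 3   (= number of irreducible representations)
print(Z(2))   # 81  (= 6^2 + 6^2 + 3^2)
```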
We summarize these in the following table:

                $G_{2}$-manifold, $M^{7}$        CY threefold, $X^{6}$
  ------------- -------------------------------- ------------------------------------------------
  SUSY cycles   A-submanifolds + flat bundles    holomorphic curves + flat bundles
  Invariant     homology group, $H_{A}(M)$       Fukaya category, $Fuk(\mathcal{M}^{curve}(X))$

**TQFT of Donaldson-Thomas bundles**

We assume that $M$ is a seven-manifold with a $G_{2}$-structure such that its positive three-form $\Omega$ is co-closed, rather than closed, i.e. $d\Theta=0$ with $\Theta=\ast\Omega$. In [@DT] Donaldson and Thomas introduce a first-order Yang-Mills equation for $G_{2}$-manifolds, $$F_{E}\wedge\Theta=0\text{.}$$ Its solutions are the zeros of the following gauge-invariant one-form $\Phi_{DT}$ on $\mathcal{A}\left( E\right) $, $$\Phi_{DT}\left( D_{E}\right) \left( B\right) =\int_{M}Tr\left[ F_{E}\wedge B\right] \wedge\Theta\text{,}$$ for any $B\in\Omega^{1}\left( M,ad\left( E\right) \right) =T_{D_{E}}\mathcal{A}\left( E\right) $. This one-form $\Phi_{DT}$ is closed because $d\Theta=0$. As before, we can formally define a homology group $H_{DT}\left( M\right) $. The corresponding category associated to a Calabi-Yau threefold $X$ should be the Fukaya-Floer category of the moduli space of Hermitian Yang-Mills connections over $X$, denoted $\mathcal{M}^{HYM}\left( X\right) $.
Again we summarize these in a table:

                $G_{2}$-manifold, $M^{7}$       CY threefold, $X^{6}$
  ------------- ------------------------------- ---------------------------------------------
  SUSY cycles   DT-bundles                      Hermitian YM-bundles
  Invariant     homology group, $H_{DT}(M)$     Fukaya category, $Fuk(\mathcal{M}^{HYM}(X))$

It is an interesting problem to understand the transformations of these TQFTs under dualities in M-theory.

*Acknowledgments: This work is partially supported by NSF grant DMS-0103355. The author expresses his gratitude to J.H. Lee, R. Thomas, A. Voronov and X.W. Wang for useful discussions. The author also thanks the referee for many useful comments.*

[99]{}
B. S. Acharya, B. Spence, *Supersymmetry and M theory on 7-manifolds*, \[hep-th/0007213\].
M. Aganagic, C. Vafa, *Mirror symmetry and a $G_{2}$ flop*, \[hep-th/0105225\].
M. Atiyah, *New invariants of three and four dimensional manifolds*, in The Mathematical Heritage of Hermann Weyl, Proc. Symp. Pure Math., **48**, A.M.S. (1988), 285-299.
M. Atiyah, *Topological quantum field theories*, Inst. Hautes Études Sci. Publ. Math. No. 68 (1988), 175–186 (1989).
M. Atiyah, E. Witten, *M-theory dynamics on a manifold of $G_{2}$ holonomy*, \[hep-th/0107177\].
R. Bryant, *Metrics with exceptional holonomy*, Ann. of Math. 126 (1987) 525-576.
S. Donaldson, *Floer homology groups in Yang-Mills theory*, Cambridge Univ. Press (2002).
S. Donaldson, P. Kronheimer, *The geometry of four-manifolds*, Oxford University Press (1990).
S. Donaldson, R. Thomas, *Gauge theory in higher dimensions*, in The Geometric Universe: Science, Geometry and the Work of Roger Penrose, edited by S. A. Huggett et al., Oxford Univ. Press (1998).
K. Fukaya, Y.G. Oh, H. Ohta, K. Ono, *Lagrangian intersection Floer theory - anomaly and obstruction*, to appear in International Press.
R. Gopakumar, C. Vafa, *M-theory and topological strings - II*, \[hep-th/9812127\].
A. Gray, *Vector cross products on manifolds*, Trans. Amer. Math. Soc. 141 (1969) 465-504.
S. Gukov, S.-T. Yau, E. Zaslow, *Duality and fibrations on $G_{2}$ manifolds*, \[hep-th/0203217\].
R. Harvey, B. Lawson, *Calibrated geometries*, Acta Math. 148 (1982), 47-157.
N. Hitchin, *The moduli space of special Lagrangian submanifolds*. Dedicated to Ennio De Giorgi. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 25 (1997), no. 3-4, 503–515 (1998). \[dg-ga/9711002\].
N. Hitchin, *The geometry of three forms in 6 and 7 dimensions*, J. Differential Geom. 55 (2000), no. 3, 547–576. \[math.DG/0010054\].
D. Joyce, *On counting special Lagrangian homology 3-spheres*, \[hep-th/9907013\].
A. Kovalev, *Twisted connected sums and special Riemannian holonomy*, \[math.DG/0012189\].
J.H. Lee, N.C. Leung, *Geometric structures on $G_{2}$ and $Spin\left( 7\right) $-manifolds*, \[math.DG/0202045\].
N.C. Leung, *Symplectic structures on gauge theory*, Comm. Math. Phys. 193 (1998) 47-67.
N.C. Leung, *Mirror symmetry without corrections*, \[math.DG/0009235\].
N.C. Leung, *Geometric aspects of mirror symmetry*, to appear in the proceedings of ICCM 2001, \[math.DG/0204168\].
N.C. Leung, *Riemannian geometry over different normed division algebras*, preprint 2002.
N.C. Leung, in preparation.
N.C. Leung, S.-T. Yau, E. Zaslow, *From special Lagrangian to Hermitian-Yang-Mills via Fourier-Mukai transform*, to appear in Adv. Theor. Math. Phys. \[math.DG/0005118\].
M. Marino, R. Minasian, G. Moore, A. Strominger, *Nonlinear instantons from supersymmetric p-branes*, \[hep-th/9911206\].
R. McLean, *Deformations of calibrated submanifolds*, Comm. Anal. Geom. **6** (1998) 705-747.
T. Pacini, *Deformations of asymptotically conical special Lagrangian submanifolds*, preprint 2002. \[math.DG/0207144\].
A. Strominger, S.-T. Yau, E. Zaslow, *Mirror symmetry is T-duality*, Nuclear Physics **B479** (1996) 243-259; \[hep-th/9606040\].
R. P. Thomas, *Moment maps, monodromy and mirror manifolds*, in "Symplectic geometry and mirror symmetry", Proceedings of the 4th KIAS Annual International Conference, Seoul, edited by K. Fukaya, Y.-G. Oh, K. Ono and G. Tian, World Scientific, 2001.
G. Tian, *Gauge theory and calibrated geometry. I*, Ann. of Math. (2) 151 (2000), no. 1, 193–268.
--- abstract: 'Organo-metallic molecular structures where a single metallic atom is embedded in the organic backbone are ideal systems to study the effect of strong correlations on their electronic structure. In this work we calculate the electronic and transport properties of a series of metalloporphyrin molecules sandwiched by gold electrodes using a combination of density functional theory and scattering theory. The impact of strong correlations at the central metallic atom is gauged by comparing our results obtained using conventional DFT and DFT$+U$ approaches. The zero-bias transport properties may or may not show spin-filtering behavior, depending on the nature of the $d$ state closest to the Fermi energy. The type of $d$ state depends on the metallic atom and gives rise to interference effects that produce different Fano features. The inclusion of the $U$ term opens a gap between the $d$ states and qualitatively changes the conductance and spin-filtering behavior of some of the molecules. We explain the origin of the quantum interference effects in terms of the symmetry-dependent coupling between the $d$ states and other molecular orbitals, and propose the use of these systems as nanoscale chemical sensors. We also demonstrate that an adequate treatment of strong correlations is necessary to correctly describe the transport properties of metalloporphyrins and similar molecular magnets.' author: - 'R. Ferradás' - 'V. M. García-Suárez' - 'J. Ferrer' title: 'Symmetry-induced interference effects in metalloporphyrin wires' ---

Introduction
============

A key issue in the field of molecular electronics is the search for molecular compounds that give rise to new or improved functionalities. Porphyrin molecules are promising candidates in this respect and are receiving increasing attention.
These molecules play an essential role in many biological processes such as electron transfer, oxygen transport, photosynthetic processes and catalytic substrate oxidation [@Dolphin78]. Porphyrins have been extensively studied in the past by biologists and chemists [@Dorough51; @Gust76; @Goff76; @D'Souza94]. However, an increasing number of theoretical [@MengSheng02; @Palummo09; @Rovira97] and experimental [@OtsukiSTM10; @VictorNature11] physics analyses have appeared in the past few years. Progress in the design of supramolecular structures involving porphyrin molecules has been rather spectacular [@Irina09]. Porphyrins are cyclic conjugated molecules. Their parent form is porphine, which is made of four pyrrole groups joined by carbon bridges and has a nearly planar $D_{4h}$ symmetry [@Jentzen96]. The alternating single and double bonds in its structure make this molecule chemically very stable. In porphyrin systems, porphine is the base molecule, and different functional groups can be attached to the macrocycle by replacing a peripheral hydrogen. In addition, the macrocycle can accommodate a metallic atom inside, such as Fe (which is the base of the hemoglobin in mammals), Cu (hemolymph in invertebrates), Mg (chlorophyll in plants), etc. Large porphyrin systems can undergo certain ruffling distortions because of their peripheral ligands, the metallic atom inside, or the environment [@PaikSuh84]. It is precisely this large number of possible configurations, giving rise to a wide variety of interesting properties, that makes these molecules very attractive for molecular-scale technological applications. In this work, we present a theoretical study of the electronic and transport properties of porphyrins sandwiched by gold electrodes using Density Functional Theory (DFT). The porphyrin molecule is attached to the gold surface by a thiol group and oriented perpendicular to it, as sketched in Fig. (\[AuPhFe\]). The metallic atom placed inside can be Fe, Co, Ni, Cu and Zn.
For the sake of comparison, we have also studied the porphine compound, which has no metallic atom. Since DFT fails to correctly describe transition metal elements in correlated configurations, we have also adopted a DFT$+U$ approach. We show that the inclusion of strong correlations is necessary to accurately characterize the transport properties of some of the metalloporphyrin complexes. The outline of the article is as follows: in section II we briefly explain the DFT$+U$ flavor used in our calculations. In section III we give the computational details and in the last two sections, IV and V, we present and discuss the electronic and transport properties of the junctions, respectively.

Details of the DFT$+U$ approach
===============================

DFT has emerged as the tool of choice for the simulation of a wide array of materials and nanostructures. However, the theory fails to describe strongly correlated electron systems, such as embedded 3d transition metal or 4f rare earth elements. Apart from the fact that even exact DFT cannot describe all excited states[@Carrascal12], the approximations included in the exchange-correlation potential induce qualitative errors in correlated systems. There have been many attempts to fix these problems. These include generating exchange-correlation functionals specifically tailored to the system under investigation, but such functionals are frequently not transferable. Another attempt is based on removing the electronic self-interactions introduced by the approximate treatment of exchange in DFT[@PerdewZunger81]. This unphysical self-interaction is a significant source of errors when electrons are localized. However, it is not clear that removing the self-interaction error alone renders the physical description qualitatively correct.
We will use in this article the popular DFT$+U$ approach, developed by Anisimov and co-workers [@AnisimovZaanen91; @SolovyevDederichs94; @LiechtensteinAnisimov95], which represents a simple mean-field way to correct for strong correlations in systems with transition metals or rare earth compounds. ![\[AuPhFe\](Color online) Schematic view of a Fe-porphyrin molecule between gold leads. Yellow, dark yellow, grey, black, blue, and magenta represent gold, sulphur, hydrogen, carbon, nitrogen and iron, respectively.](AuPhFe.eps){width="\columnwidth"} The DFT$+U$ method assumes that electrons can be split into two subsystems: delocalized electrons, which can be treated with traditional DFT, and localized electrons (3d or 4f), which must be handled using a generalized Hubbard model Hamiltonian with orbital-dependent local electron-electron interactions. The DFT$+U$ functional is then defined as: $$\begin{aligned} E^{\mathrm{DFT}+U} [\rho^\sigma (\overrightarrow{r}) , \{ n^\sigma \}]=E^\mathrm{DFT} [\rho^\sigma (\overrightarrow{r})]+\nonumber \\ +E^\mathrm{Hub} [\{ n^\sigma \}]-E^\mathrm{DC} [\{ n^\sigma \}]\end{aligned}$$ where $E^\mathrm{DFT}$ is the standard DFT functional; $\rho^\sigma (\overrightarrow{r})$ is the charge density for the $\sigma$ spin; $E^\mathrm{Hub}$ is the on-site Coulomb correction; $E^\mathrm{DC}$ is the double counting correction, which is necessary to avoid including again the average electron-electron interaction that is already contained in $E^\mathrm{DFT}$; and $\{ n^\sigma \}$ are the atomic orbital occupations corresponding to the orbitals that need to be corrected. The generalized Hubbard Hamiltonian is written as $$\hat{H}_\mathrm{int} = \frac{U}{2} \sum_{m,m',\sigma}\hat{n}_{m,\sigma}\hat{n}_{m',-\sigma}+\frac{U-J}{2} \sum_{m\neq m',\sigma}\hat{n}_{m,\sigma}\hat{n}_{m',\sigma}$$ Following Ref.
, we take the atomic limit of the above Hamiltonian, where the number of strongly correlated electrons $N_{\sigma} = \sum_{m} n_{m,\sigma}$ is an integer, and write: $$\begin{aligned} E^\mathrm{DC}& =& \langle \mathrm{integer}\, N_{\sigma} |\hat{H}_\mathrm{int}| \mathrm{integer}\, N_{\sigma} \rangle \nonumber \\ &=& \frac{U}{2} \sum_{\sigma} N_{\sigma} N_{-\sigma}+\frac{U-J}{2} \sum_{\sigma} N_{\sigma}(N_{\sigma} -1)\end{aligned}$$ In contrast, for a noninteger occupation number, corresponding to an ion embedded in a larger system, we write: $$\begin{aligned} E^\mathrm{Hub} &=& \langle \mathrm{noninteger}\, N_{\sigma} |\hat{H}_\mathrm{int}| \mathrm{noninteger}\, N_{\sigma} \rangle \\ &=& \frac{U}{2} \sum_{m,m',\sigma} n_{m,\sigma} n_{m',-\sigma}+ \frac{U-J}{2} \sum_{m\neq m',\sigma}n_{m,\sigma}n_{m',\sigma}\nonumber\end{aligned}$$ The above two equations can be merged and after some algebra the DFT$+U$ functional can be written as $$\begin{aligned} E^{\mathrm{DFT}+U}= E^\mathrm{DFT} + \frac{U_\mathrm{eff}}{2} &\sum_{m,\sigma} n_{m,\sigma}(1- n_{m,\sigma})\end{aligned}$$ where $U_\mathrm{eff} = U-J$. It has to be noted that the correction term depends on the occupation number matrix. This occupation number matrix, a centrepiece of the DFT$+U$ approach, is not well defined, because the total charge density cannot be broken down into simple atomic contributions. Since the appearance of the DFT$+U$ approach, there have been many different definitions of the occupation number matrix [@CTablero08]. We evaluate the occupation number matrix by introducing projection operators in the following way[@CTablero08]: $${n_{m}}^{(\sigma)} = \sum_{j} {q_{j}}^{(\sigma)} \langle {\varphi_{j}}^{(\sigma)} |{\hat{P}_{m}}^{(\sigma)}|{\varphi_{j}}^{(\sigma)}\rangle$$ where ${\varphi_{j}}^{(\sigma)}$ are the KS eigenvectors for the $j$ state with spin index $\sigma$ and $q_{j}^\sigma$ are their occupations.
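The "some algebra" that merges $E^\mathrm{Hub}$ and $E^\mathrm{DC}$ into the $\frac{U_\mathrm{eff}}{2}\sum_{m,\sigma} n_{m,\sigma}(1-n_{m,\sigma})$ correction can be checked numerically. The sketch below (not part of the authors' code; $U$, $J$ and the fractional occupations are purely illustrative) evaluates both sides of the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
U, J = 4.5, 0.9                           # illustrative values in eV
n = rng.uniform(0.0, 1.0, size=(2, 5))    # n[sigma, m]: fractional occupations of 5 d orbitals

N = n.sum(axis=1)                         # N_sigma = sum_m n_{m,sigma}

# E^Hub evaluated with fractional occupations
e_hub = 0.5 * U * (N[0] * N[1] + N[1] * N[0])
for s in range(2):
    for m in range(5):
        for mp in range(5):
            if m != mp:
                e_hub += 0.5 * (U - J) * n[s, m] * n[s, mp]

# E^DC with N_sigma = sum_m n_{m,sigma} substituted into the atomic-limit expression
e_dc = 0.5 * U * (N[0] * N[1] + N[1] * N[0]) \
     + 0.5 * (U - J) * sum(Ns * (Ns - 1.0) for Ns in N)

correction = e_hub - e_dc
closed_form = 0.5 * (U - J) * (n * (1.0 - n)).sum()
assert abs(correction - closed_form) < 1e-9
```

The cross-spin terms cancel exactly between the two expressions, so the surviving correction is same-spin only and vanishes when every occupation is 0 or 1, i.e. for integer fillings.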
The choice of the projection operators $\hat{P}_{m}^{(\sigma)}$ is crucial, because, with a non-orthogonal basis set, different projection operators give different occupation number matrices. In our case, the choice corresponds to the so-called [*full*]{} representation. Here, the selected operator is $${\hat{P}_{m}}^{(\sigma)} = |\chi_{m} \rangle \langle \chi_{m}|$$ where $|\chi_{m} \rangle$ are the atomic orbitals of the strongly correlated electrons. Introducing this projector in (6), we get: $${{\bf n}_{\sigma}}^\mathrm{full} = {\bf S}{\bf D}_{\sigma}{\bf S}$$ where ${\bf S}$ is the overlap matrix and ${\bf D}_{\sigma}$ is the density matrix of the system.

Computational Method
====================

We have performed the electronic structure calculations using the DFT code SIESTA[@SIESTA], which uses norm-conserving pseudopotentials and a basis set of pseudoatomic orbitals to span the valence states. For the exchange and correlation potential, we have used both the local density approximation (LDA), as parameterized by Ceperley and Alder[@CA], and the generalized gradient approximation (GGA), as parameterized by Perdew, Burke and Ernzerhof[@PBE]. SIESTA parameterizes the pseudopotentials according to the Troullier and Martins[@Tro91] prescription and factorizes them following Kleinman and Bylander[@Kle82]. We included non-linear core corrections in the transition metal pseudopotentials to correctly take into account the overlap between the valence and the core states [@Lou82]. We also used small non-linear core corrections in all the other elements to get rid of the small peak in the pseudopotential close to the nucleus when using the GGA approximation. We explicitly included the $s$ and $d$ orbitals of gold as valence orbitals and employed a single-${\zeta}$ basis (SZ) to describe them. We used a double-${\zeta}$ polarized basis (DZP) for all the other elements (H, C, O, N, S and TM).
We computed the density, Hamiltonian and overlap matrix elements on a real-space grid defined with an energy cutoff of 400 Ry. We used a single $k$-point when performing the structural relaxations and transport calculations, which is enough to relax the coordinates and correctly compute the transmission around the Fermi level in the case of gold electrodes. We relaxed the coordinates until all forces were smaller than 0.01 eV/Å. We varied the $U$ parameter and the radii of the $U$-projectors and compared the results with experiments or previous simulations of the isolated molecules. In this way we found that the optimal values for them were 4.5 eV and 5.5 Å, respectively. Fig. (\[AuPhFe\]) shows the central part of the extended molecule in a gold-Fe-porphyrin-gold junction. The gold electrodes were grown in the (001) direction, which we took as the $z$ axis. The sulphur atoms were contacted to the gold surfaces in the hollow position at a distance of 1.8 Å. We carried out a study of the most stable position and distance and found, both in GGA and LDA, that the hollow configuration was more stable than the top and bottom, in agreement with previous results obtained with other molecules. The most stable distances were 1.6 Å and 1.8 Å for LDA and GGA, respectively. We finally chose a distance of 1.8 Å for all cases in order to make a systematic study of geometrically identical systems. We performed the transport calculations using GOLLUM, a newly developed and efficient code[@Gollum]. The junctions were divided in three parts: left and right leads and extended molecule (EM). This EM block included the central part of the junction (molecule attached to the gold surfaces) and also some layers of the gold leads to make sure that the electronic structure was converged to the bulk electronic structure. The same general parameters as in the bulk simulation (real-space grid, perpendicular $k$-points, temperature, etc.)
and also the same parameters for the gold electrodes (bulk coordinates along $x$ and $y$ and basis set) were used in the EM simulation.

        LDA    LDA$+U$   GGA    GGA$+U$
  ----- ------ --------- ------ ---------
  FeP   0.10   0.85      0.48   1.10
  CoP   0.27   1.80      0.90   1.80
  NiP   1.55   1.65      1.50   1.60
  CuP   0.85   1.40      0.75   1.40
  ZnP   1.60   1.60      1.60   1.60

  : Energy gaps of metalloporphyrins in the gas phase, calculated with LDA and GGA and with or without $U$.[]{data-label="Tab01"}

Electronic structure of metalloporphyrins in vacuum
===================================================

First, we studied the effect of the $U$ term on the electronic structure of the metalloporphyrin (MP) molecules in vacuum. From these initial simulations it is already possible to get an idea of the main effects of correlations and their future impact on the transport properties. We chose for these analyses a unit cell of size $35 \times 15 \times 35 \ {\AA}$ to ensure that the molecules did not interact with adjacent images. In order to determine the influence of different $U$’s and cutoff radii and see how the results compare to previous experiments and calculations, we first studied the case of the bulk oxide FeO [@Parmigiani99], where DFT is known to give qualitatively different results (metallic instead of insulating character) [@Cococcioni05]. We performed calculations with $U=4$ eV and $U=4.5$ eV, which is the range of values most used in the literature for iron [@Cococcioni05], and we used projectors with different radii. We found that the results were very sensitive to these radii. Specifically, the gap for FeO only appeared for radii larger than $2.5$ Bohr. The parameters that best fitted the experiments and previous calculations for FeO were $U=4.5$ eV and $r_\mathrm{c} = 5.5$ Bohr, which gave a gap of $2.5$ eV (2.4 eV in Ref. ).
To reproduce previous theoretical results published for the iron porphyrin[@MengSheng02; @Panchmatia08], we had to choose $r_\mathrm{cut} = 1.5$ Bohr, which is smaller than the values used for FeO. This is possibly justified by the fact that the values of $U$ and the projector radii depend on the environment where the strongly correlated atom is located. In the case of the other transition metals (Co, Ni, Cu and Zn), $U$ is expected to increase towards Zn while remaining similar [@SolovyevDederichs94]. We therefore used the same parameters for all metalloporphyrins, which also simplified the comparison between different cases and made the study more systematic. We summarize in the following subsections the main features of the electronic structure of all molecules. In some cases we show the projected density of states (PDOS) onto the $d$ orbitals and/or the carbon and nitrogen atoms to determine the properties of the $d$ states and their relation to the other molecular states. The gaps and magnetic moments of each molecule calculated with LDA and GGA and with or without $U$ are summarized in Tables \[Tab01\] and \[Tab02\], respectively.

            LDA    LDA$+U$   GGA    GGA$+U$
  --------- ------ --------- ------ ---------
  FeP       2.00   2.00      2.00   2.00
  CoP       1.00   1.00      1.00   1.00
  NiP       0.00   2.00      0.00   2.00
  CuP       1.00   1.00      1.00   1.00
  ZnP       0.00   0.00      0.00   0.00
  AuFePAu   1.14   1.32      1.07   1.34
  AuCoPAu   0.62   1.07      0.83   1.08
  AuNiPAu   0.00   0.00      0.00   0.00
  AuCuPAu   0.84   0.97      0.80   0.98
  AuZnPAu   0.02   0.00      0.02   0.00

  : Magnetic moments in units of $\mu_\mathrm{B}$ of metalloporphyrins in the gas phase and between gold leads, calculated with LDA and GGA and with or without $U$.[]{data-label="Tab02"}

Iron porphyrin FeP
------------------

The lowest energy electronic configuration for FeP in all calculated cases (LDA, LDA$+U$, GGA, and GGA$+U$) is \[...\]$ {(d_{xy})^2} {(d_{z^2})^2} {(d_{yz})^1} {{(d_{{x^2}-{y^2}}})^1} $.
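The spin state implied by this configuration follows from a simple electron count, which can be sketched as follows (a trivial illustration, not part of the authors' method):

```python
# d-orbital occupations for FeP, read off from the configuration above
occ = {"d_xy": 2, "d_z2": 2, "d_yz": 1, "d_x2-y2": 1, "d_xz": 0}

n_electrons = sum(occ.values())                      # 6 d electrons for Fe(II)
unpaired = sum(1 for v in occ.values() if v == 1)    # singly occupied orbitals
S_z = unpaired / 2                                   # total spin projection
moment = 2 * S_z                                     # spin-only moment in mu_B (g ~ 2)
print(n_electrons, S_z, moment)                      # 6 1.0 2.0
```

The two unpaired electrons give $S_z=1$ and a moment of $2\,\mu_\mathrm{B}$, matching the FeP entry in Table \[Tab02\].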
This configuration is a consequence of the crystalline field generated by the molecule, which is located on the $xz$ plane (the largest interaction between the nitrogens and the $d$ states corresponds to the $d_{xz}$, which moves up in energy and empties). The total spin is therefore $S_z=1$, an intermediate-spin configuration, which is in agreement with experimental results[@Spin-citation]. The PDOS on the iron $d$ states calculated with GGA and GGA$+U$ is shown in Fig. (\[PDOS.GGA.SP.Fe\]) for spin up and spin down electrons. As can be seen, the closest orbitals to the Fermi level are the spin down $d_{xy}$ and $d_{yz}$. The states that are filled and contribute to the magnetic moment of the molecule are the $d_{yz}$ and the $d_{x^2-y^2}$, whereas the $d_{xz}$ is completely empty. The $d_{yz}$ also shows a strong hybridization. When the $U$ is included, all gaps increase as a consequence of the movement of the filled orbitals to lower energies and the empty orbitals to higher energies. From here it is already possible to get an idea of the effect of the iron states on the transport properties of the molecule. States with large hybridization with other molecular states, such as the $d_{yz}$, are expected to give rise to relatively broad transmission resonances, whereas those states with small hybridization, such as the $d_{z^2}$ and the $d_{x^2-y^2}$, are expected to produce either very thin resonances or Fano resonances, as explained below. ![\[PDOS.GGA.SP.Fe\](Color online) Projected density of states (PDOS) on the iron $d$ states $- d_{xy}$ (a), $d_{yz}$ (b), $d_{3z^2-r^2}$ (c), $d_{xz}$ (d) and $d_{{x^2}-{y^2}}$ (e) $-$ for an iron metalloporphyrin in the gas phase, computed with GGA and spin polarization. Left (1) and right (2) columns correspond to calculations with $U=0$ eV and $U=4.5$ eV.
Positive and negative values represent spin up and spin down electrons.](PDOS.GGA.SP.Fe.eps){width="\columnwidth"} ![\[PDOS.GGA.SP.FeP\](Color online) Projected density of states (PDOS) on the iron (a), nitrogen (b) and carbon (c) states for an iron metalloporphyrin in the gas phase, computed with GGA and spin polarization. Left (1) and right (2) columns correspond to calculations with $U=0$ eV and $U=4.5$ eV. Positive and negative values represent spin up and spin down electrons. Notice the vertical scale is different in the middle panels (nitrogen).](PDOS.GGA.SP.FeP.eps){width="\columnwidth"} The amount of hybridization can also be seen by plotting the PDOS on each type of molecular atom, as shown in Fig. (\[PDOS.GGA.SP.FeP\]) for GGA. Notice that there are iron states that do not hybridize with the rest of the molecule, whereas other iron states do hybridize, producing extended molecular orbitals. Focusing on particular states it is possible to see that the HOMO and LUMO have Fe and C contributions. In the HOMO, the contribution of Fe is bigger, but in the LUMO both Fe and C contribute equally. Specifically, the Fe contribution to the HOMO (LUMO) comes from the $3d_{xy}$ ($3d_{yz}$) orbitals; in addition, iron contributes to the HOMO-1 with the $3d_{z^2}$ and a small part of the $3d_{{x^2}-{y^2}}$. In the case of carbon the state that contributes to both the HOMO and LUMO is the $2p_{y}$. Note that there are also sulphur states around the Fermi level, especially on the LUMO and below the HOMO, but in order to simplify the description we focus only on the carbon, nitrogen and metallic states. In this case the HOMO-LUMO gap is about 0.5 eV. With GGA$+U$ the states that most contribute to the HOMO are still Fe $3d_{xy}$, but the C $2p_{y}$ states are the most important in the LUMO; the LUMO also has a small contribution from N $2p_{y}$. Even though the $U$ term acts only on the iron atom, the C $2p_{y}$ states are indirectly affected, i.e.
the C $2p_{y}$ states, located around -1 eV and 1 eV, lower their energy as a consequence of the changes in the iron states. In this case the HOMO-LUMO gap increases to 1.1 eV. With LDA and LDA$+U$ the results are slightly different. The states that contribute most to the HOMO and LUMO are Fe $3d_{z^2}$ and C $2p_{y}$. The HOMO-LUMO gap without $U$ is about 0.1 eV. With LDA$+U$ the Fe $3d_{z^2}$ states contribute much more to the HOMO and the C $2p$ to the LUMO, and the C $2p$ states are less affected by the $U$. The gap with $U$ increases to 0.8 eV. ![\[Fe.XZ-YZ\](Color online) Spatial distribution of the density of states projected on the spin-down main peak of the iron $d_{yz}$ (a) and $d_{xz}$ (b), computed with GGA.](Fe.XZ-YZ.eps){width="0.8\columnwidth"} By using the spatial projection of the density of states (local density of states, LDOS) it is also possible to understand where a particular molecular state is located on the molecule. In Fig. (\[Fe.XZ-YZ\]) we show the LDOS projected on the molecular states associated with the $d_{yz}$ and $d_{xz}$ spin down states (with an isosurface value of 0.001 e/Å$^3$), i.e. those where the weight of these $d$ orbitals for spin down is largest, calculated with GGA. These states are located at $E-E_\mathrm{F}= 0.29$ eV ($yz$) and 2.29 eV ($xz$) and move across the Fermi level when the atomic charge of the metal increases towards Zn. Notice that in the case of the $d_{yz}$ state, due to the stronger hybridization, the $d$ peak splits in two and therefore the choice is ambiguous. The spatial distribution of each peak is similar, however. On each $d$ state it is possible to see the typical shape associated with it, i.e. four lobes on the diagonals of the $YZ$ or $XZ$ planes, plus some charge on other atoms of the molecule due to hybridization with other molecular orbitals.
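The qualitative link between weak hybridization and Fano lineshapes can be illustrated with a minimal scattering sketch, entirely separate from the actual GOLLUM calculations and with purely hypothetical parameters: a site in a one-dimensional tight-binding chain with a weakly side-coupled level (mimicking a localized $d$ state) shows a sharp Fano antiresonance at the level energy, while the background transmission stays close to one.

```python
import numpy as np

def transmission(E, eps0=0.0, eps1=0.5, tc=0.2, t=1.0):
    """Landauer transmission of a single site (energy eps0) in a 1D chain
    (hopping t) with a side-coupled level eps1 (coupling tc). All
    parameters are illustrative, in units of the lead hopping."""
    E = E + 1e-12j                                     # retarded branch
    sigma = (E - 1j * np.sqrt(4 * t**2 - E**2)) / 2    # semi-infinite lead self-energy
    gamma = 2 * (-sigma.imag)                          # broadening from one lead
    eps_eff = eps0 + tc**2 / (E - eps1)                # side level folded into the site
    G = 1.0 / (E - eps_eff - 2 * sigma)                # retarded Green's function
    return float(gamma**2 * np.abs(G)**2)

clean = transmission(0.0, tc=0.0)   # ~1: perfect transmission without the level
dip = transmission(0.5)             # ~0: Fano antiresonance at E = eps1
```

A level inserted directly in the transport path would instead give a broad Breit-Wigner resonance, which is the toy-model analogue of the strongly hybridized $d_{yz}$ versus the weakly coupled $d_{z^2}$ and $d_{x^2-y^2}$ states discussed above.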
Notice that the $d_{yz}$ state does not interact too much with the nitrogens but rather with the carbons, especially with those located closest to the sulphur atoms. This produces a hybridization between this orbital and the carbon $\pi$ states, as can be clearly seen in Fig. (\[Fe.XZ-YZ\]), where the charge on the carbon atoms is mainly located on top of them. On the other hand, in the $d_{xz}$ case, the lobes are directed exactly towards the nitrogen atoms and therefore this state interacts strongly with them. This interaction is $\sigma$-like, which is the type of interaction that nitrogen forms with the adjacent atoms, and therefore localizes the charge between atoms. The difference between the $d_{yz}$ and $d_{xz}$ orbitals and their coupling to different molecular states has a direct impact on the transport properties of some of these molecules, as explained in the next section. ![\[PDOS.GGA.SP.Ni\](Color online) Projected density of states (PDOS) on the nickel $d$ states $- d_{xy}$ (a), $d_{yz}$ (b), $d_{z^2}$ (c), $d_{xz}$ (d) and $d_{{x^2}-{y^2}}$ (e) $-$ for a nickel metalloporphyrin in the gas phase, computed with GGA and spin polarization. Left (1) and right (2) columns correspond to calculations with $U=0$ eV and $U=4.5$ eV. Positive and negative values represent spin up and spin down electrons.](PDOS.GGA.SP.Ni.eps){width="\columnwidth"}

CoP
---

The lowest energy electronic configuration for CoP, in all cases, is \[...\]$ {(d_{xy})^2} {(d_{z^2})^2} {(d_{yz})^2} {{(d_{{x^2}-{y^2}}})^1}$, which produces a ground state with $S_z=1/2$, in agreement with previous results [@MengSheng02]. With GGA, the HOMO is made of Co $3d_{yz}$ and $3d_{xy}$ states, C $2p_{y}$ states and a small fraction of N $2p_{y}$ states. The LUMO is made only of Co $3d_{z^2}$ and a small fraction of $3d_{{x^2}-{y^2}}$. The gap is about 0.9 eV. With GGA$+U$, in general, all states (Co and C) lower their energy.
Now the HOMO and LUMO are formed by C and N $2p_{y}$ states and have a small contribution from Co $3d_{xz}$ states; the GGA$+U$ gap is 1.8 eV. With LDA the results are similar, but the HOMO-LUMO gap is 0.3 eV, which increases to 1.7 eV for LDA$+U$.

NiP
---

Unlike the FeP and CoP cases, the ground state electronic configuration of NiP changes when the $U$ is included. For GGA and LDA, the electronic configuration is \[...\]$ {(d_{xy})^2} {(d_{z^2})^2} {(d_{yz})^2} {{(d_{{x^2}-{y^2}}})^2}$, i.e. $S_z=0$, which makes the molecule diamagnetic. When the $U$ is introduced, the electronic configuration becomes \[...\]$ {(d_{xy})^2} {(d_{z^2})^2} {(d_{yz})^2} {{(d_{{x^2}-{y^2}}})^1} {(d_{xz})^1}$. The spin is $S_z=1$, and the molecule becomes magnetic. This is produced by the transfer of one electron from the $3d_{{x^2}-{y^2}}$ to the $3d_{xz}$ orbital. This can clearly be seen in Fig. (\[PDOS.GGA.SP.Ni\]), where without $U$ both spin up and spin down $3d_{xz}$ states are above the Fermi level (panel (d1)) whereas both $3d_{{x^2}-{y^2}}$ are below the Fermi level (panel (e1)). However, when $U$ is introduced one of the $3d_{xz}$ moves downwards and one of the $3d_{{x^2}-{y^2}}$ moves upwards, each of them crossing the Fermi level. It can therefore be concluded that Hund’s interaction is enhanced when $U$ is introduced due to the larger repulsion between electrons. ![\[PDOS.GGA.SP.NiP\](Color online) Projected density of states (PDOS) on the nickel (a), nitrogen (b) and carbon (c) states for a nickel metalloporphyrin in the gas phase, computed with GGA and spin polarization. Left (1) and right (2) columns correspond to calculations with $U=0$ eV and $U=4.5$ eV. Positive and negative values represent spin up and spin down electrons. Notice the vertical scale is different in the middle panels (nitrogen).](PDOS.GGA.SP.NiP.eps){width="\columnwidth"} Fig. (\[PDOS.GGA.SP.NiP\]) shows the PDOS for Ni, C and N. With GGA the HOMO is formed by Ni, C and N states.
Specifically, by Ni $3d_{z^2}$ and $3d_{{x^2}-{y^2}}$ states and by C and N $2p_{y}$ states. Unlike the previous cases, the states below the Fermi level with spin up compensate the states with spin down and the molecule becomes diamagnetic. The LUMO is formed by Ni $3d_{xz}$ states and has a small contribution of N $2p_{x}$ and $2p_{z}$ states. The gap is about 1.55 eV. However, when $U$ is introduced, the HOMO is just made of C and N $2p_{y}$ states. The LUMO is made of Ni $3d_{z^2}$ and $3d_{{x^2}-{y^2}}$ states and of C and N $2p_{y}$ states, just like the HOMO with GGA. In addition, as said before, the molecule becomes magnetic with $S_z=1$. The gap with GGA$+U$ is similar to the case without $U$ (1.65 eV). Similar remarks apply when comparing LDA with LDA$+U$. With LDA the molecule is again diamagnetic, with a gap of 1.55 eV, like GGA. In this case, however, the HOMO is only formed by Ni $3d$ states and in the LUMO the C $2p_{y}$ states have a stronger weight. With LDA$+U$, the HOMO is formed by C and N $2p_{y}$ states, and the LUMO by Ni $3d_{z^2}$ and $3d_{{x^2}-{y^2}}$ and by C and N $2p_{y}$ states, just like with GGA$+U$. The gap in this case does not change (1.55 eV).

              LDA      LDA$+U$   GGA      GGA$+U$
  AuFePAu     0.097     0.194    0.134     0.211
  AuCoPAu     0.115     0.260    0.191     0.286
  AuNiPAu     0.255    -0.729    0.281    -0.708
  AuCuPAu     0.164     0.236    0.181     0.277
  AuZnPAu     0.273     0.276    0.301     0.297
  AuFePAu     0.545     0.360    0.529     0.380
  AuCoPAu     0.495     0.246    0.401     0.263
  AuNiPAu     0.254     1.269    0.276     1.290
  AuCuPAu     0.444     0.315    0.459     0.309
  AuZnPAu     0.273     0.276    0.297     0.296

  : Change in Mulliken populations, $\Delta e$, when the metalloporphyrins are coupled between gold leads, calculated with LDA and GGA and with or without $U$.[]{data-label="Tab03"} CuP --- In CuP, the lowest-energy electronic configuration is the same with and without $U$: \[...\]$ {(d_{xy})^2} {(d_{z^2})^2} {(d_{yz})^2} {{(d_{{x^2}-{y^2}}})^2} {(d_{xz})^{1.5}}$.
In this case, the $3d$ orbitals have closed shells, except the $3d_{xz}$, which loses some charge. The total spin of the molecule is $S_z=1/2$, which mainly comes from the Cu $4s$ and $3d_{xz}$ orbitals. Furthermore, the magnetism is not strictly localized on the Cu, but extends a little to the nitrogens. The PDOS on Cu, C and N with GGA shows that the HOMO comes from C and N $2p_{y}$ states. The Cu states are present in the HOMO-1, which is made of Cu $3d_{xz}$ and has small contributions from N and C $2p_{x}$ and $2p_{z}$ states. The LUMO is made of Cu $3d_{xz}$ and N $2p_{x}$ states. The energy gap is 0.75 eV. With GGA$+U$, the HOMO remains the same, but the previous HOMO-1 moves to lower energies and the new HOMO-1 has C and N $2p_{y}$ character. Although the LUMO is still formed by the same type of orbitals (i.e. Cu $3d_{xz}$ and N $2p_{x}$), it now changes its spin polarization and becomes populated by spin-down electrons, unlike in GGA. The energy gap with $U$ is about 1.4 eV. With LDA the PDOS is similar to the GGA case, since the states nearest to the Fermi level are the spin-up Cu $3d_{xz}$, N and C $2p_{x}$ and $2p_{z}$ states. We therefore consider the HOMO to be located, as before, on the carbons and nitrogens. The HOMO-LUMO gap between spin-down states is 0.85 eV, which is now smaller than the HOMO-LUMO gap between spin-up states, 1.40 eV. The LUMO is the same as with GGA and is populated again with spin-down electrons. The results with LDA$+U$ are the same as with GGA$+U$, but shifted slightly in energy. ZnP --- The lowest-energy electronic structure of ZnP is a closed-shell electronic configuration (all the Zn $3d$ and $4s$ states are filled) due to the fact that the Zn states are all very deep in energy as a consequence of the large nuclear attraction. Therefore the $U$ correction does not affect the electronic configuration of this molecule. By analyzing the PDOS for Zn, C and N it can be shown that the HOMO and LUMO come from N and C $2p_{y}$ states.
The energy gaps in all cases are about 1.65 eV. The only difference between LDA (LDA$+U$) and GGA (GGA$+U$) is the energy spacing between levels, which is slightly different. ![\[PDOS.GGA.SP.Fe-Au\](Color online) Projected density of states (PDOS) on the iron $d$ states $- d_{xy}$ (a), $d_{yz}$ (b), $d_{z^2}$ (c), $d_{xz}$ (d) and $d_{{x^2}-{y^2}}$ (e) $-$ for an iron metalloporphyrin between gold electrodes, computed with GGA and spin polarization. Left (1) and right (2) columns correspond to calculations with $U=0$ eV and $U=4.5$ eV. Positive and negative values represent spin up and spin down electrons.](PDOS.GGA.SP.Fe-Au.eps){width="\columnwidth"} Electronic structure and transport properties of metalloporphyrins between gold electrodes ========================================================================================== Electronic structure -------------------- When the metalloporphyrin molecules are coupled to the electrodes, the most important effect on the PDOS is the broadening of molecular orbitals into resonances, as a consequence of the coupling and hybridization of the molecular states with the gold states. Although this effect is not very large on the $d$ states, it can still be seen by comparing Fig. (\[PDOS.GGA.SP.Fe\]) and Fig. (\[PDOS.GGA.SP.Fe-Au\]). In the case of iron, the largest difference is seen in the $d_{xy}$ PDOS (panel (a)), which spreads and decreases its height much more. This seems to indicate a larger delocalization of this state as a consequence of its coupling to other molecular states that hybridize with the gold states. Something similar happens to the $d_{xz}$ and, to a lesser extent, to the other $d$ orbitals. This effect is maintained when the $U$ is included. Also, as a consequence of hybridization and charge transfer, the total spin of the molecule is reduced to almost half of the value of the isolated molecule. This reduction comes mainly from the $d_{xy}$ state, which spreads and crosses the Fermi level.
The charge transfer is summarized in Table \[Tab03\], which shows the change in the Mulliken populations of the molecule when it is coupled between gold leads. As can be seen, all molecules gain charge, excluding the ‘pathological’ case of Ni. Since the gain is larger for spin up than for spin down, the molecule ends up decreasing its magnetic moment, which is consistent with more delocalized states that reduce Hund’s interaction. In metalloporphyrins other than FeP, the broadening of resonances is not so clear, which can be explained by taking into account that the $d$ states become more localized as the atomic number increases. In some cases the $d$ states even seem to become more localized, as can be seen by comparing panels (a) and (b) in Fig. (\[PDOS.GGA.SP.Ni\]) and Fig. (\[PDOS.GGA.SP.Ni-Au\]). The case of Ni is also the most striking, since the differences between the coupled and the isolated molecule are not only quantitative but qualitative. As can be seen in Fig. (\[PDOS.GGA.SP.Ni-Au\]), there is no splitting between up and down levels either with or without $U$, as opposed to the case with $U$ in the isolated molecule. This can be explained by arguing that electrons in the $d_{xz}$ and $d_{{x^2}-{y^2}}$ orbitals can now be more delocalized, which decreases the effect of Hund’s interaction. As a consequence, the total magnetic moment with and without $U$ in this case is 0.00 $\mu_\mathrm{B}$, as can also be seen in Table \[Tab02\]. ![\[PDOS.GGA.SP.Ni-Au\](Color online) Projected density of states (PDOS) on the nickel $d$ states $- d_{xy}$ (a), $d_{yz}$ (b), $d_{z^2}$ (c), $d_{xz}$ (d) and $d_{{x^2}-{y^2}}$ (e) $-$ for a nickel metalloporphyrin between gold electrodes, computed with GGA and spin polarization. Left (1) and right (2) columns correspond to calculations with $U=0$ eV and $U=4.5$ eV.
Positive and negative values represent spin up and spin down electrons.](PDOS.GGA.SP.Ni-Au.eps){width="\columnwidth"} Transport properties -------------------- The zero-bias transport properties of these molecules are shown in Figs. (\[TRC.LDA.SP\]), (\[TRC.GGA.SP\]) and (\[TRC.GGA.No\_SP\]), calculated with LDA and spin polarization, GGA and spin polarization, and GGA without spin polarization, respectively. As can be seen, increasing the atomic number from Fe to Zn produces qualitative changes in the transport properties at the beginning of the series. The main difference between the various metalloporphyrins is due to two types of resonances that move to lower energies as the atomic number increases. One is a Fano-like resonance, a typical interference effect [@Pat97; @Ric10; @Spa11; @Kal12; @Gue12], which shows up as a sharp increase followed by a steep decrease of the transmission. The other seems to be a sharp Breit-Wigner resonance. These resonances appear in all cases, especially for Fe and Co, with GGA or LDA, with or without $U$, and with or without spin polarization. The only differences are their position and width. With LDA and spin polarization, Fig. (\[TRC.LDA.SP\]), the Fano-like resonance appears clearly with Fe and, much sharper and at lower energies, with Co. This resonance moves to higher (lower) energies when $U$ is added to Fe (Co). Above this resonance there is a Breit-Wigner-like resonance that moves to lower energies from Fe to Zn and appears just at the Fermi level in CuP. It also moves to higher or lower energies when the $U$ is included. With GGA and spin polarization, Fig. (\[TRC.GGA.SP\]), the results are similar. Without spin polarization, Fig. (\[TRC.GGA.No\_SP\]), we show for simplicity just the GGA case. Again the behavior is similar.
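The asymmetric "sharp increase followed by a steep decrease" described above can be visualized with the textbook Fano lineshape. This is a generic illustration, not an expression taken from the ab-initio calculations, and the parameter values below are arbitrary:

```python
import numpy as np

def fano(E, E0, gamma, q):
    # Textbook Fano profile, normalized to a peak value of 1:
    # T(E) = (eps + q)^2 / ((eps^2 + 1)(1 + q^2)), eps = (E - E0)/gamma.
    eps = (E - E0) / gamma
    return (eps + q) ** 2 / ((eps ** 2 + 1.0) * (1.0 + q ** 2))

# Illustrative (assumed) parameters: resonance at E0 = 0 eV,
# width gamma = 0.05 eV, asymmetry q = 2.
E = np.linspace(-0.5, 0.5, 1001)
profile = fano(E, E0=0.0, gamma=0.05, q=2.0)
# The profile peaks (T = 1) at eps = 1/q and vanishes at eps = -q,
# producing the characteristic asymmetric peak-dip structure.
```

In the limit $|q|\to\infty$ this profile reduces to a symmetric Breit-Wigner (Lorentzian) peak, which is why a Fano resonance with a strongly shifted dip can be mistaken for a Breit-Wigner one.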
Some of these results without $U$ are also analogous to those obtained by Wang [*et al.*]{} [@Wan09]. ![\[TRC.LDA.SP\](Color online) Transmission coefficients for Fe (a), Co (b), Ni (c), Cu (d) and Zn (e) metalloporphyrins between gold electrodes, computed with LDA and spin polarization. Left (1) and right (2) columns correspond to calculations with $U=0$ eV and $U=4.5$ eV. Continuous and dashed lines represent spin up and spin down electrons.](TRC.LDA.SP.eps){width="\columnwidth"} ![\[TRC.GGA.SP\](Color online) Transmission coefficients for Fe (a), Co (b), Ni (c), Cu (d) and Zn (e) metalloporphyrins between gold electrodes, computed with GGA and spin polarization. Left (1) and right (2) columns correspond to calculations with $U=0$ eV and $U=4.5$ eV. Continuous and dashed lines represent spin up and spin down electrons.](TRC.GGA.SP.eps){width="\columnwidth"} ![\[TRC.GGA.No\_SP\](Color online) Transmission coefficients for Fe (a), Co (b), Ni (c), Cu (d) and Zn (e) metalloporphyrins between gold electrodes, computed with GGA and without spin polarization. Left (1) and right (2) columns correspond to calculations with $U=0$ eV and $U=4.5$ eV.](TRC.GGA.No_SP.eps){width="\columnwidth"} Obviously the two types of resonances must come from the $d$ orbitals, given their evolution with the metallic atom and the fact that they are not present in porphyrins without metallic atoms. One hint about their origin can be obtained by looking at the projected density of states of the $d$ levels and the surrounding atoms at the energies where these resonances appear. This is shown in Fig. (\[TRC-PDOS.BW-Fano\]) for the case of the cobalt metalloporphyrin, computed with LDA and without $U$, which is the configuration where the different contributions from different atoms can be most clearly seen. First, the spin-down $d$ states that are closest to the energy of the resonance are the $d_{xz}$ and $d_{yz}$ in the Breit-Wigner-like and Fano-like cases, respectively.
For the first resonance the PDOS shows that there is a rather strong hybridization between the $d_{xz}$ level and the nitrogen atoms. For the Fano-like resonance, however, the $d_{yz}$ hybridizes much more with the carbon atoms. This is in agreement with the LDOS of Fig. (\[Fe.XZ-YZ\]), which shows that the $d_{xz}$ hybridizes with the nitrogens and the $d_{yz}$ with the carbons. ![\[TRC-PDOS.BW-Fano\](Color online) Transmission coefficients (a) and PDOS (b) around a Breit-Wigner-like resonance (1) and a Fano resonance (2), calculated on a cobalt metalloporphyrin between gold electrodes and computed with LDA and spin polarization. Positive and negative values in the lower panels represent spin up and spin down electrons.](TRC-PDOS_BW-Fano.LDA.SP.Co.eps){width="\columnwidth"} Taking into account the previous data, it is possible to explain the behavior of these systems as follows. The $d$ orbitals are localized states that couple to certain molecular states. Such a configuration is similar to one where a side state couples to a molecular backbone, which produces Fano resonances in the transmission coefficients. The $d$ orbitals therefore generate Fano resonances associated with one or various molecular states. If, for example, the molecular state is the LUMO and the $d$ orbital lies in the HOMO-LUMO gap, the effect of the Fano resonance is clearly seen because it affects the transmission in the gap, which has a large weight from the LUMO. The Fano resonance does not go to zero, however, because of the tails of the transmission of other molecular states. If, on the other hand, the $d$ orbital couples to a state below the HOMO or above the LUMO, the effect of the Fano resonance turns out to be much smaller because the transmission of such a state does not affect the transmission in the HOMO-LUMO gap. There is, however, one effect produced by the Fano resonance, which is related to its peak.
This peak has a transmission of 1 at the tip and can therefore be seen above the background of the transmission of other states. We can therefore confirm that the sharp Breit-Wigner-like resonances that are seen at different energies come in reality from Fano resonances. In view of these results it is possible to conclude that the Fano features inside the HOMO-LUMO gap are produced by $d$ orbitals that couple to the $\pi$ states, which appear mainly on the LUMO. One $d$ orbital responsible for such a feature is clearly the $d_{xy}$, as seen in Fig. (\[Fe.XZ-YZ\]). On the other hand, the sharp resonances are generated by $d$ orbitals, such as the $d_{xz}$, that couple to $\sigma$ states above the LUMO or below the HOMO. These features, especially the sharp resonance, appear in all cases and are a general characteristic of these molecules. We propose that the above-discussed metalloporphyrin molecules could be used as atomic or gas sensors. This is due to the sharp changes in the zero-bias conductance that they generate when the Fano resonances cross the Fermi level, which should be altered whenever a given atom or molecule couples either to the metallic atom or to other parts of the molecular backbone. These resonances also induce differences between the spin-up and spin-down transmissions, which are especially important in the cases of iron, cobalt and copper. These could give rise to spin-filtering properties. The inclusion of the $U$ term, however, reduces the spin-filtering behavior, which only remains in the case of FeP. ![\[Model\](Color online) Schematic representation of the model used to reproduce the ab-initio results.
Case (a) corresponds to the $d$ level coupled to the $\sigma$ state (labeled in the text as $H_1$) and case (b) to the $d$ level coupled to one of the $\pi$ levels ($H_2$).](Model.eps){width="\columnwidth"} Simple model ------------ The previous behavior can be described with a simple model that takes into account the coupling of the molecular orbitals placed in the neighborhood of the Fermi energy to featureless leads displaying a flat density of states, as well as the coupling of one of those molecular orbitals to the $d$ orbital responsible for the Fano resonance [@Mar09; @Gar11]. Specifically, the model includes three molecular levels: a $\sigma$ level below the HOMO, a level associated with the two linker sulphur atoms that represents the HOMO, and a $\pi$ level which represents the LUMO. Finally, the $d$ level has no direct coupling to the electrodes, but is instead coupled either to the $\sigma$ or to the $\pi$ level. The model and its two possible couplings are sketched in Fig. (\[Model\]) (a) and (b), respectively. The Hamiltonian is therefore diagonal, except for the coupling between a given state and the $d$ level. We also choose diagonal and identical $\Gamma$ matrices that couple the molecule to the leads: $$\begin{aligned} \hat H&=&\left(\begin{array}{cccc} \epsilon_\sigma&0&t_\sigma&0\\ 0&\epsilon_S&0&0\\ t_\sigma^*&0&\epsilon_d&t_\pi\\ 0&0&t_\pi^*&\epsilon_\pi \end{array}\right),\nonumber\\ \Gamma_L=\Gamma_R&=&\left(\begin{array}{cccc} \Gamma_\sigma&0&0&0\\ 0&\Gamma_S&0&0\\ 0&0&\Gamma_d&0\\ 0&0&0&\Gamma_\pi \end{array}\right)\end{aligned}$$ To facilitate the comparison with the ab-initio results, we have chosen the following values for the on-site energies and the couplings: $\epsilon_\sigma=-2$, $\epsilon_S=-1$, $\epsilon_d=0$, $\epsilon_\pi=0.6$, $\Gamma_\sigma=0.04$, $\Gamma_S=0.06$, $\Gamma_d=0$ and $\Gamma_\pi=0.06$, where all energies are measured in eV.
For model (a), we choose $t_\sigma=-0.4$ and $t_\pi=0$, while for model (b) we choose $t_\sigma=0$ and $t_\pi=-0.2$. The transmission and the retarded Green’s function are then given by $$\begin{aligned} T(E)&=&\mathrm{Tr}\left[\hat\Gamma\hat G^{\mathrm{R}\dag}(E) \hat\Gamma\hat G^\mathrm{R}(E)\right]\nonumber\\ \hat G^\mathrm{R}(E)&=&\left[E\hat I-\hat H_{a,b}- i\hat\Gamma\right]^{-1}\end{aligned}$$ ![\[TRC.Model\](Color online) (a) Transmission coefficients calculated for model (a) described in the text and in Fig. (\[Model\]) (a). The black solid line corresponds to the full model (a), whereas the red dashed line corresponds to a simplified model (a), where the $S$ and $\pi$ orbitals have been dropped from the calculation. (b) Transmission coefficients calculated for model (b) described in the text and in Fig. (\[Model\]) (b). The black solid line corresponds to the full model (b), whereas the red dashed line corresponds to a simplified model (b), where the $S$ and $\sigma$ orbitals have been dropped from the calculation.](TRC.Model.eps){width="0.8\columnwidth"} We show in Fig. (\[TRC.Model\]) the transmissions obtained for models (a) and (b). The dashed line in Fig. (\[TRC.Model\]) (a) shows the transmission coefficient found when the sulfur and $\pi$ orbitals in model (a) are dropped and only the $d$ and $\sigma$ orbitals are considered. A clear-cut Fano resonance emerges at the $d$-level on-site energy. However, this resonance is masked when the sulfur and $\pi$ orbitals are re-integrated back into the calculation, leaving what looks at first sight like a conventional Breit-Wigner resonance. The dashed line in Fig. (\[TRC.Model\]) (b) is the transmission obtained in model (b) when the $\sigma$ and sulfur orbitals are left aside, which again features a Fano resonance.
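The four-level model can be evaluated directly. The following NumPy sketch takes the on-site energies, broadenings, and couplings quoted in the text and applies the expressions $T(E)=\mathrm{Tr}[\hat\Gamma\hat G^{\mathrm{R}\dag}\hat\Gamma\hat G^{\mathrm R}]$ and $\hat G^{\mathrm R}=[E\hat I-\hat H-i\hat\Gamma]^{-1}$ literally; the implementation itself is ours, not the authors' code:

```python
import numpy as np

# On-site energies and level broadenings (eV), as quoted in the text.
EPS = np.array([-2.0, -1.0, 0.0, 0.6])    # sigma, S, d, pi
GAMMA = np.diag([0.04, 0.06, 0.0, 0.06])  # Gamma_L = Gamma_R

def hamiltonian(t_sigma, t_pi):
    # The d level (index 2) couples to the sigma (0) and/or pi (3) levels.
    H = np.diag(EPS).astype(complex)
    H[0, 2] = H[2, 0] = t_sigma
    H[2, 3] = H[3, 2] = t_pi
    return H

def transmission(E, H):
    # T(E) = Tr[Gamma G^dag Gamma G] with G = [E I - H - i Gamma]^(-1).
    G = np.linalg.inv(E * np.eye(4) - H - 1j * GAMMA)
    return float(np.real(np.trace(GAMMA @ G.conj().T @ GAMMA @ G)))

H_a = hamiltonian(-0.4, 0.0)  # model (a): d coupled to sigma
H_b = hamiltonian(0.0, -0.2)  # model (b): d coupled to pi
```

Scanning `transmission(E, H_b)` over energy shows the side-coupled $d$ level blocking the $\pi$ channel at $E=\epsilon_d=0$ (the Fano antiresonance), with the $\pi$ resonance pushed slightly above $\epsilon_\pi$ by the level repulsion.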
However, the Fano resonance is now clearly visible in the full model, because the position of the $\sigma$ and sulfur resonances combined with the ordering of the divergences in the Fano resonance cannot completely mask the drop in transmission at higher energies. The similarity with the ab-initio results for the cobalt porphyrin shown in Fig. (\[TRC.LDA.SP\]) is striking, despite the simplicity of the model. Conclusions =========== We have studied the electronic properties of isolated metalloporphyrins and determined the influence of the exchange-correlation potential and strong correlations. We found that the HOMO-LUMO gap is greatly improved when an LDA$+U$ or GGA$+U$ description is used. However, we have found that the spin of the molecule does not change, excluding the case of NiP. We have also studied the electronic and transport properties of metalloporphyrins between gold electrodes. We found that the coupling to the electrodes only slightly changes the electronic properties, but the magnetic moments decrease as a consequence of charge transfer and hybridization with the electrodes. We have found two types of features in the transport properties that we show to be Fano resonances by the use of a simple model. We propose the use of metalloporphyrin molecules as possible nanoelectronic devices with sensing and spin-filtering functionalities. This work was supported by the Spanish Ministry of Education and Science (project FIS2009-07081) and the Marie Curie network NanoCTM. VMGS thanks the Spanish Ministerio de Ciencia e Innovación for a Ramón y Cajal fellowship (RYC-2010-06053). RF acknowledges financial support through grant Severo-Ochoa (Consejería de Educación, Principado de Asturias). We acknowledge discussions with C. J. Lambert. D. Dolphin, The Porphyrins, Academic, New York (1978). D. Dorough, J. R. Miller and F. M. Huennekens, J. Am. Chem. Soc. [**73**]{}, 4315 (1951). D. Gust and J. D. Roberts, J. Am. Chem. Soc. [**99**]{}, 3637 (1977). H.
Goff, G. N. La Mar and C. A. Reed, J. Am. Chem. Soc. [**99**]{}, 3641 (1977). F. D’Souza, P. Boulas, A. M. Aukauloo, R. Guilard, M. Kisters, E. Vogel and K. M. Kadish, J. Phys. Chem. [**98**]{}, 11885 (1994). M.-S. Liao and S. Scheiner, Journal of Chemical Physics [**117**]{}, 205 (2002). M. Palummo, C. Hogan, F. Sottile, P. Bagalá and A. Rubio, The Journal of Chemical Physics [**131**]{}, 084102 (2009). C. Rovira, K. Kunc, J. Hutter, P. Ballone and M. Parrinello, J. Phys. Chem. A [**101**]{}, 8914 (1997). J. Otsuki, Chem. Review [**254**]{}, 2311 (2010). G. Sedghi, V. M. García-Suárez, L. J. Esdaile, H. L. Anderson, C. J. Lambert, S. Martín, D. Bethell, S. J. Higgins, M. Elliot, N. Bennett, J. E. Macdonald and R. J. Nichols, Nature Nanotechnology [**6**]{}, 517 (2011). I. Beletskaya, V. S. Tyurin, A. Y. Tsivadze, R. Guilard and C. Stern, Chem. Review [**109**]{}, 1659 (2009). W. Jentzen, I. Turowska-Tyrk, W. R. Scheidt and J. A. Shelnutt, Inorg. Chem. [**35**]{}, 3559 (1996). M. P. Suh, P. N. Swepston and J. A. Ibers, J. Am. Chem. Soc. [**106**]{}, 5164 (1984). D. J. Carrascal and J. Ferrer, Phys. Rev. B [**85**]{}, 045110 (2012). J. P. Perdew and A. Zunger, Physical Review B [**23**]{}, 5048 (1981). V. I. Anisimov, J. Zaanen and O. K. Andersen, Physical Review B [**44**]{}, 943 (1991). I. V. Solovyev, P. H. Dederichs and V. I. Anisimov, Physical Review B [**50**]{}, 16861 (1994). A. I. Liechtenstein, V. I. Anisimov and J. Zaanen, Physical Review B [**52**]{}, 5467 (1995). S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys and A. P. Sutton, Physical Review B [**57**]{}, 1505 (1998). C. Tablero, J. Phys.: Condensed Matter [**20**]{}, 325205 (2008). J. M. Soler, E. Artacho, J. D. Gale, A. García, J. Junquera, P. Ordejón and D. Sánchez-Portal, J. Phys.: Condensed Matter [**14**]{}, 2745 (2002). D. M. Ceperley and B. J. Alder, Phys. Rev. Lett. [**45**]{}, 566 (1980). J. P. Perdew, K. Burke and M. Ernzerhof, Phys. Rev. Lett. [**77**]{}, 3865 (1996). N.
Troullier and J. L. Martins, Phys. Rev. B [**43**]{}, 1993 (1991). L. Kleinman and D. M. Bylander, Phys. Rev. Lett. [**48**]{}, 1425 (1982). S. G. Louie, S. Froyen and M. L. Cohen, Phys. Rev. B [**26**]{}, 1738 (1982). V. M. García-Suárez, D. Zs. Manrique, L. Oroszlani, C. J. Lambert and J. Ferrer, in preparation. F. Parmigiani and L. Sangaletti, Journal of Electron Spectroscopy and Related Phenomena [**98-99**]{}, 287 (1999). M. Cococcioni and S. de Gironcoli, Physical Review B [**71**]{}, 035105 (2005). P. M. Panchmatia, B. Sanyal and P. M. Oppeneer, Chemical Physics [**343**]{}, 47 (2008). H. Goff, G. N. La Mar and C. A. Reed, J. Am. Chem. Soc. [**99**]{}, 3641 (1977). C. Patoux, C. Coudret, J. P. Launay, C. Joachim, and A. Gourdon, Inorg. Chem. [**36**]{}, 5037 (1997). A. B. Ricks, G. C. Solomon, M. T. Colvin, A. M. Scott, K. Chen, M. A. Ratner, and M. R. Wasielewski, J. Am. Chem. Soc. [**132**]{}, 15427 (2010). R. E. Sparks, V. M. García-Suárez, D. Zs. Manrique, and C. J. Lambert, Phys. Rev. B [**83**]{}, 075437 (2011). V. Kaliginedi, P. Moreno-García, H. Valkenier, W. Hong, V. M. García-Suárez, P. Buiter, J. L. H. Otten, J. C. Hummelen, C. J. Lambert, and T. Wandlowski, J. Am. Chem. Soc. [**134**]{}, 5262 (2012). C. M. Guédon, H. Valkenier, T. Markussen, K. S. Thygesen, J. C. Hummelen, and S. J. van der Molen, Nature Nanotechnology [**7**]{}, 305 (2012). N. Wang, H. Liu, J. Zhao, Y. Cui, Z. Xu, Y. Ye, M. Kiguchi, and K. Murakoshi, J. Phys. Chem. C [**113**]{}, 7416 (2009). S. Martín, D. Zs. Manrique, V. M. García-Suárez, W. Haiss, S. J. Higgins, C. J. Lambert, and R. J. Nichols, Nanotechnology [**20**]{}, 125203 (2009). V. M. García-Suárez and C. J. Lambert, New Journal of Physics [**13**]{}, 053026 (2011).
--- abstract: 'Erasing quantum-mechanical distinguishability is of fundamental interest and also of practical importance, particularly in subject areas related to quantum information processing. We demonstrate a method applicable to optical systems in which single-mode filtering is used with only linear optical instruments to achieve quantum indistinguishability. Through “heralded” Hong-Ou-Mandel interference experiments we measure and quantify the improvement of indistinguishability between single photons generated via spontaneous four-wave mixing in optical fibers. The experimental results are in excellent agreement with predictions of a quantum-multimode theory we develop for such systems, without the need for any fitting parameter.' author: - 'Monika Patel,$^1$ Joseph B. Altepeter,$^2$ Yu-Ping Huang,$^2$ Neal N. Oza,$^2$ and Prem Kumar$^{1,2}$' title: 'Erasing Quantum Distinguishability via Single-Mode Filtering' --- Quantum indistinguishability is inextricably linked to several fundamental phenomena in quantum mechanics, including interference, entanglement, and decoherence [@ent; @decoh1; @decoh2]. For example, only when two photons are indistinguishable can they show strong second-order interference [@indis]. From an applied perspective, it forms the basis of quantum key distribution [@bb84], quantum computing [@KniLafMil01], quantum metrology [@quantummetrology], and many other important applications in modern quantum optics. In practice, however, the generation and manipulation of quantum-mechanically indistinguishable photons is quite challenging, primarily due to their coupling to external degrees of freedom. In this Letter, we experimentally investigate a pathway to erasing quantum distinguishability by making use of the Heisenberg uncertainty principle. This method, although designed specifically for optical systems, might be generalizable to other physical systems, including those of atoms and ions. 
It uses a filtering device that consists of only linear optical instruments, which in our present rendering is a temporal gate followed by a spectral filter. The gate’s duration $T$ and the filter’s bandwidth $B$ (in angular-Hertz) are chosen to satisfy $BT<1$ so that any photon passing through the device loses its temporal (spectral) identity as required by the Heisenberg uncertainty principle. In this sense, the device behaves as a single-mode filter (SMF) that passes only a single electromagnetic mode of certain temporal profile while rejecting all other modes. Hence, applying such a SMF to distinguishable single photons can produce output photons that are indistinguishable from each other [@HuaAltKum10; @HuaAltKum11]. Our calculations show that for appropriate parameters very high levels of quantum indistinguishability can be achieved with use of the SMF, while paying a relatively low cost in terms of photon loss. This method is superior to using tight spectral or temporal filtering alone for similar purposes [@ZeiHorWei97; @FioVosSha02], where the photon loss is much higher. In fact, in Refs. [@HuaAltKum10; @HuaAltKum11] we have shown that the use of a SMF can significantly improve the performance of heralding-type single-photon sources made from optical fibers or crystalline waveguides [@Heralded-Single-Photon-SPDC86; @Single-Photon-PCF05; @Single-Photon-Fiber09; @Single-Photon-PDC-99]. In our experiment, pairs of signal and idler photons are generated in two separate optical-fiber spools via spontaneous four-wave mixing. By detecting the idler photons created in each spool, we herald the generation of their partner (signal) photons. To quantify their indistinguishability, we mix the signal photons generated separately from the two spools on a 50:50 beamsplitter and perform Hong-Ou-Mandel (HOM) interference measurements. 
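For heralded photons in pure spectral states, the textbook HOM relation links the coincidence probability at a balanced beamsplitter to the mode overlap, $P_{cc}=(1-|\langle\phi_1|\phi_2\rangle|^2)/2$, so the dip visibility equals $|\langle\phi_1|\phi_2\rangle|^2$. The Gaussian mode profiles in the sketch below are illustrative assumptions, not the modes of this experiment:

```python
import numpy as np

def overlap(phi1, phi2, dw):
    # Discretized mode overlap <phi1|phi2> = integral dw phi1*(w) phi2(w).
    return np.sum(np.conj(phi1) * phi2) * dw

def hom_visibility(phi1, phi2, dw):
    # Textbook relation for pure states: V = |<phi1|phi2>|^2, so the
    # coincidence probability is P_cc = (1 - V) / 2.
    return abs(overlap(phi1, phi2, dw)) ** 2

# Illustrative normalized Gaussian spectral amplitudes (assumed shapes).
w = np.linspace(-10.0, 10.0, 4001)
dw = w[1] - w[0]
def gauss(w0, sigma):
    g = np.exp(-((w - w0) ** 2) / (4 * sigma ** 2))
    return g / np.sqrt(np.sum(np.abs(g) ** 2) * dw)

v_same = hom_visibility(gauss(0.0, 1.0), gauss(0.0, 1.0), dw)  # identical modes
v_off  = hom_visibility(gauss(0.0, 1.0), gauss(3.0, 1.0), dw)  # detuned modes
```

Identical modes give unit visibility, while a spectral offset (or, more generally, heralding-induced mixedness) reduces it; this is the quantity the interference measurements below probe.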
We find that the HOM visibility is quite low when the signal photons have a temporal length $T>1/B$, owing to the presence of photons with many distinguishable degrees of freedom. However, when $T<1/B$, for which a SMF is effectively realized, a much higher HOM visibility is obtained. This result clearly shows that the SMF can be used to erase the quantum distinguishability of single photons. To quantitatively examine the degree of improvement, we develop a comprehensive theoretical model of light scattering and detection in optical fiber systems, taking into account multi-pair emission, Raman scattering, transmission loss, dark counts, and other practical parameters. The experimental data are in good agreement with predictions of the model without the need for any fitting parameter. To understand our approach for erasing quantum distinguishability, we consider amplitude profiles $f(t)$ and $h(\omega)$ for the time gate and the spectral filter, respectively. The number operator for output photons is then given by $\hat{n}=\frac{1}{(2\pi)^2}\int d\omega d\omega' \kappa(\omega,\omega'){\hat{a}}^\dag(\omega){\hat{a}}(\omega')$ [@PrSp61; @ZhuCav90], where ${\hat{a}}(\omega)$ is the annihilation operator for the incident photons of angular-frequency $\omega$, satisfying $[{\hat{a}}(\omega),{\hat{a}}^\dag(\omega')]=2\pi\delta(\omega-\omega')$. $\kappa(\omega,\omega')=\int dt~ h^\ast(\omega) h(\omega') |f(t)|^2 e^{i(\omega-\omega')t}$ is a Hermitian spectral correlation function, which can be decomposed onto a set of Schmidt modes as $ \kappa(\omega,\omega')=\sum^{\infty}_{j=0} \chi_{j} \phi^\ast_{j}(\omega)\phi_{j}(\omega'), $ where $\{\phi_{j}(\omega)\}$ are the mode functions satisfying $\int d\omega \phi^\ast_{j}(\omega)\phi_{k}(\omega)=2\pi\delta_{j,k}$ and $\{\chi_{j}\}$ are the decomposition coefficients satisfying $1\ge \chi_0 >\chi_1>...\ge 0$. 
Introducing an infinite set of mode operators via ${\hat{c}}_{j}=\frac{1}{2\pi}\int d\omega {\hat{a}}(\omega) \phi_{ j}(\omega)$ ($j=0,1,...$) that satisfy $[{\hat{c}}_{j},{\hat{c}}_{k}^\dag]=\delta_{jk}$, the output operator for the filtering device can be rewritten as $$\hat{n}=\sum^{\infty}_{j=0} \chi_j~ {\hat{c}}^\dag_{j} {\hat{c}}_{j}.$$ This result indicates that $\{\phi_{j}(\omega)\}$ have an intuitive physical interpretation: as “eigenmodes” with eigenvalues $\{\chi_{j}\}$ of the filtering device. In this physical model, the filtering device projects incident photons onto the eigenmodes, each of which are passed with a probability given by the eigenvalues. Specifically, for $\chi_{0}\sim 1$ and $\chi_{j\neq 0} \ll 1$ (achievable with an appropriate choice of spectral and temporal filters, as shown below) only the fundamental mode is transmitted while all the other modes are rejected. In this way, truly *single-mode* filtering can be achieved. Combined with single-photon detectors, this can be extended to a single-mode, single-photon detection system. Regardless of the type of spectral and temporal filters used to achieve this kind of single-mode filtering, such a system is capable of separating photons which, even though they may exist in the same spectral band and the same time-bin, have different mode structures. As an example, in Fig. \[fig1\](a) we plot $\chi_{0},\chi_{1},\chi_{2}$ versus $c\equiv BT/4$ for a rectangular-shaped spectral filter with bandwidth $B$ and a rectangular-shaped time window of duration $T$ [@PrSp61; @SasSuz06]. For $c<1$, we have $\chi_0\approx 1$ whereas $\chi_1,\chi_2\ll 1$, giving rise to approximately single-mode filtering. Note that this behavior is true for any $B$, as long as $T<4/B$. In other words, $\{\chi_j\}$ depend only on the product of $B$ and $T$, rather than on their specific values. 
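The eigenvalue behavior can be checked numerically: for the rectangular filter/gate pair, $\kappa(\omega,\omega')/2\pi$ reduces to the sinc kernel $\sin((\omega-\omega')T/2)/[\pi(\omega-\omega')]$ restricted to $|\omega|,|\omega'|\le B/2$, whose eigenvalues are the pass probabilities $\chi_j$. The Nyström discretization below is our own sketch (not the authors' calculation), and the finite grid only approximates the exact prolate-spheroidal values:

```python
import numpy as np

def filter_eigenvalues(B, T, n=400):
    # Pass probabilities chi_j for a rectangular spectral filter of
    # bandwidth B and a rectangular time gate of duration T
    # (angular-frequency units), via Nystrom discretization.
    w = np.linspace(-B / 2, B / 2, n)
    dw = w[1] - w[0]
    dW = w[:, None] - w[None, :]
    safe = np.where(dW == 0, 1.0, dW)  # avoid 0/0 on the diagonal
    K = np.where(dW == 0, T / (2 * np.pi), np.sin(dW * T / 2) / (np.pi * safe))
    chi = np.linalg.eigvalsh(K * dw)
    return np.sort(chi)[::-1]

chi = filter_eigenvalues(B=1.0, T=2.0)  # c = B*T/4 = 0.5
# chi[0] dominates its successors (single-mode regime), and the trace
# identity sum_j chi_j = B*T/(2*pi) gives a sanity check on the grid.
```

Because the kernel depends on $\omega-\omega'$ only through the product of the detuning and $T$, rescaling confirms that the $\chi_j$ depend only on $BT$, as stated in the text.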
Consequently, even a broadband filter can lead to a single-mode measurement over a sufficiently short detection window, and vice-versa. To understand this, consider the case where a detection event announces the arrival of a signal photon at an unknown time within the window $T$. In the Fourier domain, this corresponds to a detection resolution of $1/T$ in frequency. Given $c<1$ or $1/T>B/4$, the detector is thus unable to, even in principle, reveal the frequency of the signal photon. Therefore, the signal photon is projected onto a quantum state in a coherent superposition of frequencies within $B$ [@HuaAltKum11]. This can be seen in Fig. \[fig1\](b), where the fundamental detection mode has a nearly flat profile over the filter band $[-B/2, B/2]$. Lastly, since $T<4/B$ is required, the pass probability of the fundamental mode will be sub-unity, but not significantly less than one. To verify this theory of erasing quantum distinguishability via single-mode filtering, we perform a heralded two-photon interference experiment [@gisin03; @zeilingerentswp09; @rarity07; @takesue07] in both multimode ($c>1$) and single-mode ($c<1$) regimes. Hong-Ou-Mandel interference between two photons originating from independent photon-pair sources provides a test of indistinguishability. Appropriate choices of wavelength-division multiplexers (spectral filters which select $B$) and the width of pulses pumping the photon-pair sources (which effectively sets the temporal window $T$ in which photon pairs are born) allow a transition from the single-mode to the multimode regime. The experimental setup is shown in Fig. \[experimentalsetup\]. Both heralded photon-pair sources are pumped using the same system, consisting of 50-MHz repetition-rate pulses carved from the output of either a continuous-wave (CW) laser (for the multimode heralding experiment) or a mode-locked laser (for the single-mode heralding experiment). 
The pulse-carver is an optical amplitude modulator (EOSPACE, Model AK-OK5-10) driven by the output of a 20-Gbps 2:1 selector (Inphi, Model 20709SE), which is clocked at 50 MHz by an electrical signal source that also triggers the single-photon detectors (NuCrypt, Model CPDS-4) used in the experiment. The carved pulses are then amplified and fed to a 50:50 fiber splitter. Each output branch of the splitter leads to a four-wave-mixing (FWM) fiber spool (500 m of standard single-mode fiber cooled to 77 K) in a Faraday-mirror configuration [@HuaAltKum09]. The Faraday mirror effectively doubles the length of fiber available for four-wave mixing while simultaneously compensating for any polarization changes which may occur in the spooled fiber. The signal and idler photons are created via spontaneous four-wave mixing. Along with the residual pump photons, they enter two cascaded filtering stages which provide $\approx$100 dB of isolation. The filtered signal and idler photons then pass through fiber polarization controllers (not shown in Fig. \[experimentalsetup\]) and the signal photons are led to the two input ports of an in-fiber 50:50 coupler. Adjusting the polarization controllers and careful temporal alignment with the use of a variable delay stage in the path of one of the signal photons ensures that the signal photons arriving at the 50:50 coupler are identical in all degrees of freedom: polarization, spectral/temporal, and spatial. Note that even though these signal photons are identical in these degrees of freedom, they may still be partially or completely distinguishable (particularly in the multimode regime described above). This distinguishability may arise from entanglement with *different* idler photons (heralds) or from the presence of background photons that originate in the FWM fiber owing to Raman scattering. Four InGaAs-based single-photon detectors are used to count photons, one each at the outputs of the idler arms and the 50:50 coupler.
These detectors are gated at 50-MHz repetition rate synchronous with the arrival of photons and have a dark-count probability of $1.6\times10^{-4}$ per pulse. Their quantum efficiencies are approximately 20%. The delay stage is used to vary the temporal overlap of the signal photons while the photon counts are recorded. In the multimode experimental configuration, where a CW laser (Santec, model TSL-210V) is used as the pump, the temporal duration of the carved pulses is specified by the width of the electrical pulses provided to the modulator, which is measured to be 100 ps, giving $T=10^{-10}$ s. The signal and idler filters each consist of a free-space diffraction-grating filter \[full-width at half-maximum (FWHM) $\approx$ 0.14 nm\] followed by a dense wavelength-division-multiplexing (DWDM) filter (FWHM $\approx$ 0.4 nm). The resulting optical transmission spectra are shown in Fig. 3(a), from which the effective bandwidth of the signal and idler filters is determined to be approximately 0.14 nm. In units of frequency, this gives $B/2\pi=24.6$ GHz so that $BT=2.5\times2\pi$. Therefore, $c=3.8$ and from Fig. \[fig1\], $\chi_0\approx1$, $\chi_1\approx0.9$ and $\chi_2\approx 0.5$. Because $\chi_1$ and $\chi_2$ are not negligible, this case corresponds to a multimode measurement. In the single-mode experimental configuration, a 10-GHz mode-locked laser (U2T, model TMLL1310) emitting a train of 2-ps duration, transform-limited pulses is used as the pump. The signal and idler photons along with the pump pulses are each filtered by two stages of DWDM filters. The resulting optical transmission spectra of these filters are shown in Fig. 3(b). The bandwidth of the pump filter is measured to be $68.3$ GHz, from which the pump-pulse width and thus the effective $T$ is derived to be $6.4$ ps. The bandwidths of the signal and the idler filters, on the other hand, are both approximately 0.4 nm, which give $BT\approx 0.4\times 2\pi$ or $c=0.7$.
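The quoted values of $BT$ and $c$ follow directly from the filter widths and gate durations. A quick sanity check, assuming operation near 1310 nm (the band suggested by the mode-locked pump laser; this wavelength is our assumption, not stated explicitly above):

```python
import math

C0 = 2.99792458e8      # speed of light (m/s)
LAM = 1310e-9          # assumed operating wavelength (m)

def mode_parameter(d_lambda_nm, T_s):
    """c = B*T/4, with B the filter bandwidth converted from a width
    in nm to angular frequency via B = 2*pi*c0*d_lambda/lambda^2."""
    B = 2.0 * math.pi * C0 * (d_lambda_nm * 1e-9) / LAM**2
    return B * T_s / 4.0

c_multi = mode_parameter(0.14, 100e-12)    # 0.14-nm filters, 100-ps gate
c_single = mode_parameter(0.40, 6.4e-12)   # 0.40-nm filters, 6.4-ps gate
```

This reproduces $c\approx3.8$ for the CW-pumped configuration and $c\approx0.7$ for the mode-locked one.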
In this case, $\chi_0\approx0.4$, and $\chi_1$ and $\chi_2$ are nearly zero, giving rise to a single-mode measurement. In practice it is experimentally convenient to analyze the behavior of non-heralded two-photon coincidence counts to precisely path-match the two signal arms. This is because there are many more twofold coincidences than fourfold coincidences in the system, which allows us to study the quantum interference effect with much smaller error bars and a much shorter measurement time. To this end we define a twofold coincidence count to be when detectors A and B (cf. Fig. 1) fire in the same time slot. We define a twofold accidental-coincidence count to be when detectors A and B fire in adjacent time slots. Finally, we define a fourfold coincidence, the quantity of primary experimental interest, to occur when all four detectors fire in the same time slot. Figure \[trues\] shows the variation in accidental-subtracted coincidences on detectors A and B as the relative delay between the signal photons from the two FWM sources is varied. The recorded fourfold coincidence counts as a function of the relative delay in the heralded HOM interference experiment are plotted in Fig. \[expresults\]. For the multimode experimental configuration, as shown in Fig. \[expresults\](a), the interference visibility is only $19\pm2\%$. In contrast, for the single-mode configuration, a high visibility of $72\pm7\%$ is obtained, as shown in Fig. \[expresults\](b). This is the highest HOM interference visibility reported thus far for fiber-based single-photon sources in the telecommunications band. For these results, the transmission efficiencies of the signal and idler photons from their generation site in the FWM spools to the detectors are measured for each arm and are found to be 3.4% (5.5%) for the signal arms and 5.0% (7.0%) for the idler arms in the multimode (single-mode) configuration.
The photon-pair production probabilities per pump pulse are measured to be 12.5% and 3.9% for the multimode and single-mode configurations, respectively. Although these experiments show a clear difference between the single-mode and multimode regimes, the theory of single-mode detection presented above—in the absence of any systematic sources of noise—seems to predict much higher visibilities, particularly for the single-mode experiment where it seems that any entanglement with the idler photons should be eliminated by the SMF. In fact, systematic sources of noise—from multi-pair production, stimulated Raman emission, loss, and dark counts—do significantly affect the results. In order to determine the extent to which these experimental results verify the theory of SMF presented above, it is necessary to create a complete theoretical model of multi-pair production, Raman emission, loss, dark-count noise, and the interference between two real experimental systems. For this goal, we adopt the standard quantum-mechanical description (assuming phase matching and undepleted pump) of light scattering in optical fibers at a few-photon level [@FWM-Raman07]: $ {\hat{a}}^{r(\ell)}_{s,a}(\omega)= \int d\omega' \alpha(\omega-\omega') {\hat{b}}^{r(\ell)}_{s,a}(\omega') +i\gamma L \int\int d\omega_1 d\omega' A_p(\omega_1) A_p(\omega'+\omega-\omega_1) ({\hat{b}}^{r(\ell)}_{a,s})^\dag(\omega')+i \int^L_0 dz \int d\omega' {\hat{m}}^{r(\ell)}(z,\omega') A_p(\omega-\omega'), $ where ${\hat{b}}^{r(\ell)}_{s,a}$ (${\hat{a}}^{r(\ell)}_{s,a}$) are the input (output) annihilation operators for the Stokes and anti-Stokes photons, respectively, in the right (left) fiber spool. 
$A_p(\omega)$ is the spectral amplitude of the pump in each fiber spool, with $2\pi\int d\omega |A_p(\omega)|^2$ giving the pump-pulse energy; $\alpha(\omega-\omega')$ is determined self-consistently to preserve the commutation relations of the output operators; $\gamma$ is the fiber SFWM coefficient, which we have assumed to be constant; $L$ is the effective length of the fiber spool; and ${\hat{m}}^{r(\ell)}(z,\omega)$ is the phonon-noise operator accounting for the Raman scattering, which satisfies $[{\hat{m}}^{r(\ell)}(z,\omega),{\hat{m}}^{r(\ell)\dag}(z',\omega')]=2\pi g(\omega) \delta(z-z')\delta(\omega-\omega')$, where $g(\omega)>0$ is the Raman gain coefficient [@KarDouHau94; @RamanMeasured05]. For a phonon bath in equilibrium at temperature $T$, we have the expectation $\langle {\hat{m}}^{r(\ell)\dag}(z,\omega){\hat{m}}^{r(\ell)}(z',\omega') \rangle=2\pi g(\omega) \delta(z-z')\delta(\omega-\omega') n_T(\omega)$, where $n_T(\omega)=\frac{1}{e^{\hbar |\omega|/k_B T}-1}+\theta(-\omega)$ with $k_B$ the Boltzmann constant, and $\theta(\omega)=1$ for $\omega\ge 0$, and $0$ otherwise. For the fourfold coincidence measurement depicted in Fig. \[expresults\], the photon-number operators for detectors A, B, C, D are given by $ \hat{n}_M=\sum_{j_M} \eta_M \chi_{j_M}{\hat{a}}^\dag_{j_M}{\hat{a}}_{j_M}+\zeta_{M} \hat{d}^\dag_M\hat{d}_M$, for $M$ = A, B, C, D. Here, $\eta_M$ is the total detection efficiency taking into account propagation losses and the detector quantum efficiency. $\chi_{j_M}$ is the $j$-th eigenvalue of the filtering system for detector $M$. $\zeta_M$ measures the quantum-noise level of the detector as a result of the dark counts and the after-pulsing counts. $\hat{d}_M$ is a noise operator obeying $[\hat{d}_M,\hat{d}^\dag_{M'}]=\delta_{M,M'}$. By this definition, the mean number of dark counts for detector $M$ is then given by the expectation $\zeta_M\langle\hat{d}^\dag_M \hat{d}_{M}\rangle$.
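The role of the phonon occupation $n_T(\omega)$ in the Raman noise can be made concrete: cooling the fiber spools to 77 K, as done here, substantially reduces the thermal phonon population relative to room temperature. The sketch below evaluates $n_T$ at an illustrative 0.5-THz pump detuning; that detuning value is our choice, not a quoted experimental parameter.

```python
import math

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K

def n_T(omega, T):
    """Phonon occupation n_T(w) = 1/(exp(hbar|w|/kB T) - 1) + theta(-w),
    with theta = 1 on the Stokes side (w <= 0) and 0 otherwise."""
    n_bose = 1.0 / math.expm1(HBAR * abs(omega) / (KB * T))
    return n_bose + (1.0 if omega <= 0 else 0.0)

detuning = 2.0 * math.pi * 0.5e12   # illustrative 0.5-THz offset (rad/s)
n77 = n_T(detuning, 77.0)           # anti-Stokes occupation, cooled fiber
n300 = n_T(detuning, 300.0)         # anti-Stokes occupation, room temperature
```

The Stokes side always carries the extra term $+1$ from $\theta(-\omega)$, which is why Raman noise cannot be frozen out completely even at low temperature.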
The bosonic operators are $ {\hat{a}}_{j_\mathrm{A}(j_\mathrm{B})}=\frac{1}{2\sqrt{2}\pi} \int_{\mathrm{A(B)}} d\omega\left[e^{\frac{i\tau \omega}{2}}{\hat{a}}^{r}_{s}(\omega)\pm e^{\frac{-i\tau \omega}{2}} {\hat{a}}^\ell_s(\omega)\right]\phi_{j_\mathrm{A}(j_\mathrm{B})}(\omega)$ and ${\hat{a}}_{j_\mathrm{C}(j_\mathrm{D})}= \frac{1}{2\pi} \int_{\mathrm{C(D)}} d\omega~{\hat{a}}^{r(\ell)}_{a} \phi_{j_\mathrm{C}(j_\mathrm{D})}$, where $\tau$ is the amount of signal delay and $``\int_M d\omega''$ represents an integral over the detection spectral band of detector $M$. With $\hat{n}_M$, the positive operator-valued measure for the detector $M$ to click is calculated to be $\hat{P}_M=1-:\!\exp(-\hat{n}_M)\!:$, where “: :” stands for normal ordering of all the enclosed operators. The fourfold coincidence probability is then given by $\langle :\!\hat{P}_\mathrm{A} \hat{P}_\mathrm{B} \hat{P}_\mathrm{C} \hat{P}_\mathrm{D}\!:\rangle$. Applying this theory to the experimental configurations presented above, we find predicted visibilities of 17% (multimode regime) and 72% (single-mode regime), in excellent agreement with the experimental results. Note that because the theoretical fits shown in Fig. \[expresults\] are generated from the complete theory described above, they require *no fitting parameter*. As a result, we conclude that the theories of both single-mode filtering and SFWM in the presence of noise are able to accurately model our experiments in both the single-mode and multimode regimes, and provide an important new tool for the study of distinguishability in photonic systems. This research was supported in part by the Defense Advanced Research Projects Agency (DARPA) under the Zeno-based Opto-Electronics (ZOE) program (Grant No. W31P4Q-09-1-0014) and by the United States Air Force Office of Scientific Research (USAFOSR) (Grant No. FA9550-09-1-0593). [99]{} R. Horodecki, P. Horodecki, M. Horodecki and K. Horodecki, Rev. Mod. Phys. **81**, 865–942 (2009). W.
J. Zurek, Rev. Mod. Phys. **75**, 715–775 (2003). M. Schlosshauer, Rev. Mod. Phys. **76**, 1267–1305 (2005). L. Mandel, Opt. Lett. **16**, 23 (1982). C. H. Bennett and G. Brassard, in Proc. of the IEEE International Conference on Computers, Systems and Signal Processing, Bangalore, India (1984). E. Knill, R. Laflamme and G. J. Milburn, Nature **409**, 46 (2001). V. Giovannetti, S. Lloyd and L. Maccone, Nature Photonics **5**, 222 (2011). Yu-Ping Huang, Joseph B. Altepeter, and Prem Kumar, Phys. Rev. A **82**, 043826 (2010). Yu-Ping Huang, Joseph B. Altepeter, and Prem Kumar, Phys. Rev. A **84**, 033844 (2011). A. Zeilinger, M. A. Horne, H. Weinfurter and M. Zukowski, Phys. Rev. Lett. **78**, 3031 (1997). M. Fiorentino, P. L. Voss, J. E. Sharping and P. Kumar, Photon. Technol. Lett. **27**, 491 (2002). C. K. Hong and L. Mandel, Phys. Rev. Lett. **56**, 58 (1986). J. Fulconis, O. Alibart, W. Wadsworth, P. Russell and J. Rarity, Opt. Express **13**, 7572 (2005). O. Cohen, J. S. Lundeen, B. J. Smith, G. Puentes, P. J. Mosley, and I. A. Walmsley, Phys. Rev. Lett. **102**, 123603 (2009). A. V. Sergienko, M. Atatüre, Z. Walton, G. Jaeger, B. E. A. Saleh and M. C. Teich, Phys. Rev. A **60**, R2622 (1999). D. Slepian and H. O. Pollak, Bell Syst. Tech. J. **40**, 43 (1961). C. Zhu and C. M. Caves, Phys. Rev. A **42**, 6794 (1990). M. Sasaki and S. Suzuki, Phys. Rev. A **73**, 043807 (2006). H. de Riedmatten, I. Marcikic, W. Tittel, H. Zbinden and N. Gisin, Phys. Rev. A **67**, 022301 (2003). R. Kaltenbaek, R. Prevedel, M. Aspelmeyer, and A. Zeilinger, Phys. Rev. A **79**, 040302 (2009). J. Fulconis, O. Alibart, J. L. O'Brien, W. J. Wadsworth, and J. G. Rarity, Phys. Rev. Lett. **99**, 120501 (2007). H. Takesue, Appl. Phys. Lett. **90**, 204101 (2007). M. A. Hall, J. B. Altepeter and P. Kumar, Opt. Express **17**, 14558 (2009). Q. Lin, F. Yaman and G. P. Agrawal, Phys. Rev. A **75**, 023803 (2007). F. X. Kärtner, D. J. Dougherty, H. A. Haus and E. P. Ippen, J. Opt.
Soc. Am. B **11**, 1267 (1994). X. Li, P. Voss, J. Chen, K. F. Lee and P. Kumar, Opt. Express **13**, 2236 (2005).
--- abstract: 'With the aim of paving the road for future accurate astrometry with MICADO at the European-ELT, we performed an astrometric study using two different but complementary approaches to investigate two critical components that contribute to the total astrometric accuracy. First, we tested the predicted improvement in the astrometric measurements with the use of an atmospheric dispersion corrector (ADC) by simulating realistic images of a crowded Galactic globular cluster. We found that the positional measurement accuracy should be improved by up to $\sim2$ mas with the ADC, making this component fundamental for high-precision astrometry. Second, we analysed observations of a globular cluster taken with the only currently available Multi-Conjugate Adaptive Optics assisted camera, GeMS/GSAOI at Gemini South. Making use of previously measured proper motions of stars in the field of view, we were able to model the distortions affecting the stellar positions. We found that they can be as large as $\sim 200$ mas, and that our best model corrects them to an accuracy of $\sim1$ mas. We conclude that future astrometric studies with MICADO require both an ADC and an accurate modelling of distortions to the field of view, either through an a-priori calibration or an a-posteriori correction.' author: - Davide Massari - Giuliana Fiorentino - Eline Tolstoy - Alan McConnachie - Remko Stuik - Laura Schreiber - David Andersen - Yann Clénet - Richard Davies - Damien Gratadour - Konrad Kuijken - Ramon Navarro - 'Jörg-Uwe Pott' - Gabriele Rodeghiero - Paolo Turri - Gijs Verdoes Kleijn bibliography: - 'report.bib' title: 'High-precision astrometry towards ELTs' --- INTRODUCTION {#sec:intro} ============ Accurate astrometry is one of the major drivers for diffraction limited Extremely Large Telescopes (ELTs).
To reach diffraction limited observations, the Multi-AO Imaging Camera for Deep Observations (MICADO), one of the first light instruments for the European-ELT, will be assisted by an Adaptive Optics module (MAORY, [@diolaiti10]) providing both a Single Conjugate mode (developed jointly with the MICADO consortium) and a Multi Conjugate mode. The goal of MICADO is a relative astrometric accuracy for bright and isolated stars of 50 $\mu$as over a central, circular field of 20 arcsec diameter. To determine if such an ambitious goal is feasible, a dual approach is taken. Simulating stellar fields as they would be seen by the SCAO mode with the predicted instrumental Point Spread Function (PSF), and analysing the resulting realistic images, will test the predicted performance and help to optimise the instrumental design. In addition, present-day astrometric studies with existing MCAO facilities are crucial to test the main sources of inaccuracy not related to the specific instrumental design or telescope. In this paper we present both these approaches, as complementary studies. In particular, we start by investigating in Section \[sim\] how to best reach the astrometric requirement for MICADO by quantifying the errors associated with one of the most important components in the light path: the atmospheric dispersion corrector (ADC). This investigation is carried out by making simulations with the SCAO module PSF of the central region of a crowded globular cluster, for a field of view of $2 \times 2$ arcsec. This size is small enough for PSF variations not to be important, but big enough to contain a sufficient number of stars. Then, in Section \[real\], we also present the results of an astrometric study performed with Gemini Multi-Conjugate Adaptive Optics System (GeMS) observations of the Galactic globular cluster NGC6681. This cluster is the most centrally concentrated in the Galaxy, and thus represents a major observational challenge in terms of stellar crowding.
We have previously determined proper motions by comparing two Hubble Space Telescope epochs ([@massari13]). This makes this cluster an ideal candidate to test the effects that will be introduced by MCAO corrections on proper motion measurements and related uncertainties. In particular, we looked for systematic distortions introduced in the GeMS images by observing through both J and Ks filters and quantified their impact on the astrometry. Though previous studies have already tested the astrometric performance of MCAO cameras (e.g. [@neichel14a; @ammons14; @lu14; @fritz16]), our investigation is the first to address the detailed structure and sizes of distortions in a MCAO instrument and will therefore be the starting point for understanding MCAO astrometry capabilities and any future improvements in the calibration and data reduction strategy for ELT observations. MICADO ADC simulations {#sim} ====================== One of the most severe observational issues concerning accurate astrometry is [*atmospheric dispersion*]{}. Since the refractive index of the atmosphere, $n$, depends on the wavelength, the observed angular distance between two sources is altered depending on the difference of their colours. Moreover, atmospheric dispersion elongates the shape of the PSF along the zenith direction, thus reducing the Strehl ratio and affecting the precision of any astrometric measurement. This is especially true in crowded fields, where the positions of faint sources are affected by the broader wings of the PSF of bright neighbours. Since the combined effect can have an impact as large as a few mas on the astrometry ([@trippe10]), an adequate correction is mandatory. Previous dedicated studies determined that the best solution for MICADO observations is to introduce a counter-rotation-based ADC located at the pupil of the instrument. A detailed description of such a component is provided in an internal communication document by Remko Stuik.
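The order of magnitude of the dispersion can be estimated with a standard-conditions refractivity formula. The sketch below uses the Edlén (1953) dry-air approximation at sea-level pressure and a plane-parallel atmosphere, so it overestimates the effect for a high-altitude site; it is purely illustrative and is not the ZEMAX model used later.

```python
import math

def air_refractivity(lam_um):
    """Edlen (1953) dry-air refractivity n - 1 at 15 C and 760 mmHg."""
    s2 = (1.0 / lam_um) ** 2
    return (6432.8 + 2949810.0 / (146.0 - s2) + 25540.0 / (41.0 - s2)) * 1e-8

def differential_refraction_mas(lam1_um, lam2_um, zenith_deg):
    """Plane-parallel atmosphere: delta R = (n1 - n2) * tan(z), in mas."""
    dn = air_refractivity(lam1_um) - air_refractivity(lam2_um)
    return dn * math.tan(math.radians(zenith_deg)) * 206264.8e3

# PSF elongation across the Ks band (~2.0-2.3 um) at a zenith angle of 60 deg
elongation = differential_refraction_mas(2.0, 2.3, 60.0)
```

Under these sea-level assumptions the band-edge offset at $z=60$ degrees is of order tens of mas, i.e. many MICADO pixels, while the colour-dependent shift between stars, which samples only a fraction of the band, is at the few-mas level quoted above.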
In order to quantify the impact of using an ADC on the astrometric performance of MICADO, we simulated realistic images of a Galactic globular cluster field using the predicted instrumental PSF of the SCAO module. The PSFs were generated from a preliminary set of AO simulations on the COMPASS platform by Yann Clénet ([@clenet13]). The telescope spiders and segmentation were added to the individual frames and the PSF was shifted based on wavelength and zenith angle. An optical ZEMAX model was used to compute the correction as a function of wavelength for an ideal ADC with counter-rotating double prisms, assuming a fixed zenith angle of $60$ degrees. We stress that this is not a full end-to-end model, and that the interactions between all of the various sources of error, as well as optical and alignment errors, are not fully included. The PSFs were computed on axis, in the standard Ks-band filter, with an atmospheric coherence length r$_{0}=0.129$ m at a wavelength of 0.5 $\mu$m and a resulting Strehl ratio of $0.76$. The AO simulations ran for 1 hour, with a frame rate of 1 kHz, but only one of every 1000 PSF images was saved. Therefore, each PSF of 1 ms represents 1 s of data. To perform our astrometric tests, we summed $N$ independent PSF realisations to produce exposures of $N$ seconds. Although the global PSF is a correct representation of the stated exposure time, the noise statistics are not. The PSFs predicted for a 20 s MICADO exposure without (a-panel) and with the inclusion of the ADC in the light path (b-panel) are shown in Figure \[psf\]. \[ht\] ![image](PSFa_noadc20inv.jpg){width="8.5cm"} ![image](PSFb_adc20inv.jpg){width="8.5cm"} It is already clear that the ADC can correct the PSF, making it more symmetric and removing many of the speckles near the upper region of the PSF core. The PSF also appears sharper, with a Strehl ratio that increases from $0.35$ to $0.76$.
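The exposure-building step described above, summing $N$ independent 1-s PSF realisations into an $N$-s exposure, amounts to a simple stack. A schematic version, with random arrays standing in for the saved COMPASS PSF frames:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_exposure(psf_frames, n_seconds):
    """Sum n_seconds distinct 1-s PSF realisations into one exposure.
    The long-exposure PSF is representative of the stated exposure
    time, but (as noted in the text) the noise statistics are not."""
    idx = rng.choice(len(psf_frames), size=n_seconds, replace=False)
    return psf_frames[idx].sum(axis=0)

frames = rng.random((60, 64, 64))       # stand-in for 60 saved 1-s PSFs
exposure = build_exposure(frames, 20)   # a 20-s exposure
```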
In the following we will quantify the astrometric improvement due to the introduction of the ADC, demonstrating how fundamental this component is for astrometric studies of crowded stellar fields with MICADO. Input catalogue and Simulations ------------------------------- To carry out our investigation, we simulate a realistic astrophysical problem. A natural choice is the crowded stellar field of a Galactic globular cluster (GC). In fact, GCs have routinely been the subject of detailed astrometric studies (e.g. [@bellini14; @watkins15]), and because of the availability of bright guide stars, they are ideally suited to be studied with diffraction-limited AO observations. In our investigation we want to simulate only a small region of the sky, where the PSF can reasonably be assumed to be constant across the entire field of view (FoV). Several previous studies have demonstrated that the PSF in AO images varies in a way that is very difficult to predict (see e.g. [@fiorentino14; @saracino15; @turri15; @massari16]). However, the introduction of a variable PSF is beyond the current scope of this work and will be addressed in future investigations. At the same time, we need to simulate a number of stars large enough to draw statistically significant conclusions. For these reasons, we choose to simulate the innermost 2 arcsec $\times$ 2 arcsec region of the crowded core of the GC M3 ([@massari16b]). Since M3 is known to have a flat density distribution in its core ([@miocchi13]), it is correct to assume that stars are uniformly distributed in the central 2 arcsec $\times$ 2 arcsec region, and we can build the input catalogue simply by distributing the stellar positions randomly. We determined the realistic number of stars to be simulated using the Hubble Space Telescope (HST) catalogue of M3 ([@anderson08]). After correcting it for incompleteness effects, we found the total number of stars in our FoV to be $\sim650$.
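Building such an input catalogue reduces to drawing uniform positions for the $\sim650$ stars over the 2 arcsec $\times$ 2 arcsec field. A minimal sketch (the magnitude draw below is a uniform placeholder; the actual simulation uses magnitudes from theoretical models, as described next):

```python
import numpy as np

rng = np.random.default_rng(42)

FOV_ARCSEC = 2.0
PIX_SCALE = 0.003    # arcsec/pixel (3 mas/pixel)
N_STARS = 650        # completeness-corrected count from the HST catalogue

side_pix = FOV_ARCSEC / PIX_SCALE           # ~667 pixels on a side
x = rng.uniform(0.0, side_pix, N_STARS)     # flat core -> uniform positions
y = rng.uniform(0.0, side_pix, N_STARS)
mag = rng.uniform(18.0, 26.0, N_STARS)      # placeholder luminosity function
catalog = np.column_stack([x, y, mag])      # one row per simulated star
```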
This is the population we will use to create a realistic simulation of a MICADO image of M3. We also reproduced a realistic distribution of stellar magnitudes by using the theoretical models taken from the BaSTI archive ([@pietrinferni06]). The software used to create the simulated images is described in detail in [@deep11]. A few of the technical specifications used in that paper were updated for this investigation. In particular, we now use a primary mirror with an outer diameter of 37.0 m, an 11.1 m internal diameter, 6 spiders of 40 cm width every 60 degrees, for a total effective area of 947.3 m$^{2}$. Given a pixel-scale of 3 mas/pixel[^1], our $2\times2$ arcsec images have a size of $667\times667$ pixels. We created two sets of simulated images, with and without the inclusion of the ADC, with exposure times of 2 s, 4 s, 5 s, 10 s, 20 s and 120 s. These images were built following a random dither pattern to reproduce a typical scientific observation. An example of an image simulated in this way and using a 20 s exposure PSF with ADC is shown in Figure \[clu\]. \[ht\] ![image](clu.jpg){width="\columnwidth"} Astrometric analysis -------------------- The source detection and extraction in the simulated images have been carried out with the DAOPHOT ([@stetson87]) suite of software. The PSF model used to fit the light profile of each star was determined on the images, without exploiting any a-priori knowledge. This is to accurately mimic what happens routinely when dealing with real imaging data. Here, the best solution turned out to be a Moffat function. No degrees of spatial variation were necessary, since by construction the PSF does not vary across the image FoV. This model was then fit to all of the sources above a 3$\sigma$ threshold of the sky background by ALLSTAR, to give a catalogue of stellar positions and instrumental magnitudes as output.
The first interesting result comes from a comparison of the number of input sources with the number of sources actually recovered in the analysis, that is, a completeness test (see also [@deep11; @greggio12] for other detailed analyses of the achievable completeness with simulated MICADO data). The achieved completeness is shown in Figure \[complet\]. It does not take into account false detections, which are stars found that were not in the input list, and were discarded by cross-matching input and output catalogues. \[ht\] ![image](compl_texp_mag_adcnoadc_con.jpg){width="\columnwidth"} As expected due to the superior PSF quality, the performance obtained in the ADC-case is strikingly better than that without the ADC. This is shown in Figure \[confr\_compl\]. In the ADC-case (left panel of Figure \[confr\_compl\]) the software is able to pick up most of the true sources (plus some PSF artefacts that however can be easily identified and discarded because of their non-stellar shape). In contrast, in the no-ADC case (right panel of Figure \[confr\_compl\]) the elongated PSF causes the fainter stars to fall below the detection limit, while bright stars with bright companions can often no longer be recognised as independent sources. This is already a clear indication of how important it will be to have an ADC assisting MICADO at the E-ELT, not only for astrometry but also for purely photometric purposes. \[ht\] ![image](confr_zoom.jpg){width="\columnwidth"} We tested the astrometric performance by comparing the input positions with those recovered by the software as output. In particular, we considered all of the stars in a given exposure, and computed the root mean square (rms) of the difference between input and output positions. Then, we divided our sample of stars into bins of magnitude and computed the corresponding mean rms value. Magnitude bins were defined in order to have a statistically significant number of stars (at least 70) in each of them.
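The binned-rms statistic just described can be sketched as follows; synthetic, Gaussian positional offsets are used here purely for illustration:

```python
import numpy as np

def binned_rms(mag, dx, dy, min_per_bin=70):
    """Positional rms per magnitude bin.  Stars are sorted by magnitude
    and grouped into bins of at least `min_per_bin` stars, mirroring
    the binning rule described in the text."""
    order = np.argsort(mag)
    mag, dx, dy = mag[order], dx[order], dy[order]
    rows = []
    for lo in range(0, len(mag) - min_per_bin + 1, min_per_bin):
        sl = slice(lo, lo + min_per_bin)
        rows.append((mag[sl].mean(),
                     np.sqrt(np.mean(dx[sl] ** 2)),
                     np.sqrt(np.mean(dy[sl] ** 2))))
    return np.array(rows)

rng = np.random.default_rng(1)
mag = rng.uniform(18.0, 26.0, 700)
dx = rng.normal(0.0, 0.1, 700)    # synthetic 0.1-pixel recovery scatter
dy = rng.normal(0.0, 0.1, 700)
table = binned_rms(mag, dx, dy)   # columns: <mag>, rms_x, rms_y
```

The non-ADC-minus-ADC comparison then amounts to differencing two such tables bin by bin.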
Finally, we determined the difference of the rms values between the non-ADC and the ADC case, and we show its behaviour with respect to exposure time and input magnitude in the two panels of Figure \[test1\]. The X- and Y-directions of the detector have been analysed separately. \[ht\] ![image](diff_texp_mag_adcnoadc_con.jpg){width="\columnwidth"} The improved performance with the inclusion of the ADC is marked. This is especially evident in the Y-component (upper panel of Figure \[test1\]), since it is the direction where the atmospheric dispersion most affects the PSF shape in our simulations because of our chosen field orientation. The difference in the performances in the X-direction is less conspicuous, but clearly improving for fainter magnitudes and shorter exposure times. Of course for real observations without the ADC, such a distortion usually has a significant component in both directions depending on the orientation, thus strongly affecting the achievable astrometric accuracy in both axes. When combining in quadrature the improvement in both components, our tests reveal that the minimum improvement (obtained with the brightest stars in the longest exposure) amounts to $\sim370~\mu$as, while the maximum value (for the faintest stars in the shortest exposure) amounts to $\sim2000~\mu$as, in fair agreement with the prediction of [@trippe10]. Therefore, under the hypothesis that the PSFs used realistically reproduce the effect of the ADC, our study strongly supports the need for a high-functioning ADC in MICADO to achieve accurate astrometry. Discussion on the astrometric accuracy -------------------------------------- As stated in the Introduction, MICADO has the ambitious goal of reaching a relative astrometric accuracy of $50~\mu$as for bright and isolated stars.
To reach such a goal, all of the components of the instrumental design have to be carefully tested in order to minimise the contribution of any systematic source of astrometric error to the overall budget. In this investigation, we focused on the impact that the ADC has on the SCAO performance of MICADO in simplified conditions, namely that the simulated FoV is small and on axis, and without considering the effects of the camera distortions, but for a realistic observational setup and science case. Our findings clearly demonstrate that the inclusion of the ADC is a fundamental requirement to reach high-precision astrometry, since in the simulated conditions it significantly improves the astrometric performance by at least $\sim370~\mu$as. One of the future goals we will pursue is to improve the PSF generation, including the realistic ADC manufacturing errors together with the instrumental ones. Another necessary future step is to accurately quantify the total contribution of the ADC to the overall astrometric error budget. To do so, we also need to quantify the contribution to the error coming from the inability of current software in modelling the PSF. Starfinder ([@diolaiti2000]) is the ideal software to do this, as it is able either to model the PSF directly on the single frames, or to use as input the same PSF used to simulate the images, thus bringing to zero the uncertainty due to the PSF modelling. However, we stress that the procedure we followed in this study is routinely and necessarily followed when analysing real images given that no sufficiently accurate a-priori knowledge of the PSF is usually available, the only exception being PSF reconstruction experiments (e.g. [@jolissaint12]). Once the intrinsic contribution to the astrometric error budget from the optical design, including the ADC, is estimated, we will also be able to quantify the potential effects of the reduction software. 
This will be extremely important in order to determine the likely future software requirements, and to test how PSF reconstruction techniques could reduce the overall astrometric uncertainty budget. Real data {#real} ========= The practical requirement to measure stellar proper motions (PMs) is to determine the displacement of stellar positions between two (or more) epochs. However, the physical motion of a star is not the only contribution to such a displacement. In fact, any effect artificially altering the observed position of a star with respect to the true one introduces a distortion that, without a proper treatment, is degenerate with the PM signal. For this reason it is fundamental to disentangle the effect of such distortions before any astrometric investigation. In this respect, MCAO systems are particularly complex to deal with. In fact, deformable mirrors conjugated to high altitude layers far away from the pupil can induce field distortions that significantly affect the overall astrometric accuracy (see e.g. the discussion in [@neichel14a]). Since the magnitude and structure of distortions in MCAO data might change with differing seeing conditions, asterisms and airmass ([@rigaut12; @lu14]), and are as yet poorly investigated, our ability to determine how accurately they can be corrected remains uncertain. Calibrating the camera distortions and applying the general solution to any data-set is unlikely to prove sufficiently accurate for high-precision astrometric studies, because the distortions change from case to case. One possible solution to this problem would be to correct each exposure of a data-set with its own absolute solution, but this requires the availability of a distortion-free reference to break the PM-distortion degeneracy. In the meantime, a case where we can still make an accurate assessment of these effects is the Galactic GC NGC6681.
This is because both the distortion-free positions in a past epoch and the PMs of the stars in the cluster FoV are known from Hubble Space Telescope (HST) measurements ([@massari13], hereafter Ma13). Recent observations have been taken with the MCAO camera GeMS ([@neichel14b]) for this cluster (Programme IDs: GS-2012B-SV-406, GS-2013A-Q-16, GS-2013B-Q-55, PI: McConnachie). Therefore, in this case we have all of the necessary ingredients to determine the distortions caused by the MCAO on the GeMS camera. The method ---------- Because we have distortion-free stellar positions at the epoch of the first HST measurement (GO:10775, PI: Sarajedini) and their PMs from subsequent HST epochs (Ma13), it is possible to build a distortion-free reference frame at the epoch of the GeMS observations. In this way, the differences between the observed GeMS positions and those from HST projected to the GeMS epoch are only due to distortion terms. In order to be as accurate as possible, only NGC6681 stars with a PM uncertainty smaller than 0.03 mas yr$^{-1}$ were used to build the reference frame. Their large number ($7770$) allows us to accurately sample the area in common between the HST and the GeMS data sets. The distortion-free reference at the GeMS epoch was aligned in Right Ascension (RA) and Declination (Dec), and then all the GeMS exposures were registered to this reference frame to estimate their distortion maps. The GeMS data set is composed of $8\times160$ s exposures in both the J and the Ks filters, dithered by a few, non-integer pixel steps to cover the intra-chip gaps of the camera and to allow a better modelling of the PSF. All of the details concerning the reduction of the images will be described in a forthcoming paper (Massari et al. in prep.). Briefly, stellar raw positions (x$_{i}^{r}$, y$_{i}^{r}$) were obtained via PSF fitting using the DAOPHOT suite of software ([@stetson87]) and following the procedure described in [@massari16].
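The reference-frame construction described above amounts to propagating each HST position by its measured PM over the HST-to-GeMS time baseline. A minimal sketch of this step (the function and array names are our own, not from the actual pipeline, and the numbers in the usage example are illustrative):

```python
import numpy as np

def project_to_epoch(pos_ref, pm, t_ref, t_obs):
    """Propagate distortion-free positions (mas) from epoch t_ref to epoch t_obs
    (both in years) using proper motions pm (mas/yr).
    pos_ref and pm are arrays of shape (n_stars, 2) for the two coordinates."""
    return pos_ref + pm * (t_obs - t_ref)

# Illustrative usage: one star moving 1.0 mas/yr in RA and -0.5 mas/yr in Dec,
# propagated over a 6.9 yr baseline as in the text.
pos_gems_epoch = project_to_epoch(np.array([[100.0, 200.0]]),
                                  np.array([[1.0, -0.5]]),
                                  2006.0, 2012.9)
```

The observed GeMS positions minus these projected positions then contain only distortion terms, as stated above.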
Each of the four chips of the camera was treated separately. The best PSF modelling was achieved by fitting the light profile of several hundred bright, isolated stars with a Moffat function and allowing the fitting residuals to be described with a look-up table that varies cubically across the FoV. By matching each exposure's raw positions to the distortion-free reference using a 5th-order polynomial, the corrected, distortion-free GeMS positions (x$_{i}^{c}$, y$_{i}^{c}$) were obtained. The 5th-order polynomial turned out to be the best compromise between improving the rms of the transformations and keeping the order as low as possible, so as not to introduce spurious effects due to excessive degrees of freedom. In particular, when moving from the third order to the fourth and fifth, the rms of the transformations improved by $\sim4$% per order, leading to a final rms of $\sim1$ mas, while for the following orders the improvement was only $\sim1$%. Quantifying GeMS distortions ---------------------------- The aim of this analysis was to quantify the distortions that affect GeMS astrometry. We stress that these distortions do not come only from the instrumental geometric distortion, but also from all of the effects that artificially alter the position of a star, such as anisoplanatism effects or an imperfect PSF modelling, and thus affect our ability to measure the true PM of that star. The average GeMS distortion maps for the J and Ks images are shown in Figure \[dmaps\]. Each single-exposure distortion map was built as the difference between the positions corrected with the 5th-order polynomial (x$_{i}^{c}$, y$_{i}^{c}$) and the positions corrected using only linear transformations (thus taking into account rigid shifts, rotations and a scale difference).
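The polynomial registration described above can be sketched as a linear least-squares fit over all 2D monomials up to the chosen order (21 terms at 5th order). This is our illustrative implementation, not the code actually used in the reduction:

```python
import numpy as np

def poly_terms(x, y, order=5):
    # Design matrix of all monomials x**i * y**j with i + j <= order.
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_transformation(raw, ref, order=5):
    """Least-squares polynomial mapping raw detector positions -> reference frame.
    raw, ref: (n_stars, 2) arrays; returns one coefficient vector per axis."""
    A = poly_terms(raw[:, 0], raw[:, 1], order)
    cx, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)
    return cx, cy

def apply_transformation(raw, cx, cy, order=5):
    # Corrected, distortion-free positions for one exposure.
    A = poly_terms(raw[:, 0], raw[:, 1], order)
    return np.column_stack([A @ cx, A @ cy])
```

The order selection described in the text then amounts to repeating the fit for order 3, 4, 5, ... and monitoring the rms of `apply_transformation(raw, cx, cy) - ref`.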
For the upper left corner of the camera, the polynomial solution is extrapolated, since no stars in common with the HST FoV of Ma13 were found. Therefore the solution in that small area might not be representative of the true distortions. For all of the stars in common to all of the eight exposures per filter, the difference vectors were averaged and then multiplied by 20 to enhance the details. ![image](gdj_med.jpg){width="8.5cm"} ![image](gdk_medcut.jpg){width="8.5cm"} Several structures in the distortion maps are common to all of the chips and both filters. The amplitude of the distortions in the X-component (corresponding to RA) is significantly larger than that in the Y-component (Dec). In fact the former spans an interval ranging from $-1.86$ pixels to $4.83$ pixels (mean value of $0.6$ pixels), while the latter varies from $-0.82$ pixels to $0.52$ pixels (mean value of $-0.01$ pixels). The corners of each chip appear to be more distorted. Since the size of each pixel is that of the HST/ACS, the largest distortions are of the order of 0.2 arcsec. Similar values were found independently on another GeMS data set (S. Saracino, private communication). A quite striking common feature is a circular structure roughly located at the centre of each chip (which does not move from exposure to exposure) where the X-component of the distortion changes its sense. Another remarkable feature is the similarity of the distortions in the four separate chips: not only their structure, but also their magnitude is very similar. For all four chips, the minimum, maximum and mean distortion values in X and Y agree within $\sim0.1$ pixels. Finally, to separate the distortion component that changes from exposure to exposure, we built residual maps, that is, the difference between each single distortion map and the average map of the corresponding filter. An example for a K-band exposure is shown in Figure \[resi\].
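The construction of the average and residual maps described above is simple vector arithmetic on the two sets of corrected positions. A sketch with our own naming (the $\times20$ and $\times80$ factors are purely display scalings applied when plotting):

```python
import numpy as np

def exposure_map(poly_pos, lin_pos):
    # Distortion vectors of one exposure: 5th-order-corrected minus
    # linear-only-corrected positions, both of shape (n_stars, 2), in pixels.
    return np.asarray(poly_pos) - np.asarray(lin_pos)

def average_map(maps):
    # Mean over the eight exposures of a filter (displayed multiplied by 20).
    return np.mean(maps, axis=0)

def residual_map(single, average):
    # Exposure-to-exposure distortion variation (displayed multiplied by 80).
    return single - average
```

By construction, the residual maps average to zero over the exposures of a filter, so any surviving pattern traces what changes between exposures.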
In the left-hand panel each vector is multiplied by a factor of 20, and it is evident how well the average map describes the overall behaviour of the single-exposure map. In the right-hand panel, instead, the vectors are multiplied by a factor of 80 to enhance the residuals. The main structures observed in the average maps have disappeared, and only a residual pattern survives. The magnitude of the residual distortions is again very similar among the chips, and ranges from $-1.1$ pixels to $1.3$ pixels (mean value $0.05$ pixels) in the X component, and from $-0.11$ pixels to $0.07$ pixels (mean value $0.0$ pixels) in the Y component. Note that by excluding the most external corners, the variation ranges are much smaller, being limited to between $-0.02$ and $0.03$ pixels in both components. Since from exposure to exposure the only significant difference is the seeing, we interpret these residual maps as the distortion variations introduced by the varying observing conditions. This will likely play a significant role in our ability to carry out accurate astrometry and photometry, and will need to be studied further. ![image](resik_x20.jpg){width="8.5cm"} ![image](resik_x80.jpg){width="8.5cm"} Discussion on the distortion maps --------------------------------- Using previous PM studies with HST we were able to acquire the a-priori knowledge needed to break the PM-distortion degeneracy for GeMS observations of the GC NGC6681, and are thus able to model the time-varying distortions of each exposure taken with the MCAO-assisted camera. Our findings show that the average distortion across the entire FoV amounts to $\sim30$ mas, but it can reach values as large as $\sim200$ mas. With the use of a fifth-order polynomial, we were able to model the distortion to an accuracy of $\sim1$ mas. A contribution to this term is given by the propagation of the PM error of the stars used to build the distortion-free reference frame at the GeMS epoch.
Since only stars with an error smaller than 0.03 [${\rm mas\, yr^{-1}}$]{} were used, and the GeMS data were taken $6.9$ yr after the first HST epoch, the total contribution due to PM uncertainty is $\sim0.2$ mas. Investigations of the internal dynamics of GCs, which might shed light on fundamental topics such as the presence of intermediate-mass black holes or the formation of GC multiple populations (e.g. [@piotto15]), require precisions $<0.1$ mas/yr. Therefore an accurate modelling of such distortions is fundamental to the success of these studies. The method used throughout this analysis proved efficient in achieving this goal (Massari et al. in prep.), but is limited to cases for which previously measured PMs are available at least for the bright stars in the field. Therefore, other solutions still have to be investigated. For example, our group is performing a series of studies aimed at anchoring GeMS astrometry to the distortion-free reference frame provided by seeing-limited observations obtained with the FLAMINGOS-2 camera at the Gemini-South telescope. Looking to the future, MICADO astrometry might also require testing the use of calibration masks or methods that do not rely on an absolute distortion correction but on a relative calibration. We thank Benoit Neichel, Carmelo Arcidiacono, Jessica Lu and Marc Ammons for the useful discussions on the GeMS distortions. Based on observations obtained at the Gemini Observatory and acquired through the Gemini Science Archive. GF and DM have been supported by the FIRB 2013 (MIUR grant RBFR13J716). [^1]: Note that the currently predicted diameter of the E-ELT is $38.5$ m, while the MICADO pixel-scale is currently set to 1.5 mas/pixel for the high spatial resolution mode.
--- abstract: 'We report on marked memory effects in the vortex system of twinned YBa$_2$Cu$_3$O$_7$ single crystals observed in ac susceptibility measurements. We show that the vortex system can be trapped in different metastable states with a variable degree of order arising in response to different system histories. The pressure exerted by the oscillating ac field assists the vortex system in ordering, locally reducing the critical current density in the penetrated outer zone of the sample. The robustness of the ordered and disordered states together with the spatial profile of the critical current density lead to the observed memory effects.' address: 'Laboratorio de Bajas Temperaturas, Departamento de Física, Universidad Nacional de Buenos Aires, Pabellón I, Ciudad Universitaria, 1428 Buenos Aires, Argentina' author: - 'S. O. Valenzuela and V. Bekeris' date: Submitted 9 December 1999 title: 'Plasticity and memory effects in the vortex solid phase of twinned YBa$_2$Cu$_3$O$_7$ single crystals' --- Continuous efforts have been made to understand the remarkably rich variety of liquid and solid phases in high temperature superconductors [@blat]. A subject that has recently attracted much interest is the connection between these thermodynamic phases and the driven motion of vortices, in particular concerning the presence of topological defects (like dislocations) and the evolution of the spatial order of the vortex structure (VS) at different driving forces [@Kosh; @yaron; @marl; @Matsu; @hend1; @flavio; @abu; @hend2; @rav]. In systems containing random pinning, theoretical [@Kosh] and experimental [@yaron; @Matsu; @hend1; @flavio] results have shown that, at the depinning transition, the VS undergoes plastic flow in which neighboring parts of the flux lattice move at different velocities, thereby disordering the VS.
Changes in the volume in which vortices remain correlated may modify the critical current density $J_{\mathrm{c}}$ and lead to a thermomagnetic history dependence in the transport and magnetic properties of the superconductor in a way reminiscent of other disordered systems such as spin glasses [@let]. History effects recently observed in conventional low-$T_{\mathrm{c}}$ superconductors [@hend1; @hend2; @rav; @kupf] were attributed to plastic deformations of the VS; however, little is known about the exact mechanism involved in these phenomena. In YBa$_2$Cu$_3$O$_7$ (YBCO), the importance of plasticity has been revealed through magnetic [@abu; @zies; @kup; @kokk] and transport [@fend] measurements below the melting transition, though detailed studies of history effects have not been performed up to now. In this work we report on thermomagnetic history effects in the solid vortex phase of pure twinned YBCO single crystals by measuring the ac susceptibility with the ac field parallel to the [*c*]{} axis of the sample. ac susceptibility is a sensitive tool to detect changes in $J_{\mathrm{c}}$ and, therefore, in the translational correlation length of the VS. Our results show that the VS may be trapped in different metastable states depending on its thermomagnetic history. For example, if the sample is cooled from above $T_{\mathrm{c}}$ with no applied ac field, the VS is trapped in a more disordered state than when the ac field is turned on during the cooling process. In addition, we find evidence that the cyclic pressure exerted by the ac field induces a dynamical reordering of the VS in the penetrated outer zone of the sample that persists when the ac field is turned off and, as a result, different parts of the sample may be in different pinning regimes. The resulting spatial variation of $J_{\mathrm{c}}$ leads to a strong history dependence of the ac response and accounts for the observed memory effects.
The character of the reordering induced by the ac field seems to be related to the flow of dislocations in a similar way as in ordinary solids under cyclic stress [@hertz]. Magnetic field orientation relative to the twin boundaries (TB) is another ingredient that determines the overall ac response. The effective strength of pinning at the TB’s can be tuned by rotating the applied dc field out of the twin planes. At small angles, where pinning by TB’s is expected to be more effective, history effects are weak. At larger angles, however, the influence of TB’s diminishes and then pronounced history effects are observed. Global ac susceptibility measurements with the mutual inductance technique were carried out in two twinned single crystals of YBCO. We present the data obtained with one of them (dimensions $0.56 \times 0.6 \times 0.02 ~ \mathrm{mm}^{3}$). The crystal has a $T_{\mathrm{c}}$ of 92 K at zero dc field ($h_{\mathrm{ac}}$=1 Oe) and a transition width of 0.3 K (10-90% criterion). Polarized light microscopy revealed that the crystal has three definite groups of twins oriented $45^{\circ}$ from the crystal edge as shown in the inset of Fig. \[fig1\]. Susceptibility data were recorded under different angles $\theta$ between the applied field and the [*c*]{} axis. The axis of rotation is shown in the inset of Fig. \[fig1\] and it was chosen so that the field can be rotated out of all twin boundary planes simultaneously. We begin by describing briefly the angular dependence of the ac susceptibility and then we will focus our discussion on the memory effects. Fig. \[fig1\] shows the real component of the ac susceptibility, $\chi'$, for four values of $\theta$. The data were obtained while decreasing the temperature in the usual field cooled procedure at a rate of 0.2 K/min, with $H_{\mathrm{dc}}$ = 3 kOe, and a superimposed ac field of amplitude $h_{\mathrm{ac}}$ = 2 Oe and frequency $f$ = 10.22 kHz. 
The diamagnetic screening, $\chi'$, presents a dramatic evolution as $\theta$ is increased from $\theta$ = 0$^\circ$ ($H_{\mathrm{dc}}\parallel c$). The pronounced changes are a consequence of the effective pinning strength of the TB’s. Fig. \[fig1\] shows that, when the dc field is tilted from the TB’s direction, a sharp onset in the susceptibility develops. A sharp onset in the susceptibility was observed in untwinned YBCO single crystals [@giapin; @brac] and was demonstrated to coincide closely [@brac] with a sharp resistivity drop that is generally accepted as a fingerprint of a melting transition between a vortex liquid phase and a vortex solid phase with long-range order [@hugo; @kw]. We identify the step-like onset of $\chi$’ as the melting temperature, $T_{\mathrm{m}}$. The absence at small angles of the sharp onset in $\chi$’ (Fig. \[fig1\]) suggests that the first order melting transition is suppressed by the TB’s. The nature of the transition for $H_{\mathrm{dc}}\parallel c$ is generally accepted to be a second order phase transition to a Bose-Glass state [@nel]. A peak at $\theta = 0^{\circ}$ in the angular dependence of $T_{\mathrm{m}}$, as seen in the inset of Fig. \[fig1\], has been related to this transition [@kw; @nel]. The angular dependence of the ac susceptibility in twinned samples is still a matter of debate. The observed minimum in the shielding at low temperatures and $\theta = 0^{\circ}$ has been recently explained by vortex channeling along TB’s [@ed]. According to Ref. [@ed; @ous], as the field is rotated out of the TB’s, the channeling is partially suppressed. This would explain the initial increase in $\chi$’ at small angles (see Fig. \[fig1\]). However, above a threshold angle, $\theta_{\mathrm{k}} \sim 14^{\circ}$, a new reduction in $\chi$’ is observed (Fig. \[fig1\], $\theta = 25^{\circ}$).
One reason may be that, for these angles, the influence of TB’s vanishes and a more ordered VS can form [@zies], leading to a reduction in $J_{\mathrm{c}}$. We turn now to the memory effects. The main results of our investigation are summarized in Figs. \[fig2\] and \[fig3\]. The measurements in Fig. \[fig2\] were performed by varying $T$ at fixed $H_{\mathrm{dc}}$, $h_{\mathrm{ac}}$ and $\theta$. Dotted curves were obtained on cooling (C), as the ones in Fig. \[fig1\], while solid and dashed curves were obtained on warming (W). The difference between solid and dashed curves is the way the sample was cooled prior to the measurements. Dashed curves were obtained after cooling from $T > T_{\mathrm{c}}$ with applied ac field (F$_{\mathrm{ac}}$CW), [*e.g.*]{} after measuring the dotted curves. Solid curves were also obtained after cooling from $T > T_{\mathrm{c}}$ but with $h_{\mathrm{ac}}$ = 0 (ZF$_{\mathrm{ac}}$CW). It is apparent that when cooling the sample with no applied ac field the vortex system solidifies in a more strongly disordered and pinned state (with a higher effective critical current density $J_{\mathrm{c}}^{\mathrm{dis}}$), as can be inferred from the enhanced shielding and the reduced dissipation. The angular dependence of the thermomagnetic history exhibits interesting features. At small $\theta$, the three curves are very similar, though a closer look shows that the maximum shielding (and minimum dissipation) corresponds to the ZF$_{\mathrm{ac}}$CW case. As $\theta$ is increased, but kept below $\theta_{\mathrm{k}}$, the thermomagnetic history becomes more and more relevant. For angles near $\theta_{\mathrm{k}}$, the importance of the history rapidly increases and the ZF$_{\mathrm{ac}}$CW case strongly separates from the other two curves (see Fig. \[fig2\]). The relative variation between the measured $\chi$’ for different sample histories is seen to be as high as 20%.
The magnitude of the history dependence at angles beyond $\theta_{\mathrm{k}}$ suggests that the first order melting transition and the translational order of the VS are key factors to explain this behavior. The results presented above can be understood in terms of a dynamical reordering of the VS caused by the shaking movement induced by the applied ac field during the cooling process in the F$_{\mathrm{ac}}$CC and the F$_{\mathrm{ac}}$CW cases. Due to this dynamical ordering the correlation length of the VS grows and $J_{\mathrm{c}}$ diminishes ($J_{\mathrm{c}} = J_{\mathrm{c}}^{\mathrm{ord}} < J_{\mathrm{c}}^{\mathrm{dis}}$). Note that when $H_{\mathrm{dc}} \parallel c$ a long-range ordered structure is unlikely to form, as TB’s prevent vortices from occupying positions favored by their mutual interaction. This seems to be the case even when the ac field is applied, as manifested in the faint history effects shown in Fig. \[fig2\]. However, when the field is tilted away from the twins, their influence weakens and history effects become more evident, suggesting the dynamical ordering of the VS by the ac field. This reordering or [*annealing*]{} of the VS occurs in the penetrated outer zone of the sample, which depends on the ac field amplitude and the temperature (as $J_{\mathrm{c}}$ is temperature dependent). If the VS is initially disordered ([*e.g.*]{} ZF$_{\mathrm{ac}}$C), an increase in $T$ or $h_{\mathrm{ac}}$ will order the VS as the ac field front progresses towards the center of the sample. The VS at the inner region will remain disordered if the condition $J < J_{\mathrm{c}}^{\mathrm{dis}}$ is satisfied at [*all*]{} times and an elastic Campbell-like regime applies with most of the vortices pinned [@bra]. From this it follows that the [*spatial profile*]{} of $J_{\mathrm{c}}$ is determined by the [*history*]{} of the sample.
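As a rough illustration of how the penetrated (annealed) shell depends on the ac field amplitude and the local critical current, one can use the Bean critical-state estimate for a slab. This back-of-the-envelope sketch is ours, not a calculation from the paper, and the numerical values in the test case are hypothetical:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T m / A

def bean_penetration_depth(b_ac, j_c):
    # Depth reached by the ac flux front in a slab (SI units):
    # x_p = b_ac / (mu0 * j_c), with b_ac in tesla and j_c in A/m^2.
    return b_ac / (MU0 * j_c)

def annealed_fraction(b_ac, j_c, half_thickness):
    # Fraction of the slab half-thickness ordered by the oscillating field.
    return min(1.0, bean_penetration_depth(b_ac, j_c) / half_thickness)
```

A larger $h_{\mathrm{ac}}$ or a smaller $J_{\mathrm{c}}$ (e.g. at higher $T$) pushes the front deeper, which is the mechanism invoked above for the history-dependent $J_{\mathrm{c}}$ profile.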
If the sample is cooled from $T > T_{\mathrm{c}}$, the smallest shielding for the applied ac field will be measured (F$_{\mathrm{ac}}$CC and F$_{\mathrm{ac}}$CW cases), as observed in Fig. \[fig2\]. If the above reasoning is correct, memory effects both in $T$ and $h_{\mathrm{ac}}$ should be observed because the inner disordered state is able to sustain a higher current without vortex movement, thereby enhancing the shielding $|\chi'|$ and reducing the dissipation $\chi''$ in the sample. Moreover, the existence of the disordered region should become more evident in $\chi$ when the ac flux front is near its boundary. These memory effects are clearly depicted in Fig. \[fig3\]. Starting at $T \sim 80$ K with a disordered state (ZF$_{\mathrm{ac}}$C) we measured the susceptibility while increasing temperature (ZF$_{\mathrm{ac}}$CW). As $T$ increases, $J_{\mathrm{c}}$ decreases and the ac field front penetrates further into the sample, ordering the VS. If warming is stopped and the temperature is lowered (point $A$), the measured susceptibility shows a hysteretic behavior as the outer part of the sample is now ordered. Furthermore, when the sample is driven to low enough temperatures the measured susceptibility tends to that obtained in the F$_{\mathrm{ac}}$CC procedure because the ac field is unable to sense the inner disordered region that was never reached by the ac field front. If now the temperature is increased once again, the susceptibility closely follows the last cooling curve because the order-disorder profile was not changed during the cooling process. As expected, beyond point $A$ the ac response matches the ZF$_{\mathrm{ac}}$CW case. The procedure was repeated at point $B$ where an equivalent description can be made. It is worth noting that the long-term memory (our experiments take more than 1 h) indicates that both the disordered and the ordered metastable states are very robust.
Analogous cycles in $h_{\mathrm{ac}}$ that corroborate the above explanation can be performed starting with $h_{\mathrm{ac}}$ = 0 after ZF$_{\mathrm{ac}}$C (top inset of Fig. \[fig3\]). Note that for ac field amplitudes higher than 5.5 Oe no hysteretic behavior is observed in either $\chi'$ or $\chi''$. In this case, the ac field has penetrated the whole sample, suppressing the disordered region. As the VS has been fully annealed, the system loses memory of the highest ac field that was applied. We also studied what happens when the sample is zero field cooled (both ac and dc) (ZFC). The dc field rate was 50 Oe/s. In these measurements, the ac field is turned on after the dc field has reached its final value. The warm-up curves obtained after preparing the system at $T \sim 80$ K are also contained in Fig. \[fig3\] (solid line). The results are close to the disordered ZF$_{\mathrm{ac}}$CW ones. We interpret this in terms of disorder yielded by plastic motion of vortices. It is well established from Bitter decoration [@flavio], SANS [@yaron], noise [@marl] and transport experiments [@hend1; @hend2] that when a current near $J_{\mathrm{c}}$ drives the flux lattice, a disordered plastic motion occurs. On the other hand, there is experimental [@abu] and theoretical [@feig] evidence suggesting the existence of a dislocation-mediated plastic creep of the vortex structure that would be analogous to the diffusive motion of dislocations in solids [@hertz]. In our experiment, when the magnetic field is applied, vortices can start penetrating only when the induced current $J$ is of the same order as $J_{\mathrm{c}}$. In this situation, a small dispersion in the pinning strength will destroy the long-range order, generating a high density of defects. As this plastic motion proceeds, the screening current decreases and the lattice will be unable to reorder, as detected when we apply the ac field [@sov].
While from the above discussion it is clear that the ZFC case will correspond to a disordered state, one can then ask about the character of the VS annealing when an ac field is applied. The mechanism involved in this phenomenon may be analogous to the flow and rearrangement of dislocations that leads to the softening of hard atomic solids under cyclic stress [@hend2; @hertz]. The bottom inset of Fig. \[fig3\] shows evidence of this cycle-dependent softening in the VS. Starting from a ZF$_{\mathrm{ac}}$C disordered state, the sample is cycled with a large ac field to anneal the VS. After $N$ cycles the ac field is turned off and the state of the VS is sensed by measuring the ac susceptibility with a smaller probe ac field at 10 kHz. This procedure is repeated for each point in the figure. As discussed above, the degree of exclusion of the probe and therefore the susceptibility are a function of the critical current of the sample at the moment the probe is applied ($J_{\mathrm{c}}(N)$). The cycle-dependent behavior of the susceptibility nicely demonstrates that $J_{\mathrm{c}}$ is also cycle-dependent. In conclusion, we have presented susceptibility measurements on twinned YBCO single crystals. We find that the vortex system can be trapped in different metastable states as a consequence of different thermomagnetic histories. When measuring susceptibility, the oscillating applied field assists the vortex structure in ordering, locally reducing the critical current density. As a result, different parts of the sample can be in different pinning regimes. The robustness of these states and the associated spatial variation of the critical current density manifest themselves in strong memory effects both in temperature and ac field. The angular dependence of these effects is consistent with an increase of the correlation length of the VS when the dc field is rotated out of the twin planes.
We expect that similar effects will be present in transport measurements because, in most cases, the applied (ac) current will not flow homogeneously inside the sample and will force a field redistribution in a similar manner as when applying an external ac field. We acknowledge E. Rodríguez and H. Safar for a critical reading of the manuscript. This research was supported by UBACyT TX-90, CONICET PID N$^{\circ}$ 4634 and Fundación Sauberán. G. Blatter [*et al.*]{}, Rev. Mod. Phys. [**66**]{}, 1125 (1994); E.H. Brandt, Rep. Prog. Phys. [**58**]{}, 1465 (1995). A.E. Koshelev and V.M. Vinokur, , 3580, (1994). U. Yaron [*et al.*]{}, Nature [**376**]{}, 753 (1995). A.C. Marley [*et al.*]{}, , 3029 (1995). T. Matsuda [*et al.*]{}, Science [**271**]{}, 1393 (1996); F. Nori, [*ibid.*]{}, 1373; G.W. Crabtree and D.R. Nelson, Phys. Today [**77**]{}, 38 (1997). W. Henderson [*et al.*]{}, , 2077 (1996); N.R. Dilley [*et al.*]{}, , 2379, (1997). F. Pardo [*et al.*]{}, , 4633 (1997). Y. Abulafia [*et al.*]{}, , 1596 (1996). W. Henderson [*et al.*]{}, , 2352 (1998). G. Ravikumar [*et al.*]{}, , R11069, (1998); S.S. Banerjee [*et al.*]{}, , 995, (1998). E. Vincent [*et al.*]{}, in [*Complex Behaviour of Glassy Systems*]{}, Springer Verlag Lecture Notes in Physics Vol. 492, M. Rubi Editor, 1997, pp. 184-219, and Refs. therein. For earlier work see H. Küpfer and W. Gey, Phil. Mag. [**36**]{}, 859 (1977). M. Ziese [*et al.*]{}, , 9491 (1994). H. Küpfer [*et al.*]{}, , 7689 (1995). S. Kokkaliaris [*et al.*]{}, , 5116 (1999). J.A. Fendrich [*et al.*]{}, , 2073 (1996). R. W. Hertzberg, [*Deformation and fracture mechanics of engineering materials*]{} (J. Wiley & Sons, NY, 1983). J. Giapintzakis [*et al.*]{}, , 16001 (1994). D. Bracanovic [*et al.*]{}, Physica C [**296**]{}, 1 (1998). H. Safar [*et al.*]{}, , 824 (1992). W.K. Kwok [*et al*]{}, , 3370 (1992). D. R. Nelson and V. M. Vinokur, , 13060 (1993). G.A. Jorge and E. Rodríguez , 103 (2000); see also M.
Oussena [*et al.*]{}, , 1389 (1995). M. Oussena [*et al.*]{}, , 2559 (1996). E.H. Brandt, Physica C [**195**]{}, 1 (1992). See also Ref. 14. M.V. Feigel’man [*et al.*]{}, , 2303 (1989). Note that, at the measuring temperatures, the vortex distribution strongly relaxes in a few seconds. See Ref. 8 and S.O. Valenzuela [*et al.*]{} Rev. Sci. Instrum. [**69**]{}, 251 (1998). We found no differences in the measurements when varying the waiting time from seconds to several minutes.
--- abstract: 'We study low energy quantum oscillations of electron gas in plasma. It is shown that two electrons participating in these oscillations acquire additional negative energy when they interact by means of a virtual plasmon. The additional energy leads to the formation of a Cooper pair and the possible existence of a superconducting phase in the system. We suggest that this mechanism supports slowly damping oscillations of electrons without any energy supply. Basing on our model we put forward the hypothesis that superconductivity can occur in a low energy ball lightning.' author: - Maxim Dvornikov title: Formation of Cooper pairs in quantum oscillations of electrons in plasma --- Introduction {#INTR} ============ In the majority of cases plasma contains a great number of electrically charged particles. Free electrons in metals can be treated as a degenerate plasma. If the motion of charged particles in plasma is organized, one can expect the existence of macroscopic quantum effects. For instance, collective oscillations of the crystal lattice ions can be excited in metals. These oscillations are interpreted as quasi-particles, or phonons. Phonons are known to be bosons. The exchange of a phonon causes an effective attraction between two free electrons in the plasma of metals. This phenomenon is known as the formation of a Cooper pair [@Coo56] and is important in the explanation of the superconductivity of metals. Cooper pairs can exist during a macroscopic time, resulting in an undamped electric current. Unfortunately the formation of Cooper pairs is possible only at very low temperatures. Nowadays there are a great number of attempts to obtain a superconducting material at high temperatures [@Gin00]. Without an external field, e.g. a magnetic field, the motion of electrons in plasma is stochastic. This results in rather short plasma recombination times.
The laboratory plasma at atmospheric pressure without an energy supply recombines during $\sim 10^{-3}\thinspace\text{s}$ [@Smi93]. That is why free oscillations of electrons in plasma, i.e. without any external source of energy, attenuate rather quickly. Therefore, to form a Cooper pair in plasma one should create a highly structured electron motion to overcome the thermal fluctuations of background charged particles. Nevertheless, it is known that a plasmon superconductivity can appear [@Pas92]. In Ref. [@Dvo] we studied quantum oscillations of electron gas in plasma on the basis of the solutions to the non-linear Schrödinger equation. The spherically and axially symmetrical solutions to the Schrödinger equation were found in our works. We revealed that the densities of both electrons and positively charged ions have a tendency to increase in the geometrical center of the system. The found solutions belong to two types, low and high energy ones, depending on the branch of the dispersion relation (see also Sec. \[MODEL\]). We put forward a hypothesis that microdose nuclear fusion reactions can happen inside a high energy oscillating electron gas. This kind of reaction could provide the energy supply to feed the system. In our works we suggested that the obtained solutions can serve as a model of a high energy ball lightning. Note that the suggestion that nuclear reactions can take place inside a ball lightning was also put forward in Ref. [@nuclfus]. In the present work we study a low energy solution (see Sec. \[MODEL\]) which is also present in our model [@Dvo]. As we mentioned above, a non-structured plasma usually recombines within several milliseconds. This is the main difficulty in constructing long-lived plasma structures. In a high energy solution internal nuclear fusion reactions could compensate for the energy losses.
We suggest that the mechanism which prevents the decay of low energy oscillations of the electron gas is based on the possible existence of plasma superconductivity. Within the framework of our model we show that the interaction of two electrons, described by spherical waves, can be mediated by a virtual plasmon. For plasmon frequencies corresponding to the microwave range, this interaction results in an additional negative energy. Then it is found that for a low energy solution with specific characteristics this negative energy results in an effective attraction. Therefore a Cooper pair of two electrons can be formed, because the interacting electrons should have oppositely directed spins. Using this result it is possible to state that the friction occurring when electrons propagate through the background matter can be significantly reduced due to the superconductivity phenomenon. We also discuss the possible applications of our results to the description of a low energy ball lightning. It should be noticed that the appearance of superconductivity inside a ball lightning was considered in Ref. [@supercondold]. This paper is organized as follows. In Sec. \[MODEL\] we briefly outline our model of quantum oscillations of the electron gas, which was elaborated in detail in Ref. [@Dvo]. Then in Sec. \[LE\] we study the interaction of two electrons when they exchange a virtual plasmon and examine the conditions for the appearance of the effective attraction. In Sec. \[CONCL\] we summarize and discuss the obtained results. Brief description of the model {#MODEL} ============================== In Ref. [@Dvo] we discussed the motion of an electron with the mass $m$ and the electric charge $e$ interacting with both background electrons and positively charged ions.
The evolution of this system is described by the non-linear Schrödinger equation, $$\label{Schrod0} \mathrm{i}\hbar\frac{\partial \psi}{\partial t} = H_0 \psi, \quad H_0 = -\frac{\hbar^2}{2m}\nabla^2+U(|\psi|^2),$$ where $$\label{Uinterac} U(|\psi|^2)=e^2\int\mathrm{d}^3\mathbf{r}' \frac{1}{|\mathbf{r}-\mathbf{r}'|} \{|\psi(\mathbf{r}',t)|^2-n_i(\mathbf{r}',t)\},$$ is the term responsible for the interaction of an electron with the background matter, which includes electrons and positively charged ions with the number density $n_i(\mathbf{r},t)$. In Eq.  the wave function is normalized to the number density of electrons, $|\psi(\mathbf{r},t)|^2=n_e(\mathbf{r},t)$. Note that in Eq.  the $c$-number wave function $\psi = \psi(\mathbf{r},t)$ depends on the coordinates of a single electron. Generally, in a many-body system this kind of approximation is valid for an operator wave function, i.e. when it is expressed in terms of creation and annihilation operators. In Ref. [@KuzMak99] it was shown that it is possible to rewrite the exact operator Schrödinger equation using $c$-number wave functions for a single particle. Of course, in this case a set of additional terms appears in the Hamiltonian. We usually neglect these terms (see Ref. [@Dvo]). The solution to Eqs.  and  can be presented in the following way: $$\label{sol0} \psi(\mathbf{r},t) = \psi_0+\delta\psi(\mathbf{r},t), \quad |\psi_0|^2 = n_0,$$ where $n_0$ is the density on the rim of the system, i.e. at a rather large distance from the center. The perturbative wave function $\delta\psi(\mathbf{r},t)$ for the spherically symmetrical oscillations has the form (see Ref. [@Dvo]), $$\label{pertsol} \delta\psi(\mathbf{r},t) = e^{-\mathrm{i}\omega t} \delta\psi_k(\mathbf{r}), \quad \delta\psi_k(\mathbf{r}) = A_k \frac{\sin kr}{r},$$ where $A_k$ is the normalization coefficient.
The dispersion relation which couples the frequency of the oscillations $\omega$ and the quantum number $k$ reads $$\label{disprelk} k^2_{\pm} = \frac{\omega m}{\hbar} \left[ 1 \pm \left( 1-4\frac{\omega_p^2}{\omega^2} \right)^{1/2} \right].$$ It is convenient to rewrite Eq.  in the new variables $x=\omega/\omega_p$ and $y=k\sqrt{\hbar/2 \omega_p m}$. It then takes the form, $$\label{ypm} y_{\pm{}}^2=\frac{x}{2} \left[ 1 \pm \left( 1-\frac{4}{x^2} \right)^{1/2} \right].$$ We plot the functions $y_{\pm{}}$ versus $x$ in Fig. \[fig1\]. ![\[fig1\] The dispersion relation for quantum oscillations of electron gas in plasma based on the results of Ref. [@Dvo].](figure1.eps) One can see in Fig. \[fig1\] that there are two branches $y_{\pm{}}$ in the dispersion relation, an upper and a lower one. It is possible to identify the upper branch with a high energy solution and the lower branch – with a low energy one. Generally, Eqs.  and  include the interaction of the considered electron with all other electrons in the system. However, since we look for the solution in the perturbative form  and linearize these equations, the solution to Eqs.  and  describes the interaction of the considered electron only with background electrons, which correspond to the coordinate independent wave function $\psi_0$, rather than with electrons which participate in spherically symmetrical oscillations. Interaction of two electrons performing oscillations corresponding to the low energy branch {#LE} =========================================================================================== In this section we discuss the interaction between two electrons participating in spherically symmetrical oscillations. This interaction is supposed to be mediated by a plasmon field $\varphi$. To account for the electron interaction with a plasmon one has to include the additional term $e\varphi$ in the Hamiltonian $H_0$ in Eq. .
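Before proceeding, the two branches of the dimensionless dispersion relation introduced in the previous section can be checked numerically. The following sketch (the function name `y_branches` is ours) evaluates both branches and illustrates two of their properties: they merge at $y=1$ when $x=2$, and their product is identically one.

```python
import math

def y_branches(x):
    """Evaluate the two branches of the dimensionless dispersion relation
    y_pm^2 = (x/2) [1 +/- (1 - 4/x^2)^(1/2)], with x = omega/omega_p and
    y = k sqrt(hbar / 2 omega_p m).  Real oscillations require x >= 2."""
    if x < 2.0:
        raise ValueError("no real solutions for x = omega/omega_p < 2")
    root = math.sqrt(1.0 - 4.0 / x**2)
    y_plus = math.sqrt(0.5 * x * (1.0 + root))
    y_minus = math.sqrt(0.5 * x * (1.0 - root))
    return y_plus, y_minus

# At the threshold x = 2 the branches merge at y = 1; for x > 2 one has
# y_+ > 1 > y_- and, identically, y_+ * y_- = 1.
print(y_branches(2.0))   # -> (1.0, 1.0)
```

The identity $y_+ y_- = 1$ follows from multiplying the two branches and is consistent with the observation later in the text that $y$ is always greater than $1$ on the upper branch.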
We also assume that the electron gas oscillations correspond to the lower branch $y_{-{}}$ of the dispersion relation , i.e. we are interested in the dynamics of a low energy solution. A plasmon field distribution can be obtained directly from the Maxwell equation for the electric displacement field $\mathbf{D}$, $$\label{MaxD} (\nabla \mathbf{D})=4\pi\rho,$$ where $\rho$ is the free charge density of electrons. As mentioned in Ref. [@Dvo], the vector potential for spherically symmetrical oscillations can be set to zero, $\mathbf{A}=0$. Therefore only longitudinal plasmons can exist. Presenting the potential $\varphi$ in the form of the Fourier integral, $$\varphi(\mathbf{r},t)=\int \frac{\mathrm{d}\omega\mathrm{d}^3\mathbf{p}}{(2\pi)^4} \varphi(\mathbf{p},\omega) e^{-\mathrm{i}\omega t+\mathrm{i}\mathbf{p}\mathbf{r}},$$ we get the formal solution to Eq.  as $$\label{plasmsol} \varphi(\mathbf{r},t)=4\pi\int \mathrm{d}t'\mathrm{d}^3\mathbf{r}' G(\mathbf{r}-\mathbf{r}',t-t')\rho(\mathbf{r}',t'),$$ where $$\label{Greendef} G(\mathbf{r},t)=\int \frac{\mathrm{d}\omega\mathrm{d}^3\mathbf{p}}{(2\pi)^4} \frac{1}{\mathbf{p}^2\varepsilon_l(\mathbf{p},\omega)} e^{-\mathrm{i}\omega t+\mathrm{i}\mathbf{p}\mathbf{r}},$$ is the Green function of the longitudinal plasmon. In Eq.  $\varepsilon_l$ stands for the longitudinal permittivity. Note that to choose the exact form of the Green function  one has to specify how the pole at $\mathbf{p}=0$ is bypassed. We regularize the Green function by introducing a cut-off at small momenta. It should be noticed that a more rigorous derivation of the plasmon Green function is presented in Ref. [@plasmonprop]. To relate Eq.  to the distribution of electrons we take the free charge density to be proportional to the probability distribution, $\rho=e|\psi|^2$. Then, using Eq.  and the Schrödinger equation  with the new Hamiltonian $H = H_0 + e\varphi$, we obtain a closed non-linear equation for the electron wave function only.
Now we write down the new Hamiltonian explicitly, $$\label{newHam} H=H_0+V, \quad V=4\pi e^2\int \mathrm{d}t'\mathrm{d}^3\mathbf{r}' G(\mathbf{r}-\mathbf{r}',t-t')|\psi(\mathbf{r}',t')|^2.$$ As we mentioned in Sec. \[MODEL\], the term $H_0$ describes the properties of a single electron, including its interactions with background particles. The second term $V$ in the Hamiltonian  is responsible for the interaction between two separated electrons. The scattering of two electrons is due to this interaction. It is possible to introduce the additional energy acquired in the scattering of two electrons as $$\label{addE1} \Delta E = \langle \psi | V | \psi \rangle = 4\pi e^2\int \mathrm{d}t'\mathrm{d}^3\mathbf{r}'\mathrm{d}t\mathrm{d}^3\mathbf{r} \psi^{*{}}(\mathbf{r},t)\psi^{*{}}(\mathbf{r}',t') G(\mathbf{r}-\mathbf{r}',t-t') \psi(\mathbf{r},t)\psi(\mathbf{r}',t').$$ In Eq.  we add one additional integration over $t$, which is convenient for further calculations. To take into account the exchange effects one has to replace the products of the electron wave functions in the integrand of Eq.  with the symmetric or antisymmetric combinations of the basis wave functions, $$\begin{aligned} \label{psisymasym} \psi(\mathbf{r},t)\psi(\mathbf{r}',t') \to & \frac{1}{\sqrt{2}} [e^{-\mathrm{i}\omega_1 t-\mathrm{i}\omega_2 t'} \psi_{k_1}(\mathbf{r})\psi_{k_2}(\mathbf{r}')\pm e^{-\mathrm{i}\omega_2 t-\mathrm{i}\omega_1 t'} \psi_{k_2}(\mathbf{r})\psi_{k_1}(\mathbf{r}')], \notag \\ \psi^{*{}}(\mathbf{r},t)\psi^{*{}}(\mathbf{r}',t') \to & \frac{1}{\sqrt{2}} [e^{\mathrm{i}f_1 t+\mathrm{i}f_2 t'} \psi_{q_1}^{*{}}(\mathbf{r})\psi_{q_2}^{*{}}(\mathbf{r}')\pm e^{\mathrm{i}f_2 t+\mathrm{i}f_1 t'} \psi_{q_2}^{*{}}(\mathbf{r})\psi_{q_1}^{*{}}(\mathbf{r}')],\end{aligned}$$ where $k_{1,2}$ and $q_{1,2}$ are the initial and final quantum numbers, and $\omega_{1,2}$ and $f_{1,2}$ are the initial and final frequencies of the electron gas oscillations.
Since the total wave function of two electrons, which also includes their spins, should be anti-symmetric, the coordinate wave functions in Eq.  which are symmetrical correspond to anti-parallel spins. For the same reason, the anti-symmetrical functions in Eq.  describe particles with parallel spins. It is worth mentioning that in calculating the additional scattering energy  electrons should be associated with the perturbative wave function  rather than with the total wave function . However, we will keep the notation $\psi$ for an electron wave function to avoid cumbersome formulas. Using the basis wave functions , Eqs.  and , as well as the known value of the integral, $$\int_0^{\infty}\frac{\mathrm{d}s}{s}\sin(ps)\sin(ks)\sin(qs) = \frac{\pi}{8} [\mathrm{sign}(p+k-q)+\mathrm{sign}(p+q-k)- \mathrm{sign}(p+k+q)-\mathrm{sign}(p-k-q)],$$ we obtain for the additional energy the following expression: $$\begin{aligned} \label{addE2} \Delta E = & \delta(f_1+f_2-\omega_1-\omega_2) A_{q_1}^{*{}}A_{q_2}^{*{}}A_{k_1}A_{k_2} \frac{e^2\pi^4}{16} \int_0^\infty \frac{\mathrm{d}p}{p^2} \notag \\ & \times \bigg\{ \frac{1}{\varepsilon_l(p,f_1-\omega_1)} [\mathrm{sign}(p+k_1-q_1)+\mathrm{sign}(p+q_1-k_1) \notag \\ & - \mathrm{sign}(p+k_1+q_1)-\mathrm{sign}(p-k_1-q_1)]\times [1 \leftrightarrow 2] \notag \\ & \pm \frac{1}{\varepsilon_l(p,f_2-\omega_1)} [\mathrm{sign}(p+k_2-q_1)+\mathrm{sign}(p+q_1-k_2) \notag \\ & - \mathrm{sign}(p+k_2+q_1)-\mathrm{sign}(p-k_2-q_1)]\times [1 \leftrightarrow 2] \bigg\},\end{aligned}$$ where the symbol $[1 \leftrightarrow 2]$ stands for terms analogous to those preceding it, with every quantity carrying the subscript $1$ replaced by the corresponding quantity with the subscript $2$. We suggest that the electrons move towards each other before the collision and in opposite directions after it, i.e. $k_1=-k_2=k$ and $q_1=-q_2=q$. In this situation the electrons occupy the lowest energy state (see, e.g., Ref. [@LifPit78p191]).
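The sign-function identity above, and its specialization to equal momenta used in the next step, can be encoded directly. A small sketch (the function names are ours): for positive arguments the integral equals $\pi/4$ inside the "triangle" region $|k-q| < p < k+q$ and vanishes outside it, while for $q = k$ the squared bracket reduces to $4$ on $0 < p < 2k$ and to $0$ for $p > 2k$.

```python
import math

def sign(u):
    # sign(0) = 0, matching the convention of the identity in the text
    return (u > 0) - (u < 0)

def triple_sine_integral(p, k, q):
    """Closed form of the integral over s of sin(ps) sin(ks) sin(qs) / s:
    (pi/8) [sign(p+k-q) + sign(p+q-k) - sign(p+k+q) - sign(p-k-q)]."""
    return math.pi / 8.0 * (sign(p + k - q) + sign(p + q - k)
                            - sign(p + k + q) - sign(p - k - q))

def bracket_equal_momenta(p, k):
    """The factor [1 - sign(p - 2k)]^2 appearing after setting q = k:
    4 for p < 2k and 0 for p > 2k, which truncates the momentum
    integral in the additional energy at p = 2k."""
    return (1 - sign(p - 2 * k)) ** 2
```

The vanishing outside the triangle region is what restricts the surviving terms in the expression for $\Delta E$.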
Taking into account that the frequencies and the coefficients $A_k$ are even functions of $k$ \[see Eq. \], we rewrite the expression for the additional energy in the form, $$\begin{aligned} \label{addE3} \Delta E = & \delta(f-\omega) |A_{q}|^2 |A_{k}|^2 \frac{e^2\pi^4}{16} \int_0^\infty \frac{\mathrm{d}p}{p^2} \frac{1}{\varepsilon_l(p,f-\omega)} \notag \\ & \times [\mathrm{sign}(p+k-q)+\mathrm{sign}(p+q-k)- \mathrm{sign}(p+k+q)-\mathrm{sign}(p-k-q)]^2,\end{aligned}$$ where $\omega=\omega_1=\omega_2$ and $f=f_1=f_2$. It is important to note that only the symmetric wave functions from Eq.  contribute to Eq. . The anti-symmetric wave functions make Eq.  vanish. It means that only electrons with oppositely directed spins contribute to the additional energy. Now we should eliminate the energy conservation delta function. This can be done in the standard way, $\delta(f-\omega) \to \text{Time}/(2\pi)$, dividing the whole expression by the observation time. Since energy is conserved in the scattering, one has two possibilities for the momenta, $k+q=0$ or $k-q=0$. We can choose, e.g., the latter case, supposing that $k>0$. In this situation one has for $\Delta E$ the following expression: $$\Delta E = |A_{k}|^4 \frac{e^2\pi^3}{32} \int_0^\infty \frac{\mathrm{d}p}{p^2} \frac{1}{\varepsilon_l(p,0)} [1-\mathrm{sign}(p-2k)]^2.$$ Finally, accounting for the step function in the integrand, we can present the additional energy in the form, $$\label{addE4} \Delta E = |A_{k}|^4 \frac{e^2\pi^3}{8} \int_0^{2k} \frac{\mathrm{d}p}{p^2} \frac{1}{\varepsilon_l(p,0)}.$$ Depending on the sign of the additional energy one has either an effective repulsion or an effective attraction. As we found in Ref. [@Dvo], the typical frequencies of electron gas oscillations in our problem are $\sim 2\omega_p$, which is $\sim 10^{13}\thinspace\text{s}^{-1}$. Such frequencies lie deep inside the microwave region.
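If, as in the next paragraph, the static permittivity is taken to be momentum independent, the remaining integral in the last display is elementary, $\int_{p_{\min}}^{2k}\mathrm{d}p/p^2 = 1/p_{\min} - 1/2k$, where $p_{\min} = 1/a_D$ is the infrared cutoff introduced below. A sketch of the regularized expression (the function name, argument list, and sample numbers are illustrative):

```python
import math

def delta_E(A_k, k, eps_static, p_min, e=4.8032e-10):
    """Regularized pair energy  Delta E = |A_k|^4 (e^2 pi^3 / 8)
    * (1/eps_l) * (1/p_min - 1/(2k)),  i.e. the last display with a
    constant static permittivity pulled out of the integral and the
    lower limit cut off at p_min = 1/a_D (CGS units)."""
    if 2.0 * k <= p_min:
        return 0.0  # the cut-off removes the whole integration range
    return (abs(A_k) ** 4 * e**2 * math.pi**3 / 8.0 / eps_static
            * (1.0 / p_min - 1.0 / (2.0 * k)))

# A negative static permittivity (omega_p >> nu) makes Delta E < 0,
# i.e. the plasmon exchange yields an effective attraction.
print(delta_E(1.0, 1.0, -0.5, 0.1) < 0)   # -> True
```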
The permittivity of plasma in this kind of situation has the following form (see, e.g., Ref. [@Gin60]): $$\label{perm} \varepsilon_l(\omega)=1- \frac{\omega_p^2}{\omega^2+\nu^2},$$ where $\nu$ is the transport collision frequency. In Eq.  we neglect the spatial dispersion. It is clear that in Eq.  one has to study the static limit, i.e. $\omega \to 0$. Supposing that the density of electrons is relatively high, i.e. $\omega_p \gg \nu$, one gets that the permittivity of plasma becomes negative and the interaction of two electrons results in an effective attraction. Note that in the absence of spatial dispersion the integral diverges at the lower limit. Such a divergence is analogous to the infrared divergence in quantum electrodynamics. We have to regularize it by the substitution $0 \to 1/a_D$, where $a_D=\sqrt{k_B T/(4\pi e^2 n_e)}$ is the Debye length and $T$ is the plasma temperature. Besides the energy $\Delta E$, which is acquired in a collision, an electron also has its kinetic energy and the energy of the interaction with background charged particles. For the effective attraction to take place we should compare the additional negative energy with the eigenvalue of the operator $H_0$, which includes both the kinetic term and the interaction with other electrons and ions. We suggest that the two electrons which are involved in the scattering move towards each other, i.e. $\omega_1=\omega_2=\omega$. Therefore, if we get that $2\hbar\omega + \Delta E < 0$, the effective attraction takes place. Calculating the integral in Eq.  one obtains the following inequality to be satisfied for the attraction to happen: $$\label{ineq1} x<\frac{\xi^2 \pi^2}{256 \alpha}\frac{n_0 \nu^2 \hbar v}{m \omega_p^4} \frac{(y-\tilde{y})}{y^5}, \quad \tilde{y} = \frac{1}{2}\sqrt{\frac{\hbar\omega_p}{2 m v^2}},$$ where $v = \sqrt{k_B T/m}$ is the thermal velocity of background electrons in plasma and $k_B \approx 1.38 \times 10^{-16}\thinspace\text{erg/K}$ is the Boltzmann constant. In Eq. 
we use the variables $x$ and $y$ which were introduced in Sec. \[MODEL\]. We also assume that $A_k^2 = n_c/k^2$, where $n_c$ is the density in the central region \[see Eq. \]. As predicted in our work [@Dvo], $n_c$ should have bigger values than the density on the rim of the system. We can also introduce two dimensionless parameters, $\xi$ and $\alpha$, to relate the density in the center, the density of free electrons, and the rim density: $n_c = \xi n_0$ and $n_e = \alpha n_0$. We suppose that $\alpha \sim 0.1$. It is known (see, e.g., Ref. [@PEv3]) that the number density of free electrons in a spark discharge is equal to $10^{16}-10^{18}\thinspace\text{cm}^{-3}$. Approximately the same electron number densities are attained in a streak lightning. If one takes the rim density equal to the Loschmidt constant $n_0=2.7 \times 10^{19}\thinspace\text{cm}^{-3}$, we arrive at the maximal value of $\alpha \sim 0.1$. In this case mainly the collisions with neutral atoms contribute to the transport collision frequency. One can approximate this quantity as $\nu \approx v \sigma n_c$, where $\sigma = \pi R^2$ is the characteristic collision cross section and $R \sim 10^{-8}\thinspace\text{cm}$ is the typical atomic radius. Note that the process in question happens in the center of the system. That is why we take the central number density $n_c$ in the expression for $\nu$. Finally we obtain the following inequality to be satisfied for the effective attraction between two electrons to take place: $$\begin{gathered} x < K \frac{\tau^{3/2}\xi^4}{\alpha^3}\frac{1}{y^5} \left( y-y_0 \frac{\alpha^{1/4}}{\tau^{1/2}} \right), \quad y_0 = \frac{1}{2}\sqrt{\frac{\hbar\omega_0}{2 m v_0^2}} \approx 0.63, \notag \\ \label{ineq2} K = \frac{\pi^2}{256}\frac{\hbar n_0^3 v_0^3\sigma^2}{m \omega_0^4} \approx 1.1 \times 10^{-11},\end{gathered}$$ where $\tau = T/T_0$. In Eq.  we take the reference temperature $T_0=700\thinspace\text{K}$.
Now one calculates the other reference quantities $v_0$ and $\omega_0$ as $v_0 = \sqrt{k_B T_0/m} \approx 10^7\thinspace\text{cm/s}$ and $\omega_0 = \sqrt{4 \pi e^2 n_0/m} \approx 2.9 \times 10^{14}\thinspace\text{s}^{-1}$. As one can see in Fig. \[fig1\] \[see also Eq. \], the parameter $y$ is always greater than $1$ for the upper branch. It means that the inequality in Eq.  can never be satisfied. On the contrary, there is a possibility to satisfy the condition  for the lower branch in Fig. \[fig1\]. It signifies the existence of the effective attraction between electrons. In Fig. \[fig2\] we plot the allowed regions where the effective attraction happens. ![\[fig2\] The allowed regions in the $(\xi,x)$ plane for the existence of the effective attraction between two electrons performing quantum oscillations corresponding to the lower branch in Fig. \[fig1\]. We also add one more vertical axis on the right hand side demonstrating the behaviour of the quantum number $k$. Three zones for the temperatures $T=500\thinspace\text{K}$, $600\thinspace\text{K}$ and $700\thinspace\text{K}$ are shown here.](figure2.ps) This plot was created for the constant value of $\alpha = 0.1$. We analyze the condition  numerically. The calculation is terminated when we reach the point $x=2$. This has a physical sense, since oscillations cannot exist for $x<2$ according to Eq.  (see also Fig. \[fig1\]). One can notice that there is always a critical value of the central density. It is equal to $120 n_0$, $105 n_0$ and $95 n_0$ for the temperatures $T=500\thinspace\text{K}$, $600\thinspace\text{K}$ and $700\thinspace\text{K}$ respectively. It means that in the center of the system there is an additional pressure of about $100\thinspace\text{atm}$. One has to verify the validity of the used approach by checking that $y>\tilde{y}$ in Eq. . This would signify that the quantum number $k$ is greater than $1/2a_D$.
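The reference constants quoted in this section can be re-derived from standard CGS values. The sketch below (the variable names are ours, and the physical constants are assumed handbook values) reproduces $v_0 \approx 10^7\thinspace\text{cm/s}$, $\omega_0 \approx 2.9 \times 10^{14}\thinspace\text{s}^{-1}$, $y_0 \approx 0.63$, and the order of magnitude $K \sim 10^{-11}$; the exact prefactor of $K$ depends on the rounding of $v_0$ and $\omega_0$.

```python
import math

# Standard CGS constants (assumed handbook values)
e    = 4.8032e-10   # electron charge, esu
m    = 9.1094e-28   # electron mass, g
hbar = 1.0546e-27   # Planck constant / 2 pi, erg s
k_B  = 1.3807e-16   # Boltzmann constant, erg/K

T0 = 700.0     # reference temperature, K
n0 = 2.7e19    # rim density = Loschmidt constant, cm^-3
R  = 1.0e-8    # typical atomic radius, cm

v0    = math.sqrt(k_B * T0 / m)                    # thermal velocity
w0    = math.sqrt(4.0 * math.pi * e**2 * n0 / m)   # plasma frequency at n0
y0    = 0.5 * math.sqrt(hbar * w0 / (2.0 * m * v0**2))
sigma = math.pi * R**2                             # collision cross section
K     = (math.pi**2 / 256.0) * hbar * n0**3 * v0**3 * sigma**2 / (m * w0**4)

print(round(y0, 2))   # -> 0.63
```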
The values of $\tilde{y}$ are $0.42$, $0.38$ and $0.35$ for $T=500\thinspace\text{K}$, $600\thinspace\text{K}$ and $700\thinspace\text{K}$ respectively. Comparing these values with the curves in Fig. \[fig2\] one can see that the condition $y>\tilde{y}$ is satisfied throughout the allowed zone of the effective attraction existence. For the effective attraction to happen one should have negative values of $\varepsilon_l(0)$ in Eq. . Using Eq.  one can conclude that the following inequality should be satisfied: $\nu < \omega_p$, or equivalently $\xi < 3.3 \times 10^{3} \sqrt{\alpha/\tau}$. For $\alpha = 0.1$ and the temperatures $T=500\thinspace\text{K}$, $600\thinspace\text{K}$ and $700\thinspace\text{K}$ we get the critical values of $\xi$ as $1.2 \times 10^3$, $1.1 \times 10^3$ and $1.0 \times 10^3$ respectively. One sees in Fig. \[fig2\] that all the curves lie far below the critical values of $\xi$. Therefore the condition $\varepsilon_l < 0$ is always satisfied. Summary and discussions {#CONCL} ======================= In conclusion, we have studied spherically symmetrical quantum oscillations of electron gas in plasma which correspond to a low energy solution to the Schrödinger equation [@Dvo]. We suggested that two independent plasma excitations, corresponding to two separate electrons, can interact via the exchange of a plasmon. Since the typical oscillation frequencies are very high and lie deep in the microwave region, the plasma permittivity is negative and the interaction of two electrons results in the appearance of an attraction. To check whether this attraction is operative, we compared it with the kinetic energy of the electrons and the energy of the interaction of the electrons with other background charged particles. The total energy of an electron pair turned out to be negative for the lower branch of the dispersion relation (see Fig. \[fig1\] and Ref. [@Dvo]). Although we did not demonstrate it explicitly, using the same technique as in Sec.
\[LE\] one can check that the total energy of the electrons is always positive for the upper branch. It means that even the additional negative energy does not cause an attraction between the electrons. It is interesting to mention that, to obtain the master equations  and  in Ref. [@Dvo], we neglected the exchange effects between the electron which corresponds to the wave function $\psi$ and the rest of the background electrons (see also Ref. [@KuzMak99]). This crude approximation is valid if the superconductivity takes place, because in this case the friction, i.e. the interaction, between oscillating and background electrons is negligible. The proposed plasmon superconductivity arising in spherically symmetrical oscillations of electrons could be implemented in a low energy ball lightning. Theoretical and experimental studies of ball lightnings have a long history (see, e.g., Ref. [@Tur98]). This natural phenomenon, happening mainly during a thunderstorm, is very rare, and there are not many reliable eyewitness accounts of it. There have been numerous attempts to generate stable structures similar to a ball lightning in the laboratory. Many theoretical models aiming to describe the observational data have been put forward. In putting forward a ball lightning model one should try to explain these properties on the basis of the existing physical ideas without invoking extraordinary concepts (see, e.g. Ref. [@Rab99]), however exciting they may look. However, none of the available models can explain all the specific properties of a fireball. Among the existing ball lightning models one can mention the aerogel model [@Smi93]. According to this model, fractal fibers of the aerogel can form a knot representing the skeleton of a ball lightning. Using this model it is possible to explain some of the ball lightning features. An interesting model of a ball lightning having a complex onion-like structure with multiple different layers was put forward in Ref. [@Tur94].
However, in this model an external source of electrical energy is necessary for a ball lightning to exist for a long time. The hypothesis that a ball lightning is composed of molecular clusters was described in Ref. [@Sta85]. This model can explain the existence of a low energy ball lightning. In Ref. [@RanSolTru00] a sophisticated ball lightning model was proposed, which is based on a non-trivial magnetic field configuration in the form of closed magnetic lines forming a knot. Within this model one can account for the relatively long life time of a ball lightning. It is worth noticing that, though the properties of low energy fireballs can be accounted for within the models mentioned above, these models are unlikely to explain their regular geometrical shape. The author of the present work has never observed a ball lightning; however, according to eyewitnesses a fireball resembles a regular sphere. There are many other ball lightning models, which are outlined in Ref. [@Smi88]. The most interesting ball lightning properties are (see, e.g., Ref. [@Bar80]) - There are both high and low energy ball lightnings. The energy of a low energy ball lightning can be below several hundred kJ. However, the energy of a high energy one can be up to $1\thinspace\text{MJ}$. - There are eyewitness reports that a ball lightning can penetrate a window glass. Sometimes it uses existing microscopic holes without losing its shape. It means that the actual size of a ball lightning is rather small and its visible size of several centimeters is caused by some secondary effects. Quite often a ball lightning can burn tiny holes in materials like glass to pass through them. It signifies that the internal temperature and pressure are very high in the central region. - A fireball is able to follow the electric field lines. This is an indirect indication that a ball lightning has an electromagnetic nature, e.g., consists of plasma.
- A ball lightning has a very long lifetime, of about several minutes. In Sec. \[INTR\] we mentioned that unstructured plasma without an external energy source loses its energy and recombines extremely fast. If we rely on the plasma models of a ball lightning, it means that either there is an energy source inside the system, or the plasma is structured and there exists a mechanism which prevents the friction in the electron motion. - In some cases there were reports that a ball lightning could produce rather strong electromagnetic radiation, even in the X-ray range. It might signify that energetic processes, e.g. nuclear fusion reactions, can happen inside a fireball. Quantum oscillations of electron gas in plasma [@Dvo] can be a suitable model for a ball lightning. For example, it explains the existence of two types of ball lightnings, low and high energy ones. Our model accounts for the very small $\sim 10^{-7}\thinspace\text{cm}$ and dense central region, as well as indirectly pointing to possible microdose nuclear fusion reactions inside a high energy ball lightning. In the present paper we suggested that superconductivity could support the long lifetime of a low energy fireball and showed that this phenomenon can exist within the framework of our model. It is unlikely that nuclear fusion reactions can take place inside a low energy ball lightning, i.e. this type of ball lightning cannot have an internal energy source. Moreover, as we mentioned in Ref. [@Dvo], electrons participating in spherically symmetrical oscillations cannot emit electromagnetic radiation. It means that the system does not lose energy in the form of radiation. As a direct consequence of our model we get the regular spherical form of a fireball for free. In Sec. \[LE\] we performed numerical simulations for the temperature range $500-700\thinspace\text{K}$. According to the fireball witnesses [@Bar80], some low energy ball lightnings do not combust materials like paper, wood, etc.
The combustion temperatures of these materials lie in the range mentioned above. This justifies our assumption. Note that ball lightning seems to be a many-sided phenomenon, and we do not claim that our model explains all the existing electrical atmospheric phenomena which look like a ball lightning. However, in our opinion, a certain class of fireballs with the properties listed above is satisfactorily accounted for within the model based on quantum oscillations of electron gas in plasma. The work has been supported by Conicyt (Chile), Programa Bicentenario PSD-91-2006. The author is very thankful to Sergey Dvornikov and Viatcheslav Ivanov for helpful discussions. [40]{} L. N. Cooper, Phys. Rev. **104**, 1189 (1956). V. L. Ginzburg, Phys. Usp. **43**, 573 (2000). B. M. Smirnov, Phys. Rep. **224**, 1 (1993). E. A. Pashitskiĭ, JETP Lett. **55**, 333 (1992). M. Dvornikov, S. Dvornikov, and G. Smirnov, Appl. Math. & Eng. Phys. **1**, 9 (2001), physics/0203044; M. Dvornikov and S. Dvornikov, in *Advances in plasma physics research, vol. 5*, ed. by F. Gerard (Nova Science Publishers, Inc., 2007), pp. 197–212, physics/0306157. M. D. Altschuler, L. L. House, and E. Hildner, Nature **228**, 545 (1970); Yu. L. Ratis, Phys. Part. Nucl. Lett. **2**, 64 (2005); the up-to-date discussion about nuclear fusion reactions as the energy source of a ball lightning is given in A. I. Nikitin, J. Russ. Laser Research **25**, 169 (2004). G. C. Dijkhuis, Nature **284**, 150 (1980); M. I. Zelikin, J. Math. Sci. **151**, 3473 (2008). L. S. Kuz’menkov and S. G. Maksimov, Theor. Math. Phys. **118**, 227 (1999). E. M. Lifschitz and L. P. Pitaevskiĭ, *Statistical physics, Part II* (Moscow, Nauka, 1978), pp. 370–376; for the more contemporary treatment see E. Braaten and D. Segel, Phys. Rev. D **48**, 1478 (1993), hep-ph/9302213. See p. 191 in Ref. [@plasmonprop]. V. L. Ginzburg, *Propagation of electromagnetic waves in plasma* (Moscow, Fizmatgiz, 1960), pp. 26–35. B. M.
Smirnov, in *Physics encyclopedia*, vol. 3, ed. by A. M. Prokhorov, (Moscow, Bolshaya Rossiĭskaya entsiklopediya, 1992), pp. 350–355. D. J. Turner, Phys. Rep. **293**, 1 (1998). J. D. Barry, *Ball lightning and bead lightning* (Plenum Press, New York, 1980). M. Rabinowitz, Astrophys. Space Sci. **262**, 391 (1999), astro-ph/0212251. D. J. Turner, Phil. Trans. R. Soc. Lon. A **347**, 83 (1994). I. P. Stakhanov, *The physical nature of a ball lightning* (Moscow, Energoatomizdat, 1985), 2nd ed., pp. 170–190. A. F. Rañada, M. Soler, and J. L. Trueba, Phys. Rev. E **62**, 7181 (2000). B. M. Smirnov, *The problem of a ball lightning* (Moscow, Nauka, 1988), pp. 41–57.
--- author: - | Steffen Zeuch[$^{1,2}$]{} Ankit Chaudhary[$^{1}$]{} Bonaventura Del Monte[$^{1,2}$]{} Haralampos Gavriilidis[$^{1}$]{}\ Dimitrios Giouroukis[$^{1}$]{} Philipp M. Grulich[$^{1}$]{} Sebastian Bre[ß]{}[$^{1}$]{} Jonas Traub[$^{1,2}$]{} Volker Markl[$^{1,2}$]{}\ bibliography: - 'sigproc.bib' title: 'The NebulaStream Platform: Data and Application Management for the Internet of Things' ---
--- abstract: 'Miniature Hall-probe arrays were used to measure the critical current densities for the three main directions of vortex motion in the stoichiometric LiFeAs superconductor. These correspond to vortex lines along the $c$-axis moving parallel to the $ab$-plane, and to vortex lines in the $ab$–plane moving perpendicular to, and within the plane, respectively. The measurements were carried out in the low-field regime of strong vortex pinning, in which the critical current anisotropy is solely determined by the coherence length anisotropy parameter, $\varepsilon_{\xi}$. This allows for the extraction of $\varepsilon_{\xi}$ at magnetic fields far below the upper critical field $B_{c2}$. We find that increasing the magnetic field decreases the anisotropy of the coherence length.' author: - 'M. Kończykowski' - 'C. J. van der Beek' - 'M. A. Tanatar' - 'V. Mosser' - Yoo Jang Song - Yong Seung Kwon - 'R. Prozorov' date: - - 30 September 2011 title: Anisotropy of the coherence length from critical currents in the stoichiometric superconductor LiFeAs --- The determination of the electronic anisotropy in the superconducting state is a fundamental problem in multi-band type-II superconductors that has attracted much interest with the discovery of the iron-based materials.[@Johnston2010review] In single band materials with an ellipsoidal Fermi surface, one can describe the anisotropy using the ratio $\varepsilon \equiv (m/M)^{1/2} < 1$ of the electron effective masses, provided that transport along the anisotropy ($c$–) axis of the material is coherent.[@Blatter94] This, however, yields an oversimplified picture in which the anisotropy is temperature–independent.
In multi-band superconductors, the contribution of electronic bands with different, $k-$dependent Fermi velocities and gap values leads to different ratios $\varepsilon_{\lambda}(T) = \lambda_{ab}/ \lambda_{c}$ and $\varepsilon_{\xi}(T) = \xi_{c}/ \xi_{ab}$ of the in–plane and $c$-axis London penetration depths $\lambda_{ab,c}(T)$ and coherence lengths $\xi_{ab,c}(T)$. The low temperature value of the penetration depth anisotropy $\varepsilon_{\lambda}(0) = \varepsilon \left( v_{F,c} / v_{F,ab} \right)$ is determined by the anisotropy of the Fermi velocity, while its temperature dependence reflects the relative probabilities of quasi-particle excitation in the two directions. On the other hand, the coherence length anisotropy $\varepsilon_{\xi} \sim \left(v_{F,c} / v_{F,ab} \right)\left( \Delta_{c} / \Delta_{ab} \right)$ directly depends on the anisotropy of the superconducting gap $\Delta$. As a result of the changing weight of superconductivity on different Fermi surface sheets and that of intra- and interband scattering, both $\varepsilon_{\xi}$ and $\varepsilon_{\lambda}$ are temperature[@GurevichMgB2review; @Kogan2002anisotropy] and field-dependent,[@Nojima2006] behavior exemplified by MgB$_2$,[@Nojima2006; @Hc2MgB2anisotropy; @Fletcher2005MgB2] the iron-based superconductors, [@Okazaki2009; @Cho2011_LiFeAs_Hc2; @Kurita2011Hc2; @Khim2011; @Song2011EPL; @Prozorov2011ROPP] and, possibly, NbSe$_{2}$.[@Sonier2005] Experimentally, the anisotropy parameter $\varepsilon_{\xi}$ is usually determined from the ratio of the $c-$axis and $ab-$plane upper critical fields, $B_{c2}^{\parallel c} = \Phi_{0}/2\pi\xi_{ab}^{2}$ and $B_{c2}^{\parallel ab} = \Phi_{0}/2\pi\xi_{ab}\xi_{c}$,[@Cho2011_LiFeAs_Hc2; @Kurita2011Hc2; @Khim2011] while the ratio of the lower critical fields $B_{c1}^{\parallel c} = (\Phi_{0}/4\pi\mu_{0}\lambda_{ab}^{2})\ln \kappa_{ab}$ and $B_{c1}^{\parallel ab} = (\Phi_{0}/4\pi\mu_{0}\lambda_{ab}\lambda_{c})\ln \kappa_{c}$ is used to evaluate 
$\varepsilon_{\lambda}$.[@Okazaki2009; @Song2011EPL] Here, $\Phi_{0} = h/2e$ is the flux quantum, $\kappa_{ab} = \lambda_{ab}/\xi_{ab}$ and $\kappa_{c} = (\lambda_{ab}\lambda_{c}/\xi_{ab}\xi_{c})^{1/2}$. Another approach is the direct measurement of $\lambda$ using differently oriented ac fields.[@Prozorov2011ROPP] Hence, $\varepsilon_{\lambda}$ is usually obtained from measurements at low reduced fields $B/B_{c2}$, while $\varepsilon_{\xi}$ is extracted from data in the high field regime close to $B_{c2}$. Below, we show that $\varepsilon_{\xi}$ at low fields can be accessed by direct measurements of the critical current density along three principal directions: $j_{ab}^{c}$ for vortex lines along the $c$-axis moving parallel to the $ab$-plane, $j_{ab}^{ab}$ for vortices parallel to the $ab$–plane and moving parallel to the $c$-axis, and $j_{c}^{ab}$ for vortices again parallel to the $ab$–plane, but moving within the plane. Experimentally, this is not a trivial task, as the signal from usual bulk magnetometry for ${\bf B} \parallel ab$ will always involve contributions from both $j_{ab}^{ab}$ and $j_{c}^{ab}$. In Fe-based superconductors, the only work that we are aware of uses transport measurements of the three critical currents in mesoscopic bridges fashioned by focused-ion beam (FIB) lithography in Sm-1111 single crystals.[@NatureComm] In what follows, we report on *contactless* measurements using miniature Hall-probe arrays, with the same single crystal positioned in different orientations, which allow one to unambiguously measure the critical current density for the three different situations. In order to analyze the critical current density, we have rederived known expressions for the respective cases of weak-[@Blatter94] and strong [@Ovchinnikov91; @vdBeek2002] vortex pinning, for the three relevant magnetic field and current orientations. 
In doing so, we keep track of $\lambda_{ab,c}(T)$ and $\xi_{ab,c}(T)$ as they appear, combining them into the ratios $\varepsilon_{\lambda}$ and $\varepsilon_{\xi}$ only as a final step.[@tbp] It turns out that in the regime of strong pinning by extrinsic nm-scale defects, the anisotropy $j_{ab}^{ab}/j_{c}^{ab}$ directly yields $\varepsilon_{\xi}$. In iron-based superconductors, this pinning mechanism is relevant at low magnetic fields.[@Kees; @Kees1] At intermediate fields, weak pinning due to scattering by dopant atoms dominates the critical current.[@Kees; @Kees1] Then $\varepsilon_{\xi}$ is the main (but not the only) contribution to $j_{ab}^{ab}/j_{c}^{ab}$. In order to obtain unambiguous results, one should thus make sure that the critical current is measured in the limit of strong pinning. Thus, we have chosen a superconducting system with reduced intrinsic scattering, in the guise of the (tetragonal) stoichiometric compound LiFeAs.[@Wang2008] Angle-resolved photoemission [@Borisenko2010], London penetration depth [@Kim2011LiFeAsLambda] and first critical field measurements [@Song2011EPL] have shown that this is a fully gapped two-band superconductor with moderate anisotropy. 
One of the cylindrical hole surfaces centered on the $\Gamma$-point has the smaller gap value of $\Delta = 1.5$ meV, while the gap on the more dispersive electron surface around the $M$-point has $\Delta = 2.5$ meV.[@Borisenko2010] Measurements of the anisotropic upper critical field show that $H_{c2}$ is of mostly orbital character for $H \parallel c-$axis, and Pauli limited for $H \perp c$;[@Cho2011_LiFeAs_Hc2; @Kurita2011Hc2; @Khim2011] there is evidence for the Fulde-Ferrell-Larkin-Ovchinnikov state for the latter configuration.[@Cho2011_LiFeAs_Hc2] A second peak effect (SPE) or “fishtail” was reported from magnetization measurements.[@Pramanik2010LiFeAs] For $H \parallel c$, the critical current densities range from $\sim 1$[@Song2010] to $\sim 100$ kA/cm$^{2}$.[@Pramanik2010LiFeAs] This might be indicative of different defect structures in crystals obtained in different growth procedures. Measurements of the Campbell length on our crystals have shown an even higher “theoretical” critical current density of $1 \times 10^{3}$ kA/cm$^{2}$.[@Prommapan] ![(Color online) Lower inset: Experimental scheme, with the three positions of the Hall array (shown as a thick black line with intersecting segments) used to probe the $j_{c}$ for the three possible orientations, as described in the text. Upper inset: Successive profiles of the magnetic induction, obtained on warming after initial zero-field cooling and the application of an external field, $\mu_{0}H_{a} = 2$ T $\parallel c$. This configuration probes $j_{ab}^{c}$. Main panel: Hysteresis loops of the in–plane local gradient $dB/dx$ for $\mu_{0}H_{a}\parallel c$. []{data-label="fig1"}](fig1.pdf){width="1.35\linewidth"} Single crystals of LiFeAs were grown in a sealed tungsten crucible using the Bridgman method [@Song2011EPL; @Song2010] and were transported in sealed ampoules.
Immediately after opening, a $0.16 \times 0.19\times 0.480$ mm$^{3}$ rectangular parallelepiped sample was cut with a wire saw, washed and protected in mineral oil. Crystals from the same batch were used for transport as well as AC and DC magnetization measurements. Overall, samples from three different batches were measured, yielding consistent results. The Hall probe arrays were tailored in a pseudomorphic AlGaAs/InGaAs/GaAs heterostructure using proton implantation. The 10 Hall sensors of the array, spaced by either 10 or 20 $\mu$m, had an active area of 3 $\times$ 3 $\mu$m$^{2}$, while an 11$^{\mathrm{th}}$ sensor located far from the others was used for the measurement of the applied field. The LiFeAs crystal was positioned appropriately for the measurement of the critical current density in each of the different orientations, as illustrated in the inset to Fig. \[fig1\]. For the measurement of $j_{ab}^{c}$, the crystal was centered with its $ab$-face on the sensor array, with the array perpendicular to the long edge. For the measurement of $j_{c}^{ab}$ and $j_{ab}^{ab}$, the crystal was centered with its $ac$–face on the array, with the array perpendicular to $c$ and to $ab$, respectively. In all configurations, the local magnetic induction $B$ perpendicular to the Hall sensors (and to the sample surface) was measured along a line across the sample face, in fields up to 2.5 T. ![(Color online) Main panel: Hysteresis loops of $dB/dx \parallel ab$, for ${\bf B} \parallel ab$, after zero field–cooling, measured at 4.2, 6, 8, 10, and 12 K. The right-hand ordinate shows the value of the corresponding current density $j_{c}^{ab}$. Upper inset: Profiles of the sample “self–field” $B - \mu_{0}H_{a}$ on the decreasing field branch (third quadrant), at various $H_{a}$–values. Lower inset: Profiles of the “self–field” on the increasing field branch (first quadrant), at various $H_{a}$–values. []{data-label="fig2"}](fig2.pdf){width="1.35\linewidth"} The top inset in Fig. 
\[fig1\] shows typical profiles of $B$ measured after cooling in zero magnetic field (ZFC), application of an external field $\mu_{0}H_{a} = 2$ T $\parallel c$, and warming. The straight-line profiles are quite regular and conform to the Bean model,[@Bean62; @Zeldov94] which implies a homogeneous critical current density that is practically field-independent over the range of $B$–values in the crystal. To obtain the local screening current, we plot the spatial gradient $dB/dx$ versus $B$. The main panel in Fig. \[fig1\] shows representative hysteresis loops of $dB/dx$ measured at 4.2, 8 and 12 K. The right ordinate shows the value of the corresponding current density $j_{ab}^{c} = (2/\mu_{0}) dB/dx$. The factor 2 corresponds to the case when $B$ is measured on the end surface of a semi-infinite superconducting slab; a more precise evaluation can be done using the results of Brandt.[@Brandt98] The $j_{ab}^{c}$–values, of the order of $100$ kA/cm$^{2}$, are similar to those obtained from global measurements in the same configuration.[@Pramanik2010LiFeAs] Because of flux creep, the measured current densities are slightly reduced with respect to the “true” critical current density, by a multiplicative factor determined by the effective experimental time scale (here, about 3 s).[@vdBeek92] The creep rate is rather modest;[@Pramanik2010LiFeAs] in our experiment, it amounts to 2-4 % per decade of time, and is similar for $j_{ab}^{ab}$ and $j_{c}^{ab}$, so that the ratio $j_{ab}^{ab}/j_{c}^{ab}$ we shall be interested in is not appreciably altered.
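As a rough numerical illustration (ours, not the authors'; the only inputs are the factor 2 quoted above and a unit conversion), the Bean-model relation $j = (2/\mu_{0})\,dB/dx$ can be sketched as follows:

```python
# Sketch: Bean-model critical current density from a local induction gradient,
# j = (2/mu_0) dB/dx, for the semi-infinite-slab geometry described in the text.
MU_0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A


def bean_j(dBdx_T_per_m: float) -> float:
    """Critical current density in kA/cm^2 from dB/dx in T/m."""
    j_SI = 2.0 * dBdx_T_per_m / MU_0  # A/m^2
    return j_SI / 1e7                 # 1 kA/cm^2 = 1e7 A/m^2


# A gradient of ~630 T/m corresponds to j ~ 100 kA/cm^2,
# the order of magnitude quoted for j_ab^c in LiFeAs.
print(bean_j(628.3))
```

The conversion shows that the hysteresis-loop gradients of order a few hundred T/m indeed translate into the $\sim 100$ kA/cm$^{2}$ scale quoted in the text.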
The shape of the $dB/dx$-hysteresis loop is very similar to that obtained for other iron-based superconductors.[@Kees; @Kees1] It is characterized by a sharp maximum of the critical current density for $|B| \lesssim 6$ kG, behavior characteristic of a dominant contribution from strong pinning[@Ovchinnikov91; @vdBeek2002] by nm-sized inhomogeneities.[@Sultan] The constant $dB/dx$ at higher fields comes from a weak “collective” pinning contribution[@Blatter94] due to scattering of quasiparticles in the vortex cores by atomic-scale point defects.[@Kees; @Kees1] Figure \[fig2\] shows similar results for $H_{a} \parallel ab-$plane and the Hall array $\perp c$, the configuration that probes $j_{c}^{ab}$. Again, the flux density profiles are very well described by the Bean model, although in this field orientation, the critical current density is dominated by the strong pinning contribution over the whole field range. Due to the elongated slab geometry, the configuration with $H_{a} \parallel ab$ does not involve a demagnetization correction, so that the relation $j_{ab} = (2/\mu_{0}) dB/dx$ is practically exact. With $j_{c}^{ab}$ and $j_{ab}^{ab}$ both measured in this orientation, geometry-related corrections play no role in the determination of $j_{ab}^{ab}/j_{c}^{ab}$. The critical currents for the three directions are summarized in Fig. \[fig3\], for an applied field of 1 T. Clearly, $j_{ab}^{ab}$ involving vortex motion along the $c-$axis (with vortices crossing the Fe-As planes) exceeds the other two critical currents. As expected, $j_{c}^{ab}$ for easy vortex sliding along the $ab$–plane is the smallest. The critical current $j_{ab}^{c}$ goes to zero at a lower temperature, reflecting the anisotropy of the irreversibility line in this material. 
![(Color online) Local gradient of the magnetic induction measured in the three different configurations as function of temperature, for an applied field $\mu_{0}H_{a} = 1$ T: ($\circ$) $dB/dx$ along $ab$ with ${\bf B} \parallel ab$, [*i.e.*]{}, $j_{c}^{ab}$; ($\diamond$) $dB/dx$ along $c$ with ${\bf B} \parallel ab$, [*i.e.*]{}, $j_{ab}^{ab}$; ($\triangle$) $dB/dx$ along $c$ with ${\bf B} \parallel c$, [*i.e.*]{}, $j_{ab}^{c}$. []{data-label="fig3"}](fig3.pdf){width="1.4\linewidth"} The critical current ratio $j_{ab}^{ab}/j_{c}^{ab}$ for ${\bf B} \parallel ab$ is plotted in Fig. \[fig4\] for different values of the applied field. To analyze it, we first consider theoretical results derived for the case of weak collective pinning.[@Blatter94] More specifically, in the regime of field–independent “single–vortex” pinning, the softer tilt- and shear moduli for vortex motion within the $ab$–plane imply that $j_{c}^{ab} = \varepsilon j_{ab}^{ab}$.[@Blatter94] This expression does not take into account possible differences between $\varepsilon_{\lambda}$ and $\varepsilon_{\xi}$. A rederivation that keeps track of the different contributions to the anisotropy yields $j_{c}^{ab} = (\varepsilon_{\lambda}^{5/3}/\varepsilon_{\xi}^{2/3} ) j_{ab}^{c}$ and $j_{ab}^{ab} =(\varepsilon_{\lambda}/\varepsilon_{\xi})^{7/3} j_{ab}^{c}$. Hence, the anisotropy ratio $$j_{ab}^{ab}/j_{c}^{ab} = \varepsilon_{\lambda}^{2/3} / \varepsilon_{\xi}^{5/3}$$ is mainly determined by the coherence length anisotropy. In the present situation though, the strong pinning contribution dominates the critical current density. Then, the critical current density is determined by the direct sum of the elementary forces $f_{p}$ that individual inhomogeneities exert on the vortex lines.[@Ovchinnikov91; @vdBeek2002] It is given by the expression $j_{c} = (f_{p}/\Phi_{0}) n_{p}u_{0}^{2}$,[@vdBeek2002] where $n_{p}$ is the defect density, and $\Phi_{0}$ is the flux quantum.
The trapping radius $u_{0}$ is the largest distance, perpendicular to the field direction, on which a pin can be effective. The critical current anisotropy is thus determined by the anisotropy of $f_{p}$, and that of $u_{0}$. The former is determined by the anisotropy of $\lambda$ and $\xi$, and by the geometric anisotropy of the pins, $\varepsilon_{b} = \ln \left( 1 + b_{ab}^{2}/2 \xi_{ab}^{2} \right) / \ln \left( 1 + b_{ab}b_{c}/2\varepsilon_{\xi} \xi_{ab}^{2} \right) < 1$. Here, $b_{ab}$ and $b_{c}$ are the mean extent of the pins in the $ab$ and $c$–direction, respectively. At low fields, the $u_{0}$–anisotropy is determined only by that of the vortex line tension, and is therefore field-independent. We find that $j_{c}^{ab} = \varepsilon_{\lambda}^{2}\varepsilon_{b}^{-3/2} j_{s}^{c}$, while $j_{ab}^{ab} =(\varepsilon_{\lambda}^{2}/\varepsilon_{b}^{3/2}\varepsilon_{\xi}) j_{s}^{c}$. At higher fields, $u_{0}$ is determined by the intervortex interaction, leading to the ubiquitous decrease of the critical current density as $B^{-1/2}$. Then, $j_{c}^{ab} = \varepsilon_{b}^{-2} \varepsilon_{\lambda} j_{s}^{c}$, while $j_{ab}^{ab} = (\varepsilon_{\lambda}/ \varepsilon_{b}^{2}\varepsilon_{\xi}) j_{s}^{c}$. In both cases, $$j_{ab}^{ab}/j_{c}^{ab} = 1/ \varepsilon_{\xi}.$$ Thus, the experimental ratio $j_{ab}^{ab}/j_{c}^{ab}$, plotted in Fig. \[fig4\], directly measures the coherence length anisotropy. ![(Color online) Critical current ratio $j_{ab}^{ab}/j_{c}^{ab} \sim 1/\varepsilon_{\xi}$ for applied magnetic fields of ($\Diamond$) 0.5 T; ($\Box$) 1 T; ($\circ$) 2 T.[]{data-label="fig4"}](fig4.pdf){width="1.2\linewidth"} In spite of the fact that we could only evaluate the anisotropy above $T = 9$ K, it is clear that the extrapolated values of $1/\varepsilon_{\xi}$ at low temperature are of the order 1.5 – 2.
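The cancellation of $\varepsilon_{\lambda}$ and $\varepsilon_{b}$ in the strong-pinning ratio, and the weak-pinning exponent $\varepsilon_{\lambda}^{2/3}/\varepsilon_{\xi}^{5/3}$, can be verified numerically (a consistency check of ours; the trial parameter values are arbitrary):

```python
# Numeric consistency check of the critical-current anisotropy ratios quoted
# in the text, using arbitrary trial values of the anisotropy parameters.
eps_lam, eps_xi, eps_b, j0 = 0.6, 0.4, 0.7, 1.0  # trial values, 0 < eps < 1

# Weak collective pinning, in units of j_ab^c:
jc_weak = eps_lam**(5 / 3) / eps_xi**(2 / 3) * j0
jab_weak = (eps_lam / eps_xi)**(7 / 3) * j0
# The ratio equals eps_lam^(2/3) / eps_xi^(5/3):
print(jab_weak / jc_weak, eps_lam**(2 / 3) / eps_xi**(5 / 3))

# Strong pinning, low- and high-field regimes: both ratios reduce to 1/eps_xi,
# independently of eps_lam and eps_b.
jc_low = eps_lam**2 * eps_b**(-1.5) * j0
jab_low = eps_lam**2 / (eps_b**1.5 * eps_xi) * j0
jc_high = eps_lam / eps_b**2 * j0
jab_high = eps_lam / (eps_b**2 * eps_xi) * j0
print(jab_low / jc_low, jab_high / jc_high, 1 / eps_xi)  # all equal
```

The last line confirms that, in the strong-pinning regime, the measured ratio $j_{ab}^{ab}/j_{c}^{ab}$ depends only on $\varepsilon_{\xi}$.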
The anisotropy ($\sim 1/\varepsilon_{\xi}$) increases with increasing temperature to become as large as 6–7 at $T = 13$ K, an experimental upper limit imposed by the increasing role of flux creep at higher $T$. The anisotropy becomes smaller and less $T$-dependent at higher magnetic field, and merges with the results obtained from the $B_{c2}$-ratios reported in Refs.  for a field as low as 2 T. Both the magnitude and the $T$-dependence of $\varepsilon_{\xi}$ are reminiscent of that of $\varepsilon_{\lambda}$ obtained on the 1111 family of iron–based superconductors.[@Okazaki2009] Notably, $\varepsilon_{\xi}$ is strongly temperature dependent at low fields, and less so at higher magnetic fields. Since the Fermi velocity is unaffected by field, a plausible framework for our observations is the temperature- [@Komendova2011] and field-dependent relative contribution of the two superconducting gaps to the effective superconducting coherence length. In particular, the evolution of $\varepsilon_{\xi}$ suggests that the relative weight of the gap on the more two-dimensional hole surface progressively decreases as the magnetic field is increased. For fields higher than 2 T, the gap on the three-dimensional electron surface would determine all superconducting properties related to the coherence length. This is consistent with recent thermal conductivity measurements that suggest that at fields as low as $0.1 B_{c2}(0)$ ([*i.e.*]{} 2 T), LiFeAs behaves as a single band superconductor. In that limit, the anisotropy of the coherence length and of the penetration depth are expected to be similar, and rather temperature independent. 
This is indeed the trend observed in the measurements: the high-field coherence length anisotropy seems to behave very similarly to reported results for the penetration depth anisotropy.[@Sasmal2010] It is to be noted that as the magnetic field is increased, the vortex core radius should plausibly shrink, as occurs in NbSe$_{2}$.[@Sonier2005] Also, the core structure should be modified.[@Komendova2011] This does not affect the ratio of the coherence lengths discussed here. The field-dependence of $\varepsilon_{\xi}$ may explain why the weak collective pinning contribution to the critical current density is more important for fields oriented parallel to $c$. The values of $\varepsilon_{\xi}$ and $\varepsilon_{\lambda}$ are very similar at fields above 1 – 2 T at which this contribution manifests itself. Hence, the weak pinning part of the critical current should be nearly the same for the two field orientations, as in a single band superconductor. At lower fields, it should be enhanced for $H \parallel ab$, but this is not perceptible because it remains masked by the strong pinning contribution. On the other hand, strong pinning is enhanced for all values of $H \parallel ab$ because of its dependence on $\varepsilon_{\xi}$ through $\varepsilon_{b}$. In conclusion, we present a direct technique for the measurement of the critical current anisotropy in uniaxial type II superconductors. The technique crucially relies on the use of a local probe of the magnetic induction, in this case, miniature Hall probe arrays. In the situation of strong pinning by extrinsic extended point defects, the ratio of the critical current densities along the $ab$–plane and the $c$-axis, for field oriented along the $ab$-plane, directly yields the coherence length anisotropy. We apply the method to infer the coherence length anisotropy $1/\varepsilon_{\xi}$ of LiFeAs at much lower magnetic fields than commonly reported.
We interpret the results in terms of the gap anisotropy, and find that this is reduced to its value near $B_{c2}$ by the application of a magnetic field as low as 2 T. We thank V.G. Kogan for useful discussions and Dr. S. Bansropun and his group at Thales-TRT, Palaiseau for the careful processing of the Hall sensors. This work was supported by the French National Research agency, under grant ANR-07-Blan-0368 “Micromag”. The work at The Ames Laboratory was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under contract No. DE-AC02-07CH11358. Work at SKKU was partially supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0007487). The work of R. Prozorov in Palaiseau was funded by the St. Gobain Chair of the Ecole Polytechnique. [99]{} D. C. Johnston, Advances in Physics **59**, 803–1061 (2010). G. Blatter, M.V. Feigel’man, V.B. Geshkenbein, A.I. Larkin, and V.M. Vinokur, Rev. Mod. Phys. [**66**]{}, 1125 (1994). A. Gurevich, Physica C **456**, 160 (2007). V. G. Kogan, Phys. Rev. B **66**, 020509 (2002). T. Nojima, H. Nagano, A. Ochiai, H. Aoki, B. Kang, and S.-I. Lee, Physica C [**445**]{}-[**448**]{}, 42 (2006). Z. X. Shi, M. Tokunaga, T. Tamegai, Y. Takano, K. Togano, H. Kito, and H. Ihara, Phys. Rev. B **68**, 104513 (2003). J. D. Fletcher, A. Carrington, O. J. Taylor, S. M. Kazakov, and J. Karpinski, Phys. Rev. Lett. **95**, 097005 (2005). R. Okazaki, M. Konczykowski, C. J. van der Beek, T. Kato, K. Hashimoto, M. Shimozawa, H. Shishido, M. Yamashita, M. Ishikado, H. Kito, A. Iyo, H. Eisaki, S. Shamoto, T. Shibauchi, and Y. Matsuda, Phys. Rev. B [**79**]{}, 064520 (2009). K. Cho, H. Kim, M. A. Tanatar, Y. J. Song, Y. S. Kwon, W. A. Coniglio, C. C. Agosta, A. Gurevich, and R. Prozorov, Phys. Rev. B **83**, 060502 (2011). N. Kurita, K. Kitagawa, K. Matsubayashi, A. Kismarahardja, Eun-Sang Choi, J. S. Brooks, Y. Uwatoko, S.
Uji, and T. Terashima, J. Phys. Soc. Japan **80**, 013706 (2011). Seunghyun Khim, Bumsung Lee, Jae Wook Kim, Eun Sang Choi, G. R. Stewart, and Kee Hoon Kim, Phys. Rev. B [**84**]{}, 104502 (2011). Yoo Jang Song, Jin Soo Ghim, Jae Hyun Yoon, Kyu Joon Lee, Myung Hwa Jung, Hyo-Seok Ji, Ji Hoon Shim, Yunkyu Bang, and Yong Seung Kwon, Europhys. Lett. **94**, 57008 (2011). R. Prozorov and V. G. Kogan, Rep. Prog. Phys. **74**, 124505 (2011). F. D. Callaghan, M. Laulajainen, C. V. Kaiser, and J. E. Sonier, Phys. Rev. Lett. [**95**]{}, 197001 (2005). Philip J.W. Moll, Roman Puzniak, Fedor Balakirev, Krzysztof Rogacki, Janusz Karpinski, Nikolai D. Zhigadlo, and Bertram Batlogg, Nature Mat. [**9**]{}, 628 (2010). Yu. N. Ovchinnikov and B. I. Ivlev, Phys. Rev. B [**43**]{}, 8024 (1991). C. J. van der Beek, M. Konczykowski, A. Abal’oshev, I. Abal’osheva, P. Gierlowski, S. J. Lewandowski, M. V. Indenbom, and S. Barbanera, Phys. Rev. B [**66**]{}, 024523 (2002). C.J. van der Beek, M. Konczykowski, and R. Prozorov, to be published. Cornelis J. van der Beek, Marcin Konczykowski, Shigeru Kasahara, Takahito Terashima, Ryuji Okazaki, Takasada Shibauchi, and Yuji Matsuda, Phys. Rev. Lett. [**105**]{}, 267002 (2010). C. J. van der Beek, G. Rizza, M. Konczykowski, P. Fertey, I. Monnet, Thierry Klein, R. Okazaki, M. Ishikado, H. Kito, A. Iyo, H. Eisaki, S. Shamoto, M. E. Tillman, S. L. Bud’ko, P. C. Canfield, T. Shibauchi, and Y. Matsuda, Phys. Rev. B [**81**]{}, 174517 (2010). X. C. Wang, Q. Q. Liu, Y. X. Lv, W. B. Gao, L. X. Yang, R. C. Yu, F. Y. Li, and C. Q. Jin, Solid State Commun. [**148**]{}, 538 (2008). S. V. Borisenko, V. B. Zabolotnyy, D. V. Evtushinsky, T. K. Kim, I. V. Morozov, A. N. Yaresko, A. A. Kordyuk, G. Behr, A. Vasiliev, R. Follath, and B. Büchner, Phys. Rev. Lett. **105**, 067002 (2010). H. Kim, M. A. Tanatar, Yoo Jang Song, Yong Seung Kwon, and R. Prozorov, Phys. Rev. B **83**, 100502 (2011). A. K. Pramanik, L. Harnagea, C. Nacke, A. U. B. Wolter, S. Wurmehl, V. Kataev, and B.
Büchner, Phys. Rev. B **83**, 094502 (2011). Yoo Jang Song, Jin Soo Ghim, Byeong Hun Min, Yong Seung Kwon, Myung Hwa Jung, and Jong-Soo Rhyee, Appl. Phys. Lett. [**96**]{}, 212508 (2010). Plengchart Prommapan, Makariy A. Tanatar, Bumsung Lee, Seunghyun Khim, Kee Hoon Kim, and Ruslan Prozorov, Phys. Rev. B [**84**]{}, 060509 (2011). C.P. Bean, Phys. Rev. Lett. [**8**]{}, 6 (1962). E. Zeldov, J. R. Clem, M. McElfresh and M. Darwin, Phys. Rev. B [**49**]{}, 9802 (1994). E. H. Brandt, Phys. Rev. B [**58**]{}, 6506 (1998). C. J. van der Beek, G.J. Nieuwenhuys, P.H. Kes, H.G. Schnack, and R. Griessen, Physica C [**197**]{}, 320 (1992). S. Demirdiş, C. J. van der Beek, Y. Fasano, N. R. Cejas Bolecek, H. Pastoriza, D. Colson, and F. Rullier-Albenque, Phys. Rev. B [**84**]{}, 094517 (2011). L. Komendová, M. V. Milošević, A. A. Shanenko, and F. M. Peeters, Phys. Rev. B [**84**]{}, 064522 (2011). M. A. Tanatar, J.-Ph. Reid, S. René de Cotret, N. Doiron-Leyraud, F. Laliberté, E. Hassinger, J. Chang, H. Kim, K. Cho, Yoo Jang Song, Yong Seung Kwon, R. Prozorov, and Louis Taillefer, Phys. Rev. B [**84**]{}, 054507 (2011). K. Sasmal, Z. Tang, F. Y Wei, A. M Guloy, and C.W. Chu, Phys. Rev. B [**81**]{}, 144512 (2010).
[**‘Desert’ in Energy or Transverse Space?**]{}\ [C. Bachas]{}[^1]\ [*Laboratoire de Physique Théorique de l’ Ecole Normale Supérieure*]{}\ [*24 rue Lhomond, 75231 Paris Cedex 05, France*]{}\ **Abstract** > I review the issue of string and compactification scales in the weak-coupling regimes of string theory. I explain how in the Brane World scenario an (effectively) two-dimensional transverse space that is hierarchically larger than the string length may replace the conventional ‘energy desert’ described by renormalizable supersymmetric QFT. I comment on the puzzle of unification in this context. [**1. The SQFT Hypothesis**]{} String/M-theory [@GSW] is a higher-dimensional theory with a single dimensionful parameter, which can be taken to be the fundamental string tension or the eleven-dimensional Planck scale. The theory has on the other hand a large number of ‘dynamical parameters’ characterizing its many distinct semiclassical vacua, such as compactification radii or sizes of defects localized in the compact space. Understanding how the Standard Model and Einstein gravity arise at low energies in one of those vacuum states is a central outstanding problem of String/M-theory. The usual hypothesis is that the string, compactification and Planck scales all lie close to one another, and that the physics at lower energies is well described by some effective four-dimensional renormalizable supersymmetric quantum field theory (SQFT), which must include the Minimal Supersymmetric Standard Model (MSSM) and some hidden sectors. I will refer to this picture of the world as the ‘SQFT hypothesis’. In this picture the breaking of the residual supersymmetry and the generation of the electroweak scale are believed to be triggered by non-perturbative gaugino condensation – a story that is however incomplete because of the problems of vacuum stability and of the cosmological constant.
The minimal version of the SQFT hypothesis is obtained when there are no light fields charged under $SU(3)_c\times SU(2)_{ew}\times U(1)_Y$, besides those of the MSSM. This is the ‘energy desert’ scenario – a slight misnomer since the ‘desert’ may be populated by all sorts of stuff coupling with gravitational strength to ordinary matter. The minimal unification scenario is supported, as is well known [@GQW; @SU; @L], by two pieces of strong, though indirect evidence: (a) the measured low-energy gauge couplings [*do meet*]{} when extrapolated to higher energies with the MSSM $\beta$-functions, and (b) the energy $M_{U}\sim 2\times 10^{16}\ {\rm GeV}$ at which they meet is in the same ballpark as the string scale. A detailed analysis within the weakly-coupled heterotic string [@Ka] leads, in fact, to a discrepancy of roughly one order of magnitude between the theoretical point of string unification, and the one that fits the low-energy data. This is a small discrepancy on a logarithmic scale, and it could be fixed by small modifications of the minimal scenario [@BFY]. Besides being simple and rather natural, the minimal SQFT hypothesis thus makes [*two*]{} quantitative predictions which fit the low-energy data to better than one part in ten. The Weakly-Coupled Heterotic Theory =================================== The SQFT hypothesis is particularly compelling in the context of the weakly-coupled heterotic string [@K]. Both the graviton and the gauge bosons live in this case in the ten-dimensional bulk, and their leading interactions arise at the same order in string perturbation theory (i.e. the sphere diagram). This leads to the universal relation between the four-dimensional Planck mass ($M_P$) and the tree-level Yang-Mills couplings [@G], $$M_P^2 \sim M_H^2/g^{2}_{\rm YM}\ , \label{eq:gins}$$ independently of the details of compactification. If we assume that $g_{\rm YM}\sim o(1)$, then the heterotic string scale ($M_H$) is necessarily tied to the Planck scale.
Furthermore, the standard Kaluza-Klein formula for the four-dimensional gauge couplings is $$1/g^2_{\rm YM} \sim (R M_H)^6 /g_H^2\ , \label{eq:wch}$$ with $R$ the typical radius of the six-dimensional compact space and $g_H$ the dimensionless string coupling. Pushing the Kaluza-Klein scale ($M_{\rm KK} \sim R^{-1}$) much below $M_H$ therefore requires a hierarchically-strong string coupling, and invalidates the semiclassical treatment of the vacuum. Of course all radii need not be equal but, at least in orbifold compactifications, T-duality allows us to take them all larger or equal to the string length, and then the above argument forbids any single radius from becoming too large. There is actually a loophole in the above reasoning. If some compact dimensions are much larger than the heterotic string length, loop corrections to the inverse squared gauge couplings will generically grow like a power of the radius [@Ka]. [^2] [^3] It is thus logically conceivable that even though the observed low-energy gauge couplings are of order one, their tree-level values are hierarchically smaller. Since it is the tree-level couplings that enter in the relation (\[eq:gins\]), the heterotic string scale could thus in principle be significantly lower than the four-dimensional Planck mass [@Ba]. The main motivation for contemplating such possibilities in the past was the search for string models with low-energy supersymmetry broken spontaneously at tree level. Existing heterotic vacua of this type employ a string variant [@R] of the Scherk-Schwarz mechanism [@SS], which breaks supersymmetry in a way reminiscent of finite-temperature effects. The scale of (primordial) breaking is proportional to an inverse radius, so that lowering it to the electroweak scale requires the opening of extra dimensions at the TeV scale – a feature shown [@ABLT] to be generic in orbifold models.
[^4] Insisting on tree-level breaking is, on the other hand, only a technical requirement – there is no reason why the breaking in nature should not have a non-perturbative origin. Furthermore, Scherk-Schwarz compactification has so far not led to any new insights on the problems of vacuum selection and stability. Thus, there seems to be little theoretical motivation at this point for abandoning the SQFT hypothesis, and its successful unification predictions, in heterotic string theory.

Brane World and Open String Theory
==================================

The story is different in the theory of (unoriented) open and closed strings, in which gauge and gravitational interactions have different origins. While the graviton (a closed-string state) lives in the ten-dimensional bulk, open-string vector bosons can be localized on defects [@DLP] – the worldvolumes of D(irichlet)-branes [@Po]. Furthermore, while closed strings interact to leading order via the sphere diagram, open strings must be attached to a boundary and thus interact via the disk diagram, which is of higher order in the genus expansion. The four-dimensional Planck mass and Yang-Mills couplings therefore read $$1/g_{\rm YM}^2 \sim (R_{\parallel}M_I)^{6-n}/g_I \ , \ \ \ M_P^2 \sim R_{\perp}^{n}R_{\parallel}^{6-n} M_I^8/g_I^2 , \label{eq:typei}$$ where $R_\perp$ is the typical radius of the $n$ compact dimensions transverse to the brane, $R_\parallel$ the typical radius of the remaining $(6-n)$ compact longitudinal dimensions, $M_I$ the type-I string scale and $g_I$ the string coupling constant. As a result (a) there is no universal relation between $M_P$, $g_{\rm YM}$ and $M_I$ anymore, and (b) tree-level gauge couplings corresponding to different sets of branes have radius-dependent ratios and need not unify. A few remarks before going on.
First, we are here discussing a theory of unoriented strings, because orientifolds [@S] are required in order to cancel the tension and RR charges of the (non-compact) space-filling D-branes. Second, using T-dualities we can ensure that both $R_\perp$ and $R_\parallel$ are greater than or equal to the string scale [@DLP]. This may take us either to Ia or to Ib theory (also called I or I’, respectively) – I will not make a distinction between them in what follows. Finally, it should be stressed that D-branes are the only known defects which can localize non-abelian gauge interactions in a perturbative setting. Orbifold fixed points can at most ‘trap’ matter fields and abelian vector bosons (from twisted RR sectors).[^5] Relations (\[eq:typei\]) tell us that type I string theory is much more flexible (and less predictive) than heterotic theory. The string scale $M_I$ is now a free parameter, even if one insists that both $g_{\rm YM}$ and $g_I$ be kept fixed and of $o(1)$. This added flexibility can be used to remove the order-of-magnitude discrepancy between the unification and string scales [@W]. A much more drastic proposal [@ADD; @Ly; @AADD] is to lower $M_I$ down to the experimentally-allowed limit $\sim o({\rm TeV})$. Keeping for instance $g_I$, $g_{\rm YM}$ and $R_\parallel M_I$ of order one leads to the condition $$R_\perp^n \sim M_P^2/M_I^{2+n} . \label{eq:mm}$$ A TeV string scale would then require transverse dimensions ranging from millimetric ($n=2$) to fermi-size ($n=6$) – the relative weakness of gravity being in this picture attributed to the transverse spreading of gravitational flux. What brought this idea [^6] into sharp focus [@ADD] was (a) the realization that submillimeter dimensions are not at present ruled out by mesoscopic gravity experiments,[^7] and (b) the hope that lowering $M_I$ to the TeV scale may lead to a new understanding of the gauge hierarchy.
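As a quick sanity check, the condition (\[eq:mm\]) can be evaluated numerically. The sketch below uses illustrative values $M_I = 1$ TeV and $M_P \simeq 1.2\times 10^{19}$ GeV; both numbers and the unit-conversion constant are assumptions of this example, not taken from the text.

```python
# Back-of-the-envelope check of R_perp^n ~ M_P^2 / M_I^(2+n).
# Illustrative inputs: M_I = 1 TeV, M_P = 1.2e19 GeV; 1 GeV^-1 ~ 1.9733e-16 m.
HBARC_GEV_M = 1.9733e-16  # hbar*c: converts inverse GeV to meters
M_P = 1.2e19              # four-dimensional Planck mass in GeV
M_I = 1.0e3               # type-I string scale in GeV (1 TeV)

def r_perp_meters(n):
    """Transverse radius (in meters) solving R^n = M_P^2 / M_I^(2+n)."""
    r_inv_gev = (M_P**2 / M_I**(2 + n)) ** (1.0 / n)  # radius in GeV^-1
    return r_inv_gev * HBARC_GEV_M

for n in (2, 6):
    print(f"n={n}: R_perp ~ {r_perp_meters(n):.1e} m")
```

With these inputs the radius comes out at a few millimeters for $n=2$ and a few tens of fermis for $n=6$, consistent with the estimate above.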
Needless to say, a host of constraints (astrophysical and cosmological bounds, proton decay, fermion masses etc.) will make realistic model building a very strenuous exercise indeed. Finding type I vacua with three chiral families of quarks and leptons is already a non-trivial problem by itself [@KT]. None of these difficulties seems, however, [*a priori*]{} fatal to the Brane World idea, even in its most extreme realization [@checks].

Renormalization Group or Classical Supergravity?
================================================

Although the type I string scale could lie anywhere below the four-dimensional Planck mass,[^8] I will now focus on the extreme case where it is close to its experimental lower limit, $M_I \sim o({\rm TeV})$. Besides being a natural starting point for discussing the question of the gauge hierarchy, this also has the pragmatic advantage of bringing string physics within the reach of future accelerator experiments. This extreme choice is at first sight antipodal to the minimal SQFT hypothesis: the MSSM is a stable renormalizable field theory, and yet one proposes to shrink its range of validity to one order of magnitude at most! Nevertheless, as I will now argue, the Brane World and SQFT scenaria share many common features when the number of large transverse dimensions in the former is exactly two [@B; @AB]. The key feature of the SQFT hypothesis is that low-energy parameters receive large logarithmic corrections, which are effectively resummed by the equations of the Renormalization Group. This running with energy can account for the observed values of the three gauge couplings, and of the mass matrices of quarks and leptons, in a way that is relatively ‘robust’.[^9] Furthermore, the logarithmic sensitivity of parameters naturally generates hierarchies of scales, and has been the key ingredient in all efforts to understand the origin of the $M_Z/M_P$ hierarchy in the past [@N]. Consider now the Brane World scenario.
The parameters of the effective Brane Lagrangian are dynamical open- and closed-string moduli. The latter, denoted collectively by $m_K$, include the dilaton, twisted-sector massless scalars, the metric of the transverse space etc. Their vacuum expectation values are constant along the four non-compact space-time dimensions, but vary generically as a function of the transverse coordinates $\xi$. For weak type-I string coupling and large transverse space these variations can be described by a Lagrangian of the (schematic) form $${\cal L}_{\rm bulk} + {\cal L}_{\rm source} \sim \int d^n\xi \; \Bigl[ {1\over g_I^2} (\partial_\xi m_K)^2 + {1\over g_I} \sum_s f_s(m_K) \delta(\xi-\xi_s)\Bigr] . \label{eq:bb}$$ This is a supergravity Lagrangian reduced to the $n$ large transverse dimensions, coupling to D-branes and orientifolds which act as sources localized at transverse positions $\xi_s$.[^10] The couplings $f_s(m_K)$ may vary from source to source – they can for instance depend on open-string moduli – and are subject to global consistency conditions. What is important to us, however, is that they are [*weak*]{} in the type-I limit, leading to weak variations, $$m_K(\xi) = m_K^0 + g_I\; m_K^1(\xi) + \cdots , \label{eq:bbb}$$ with $m_K^0$ a constant, $m_K^1$ a sum of Green’s functions etc. For $n=2$ dimensions the leading variation $m_K^1$ grows logarithmically with the size of the transverse space, $R_\perp$. Since our Standard Model parameters will be a function of the moduli evaluated at the position of our Brane World, they will have logarithmic sensitivity to $M_P$ in this case, very much like the (relevant) parameters of a supersymmetric renormalizable QFT. Similar sensitivity will occur even if $n>2$, as long as some twisted moduli propagate in only two extra large dimensions. Let me now discuss the validity of the approximation (\[eq:bb\]).
The bulk supergravity Lagrangian receives both $\alpha^\prime$ and higher-genus corrections, but these involve higher derivatives of fields and should be negligible for moduli varying logarithmically over distance scales $\gg \sqrt{\alpha^\prime}$. The source functions, $f_s(m_K)$, are also in general modified by such corrections – our $\delta$-function approximation is indeed only valid to within $\delta\xi\sim o(\sqrt{\alpha^\prime})$. Such source modifications can, however, be absorbed into boundary conditions for the classical field equations at the special marked points $\xi_s$. The situation thus looks (at least superficially) analogous to that prevailing under the SQFT hypothesis: large corrections to low-energy parameters can be in both cases resummed by differential equations with appropriate boundary conditions. There are, to be sure, also important differences: in particular, the Renormalization Group equations are first-order differential equations in a single (energy) scale parameter, while the classical supergravity equations are second-order and depend on the two coordinates of the large transverse space. The analogy between energy and transverse distance is also reminiscent of the holographic idea [@hol], considered in the context of compactification in [@RSV]. It is, however, important to stress that our discussion here has stayed perturbative (and there was no large-N limit involved). I have just tried to argue that large string-loop corrections to the parameters of a brane action can, in appropriate settings, be calculated reliably as the sum of two superficially similar effects: (a) RG running from some low energy scale up to the string scale, and (b) bulk-moduli variations over a transverse two-dimensional space of size much greater than the string length. The two corresponding regimes – of renormalizable QFT and of reduced classical supergravity – are a priori different and need not overlap.
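The logarithmic growth of the leading variation $m_K^1$ with the transverse size, which underlies the $n=2$ case above, can be illustrated with a minimal numerical sketch; the normalization of the Green's function and the source positions are illustrative assumptions of this example.

```python
import numpy as np

# Sketch (assumed normalization): in n=2 transverse dimensions the Green's
# function of the Laplacian is G(xi) = ln|xi| / (2*pi), so the leading
# moduli variation m_K^1 at our brane, sourced at points xi_s, grows like
# ln(R_perp) as the transverse space expands.
def m_k1(xi, sources, charges):
    """Leading-order variation: superposition of 2D log Green's functions."""
    return sum(q * np.log(np.linalg.norm(xi - s)) / (2 * np.pi)
               for q, s in zip(charges, sources))

xi_us = np.zeros(2)                 # position of our Brane World
for R in (10.0, 100.0, 1000.0):     # growing transverse size (string units)
    source = [np.array([R, 0.0])]   # one distant source at the boundary
    print(R, m_k1(xi_us, source, [1.0]))
```

Each tenfold increase of $R$ adds the same constant $\ln 10/2\pi$ to the variation, i.e. the growth is logarithmic rather than power-like.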
The Puzzle of Unification
=========================

The logarithmic sensitivity of brane parameters to $R_\perp$ can be used to generate scale hierarchies dynamically, exactly as with renormalizable QFT. Gauge dynamics on a given brane, for example, can become strong as the transverse space expands to a hierarchically large size, thereby inducing gaugino condensation and possibly supersymmetry breaking. Rather than discussing such scenaria further, I would now like to return to the main piece of evidence in favour of the SQFT hypothesis: the apparent unification of the Standard Model gauge couplings. Can their observed low-energy values be understood [@B; @ABD] in an equally robust and controlled manner, as coming from logarithmic variations in the (real) space transverse to our Brane World?[^11] I don’t yet know the answer to this important question, but let me at least refute the following possible objection: since the three gauge groups of the Standard Model live at the same point in transverse space (or else matter charged under two of them would have been ultraheavy), how can real-space variations split their coupling constants apart? This objection would have been, indeed, fatal if all gauge couplings were determined by the same combination of bulk fields. This is fortunately not the case: scalar moduli from twisted sectors of orbifolds have been, for instance, shown to have non-universal couplings to gauge fields living on the same brane [@AFIV; @ABD]. The logarithmic variations of such fields could split the three Standard Model gauge couplings apart, although it is unclear why this splitting should be in the right proportion.

[**Acknowledgements**]{}: I thank the organizers of the Göteborg, Brussels and Bad Honnef meetings for the invitations to speak, and in particular François Englert for teaching us all that ‘physics is great fun’. I also thank G. Aldazabal, C. Angelantonj, A. Dabholkar, M. Douglas, G. Ferretti, B. Pioline, A. Sen and H.
Verlinde for discussions, and the ICTP in Trieste for hospitality while this talk was being written up. Research partially supported by EEC grant TMR-ERBFMRXCT96-0090. [99]{} M.B. Green, J.H. Schwarz and E. Witten, [*Superstring Theory*]{} (Cambridge U. Press, 1987); J. Polchinski, [*String Theory*]{} (Cambridge U. Press, 1998). H. Georgi, H. Quinn and S. Weinberg, [*Phys. Rev. Lett.*]{} [**33**]{} (1974) [451]{}. S. Dimopoulos, S. Raby and F. Wilczek, [*Phys. Rev.*]{} [**D24**]{} (1981) 1681; S. Dimopoulos and H. Georgi, [*Nucl. Phys.*]{} [**B193**]{} (1981) 150; L. Ibanez and G.G. Ross, [*Phys. Lett.*]{} [**B106**]{} (1981) 439; N. Sakai, [*Z. Phys.*]{} [**C11**]{} (1981) 153. For a recent review see for instance P. Langacker, hep-ph/9411247. V. Kaplunovsky, [*Nucl. Phys.*]{} [**B307**]{} (1988) [145]{}; L. Dixon, V. Kaplunovsky and J. Louis, [*Nucl. Phys.*]{} [**B329**]{} (1990) [27]{}. As for instance in C. Bachas, C. Fabre and T. Yanagida, [*Phys. Lett.*]{} [**B370**]{} (1996)[49]{}. For a review see K. Dienes, [*Phys. Rep.*]{} [**287**]{} (1997) 447, and references therein. See V. Kaplunovsky, [*Phys. Rev. Lett.*]{} [**55**]{} (1985) [1036]{}, for an early discussion of this point. P. Ginsparg, [*Phys. Lett.*]{} [**B197**]{} (1987)[139]{}. I. Antoniadis, [*Phys. Lett.*]{} [**B246**]{} (1990) [377]{}. K. R. Dienes, E. Dudas and T. Gherghetta, [*Phys. Lett.*]{} [**B436**]{} (1998) [55]{}; [*Nucl. Phys.*]{} [**B537**]{} (1999)[47]{} ; Z. Kakushadze, [*Nucl. Phys.*]{} [**B548**]{} (1999) 205. C. Bachas, unpublished (1995); K. Benakli, hep-ph/9809582. R. Rohm, [*Nucl. Phys.*]{} [**B237**]{} (1984)[553]{}; S. Ferrara, C. Kounnas and M. Porrati, [*Nucl. Phys.*]{} [**B304**]{} (1988) [500]{}. J. Scherk and J.H. Schwarz, [*Phys. Lett.*]{} [**B82**]{} (1979) [60]{}; [*Nucl. Phys.*]{} [**B153**]{} (1979) [61]{}. I. Antoniadis, C. Bachas, D. Lewellen and T. Tomaras, [*Phys. Lett.*]{} [**B207**]{} (1988)[441]{}. M. Dine and N. Seiberg, [*Nucl. 
Phys.*]{} [**B301**]{} (1988) [357]{}; T. Banks and L.J. Dixon, [*Nucl. Phys.*]{} [**B307**]{} (1988) [93]{}. J. Dai, R.G. Leigh and J. Polchinski, [*Mod. Phys. Lett.*]{} [**A4**]{} (1989) 2073; G. Pradisi and A. Sagnotti, [*Phys. Lett.*]{} [**B216**]{} (1989) [59]{}; P. Horava, [*Phys. Lett.*]{} [**B231**]{} (1989) [251]{}. J. Polchinski, [*Phys. Rev. Lett.*]{} [**75**]{} (1995) [4724]{}. I. Antoniadis and B. Pioline, [*Nucl. Phys.*]{} [**B550**]{} (1999) 41. A. Sagnotti, in [*Non-perturbative Quantum Field Theory*]{}, eds. G. Mack [*et al*]{} (Pergamon Press, Oxford, 1988). E. Witten, [*Nucl. Phys.*]{} [**B460**]{} (1996) [541]{}. N. Arkani-Hamed, S. Dimopoulos and G. Dvali, [*Phys. Lett.*]{} [**B429**]{} (1998) [263]{}; [*Phys. Rev.*]{} [**D59**]{} (1999)[086004]{}. J.D. Lykken, [*Phys. Rev.*]{} [**D54**]{} (1996) [3693]{}. I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, [*Phys. Lett.*]{} [**B436**]{} (1998) [257]{}. V. Rubakov and M. Shaposhnikov, [*Phys. Lett.*]{} [**B125**]{} (1983) 136 ; G.W. Gibbons and D.L. Wiltshire, [*Nucl. Phys.*]{} [**B287**]{} (1987) 717. J.E. Moody and F. Wilczek, [*Phys. Rev.*]{} [**D30**]{} (1984) [130]{}; A. De Rujula, [*Phys. Lett.*]{} [**B180**]{} (1986) [213]{} ; T.R. Taylor and G. Veneziano, [*Phys. Lett.*]{} [**B213**]{} (1988) [450]{}. See for instance Z. Kakushadze and S.-H. H. Tye, [*Phys. Rev.*]{} [**D58**]{} (1998) 126001 ; L. E. Ib[á]{}[ñ]{}ez, C. Mu[ñ]{}oz, S. Rigolin, hep-ph/9812397. See for instance G. Shiu and S.-H. H. Tye, [*Phys. Rev.*]{} [**D58**]{} (1998) [106007]{} ; N. Arkani-Hamed and S. Dimopoulos, hep-ph/9811353 ; N. Arkani-Hamed, S. Dimopoulos, G. Dvali and J. March-Russell, hep-ph/9811448, and ref. [@ADD]. C. P. Burgess, L. E. Ibanez, F. Quevedo, [*Phys.Lett.*]{} [**B447**]{} (1999) ; K. Benakli in ref. [@Ba]. C. Bachas, [*JHEP*]{} [**9811**]{} (1998) [023]{}. I. Antoniadis and C. Bachas, [*Phys. Lett.*]{} [**B450**]{} (1999) [83]{}. See for example H.P. Nilles, [*Phys. 
Rep.*]{} [**110C**]{} (1984) 1. For a review see O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri, Y. Oz, hep-th/9905111. H. Verlinde, hep-th/9906182; based on L. Randall and R. Sundrum, hep-ph/9905221; hep-th/9906064. I. Antoniadis, C. Bachas and E. Dudas, hep-th/9906039. L. E. Ib[á]{}[ñ]{}ez, hep-ph/9905349. G. Aldazabal, A. Font, L. E. Ibanez and G. Violero, [*Nucl. Phys.*]{} [**B536**]{} (1998) [29]{} ; L. E. Ibanez, R. Rabadan and A. M. Uranga, [*Nucl. Phys.*]{} [**B542**]{} (1999) [112]{}. [^1]: Based on talks given at the conferences ‘22nd Johns Hopkins workshop’ (Göteborg, August 1998), ‘Fundamental interactions: from symmetries to black holes’ in honor of Francois Englert (Brussels, March 1999) and ‘From Planck Scale to Electroweak Scale’ (Bad Honnef, April 1999). [^2]: In special models, such as orbifolds without N=2 sectors, these large threshold corrections can be made to vanish at one-loop. The evolution of gauge couplings with energy is thus unaffected by the opening of large extra dimensions [@A]. However, since $g_H$ must in these models be hierarchically strong, the semiclassical string vacuum cannot be trusted. [^3]: Power corrections to gauge couplings have also been recently invoked as a way to speed up the unification process [@DDG]. [^4]: For more general compactifications, the limit of supersymmetry restoration is also known to be a singular limit [@DS], even though there is no precise relation between the scale of symmetry breaking and some Kaluza-Klein threshold. [^5]: Non-perturbative symmetry enhancement is of course a possibility, as has been discussed for instance in [@AP]. The great success of the perturbative Standard Model makes one, however, reluctant to start with a theory in which the $W$ bosons and all quarks and leptons do not correspond to perturbative quanta. [^6]: For early discussions of a Brane Universe see [@RS].
[^7]: That such experiments do not rule out light scalar particles, such as axions, with gravitational-force couplings and Compton wavelengths of a millimeter or less, had already been appreciated in the past [@MW]. The Kaluza-Klein excitations of the graviton are basically subject to the same bound. [^8]: Arguments in favour of an intermediate string scale were given in [@BIQ]. [^9]: One must of course assume initial conditions for the RG equations, typically imposed by unification and by discrete symmetries, but there is no need to know in greater detail the physics in the ultraviolet regime. [^10]: In the general case there could also be branes extending only partially into the large transverse bulk. Our discussion can be adapted easily to take those into account. [^11]: For another recent idea see [@I].
--- abstract: 'In this paper, the problem of proactive caching is studied for cloud radio access networks (CRANs). In the studied model, the baseband units (BBUs) can predict the content request distribution and mobility pattern of each user, and determine which content to cache at the remote radio heads and the BBUs. This problem is formulated as an optimization problem which jointly incorporates backhaul and fronthaul loads and content caching. To solve this problem, an algorithm that combines the machine learning framework of *echo state networks* with sublinear algorithms is proposed. Using echo state networks (ESNs), the BBUs can predict each user’s content request distribution and mobility pattern while having only limited information on the network’s and user’s state. In order to predict each user’s periodic mobility pattern with minimal complexity, the memory capacity of the corresponding ESN is derived for a periodic input. This memory capacity is shown to capture the maximum amount of user information needed for the proposed ESN model. Then, a sublinear algorithm is proposed to determine which content to cache while using limited content request distribution samples. Simulation results using real data from *Youku* and the *Beijing University of Posts and Telecommunications* show that the proposed approach yields significant gains, in terms of sum effective capacity, that reach up to $27.8\%$ and $30.7\%$, respectively, compared to random caching with clustering and random caching without clustering.' author: - '\' bibliography: - 'references.bib' title: 'Echo State Networks for Proactive Caching in Cloud-Based Radio Access Networks with Mobile Users' ---

Introduction
============

Cellular systems based on cloud radio access networks (CRANs) enable communications using a massive number of remote radio heads (RRHs) that are controlled by cloud-based baseband units (BBUs) via wired or wireless fronthaul links [@2].
These RRHs act as distributed antennas that can service the various wireless users. To improve spectral efficiency, cloud-based cooperative signal processing techniques can be executed centrally at the BBUs [@MugenRecent]. However, despite the ability of CRAN systems to run such complex signal processing functions centrally, their performance remains limited by the capacity of the fronthaul and backhaul (CRAN to core) links [@MugenRecent]. Indeed, given the massive nature of a CRAN, relying on fiber fronthaul and backhaul links may be infeasible. Consequently, capacity-limited wireless or third-party wired solutions for the backhaul and fronthaul connections are being studied for CRANs such as in [@Jointwireless] and [@Optimalfronthaul]. To overcome these limitations, one can make use of content caching techniques [@Cluster; @Cooperative; @MeanField; @Content; @BackhaulChen] in which users can obtain contents from storage units deployed at the cloud or RRH level. However, deploying caching strategies in a CRAN environment faces many challenges that include optimized cache placement, cache update, and accurate prediction of content popularity. The existing literature has studied a number of problems related to caching in CRANs, [heterogeneous networks, and content delivery networks (CDNs) [@Cluster; @Cooperative; @MeanField; @Content; @BackhaulChen; @Tran2016Octopus; @Cachingimprovement; @Jointcaching; @ContextAware; @Sung2016Efficient; @kang2014mobile; @De2011Optimum]]{}. In [@Cluster], the authors study the effective capacity of caching using stochastic geometry and shed light on the main benefits of caching. The work in [@Cooperative] proposes a novel cooperative hierarchical caching framework for the CRAN to improve the hit ratio of caching and reduce the backhaul traffic load by jointly caching content at both the BBU level and the RRH level. In [@MeanField], the authors analyze the asymptotic limits of caching using mean-field theory.
The work in [@Content] introduces a novel approach for dynamic content-centric base station clustering and multicast beamforming that accounts for both channel condition and caching status. In [@BackhaulChen], the authors study the joint design of multicast beamforming and dynamic clustering to minimize the power consumed, while the quality-of-service (QoS) of each user is guaranteed and the backhaul traffic is balanced. [The authors in [@Tran2016Octopus] propose a novel caching framework that seeks to realize the potential of CRANs by using a cooperative hierarchical caching approach that minimizes the content delivery costs and improves the users’ quality-of-experience.]{} In [@Cachingimprovement], the authors develop a new user clustering and caching method according to the content popularity. The authors also present a method to estimate the number of clusters within the network based on the Akaike information criterion. In [@Jointcaching], the authors consider joint caching, routing, and channel assignment for video delivery over coordinated small-cell cellular systems of the future internet and utilize the column generation method to maximize the throughput of the system. The authors in [@ContextAware] jointly exploit the wireless and social context of wireless users to optimize the overall resource allocation and improve the traffic offload in small cell networks with device-to-device communication. [In [@Sung2016Efficient], the authors propose an efficient cache placement strategy which uses separate channels for content dissemination and content service.
The authors in [@kang2014mobile] propose a low-complexity search algorithm to minimize the average caching failure rate.]{} However, most existing works on caching such as [@Cluster; @Cooperative; @MeanField; @Content; @BackhaulChen; @Tran2016Octopus; @Cachingimprovement; @Jointcaching; @ContextAware] have focused on performance analysis and on simple caching approaches that may not scale well in a dense, content-centric CRAN. [Moreover, the existing cache replacement works [@Sung2016Efficient; @kang2014mobile; @De2011Optimum], which focus on wired CDNs, do not consider cache replacement in a wireless network such as a CRAN, in which one must address new caching challenges that stem from the dynamic and wireless nature of the system and from the backhaul and fronthaul limitations.]{} In addition, these works assume a known content distribution that is then used to design an effective caching algorithm and, as such, they do not consider a proactive caching algorithm that can predict the content request distribution of each user. Finally, most of these existing works neglect the effect of the users’ mobility. When updating the cached content, making use of the long-term statistics of user mobility to predict the user association can significantly improve the efficiency of content caching [@Mobilityaware]. For proactive caching, knowledge of the users’ future positions can also enable seamless handover and content download. More recently, there has been significant interest in studying how prediction can be used for proactive caching such as in [@Bigdata; @SoysaPredicting; @NagarajaCaching; @TadrousOn; @ManytoMany; @pompili2016elastic]. The authors in [@Bigdata] develop a data extraction method using the Hadoop platform to predict content popularity. The work in [@SoysaPredicting] proposes a fast threshold spread model to predict the future access pattern of multi-media content based on social information.
In [@NagarajaCaching], the authors exploit the instantaneous demands of the users to estimate the content popularity and devise an optimal random caching strategy. In [@TadrousOn], the authors derive bounds on the minimum possible cost achieved by any proactive caching policy and propose specific proactive caching strategies based on the cost function. In [@ManytoMany], the authors formulate a caching problem as a many-to-many matching game to reduce the backhaul load and transmission delay. [The authors in [@pompili2016elastic] study the benefits of proactive operation, but they do not develop an analytically rigorous learning technique to predict the users’ behavior.]{} Despite these promising results, existing works such as [@Bigdata; @SoysaPredicting; @NagarajaCaching; @TadrousOn; @ManytoMany] do not take into account user-centric features, such as demographics and user mobility. Moreover, such works cannot deal with the massive volumes of data that stem from thousands of users connected to the BBUs of a CRAN, since they were developed for small-scale networks in which all processing is done at the base station level. Meanwhile, none of the works in [@Bigdata; @SoysaPredicting; @NagarajaCaching; @TadrousOn; @ManytoMany] analyzed the potential of using machine learning tools such as neural networks for content prediction with mobility in a CRAN. The main contribution of this paper is a novel proactive caching framework that can accurately predict both the content request distribution and mobility pattern of each user and, subsequently, cache the most suitable contents while minimizing traffic and delay within a CRAN. The proposed approach enables the BBUs to dynamically learn and decide on which content to cache at the BBUs and RRHs, and how to cluster RRHs depending on the prediction of the users’ content request distributions and their mobility patterns.
Unlike previous studies such as [@Content], [@Bigdata] and [@TadrousOn], which require full knowledge of the users’ content request distributions, we propose a novel approach to perform proactive content caching based on the powerful frameworks of echo state networks (ESNs) and sublinear algorithms [@Sublinear]. The use of ESNs enables the BBUs to quickly learn the distributions of users’ content requests and locations without requiring the entire knowledge of the users’ content requests. The entire knowledge of a user’s content request is defined as the user’s *context*, which includes information related to the content request such as the user’s age, job, and location. The user’s context significantly influences the user’s content request distribution. Based on these predictions, the BBUs can determine which contents to cache at the cloud cache and the RRH caches and then offload the traffic. Moreover, the proposed sublinear approach enables the BBUs to quickly estimate the request percentage of each content and determine the contents to cache without the need to scan all users’ content request distributions. To the best of our knowledge, beyond our work in [@Chen2016Echo] that applied ESNs for LTE-U resource allocation, no work has studied the use of ESNs for proactive caching. In order to evaluate the actual performance of the proposed approach, we use *real data from Youku* for content simulations and *realistic measured mobility data from the Beijing University of Posts and Telecommunications* for mobility simulations. Simulation results show that the proposed approach yields significant gains, in terms of the total effective capacity, that reach up to $27.8\%$ and $30.7\%$, respectively, compared to random caching with clustering and random caching without clustering.
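To make the ESN ingredient concrete, the following is a minimal echo state network sketch in the spirit of the approach described above: a fixed random reservoir with a trained linear readout. The reservoir size, spectral radius, ridge factor, and the toy daily-periodic input are illustrative assumptions of this example, not the actual architecture or data used in this paper.

```python
import numpy as np

# Minimal ESN sketch: only the linear readout is trained (ridge regression);
# the random reservoir is fixed. All hyper-parameters are illustrative.
rng = np.random.default_rng(0)
N_res, rho, ridge = 100, 0.9, 1e-6

W = rng.standard_normal((N_res, N_res))
W *= rho / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, size=N_res)      # input weights

def run_reservoir(u):
    """Collect reservoir states driven by a scalar input sequence u."""
    x, states = np.zeros(N_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy daily-periodic "mobility" signal; task: one-step-ahead prediction.
t = np.arange(400)
u = np.sin(2 * np.pi * t / 24)
X = run_reservoir(u[:-1])                      # states, one per input sample
y = u[1:]                                      # next-sample targets
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_res), X.T @ y)

mse = np.mean((X @ W_out - y) ** 2)
print("train MSE:", mse)
```

The same reservoir-plus-readout structure carries over when the input is a user's context and the output is a predicted content request distribution or location; only the dimensions of the input and readout change.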
Our key contributions are therefore:

- A novel proactive caching framework that can accurately predict both the content request distribution and mobility pattern of each user and, subsequently, cache the most suitable contents while minimizing traffic and delay within a CRAN.

- A new ESN-based learning algorithm to predict the users’ content request distributions and mobility patterns using the users’ contexts.

- A fundamental analysis of the memory capacity of the ESN with mobility data.

- A low-complexity sublinear algorithm that can quickly determine the RRH clustering and which contents to store at the RRH caches and the cloud cache.

The rest of this paper is organized as follows. The system model is described in Section . The ESN-based content prediction approach is proposed in Section . The proposed sublinear approach for content caching and RRH clustering is presented in Section . In Section , simulation results are analyzed. Finally, conclusions are drawn in Section .

System Model and Problem Formulation {#sec:SM}
====================================

![\[CRAN\] A CRAN using clustering and caching.](figure112.eps){width="7cm"}

Consider the downlink transmission of a cache-enabled CRAN in which a set $\mathcal{U} = \left\{ {1,2, \cdots ,U} \right\}$ of $U$ users are served by a set $\mathcal{R} = \{1,2,\ldots,{R}\}$ of $R$ RRHs. The RRHs are connected to the cloud pool of the BBUs via capacity-constrained, digital subscriber line (DSL) fronthaul links. The capacity of the fronthaul link is limited and $v_F$ represents the maximum fronthaul transmission rate for all users. As shown in Fig. 1, RRHs which have the same content request distributions are grouped into a virtual cluster which belongs to a set $\mathcal{M} = \mathcal{M}_1 \cup \ldots \cup \mathcal{M}_{M}$ of $M$ virtual clusters.
[We assume that each user will always connect to its nearest RRH cluster and can request at most one content at each time slot $\tau$.]{} The virtual clusters with their associated users allow the CRAN to use zero-forcing dirty paper coding (ZF-DPC) of multiple-input multiple-output (MIMO) systems to eliminate cluster interference. The proposed approach for forming virtual clusters is detailed in Section \[sectionS\]. Virtual clusters are connected to the content servers via capacity-constrained wired backhaul links such as DSL. The capacity of the backhaul link is limited with $v_B$ being the maximum backhaul transmission rate for all users [@Sparsebeamforming]. Since each RRH may associate with more than one user, the RRH may have more than one type of content request distribution and belong to more than one cluster. Here, we note that the proposed approach can be deployed in any CRAN, irrespective of the way in which the functions are split between RRHs and BBUs.

Mobility Model
--------------

In our model, the users can be mobile and have periodic mobility patterns. In particular, we consider a system in which each user will regularly visit a certain location. For example, certain users will often go to the same office for work at the same time during weekdays. We consider daily periodic mobility of users, which is collected once every $H$ time slots. The proposed approach for predicting the users’ periodic mobility patterns is detailed in Section \[se:Mobility\]. In our model, each user is assumed to be moving from its current location to a target location at a constant speed, and this user will seamlessly switch to the nearest RRH as it moves. We ignore the handover time that a user needs to transfer from one RRH to another. Given each user’s periodic mobility, we consider the caching of content, separately, at the RRHs and the cloud. Caching at the cloud allows offloading the backhaul traffic and overcoming the backhaul capacity limitations.
In particular, the cloud cache can store the popular contents that all users request from the content servers, thus alleviating the backhaul traffic and improving the transmission QoS. Caching at the RRH, referred to as RRH cache hereinafter, will only store the popular contents that the associated users request. The RRH cache can significantly offload the traffic and reduce the transmission delay of both the fronthaul and backhaul. We assume that each content can be transmitted to a given user during time slot $\tau$. [In our model, a time slot represents the time duration during which each user has an invariant content request distribution. During each time slot, each user can receive several contents.]{} The RRH cache is updated every time slot $\tau$ and the cloud cache is updated every $T_\tau$ time slots. [We assume that the cached content update of each RRH depends only on the users located nearest to this RRH.]{} We also assume that the content server stores a set ${\mathcal{N}}=\{1, 2,\ldots, {N}\}$ of all contents required by all users. All contents are of equal size $L$. The set of $C_c$ cloud cache storage units is given by $\mathcal{C}_c=\{1,2,\cdots,{C_c}\}$, where $C_c \le N$. The set of $C_r$ RRH cache storage units is given by $\mathcal{C}_{r}=\{1,2,\cdots,{C_r}\}$, where $C_{r} \le N$, $r \in \mathcal{R}$. Transmission Model ------------------ ![\[CRAN\] Content transmission in CRANs.](figure2.eps){width="9.5cm"} As shown in Fig. 2, contents can be sent to the user from: a) a content server, b) a remote RRH cache storage unit, c) a cloud cache storage unit, or d) an RRH cache storage unit. An *RRH* refers to an RRH that the user is already associated with, while a *remote RRH* refers to any other RRH that stores the user’s required content but is not associated with this user. We assume that each content can be transmitted independently, and different contents are processed at different queues.
The transmission rate of each content, $v_{BU}$, from the content server to the BBUs is: $$\label{eq:vBU} \setlength{\abovedisplayskip}{0 pt} \setlength{\belowdisplayskip}{0 pt} {v_{BU}} =\frac{{{v_B}}}{{N_B}},$$ where $N_B$ is the number of users that request contents that must be transmitted from the backhaul to the BBUs. Since the content transmission rates from the cloud cache to the BBUs and from the RRH cache to the local RRH are much higher than those of the backhaul and fronthaul links, as in [@Cluster] and [@Cooperative], we ignore the delay and QoS loss of these links. After a content reaches the BBUs, it is delivered to the RRHs over fronthaul links. We also assume that the transmission rate from the RRH to the BBUs is the same as the rate from the BBUs to the RRH. Subsequently, the transmission rate, $v_{FU}$, of each content from the BBUs to the RRHs is ${v_{FU}} =\frac{{{v_F}}}{{ {{N_{F}}}}}$, where $N_{F}$ is the number of users that request contents that must be transmitted from the fronthaul to the RRHs. After a content reaches the RRHs, it is transmitted to the users over the radio access channels. Therefore, the total transmission link of a specific content consists of one of the following links: a) content server-BBUs-RRH-user, b) cloud cache-BBUs-RRH-user, c) RRH cache-RRH-user, and d) remote RRH cache-remote RRH-BBUs-RRH-user. Note that the wireless link is time-varying due to the channel, as opposed to the static, wired, DSL fronthaul and backhaul links. To mitigate interference, the RRHs can be clustered based on the content requests to leverage MIMO techniques. This, in turn, can also increase the effective capacity for each user, since the RRHs can cooperate and use ZF-DPC to transmit their data to the users.
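As a minimal numerical sketch of the shared-link rates $v_{BU}=v_B/N_B$ and $v_{FU}=v_F/N_F$ defined above (all numbers are illustrative, not values from this paper):

```python
def per_content_rates(v_B, v_F, N_B, N_F):
    """Per-content rates on the shared backhaul and fronthaul links.

    v_B, v_F : total backhaul and fronthaul rates (e.g., in Mbit/s)
    N_B, N_F : number of users whose requested contents traverse each link
    """
    v_BU = v_B / N_B if N_B > 0 else float("inf")  # no backhaul traffic -> unconstrained
    v_FU = v_F / N_F if N_F > 0 else float("inf")
    return v_BU, v_FU

# Caching more contents lowers N_B and N_F and hence raises both per-content rates.
print(per_content_rates(v_B=100.0, v_F=200.0, N_B=20, N_F=40))  # -> (5.0, 5.0)
```

This makes explicit why caching helps: every content served from a cache removes one user from $N_B$ or $N_F$ and so raises the rate seen by every remaining request on that link.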
Therefore, the received signal-to-interference-plus-noise ratio of user $i$ from the nearest RRH $k \in \mathcal{M}_i$ at time $t$ is [@Exploring]: $${\gamma _{t,ik}} = \frac{{Pd_{t,ik}^{ - \beta }{{\left\| {{h_{t,ik}}} \right\|}^2}}}{{\sum\limits_{j \in \mathcal{M} \setminus {\mathcal{M}_i}} {Pd_{t,ij}^{ - \beta }{{\left\| {{h_{t,ij}}} \right\|}^2}+{\sigma ^2}} }},$$ where $h_{t,ik}$ is the Rayleigh fading parameter and $d_{t,ik}^{-\beta}$ is the path loss at time $t$, with $d_{t,ik}$ being the distance between RRH $k$ and user $i$ at time $t$, and $\beta$ being the path loss exponent. $\sigma^2$ is the power of the Gaussian noise, and $P$ is the transmit power of each RRH, assumed to be equal for all RRHs. We also assume that the bandwidth of each downlink user is $B$. Since the user is moving and the distance between the RRH and the user is varying, the channel capacity between RRH $k$ and user $i$ at time $t$ is ${C_{t,ik}} =B{\log _2}\left( {1 + \gamma_{t,ik} } \right)$. Since each user is served by the nearest RRH, we use $d_{t,i}$, $h_{t,i}$, $C_{t,i}$, and $\gamma_{t,i}$ to refer to $d_{t,ik}$, $h_{t,ik}$, $C_{t,ik}$, and $\gamma_{t,ik}$, for simplicity. [Note that ZF-DPC is implemented in the cloud and can be used for any transmission link.]{} Effective Capacity ------------------ Since the capacity $C_{t,i}$ does not account for delay, it cannot characterize the QoS of a requested content. In contrast, the notion of effective capacity, as defined in [@Effective], represents a useful metric to capture the maximum content transmission rate of a channel under a specific QoS guarantee. First, we introduce the notion of a QoS exponent that allows quantifying the QoS of a requested content and, then, we define the effective capacity.
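Before proceeding, the SINR and instantaneous rate $C_{t,i}=B\log_2(1+\gamma_{t,i})$ defined above can be sketched numerically as follows; the parameter values (transmit power, path loss exponent, noise power, bandwidth, distances) are illustrative assumptions, not values from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def instantaneous_capacity(d_serving, d_interferers,
                           P=1.0, beta=3.5, sigma2=1e-9, B=1e6):
    """C_{t,i} = B log2(1 + gamma_{t,i}) for a user served by its nearest RRH,
    with Rayleigh fading and interference from RRHs in other clusters."""
    h2 = rng.rayleigh(scale=1.0) ** 2                     # ||h_{t,ik}||^2
    signal = P * d_serving ** (-beta) * h2
    interference = sum(P * d ** (-beta) * rng.rayleigh(scale=1.0) ** 2
                       for d in d_interferers)
    gamma = signal / (interference + sigma2)              # SINR
    return B * np.log2(1.0 + gamma)

# A user 100 m from its serving RRH, with two interfering clusters farther away.
C = instantaneous_capacity(100.0, [500.0, 800.0])
```

As the user moves, `d_serving` and `d_interferers` change each time step, which is what makes $C_{t,i}$ a random, time-varying quantity.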
The QoS exponent related to the transmission of a given content $n$ to a user $i$ with a stochastic waiting queue length $Q_{i,n}$ is [@Effective]: $$\label{eq:thetail} \setlength{\abovedisplayskip}{4 pt} \theta_{i,n} = -\mathop {\lim }\limits_{q \to \infty } \frac{{\log {\ensuremath{\operatorname{Pr}}}\left[ {Q_{i,n} > q} \right]}}{q},$$ where $q$ is the allowable queue-length threshold of the system. For a large threshold value ${{q_{\max }}}$, the buffer violation probability of content $n$ for user $i$ can be approximated by: $$\setlength{\abovedisplayskip}{3 pt} \setlength{\belowdisplayskip}{3 pt} {\ensuremath{\operatorname{Pr}}}\left[ {Q_{i,n} > {q_{\max }}} \right] \approx {e^{ - \theta_{i,n} {q_{\max }}}}.$$ This approximation is obtained from large deviation theory. Then, the relation between the buffer violation probability and the delay violation probability for user $i$ with content $n$ is [@Effective]: $$\setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} {\ensuremath{\operatorname{Pr}}}\left[ {D_{i,n} > {D_{\max }}} \right] \le k\sqrt {{\ensuremath{\operatorname{Pr}}}\left[{Q_{i,n} > {q_{\max }}} \right]},$$ where $D_{i,n}$ is the delay of transmitting content $n$ to user $i$ and $D_{\max}$ is the maximum tolerable delay of each content transmission. Here, $k$ is a positive constant and the maximum queue length is $q_{\max}=cD_{\max}$, with $c$ being the transmission rate over the transmission links. Therefore, $\theta_{i,n}$ can be treated as the QoS exponent of user $i$ transmitting content $n$, which also represents the delay constraint. A smaller $\theta_{i,n}$ reflects a looser QoS requirement, while a larger $\theta_{i,n}$ expresses a stricter QoS requirement.
The QoS exponent pertaining to the transmission of a content $n$ to user $i$ with delay $D_{i,n}$ is [@Cluster]: $$\label{eq:thetaD} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} \theta_{i,n} = \mathop {\lim }\limits_{{D_{\max }} \to \infty } \frac{{-\log {\ensuremath{\operatorname{Pr}}}\left( {D_{i,n} > {D_{\max }}} \right)}}{{{D_{\max }} - {N_h}L/v}},$$ where $N_h$ indicates the number of hops of each transmission link and $v$ indicates the rate over the wired fronthaul and backhaul links. Based on (\[eq:thetail\])-(\[eq:thetaD\]), the delay violation probability of user $i$ transmitting content $n$ with a delay threshold $D_{\max}$ is given by: $$\label{eq:PrD} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} {\ensuremath{\operatorname{Pr}}}\left( {D_{i,n} > {D_{\max }}} \right) \approx {e^{ - \theta_{i,n} \left( {{D_{\max }} - {N_h}L/v} \right)}}.$$ The corresponding *QoS exponents* pertaining to the transmission of a content $n$ to a user $i$ over the four links are: a) content server-BBUs-RRH-user $\theta_{i,n}^{S}$, b) cloud cache-BBUs-RRH-user $\theta_{i,n}^{A}$, c) local RRH cache-RRH-user $\theta_{i,n}^{O}$, and d) remote RRH cache-remote RRH-BBUs-RRH-user $\theta_{i,n}^{G}$. Since the QoS of each link depends on the QoS exponents, we use the relationship between the QoS exponent parameters to represent the transmission quality of each link.
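As a small illustration of this delay tail, the following sketch evaluates $\operatorname{Pr}(D > D_{\max}) \approx e^{-\theta (D_{\max} - N_h L/v)}$, using the wired-transmission term $N_h L/v$ from the denominator of (\[eq:thetaD\]); all numbers are illustrative assumptions:

```python
import math

def delay_violation_prob(theta, D_max, N_h, L, v):
    """Approximate tail probability of the content delivery delay.

    theta : QoS exponent of the chosen link
    D_max : delay threshold
    N_h, L, v : wired hops, content size, and wired-link rate (N_h*L/v is
                the deterministic wired transmission time)
    """
    wired_delay = N_h * L / v
    return math.exp(-theta * (D_max - wired_delay))

# A larger QoS exponent (stricter link) makes the delay tail decay faster.
p_loose = delay_violation_prob(theta=1.0, D_max=2.0, N_h=2, L=1.0, v=10.0)
p_tight = delay_violation_prob(theta=3.0, D_max=2.0, N_h=2, L=1.0, v=10.0)
```

This matches the interpretation in the text: a smaller $\theta_{i,n}$ tolerates a heavier delay tail, while a larger $\theta_{i,n}$ forces the violation probability down.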
In order to quantify the relationship of the QoS exponents among these links, we state the following result: \[pro1\] *To achieve the same QoS and delay when transmitting content $n$ over the wired fronthaul and backhaul links, the QoS exponents of the four transmission links of content $n$ with $v_{BU}$ and $v_{FU}$ must satisfy the following conditions:* $$\setlength{\abovedisplayskip}{3 pt} \setlength{\belowdisplayskip}{3 pt} \begin{split} &\text{a)}\;\;\theta _{i,n}^S = \frac{\theta _{i,n}^O}{1 - 2L/\left({v_{BU}}{D_{\max }}\right)}, \;\; \text{b)}\;\;\theta _{i,n}^A = \frac{\theta _{i,n}^O}{1 - L/\left({v_{FU}}{D_{\max }}\right)},\;\;\\ &\text{c)}\;\;\theta _{i,n}^G = \frac{\theta _{i,n}^O}{1 - 2L/\left({v_{FU}}{D_{\max }}\right)}. \end{split}$$ See Appendix A. Proposition \[pro1\] captures the relationship between the QoS exponents of the different links, and thereby the transmission QoS of each link. From Proposition \[pro1\], we can see that, given the QoS requirement $\theta_{i,n}^{O}$ for transmitting content $n$, the only way to satisfy this requirement over link b) is to let the transmission rate $v_{FU}$ tend to infinity. Based on Proposition \[pro1\] and $\theta_{i,n}^O$, we can compute the QoS exponents achieved by the transmission of a content $n$ over the different links. The BBUs can then select an appropriate link for each content transmission with a QoS guarantee according to the QoS exponent of each link. Given these basic definitions, the effective capacity of each user is given next.
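A minimal sketch of Proposition \[pro1\], mapping the local-cache exponent $\theta^O_{i,n}$ to the exponents of the other three links (valid only when the denominators are positive, i.e., the wired links are fast enough relative to $L/D_{\max}$; the numbers below are illustrative):

```python
def qos_exponents(theta_O, L, D_max, v_BU, v_FU):
    """QoS exponents of the four links as functions of theta^O (Proposition 1)."""
    theta_S = theta_O / (1 - 2 * L / (v_BU * D_max))  # a) server-BBUs-RRH-user
    theta_A = theta_O / (1 - L / (v_FU * D_max))      # b) cloud cache-BBUs-RRH-user
    theta_G = theta_O / (1 - 2 * L / (v_FU * D_max))  # c) remote RRH cache-...-user
    return theta_S, theta_A, theta_G

# Illustrative numbers with a slower backhaul (v_BU < v_FU): the path through
# the content server needs the largest exponent to meet the same QoS.
tS, tA, tG = qos_exponents(theta_O=1.0, L=1.0, D_max=1.0, v_BU=5.0, v_FU=10.0)
```

The ordering $\theta^O < \theta^A < \theta^G < \theta^S$ produced here reflects the intuition that longer wired paths must compensate with stricter exponents.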
Since the speed of each moving user is constant, the cumulative channel capacity during the time slot $\tau$ is given by ${C_{\tau ,i}} = \sum_{t = 1}^{\tau } {{C_{t,i}}} \approx \tau\, {\mathbb{E}_{d_i,h_i}}[{C_{t,i}}]$. Therefore, the effective capacity of user $i$ [receiving]{} content $n$ during time $\tau$ is given by [@Effective]: $$\label{eq:E} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} E_{\tau,i}\!\left( \theta _{i,n_{i\tau},\tau}^j\right)\!=\! - \mathop \frac{1}{{ \theta _{i,n_{i\tau},\tau}^j \tau}}\log_2 \mathbb{E}_{d_i,h_i}\!\!\!\left[ {{e^{ - \theta _{i,n_{i\tau},\tau}^j C_{\tau,i} }}} \right], $$ where $n_{i\tau}$ represents the content that user $i$ requests at time slot $\tau$, $j \in \left\{ {O,A,S,G} \right\}$ indicates the link that transmits content $n$ to user $i$, and $\mathbb{E}_{d_i,h_i}\left[x\right]$ is the expectation of $x$ with respect to the distributions of $d_i$ and $h_i$. Based on (\[eq:E\]), the sum effective capacity of all moving users during time slot $k$ is: $$\label{eq:Se} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} {E_k} = \sum\limits_{i \in \mathcal{U}} {{E_{k,i}}\left( {\theta _{i,n_{ik},k}^j} \right)}.$$ The sum effective capacity is analyzed over $T$ time slots. Therefore, the long-term effective capacity $\bar E$ is given by $\bar E = \frac{1}{T}\sum_{k = 1}^T {{E_k}} $. $\bar E$ captures the delay and QoS of contents that are transmitted from the content server, remote RRHs, and caches to the network users during a period $T$. [Note that the use of the effective capacity is known to be valid as long as the following two conditions hold [@Effective]: a) each user’s content transmission has its own, individual queue; b) the buffer of each queue is of infinite (large) size.
Since the BBUs will allocate separate spectrum resources for each user’s requested content transmission, we can consider each user’s content transmission to be independent and, hence, condition a) is satisfied. For condition b), since we deal with the queue of each user at the level of a cloud-based system, such an assumption is reasonable, given the high capabilities of a cloud server. Therefore, the conditions are applicable to the content transmission scenario in the proposed framework.]{} Problem Formulation ------------------- Given this system model, our goal is to develop an effective caching scheme and content RRH clustering approach to reduce the interference and offload the traffic of the backhaul and fronthaul based on the predictions of the users’ content request distributions and periodic mobility patterns. To achieve this goal, we formulate a QoS and delay optimization problem whose objective is to maximize the long-term sum effective capacity. This optimization problem involves predicting the content request distribution and periodic locations of each user, and finding the optimal contents to cache at the BBUs and RRHs. This problem can be formulated as follows: $$\label{eq:sum} \mathop {\max }\limits_{{\mathcal{C}_c, \mathcal{C}_{r}}} {\bar E} =\mathop {\max }\limits_{{\mathcal{C}_c, \mathcal{C}_{r}}} \frac{1}{{{T}}}\sum\limits_{k = 1}^{{T}} {\sum\limits_{i \in \mathcal{U}} {E_{k,i}\left( {\theta _{i,n_{ik},k}^j} \right)} },$$ $$\begin{aligned} \label{c1} &\text{s. t.}\;\;\; m \ne f, \;\;\forall\, m, f \in {{\mathcal{C}_c}} \text{ or } m, f \in {{\mathcal{C}_r}}, \tag{\theequation a}\\ &\;\;\;\;\;\;\;\;\; j \in \left\{ {O,A,S,G} \right\}, \tag{\theequation b}\\ &\;\;\;\;\;\;\;\;\; \mathcal{C}_c, \mathcal{C}_{r} \subseteq \mathcal{N},\; n_{ik} \in \mathcal{N},\; r \in \mathcal{R}, \tag{\theequation c}\end{aligned}$$ where $\mathcal{C}_c$ and $\mathcal{C}_{r}$ represent, respectively, [the sets of contents stored in the cloud cache and the RRH cache]{}, (\[eq:sum\]a) captures the fact that each cache storage unit in the RRH and cloud stores a single, unique content, (\[eq:sum\]b) represents the link selection for transmitting each content, and (\[eq:sum\]c) indicates that the cached contents all come from the content server. Here, we note that storing contents in the cache can increase the rates $v_{BU}$ and $v_{FU}$ of the backhaul and fronthaul which, in turn, results in an increase of the effective capacity. Moreover, storing the most popular contents in the cache can maximize the number of users receiving content from the cache. This, in turn, will lead to maximizing the total effective capacity. Meanwhile, the prediction of each user’s mobility pattern can be combined with the prediction of the user’s content request distribution to determine which content to store in which RRH cache. Such intelligent caching will, in turn, result in an increase of the effective capacity. Finally, RRH clustering with MIMO is used to further improve the effective capacity by mitigating interference within each cluster. Fig. \[solution\] summarizes the proposed framework that is used to solve the problem in (\[eq:sum\]).
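For concreteness, the building block of the objective, the effective capacity in (\[eq:E\]), can be estimated by Monte Carlo over samples of the cumulative capacity. This is only a sketch: the gamma-distributed capacity samples are synthetic stand-ins, not the paper's simulation setup:

```python
import numpy as np

def effective_capacity(theta, tau, capacity_samples):
    """Monte-Carlo estimate of E = -(1/(theta*tau)) * log2 E[exp(-theta * C_tau)]."""
    C = np.asarray(capacity_samples, dtype=float)
    return -np.log2(np.mean(np.exp(-theta * C))) / (theta * tau)

rng = np.random.default_rng(3)
samples = rng.gamma(shape=4.0, scale=1.0, size=10000)  # synthetic C_tau draws

# A stricter QoS exponent yields a smaller effective capacity (Jensen's inequality).
E_loose = effective_capacity(theta=0.1, tau=1.0, capacity_samples=samples)
E_tight = effective_capacity(theta=2.0, tau=1.0, capacity_samples=samples)
```

This monotonicity in $\theta$ is exactly the tradeoff the optimization in (\[eq:sum\]) exploits: serving a content from a nearer cache lowers the required exponent and thus raises the achievable effective capacity.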
Within this framework, we first use the ESN predictions of the content request distribution and mobility pattern to calculate the average content request percentage for each RRH’s associated users. Based on the RRH’s average content request percentage, the BBUs determine the content that must be cached at each RRH. Based on the RRH caching and the content request distribution of each user, the BBUs will then decide on which content to cache at the cloud. ![\[solution\] Overview of the problem solution.](solution2.eps){width="9cm"} Echo State Networks for Content Prediction and Mobility {#section2} ======================================================= The optimization problem in (\[eq:sum\]) is challenging to solve because the effective capacity depends on the prediction of the content request distribution, which determines the popularity of a given content. The effective capacity also depends on the prediction of each user’s mobility pattern, which determines the user association and thus affects the RRH caching. In fact, since the RRH caching and cloud caching need to be aware of the content request distribution of each user in advance, the problem is difficult to solve using conventional optimization algorithms, since such approaches are not able to predict the users’ content request distributions for the BBUs. Moreover, in a dense CRAN, the BBUs may not have the entire knowledge of the users’ contexts that is needed to improve the accuracy of the content and mobility predictions, which affects the cache placement strategy. These reasons render the optimization problem in (\[eq:sum\]) challenging to solve in the presence of limited information. To address these challenges, we propose a novel approach to predict the content request distribution and mobility pattern of each user based on the powerful framework of *echo state networks* [@Harnessing].
ESNs are an emerging type of recurrent neural network [@APractical] that can track the state of a network and predict future information, such as content request distributions and user mobility patterns, over time. Content Distribution Prediction ------------------------------- In this subsection, we formulate the ESN-based content request distribution prediction algorithm. A prediction approach based on ESNs consists of four components: a) agents, b) input, c) output, and d) ESN model. The ESN will allow us to build the content request distribution based on each user’s context. The proposed ESN-based prediction approach is thus defined by the following key components: $\bullet$ *Agents*: The agents in our ESNs are the BBUs. Since each ESN scheme typically performs prediction for just one user, the BBUs must implement $U$ ESN algorithms at each time slot. $\bullet$ *Input:* The ESN takes an input vector $\boldsymbol{x}_{t,j}=\left[ {x_{tj1}, \cdots , x_{tjK}} \right]^{\mathrm{T}}$ that represents the context of user $j$ at time $t$, including content request time, day of the week, gender, occupation, age, and device type (e.g., tablet or smartphone). The vector $\boldsymbol{x}_{t,j}$ is then used to determine the content request distribution ${\boldsymbol{y} _{t,j}}$ for user $j$. For example, the types of videos and TV programs that interest young teenage students will be significantly different from those that interest a much older demographic. Indeed, the various demographics and user information will be critical to determine the content request preferences of the various users. Here, $K$ is the number of properties that constitute the context information of user $j$.
$\bullet$ *Output:* The output of the ESN at time $t$ is a vector of probabilities $\boldsymbol{y}_{t,j}= \left[ {{p_{tj1}},{p_{tj2}}, \ldots ,{p_{tjN}}} \right]$ that represents the probability distribution of content request of user $j$, where $p_{tjn}$ is the probability that user $j$ requests content $n$ at time $t$. $\bullet$ *ESN Model:* An ESN model can approximate the function between the input $\boldsymbol{x}_{t,j}$ and output $\boldsymbol{y}_{t,j}$, thus building the relationship between each user’s context and the content request distribution. For each user $j$, an ESN model is essentially a dynamic neural network, known as the dynamic reservoir, which will be combined with the input $\boldsymbol{x}_{t,j}$ representing the context of user $j$. Mathematically, the dynamic reservoir consists of the input weight matrix $\boldsymbol{W}_j^{\alpha,in} \in {\mathbb{R}^{N_w \times K}}$, and the recurrent matrix $\boldsymbol{W}_j^\alpha \in {\mathbb{R}^{N_w \times N_w}}$, where $N_w$ is the number of the dynamic reservoir units that the BBUs use to store the context of user $j$. The output weight matrix $\boldsymbol{W}_j^{\alpha,out} \in {\mathbb{R}^{N \times \left(N_w+K\right)}}$ is trained to approximate the prediction function. $\boldsymbol{W}_j^{\alpha,out}$ essentially reflects the relationship between context and content request distribution for user $j$. The dynamic reservoir of user $j$ is therefore given by the pair $\left( \boldsymbol{W}_j^{\alpha,in}, \boldsymbol{W}_j^\alpha \right)$ which is initially generated randomly via a uniform distribution and $\boldsymbol{W}_j^\alpha$ is defined as a sparse matrix with a spectral radius less than one [@APractical]. $\boldsymbol{W}_j^{\alpha,out}$ is also initialized randomly via a uniform distribution. 
By training the output matrix $\boldsymbol{W}_j^{\alpha,out}$, the proposed ESN model can predict the content request distribution based on the input $\boldsymbol{x}_{t,j}$, which will then provide the samples for the sublinear algorithm in Section \[sectionS\] that effectively determines which content to cache. Given these basic definitions, we introduce the dynamic reservoir state ${\boldsymbol{v}_{t,j}^\alpha}$ of user $j$ at time $t$, which is used to store the states of user $j$, as follows: $$\label{eq:state} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} {\boldsymbol{v}_{t,j}^\alpha} ={\mathop{f}\nolimits}\!\left( {\boldsymbol{W}_j^\alpha{\boldsymbol{v}_{t - 1,j}^\alpha} + \boldsymbol{W}_j^{\alpha,in}{\boldsymbol{x}_{t,j}}} \right),$$ where $f\left( \cdot \right)$ is the tanh function. Suppose that each user $j$ has a content request at each time slot. Then, the proposed ESN model will output a vector that captures the content request distribution of user $j$ at time $t$. The output estimates the distribution of content requests at time $t$: $$\label{eq:es} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} \boldsymbol{y}_{t,j} \left(\boldsymbol{x}_{t,j}\right) = {\boldsymbol{W}_{t,j}^{\alpha,out}}\left[ {{\boldsymbol{v}_{t,j}^\alpha};{\boldsymbol{x}_{t,j}}} \right],$$ where ${\boldsymbol{W}_{t,j}^{\alpha,out}}$ is the output matrix ${\boldsymbol{W}_{j}^{\alpha,out}}$ at time $t$. In other words, (\[eq:es\]) builds the relationship between the input ${\boldsymbol{x}_{t,j}}$ and the output $\boldsymbol{y}_{t,j}$. In order to build this relationship, we need to train $\boldsymbol{W}_{t,j}^{\alpha,out}$.
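The reservoir update (\[eq:state\]) and readout (\[eq:es\]) can be sketched as follows; the dimensions, sparsity level, and spectral-radius value are illustrative assumptions, not values from this paper:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, N_w = 6, 10, 50          # context features, contents, reservoir units

W_in = rng.uniform(-1, 1, (N_w, K))              # input matrix W_j^{alpha,in}
W_res = rng.uniform(-1, 1, (N_w, N_w))
W_res[rng.random((N_w, N_w)) < 0.9] = 0.0        # sparse reservoir W_j^{alpha}
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1
W_out = rng.uniform(-1, 1, (N, N_w + K))         # readout, trained online

v = np.zeros(N_w)                                # reservoir state v_{t,j}^{alpha}

def esn_step(x):
    """One state update (tanh of eq. (eq:state)) followed by the readout (eq:es)."""
    global v
    v = np.tanh(W_res @ v + W_in @ x)
    return W_out @ np.concatenate([v, x])        # [v; x] concatenation

y = esn_step(np.full(K, 0.5))   # predicted (unnormalized) request scores
```

Rescaling the reservoir to a spectral radius below one is what gives the echo-state property, i.e., the state forgets inputs from the distant past, so the same fixed reservoir can serve all users while only the readout is trained.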
A linear gradient descent approach is used to derive the following update rule: $$\label{eq:updatew} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} {\boldsymbol{W}_{t + 1,j}^{\alpha,out}} = {\boldsymbol{W}_{t,j}^{\alpha,out}} + {\lambda^\alpha }\left( {\boldsymbol{e}_{t,j}-\boldsymbol{y}_{t,j}\left(\boldsymbol{x}_{t,j} \right)} \right)\left[ {{\boldsymbol{v}_{t,j}^\alpha};{\boldsymbol{x}_{t,j}}} \right]^{\mathrm{T}},$$ where $\lambda^\alpha$ is the learning rate and $\boldsymbol{e}_{t,j}$ is the real content request distribution of user $j$ at time $t$. Indeed, (\[eq:updatew\]) shows how the ESN learns to approximate the mapping in (\[eq:es\]). Mobility Prediction {#se:Mobility} ------------------- [In this subsection, we study the mobility pattern prediction of each user, which matters for two reasons. First, in mobile networks, the locations of the users provide key information on the user-RRH association to the content servers, which can transmit the most popular contents to the corresponding RRHs. Second, the type of content requested will in fact depend on the users’ locations.]{} Therefore, we introduce a minimum-complexity ESN algorithm to predict the user trajectory. Unlike the ESN prediction algorithm of the previous subsection, the user mobility prediction algorithm proposed here is based on a minimum-complexity dynamic reservoir and adopts an offline method to train the output matrix. The main reason is that the prediction of user mobility can be treated as a time series and needs more data to train the output matrix. Therefore, we use a low-complexity ESN to train the output matrix and predict the position of each user. The ESN will help us predict the user’s position based on the positions that the user has visited over a given past history, such as the past few weeks.
Here, the mobility prediction ESN will also include four components, with the BBUs being the agents, and the other components being: $\bullet$ *Input:* ${m}_{t,j}$ represents the current location of user $j$. This input ${m}_{t,j}$, combined with the history of past inputs $\left[{m}_{t-1,j},\dots,{m}_{t-M,j}\right]$, determines the positions ${\boldsymbol{s} _{t,j}}$ that the user is expected to visit. Here, $M$ denotes the number of past inputs that the ESN can record. $\bullet$ *Output:* $\boldsymbol{s}_{t,j}=\left[ {s_{tj1}, \cdots , s_{tjN_s}} \right]^{\mathrm{T}}$ represents the positions that user $j$ is predicted to visit next, where $N_s$ is the number of positions that user $j$ is expected to visit during the next $N_s$ periods of duration $H$. $\bullet$ *Mobility Prediction ESN Model:* An ESN model builds the relationship between the user’s context and the positions that the user will visit. For each user $j$, an ESN model will be combined with the input ${m}_{t,j}$ to record the positions that the user has visited over a given past history. The ESN model consists of the input weight matrix $\boldsymbol{W}_j^{in} \in {\mathbb{R}^{W \times 1}}$, the recurrent matrix $\boldsymbol{W}_j \in {\mathbb{R}^{W \times W}}$, where $W$ is the number of units of the dynamic reservoir that the BBUs use to store the position records of user $j$, and the output weight matrix $\boldsymbol{W}_j^{out} \in {\mathbb{R}^{N_s \times W}}$. The generation of $\boldsymbol{W}_j^{in}$ and $\boldsymbol{W}_j^{out}$ is similar to the content distribution prediction approach. $\boldsymbol{W}_j$ is a full-rank matrix given by: $$\small \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} \boldsymbol{W}_j=\left[ {\begin{array}{*{20}{c}} {{0}}&{{0}}& \cdots &{{w}}\\ {{w}}&0&0&0\\ 0& \ddots &0&0\\ 0&0&{{w}}&0 \end{array}} \right],$$ where $w$ can be set as a constant or drawn from a distribution, such as a uniform distribution.
The value of $w$ will be detailed in Theorem \[theorem1\]. Given these basic definitions, we use a linear update method for the dynamic reservoir state ${\boldsymbol{v}_{t,j}}$ of user $j$, which is used to record the positions that user $j$ has visited, as follows: $$\label{eq:reservoirstate} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} {\boldsymbol{v}_{t,j}} ={\boldsymbol{W}_j{\boldsymbol{v}_{t - 1,j}} + \boldsymbol{W}_j^{in}{{m}_{t,j}}}.$$ The output positions $\boldsymbol{s}_{t,j}$ based on ${\boldsymbol{v}_{t,j}}$ are given by: $$\label{eq:y2} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} {\boldsymbol{s}_{t,j}} = \boldsymbol{W}_j^{out}{\boldsymbol{v}_{t,j}}.$$ In contrast to (\[eq:updatew\]), $\boldsymbol{W}_j^{out}$ of user $j$ is trained in an offline manner using ridge regression [@APractical]: $$\label{eq:w2} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} \boldsymbol{W}_j^{out}=\boldsymbol{s}_j{\boldsymbol{v}_j^{\rm T}}{\left(\boldsymbol{v}_j{\boldsymbol{v}_j^{\rm T}} + {\lambda ^2}\boldsymbol{\rm I}\right)^{ - 1}},$$ where $\boldsymbol{v}_j=\left[\boldsymbol{v}_{1,j},\dots, \boldsymbol{v}_{N_{tr},j}\right] \in \mathbb{R}^{W \times N_{tr}} $ collects the reservoir states of user $j$ over a training period $N_{tr}$, $\boldsymbol{s}_j$ collects the corresponding target outputs over the period $N_{tr}$, and $\boldsymbol{\rm I}$ is the identity matrix. Given these basic definitions, we derive the memory capacity of the mobility ESN, which is related to the number of reservoir units and the value of $w$ in $\boldsymbol{W}_j$. [The ESN memory capacity quantifies the number of past inputs that an ESN can record. For the prediction of the mobility pattern,]{} the memory capacity of the mobility ESN determines the ability of this model to record the locations that each user $j$ has visited.
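A self-contained sketch of the mobility ESN: the weighted cyclic-shift reservoir $\boldsymbol{W}_j$, the linear state update (\[eq:reservoirstate\]), and a ridge-regression readout trained offline as in (\[eq:w2\]). The toy sinusoidal trajectory, the reservoir size, $w=0.9$, the regularization value, and the washout period (a standard ESN practice, not from the paper) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
W_units = 20                      # reservoir units W
w = 0.9                           # constant reservoir weight (see Theorem 1)

# Weighted cyclic shift: W[0, -1] = w and W[i, i-1] = w, as in the paper's matrix.
W_res = np.roll(np.eye(W_units) * w, 1, axis=0)
W_in = rng.uniform(-1, 1, W_units)

def run_reservoir(positions):
    """Linear updates v_t = W_res v_{t-1} + W_in m_t; returns states as columns."""
    v = np.zeros(W_units)
    states = []
    for m in positions:
        v = W_res @ v + W_in * m
        states.append(v.copy())
    return np.array(states).T      # shape (W_units, T)

# Toy periodic "daily" trajectory; target: the one-step-ahead position.
T_total, washout = 400, 100
pos = np.sin(2 * np.pi * np.arange(T_total + 1) / 24)
V_all = run_reservoir(pos[:-1])
V = V_all[:, washout:]                       # drop the initial transient
S = pos[washout + 1:].reshape(1, -1)

lam = 1e-3                                   # ridge regularization
W_out = S @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(W_units))
pred = float(W_out @ V_all[:, -1])           # predicted next position
```

Because the reservoir is linear and the input periodic, the readout recovers the next position almost exactly, which is the behavior the memory-capacity analysis below formalizes.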
First, we define the following $W \times W$ matrix, given that ${\boldsymbol{W}_j^{in}} = {\left[ {w_1^{in}, \ldots ,w_{W}^{in}} \right]^{\rm T}}$: $$\small \boldsymbol{\Omega} = \left[ {\begin{array}{*{20}{c}} {w_1^{in}}&{w_{W}^{in}}& \cdots &{w_2^{in}}\\ {w_2^{in}}&{w_1^{in}}& \cdots &{w_3^{in}}\\ \vdots & \vdots & \cdots & \vdots \\ {w_{W}^{in}}&{w_{W-1}^{in}}& \cdots &{w_1^{in}} \end{array}} \right].$$ Then, the memory capacity of the mobility ESN can be given as follows: \[theorem1\] *In a mobility ESN, assume that the reservoir $\boldsymbol{W}_j$ is generated randomly via a specified distribution, that $\boldsymbol{W}_j^{in}$ guarantees that the matrix $\boldsymbol{\Omega}$ is regular, and that the input ${m}_{t,j}$ is periodic. Then, the memory capacity of this mobility ESN is given by:* $$\label{eq:theorem1} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} M =\sum\limits_{k = 0}^{W - 1} {\left(\sum\limits_{j = 0}^\infty {\mathbb{E}\left[{w^{2Wj + 2k}}\right]} \right)^{ - 1}}\sum\limits_{j = 0}^\infty {\mathbb{E}{{\left[{w^{Wj + k}}\right]}^2}} -{\left(\sum\limits_{j = 0}^\infty {\mathbb{E}\left[{w^{2Wj}}\right]} \right)^{-1}}.$$ See Appendix B. The memory capacity of the mobility ESN indicates the ability of the model to record the locations that each user has visited. From Theorem \[theorem1\], we can see that the ESN memory capacity depends on the distribution of the reservoir weight $w$ and the number of reservoir units $W$. A larger memory capacity implies that the ESN can store more of the locations that the user has visited, which improves the prediction of the user mobility.
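Theorem \[theorem1\] can be checked numerically by truncating the infinite sums; `moment(m)` supplies $\mathbb{E}[w^m]$ for the chosen weight distribution. For a deterministic weight $w=a$ the inner sums cancel term by term, so the formula collapses to $M = W - (1 - a^{2W})$, which approaches $W$ as $a \to 1$:

```python
def memory_capacity(moment, W, terms=2000):
    """Truncated evaluation of the memory capacity formula in Theorem 1."""
    M = 0.0
    for k in range(W):
        num = sum(moment(W * j + k) ** 2 for j in range(terms))
        den = sum(moment(2 * W * j + 2 * k) for j in range(terms))
        M += num / den
    return M - 1.0 / sum(moment(2 * W * j) for j in range(terms))

a, W = 0.99, 10
M_det = memory_capacity(lambda m: a ** m, W)          # deterministic w = a
# Zero-mean uniform w on [-1, 1]: E[w^m] = 0 for odd m, 1/(m+1) for even m.
M_unif = memory_capacity(lambda m: 0.0 if m % 2 else 1.0 / (m + 1), W)
```

For the zero-mean uniform case the inner sums converge slowly, so the truncated value is only indicative; it nonetheless stays within the bound $0 \le M < \lfloor W/2 \rfloor + 1$ stated next.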
Since the size of the reservoir $\boldsymbol{W}_j$ and the value of $w$ affect the mobility prediction, we need to set the size of $\boldsymbol{W}_j$ appropriately to satisfy the memory capacity requirement of the user mobility prediction, based on Theorem \[theorem1\]. Different from the existing works in [@Short] and [@Minimum], which use an independent and identically distributed input stream to derive the ESN memory capacity, we formulate the ESN memory capacity with a periodic input stream. Next, we derive upper and lower bounds on the ESN memory capacity for different distributions of the reservoir $\boldsymbol{W}_j$. The upper bound of the ESN memory capacity provides guidance for the design of $\boldsymbol{W}_j$. \[pro3\] *Given the distribution of the reservoir $\boldsymbol{W}_j$ $\left(\left| w \right| < 1\right)$, the upper and lower bounds of the memory capacity of the mobility ESN are given by:*\ 1) *If $w \in \boldsymbol{W}_j$ follows a zero-mean distribution on ${\left[ { - 1,1} \right]}$, then $0 \le M < {\left\lfloor {\frac{W}{2}} \right\rfloor }+1$, where $\left\lfloor {x} \right\rfloor $ is the floor function of $x$.*\ 2) *If $w \in \boldsymbol{W}_j$ follows a distribution with $w > 0$, then $0 < M < W$.* See Appendix C. From Proposition \[pro3\], we can see that, as $P\left(w = a\right) = 1$ and $a \to 1$, the memory capacity $M$ of the mobility ESN approaches the number of reservoir units $W$. Since we predict $N_s$ locations for each user at time $t$, we need to set the number of reservoir units to at least $W=N_s+1$. Sublinear Algorithm for Caching {#sectionS} =============================== The predictions of the content request distribution and user mobility pattern in Section \[section2\] must now be leveraged to determine which content to cache at the RRHs, how to cluster the RRHs at each time slot, and which contents to store in the cloud cache during a given period.
Clustering the RRHs based on the requested contents will also enable the CRAN to use the ZF-DPC scheme of MIMO to eliminate cluster interference. However, it is challenging for the BBUs to scan, within a limited time, the content request distribution predictions that the ESNs output for thousands of users. In addition, in a dense CRAN, the BBUs may not have complete knowledge of the users’ contexts and content request distributions in a given period, thus making it challenging to determine which contents to cache as per (\[eq:sum\]). To address these challenges, we propose a novel *sublinear approach* for caching [@Sublinear]. A sublinear algorithm is typically developed based on random sampling and probability theory [@Sublinear] to perform effective big data analytics. In particular, sublinear approaches can approximate the optimal solution of an optimization problem by looking at only a subset of the data, which is useful when the total amount of data is so massive that even linear processing time is unaffordable. For our model, a sublinear approach will enable the BBUs to compute the average content request percentage of all users, so as to determine which content to cache at the cloud, without scanning through the massive volume of data pertaining to the users’ content request distributions. Moreover, a sublinear algorithm enables the BBUs to determine the similarity of two users’ content request distributions by scanning only a portion of each distribution. Compared to traditional stochastic techniques, a sublinear algorithm can control the tradeoff between the algorithm’s processing time or space and the quality of its output. Such algorithms can use only a few samples to compute the average content request percentage over the entire content request distributions of all users. Next, we describe how to use a sublinear algorithm for caching.
Then, we introduce the entire process that uses ESNs and sublinear algorithms to solve (\[eq:sum\]).

Sublinear Algorithm for Clustering and Caching {#al:sub}
----------------------------------------------

In order to cluster the RRHs based on the users’ content request distributions and to determine which content to cache at the RRHs and BBUs, we first use the predictions of the content request distribution and mobility of each user, obtained from the output of the ESN schemes, to cluster the RRHs and determine which content to cache at the RRHs. The detailed clustering steps are as follows:

- The cloud predicts the users’ content request distributions and mobility patterns.

- Based on the users’ content request distributions and locations, the cloud estimates the users’ RRH associations.

- Based on the users’ RRH associations, the cloud determines each RRH’s content request distribution and then clusters the RRHs into several groups. For any two RRHs, when the difference between their content request distributions is below $\chi$, the cloud clusters these two RRHs into the same group. Here, we use the sublinear Algorithm 8 in [@Sublinear] to calculate the difference between two content request distributions.

Based on the RRHs’ clustering, we compute the average content request percentage of all users and use this percentage to determine which content to cache in the cloud. Based on the predictions of the content request distribution and mobility of each user resulting from the output of the ESN schemes, each RRH must determine the contents to cache according to the ranking of the average content request percentage of its associated users. For example, denote by $\boldsymbol{p}_{r,1}$ and $\boldsymbol{p}_{r,2}$ the predicted content request distributions of two users associated with RRH $r$.
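The clustering steps above can be sketched as follows; this is a simplified illustration in which we use the plain $L_1$ distance in place of the sublinear Algorithm 8 of [@Sublinear] and assign each RRH to a single cluster (all function names are ours):

```python
def l1_distance(p, q):
    # Difference between two content request distributions; the paper
    # computes this test with the sublinear Algorithm 8 of [Sublinear].
    return sum(abs(pn - qn) for pn, qn in zip(p, q))

def cluster_rrhs(rrh_dists, chi):
    """Greedily group RRHs whose content request distributions differ
    by less than the threshold chi; returns lists of RRH indices.
    (In the paper an RRH may belong to several clusters; here each
    RRH joins only the first matching cluster, for simplicity.)"""
    clusters = []
    for r, p in enumerate(rrh_dists):
        for c in clusters:
            if l1_distance(p, rrh_dists[c[0]]) < chi:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

dists = [[0.6, 0.3, 0.1], [0.58, 0.32, 0.10], [0.1, 0.2, 0.7]]
assert cluster_rrhs(dists, chi=0.2) == [[0, 1], [2]]
```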
The average content request percentage is then given by $\boldsymbol{p}_r=\left({\boldsymbol{p}_{r,1}} + {\boldsymbol{p}_{r,2}}\right)/2$. Based on the ranking of the average content request percentage of the associated users, the RRH selects $C_r$ contents to store in its cache as follows: $$\label{eq:Cr} \setlength{\abovedisplayskip}{2 pt} \setlength{\belowdisplayskip}{3 pt} \mathcal{C}_{r}=\mathop {\arg\max }\limits_{\mathcal{C}_{r}} \sum\limits_{n \in \mathcal{C}_{r} } {{p_{rn}}},$$ where $p_{rn}={\sum\nolimits_{i \in {\mathcal{U}_r}} {p_{rin}} E_{k,i}(\theta _{i,n,k}^O)}/{N_r}$ is the average weighted percentage of the users associated with RRH $r$ that request content $n$, $\mathcal{U}_r$ is the set of users associated with RRH $r$, and $N_r$ is the number of users associated with RRH $r$. To determine the contents that must be cached at the cloud, the cloud needs to update the content request distribution of each user so as to compute the distribution of the requested contents that must be transmitted via the fronthaul links, given the associated RRH cache. We denote the distribution of the requested contents that must be transmitted via the fronthaul links by the updated content request distribution $\boldsymbol{p}'_{r,1}=\left[p'_{r11},\dots,p'_{r1N}\right]$. The difference between $\boldsymbol{p}_{r,1}$ and $\boldsymbol{p}'_{r,1}$ is that $\boldsymbol{p}_{r,1}$ still contains the probabilities of the requested contents that can be transmitted from the RRH cache. For example, assume that content $n$ is stored in the cache of RRH $r$, i.e., $n \in \mathcal{C}_{r}$; consequently, $p'_{r1n}=0$.
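The selection in (\[eq:Cr\]) and the construction of the updated distribution can be sketched as follows (for illustration we set the weights $E_{k,i}(\theta _{i,n,k}^O)$ to $1$; the function names are ours):

```python
def rrh_cache_contents(user_dists, C_r):
    """Select the C_r contents with the largest average request
    percentage among the RRH's associated users; the per-user
    weights E_{k,i} are set to 1 here for illustration."""
    N = len(user_dists[0])
    p_rn = [sum(d[n] for d in user_dists) / len(user_dists) for n in range(N)]
    return set(sorted(range(N), key=lambda n: p_rn[n], reverse=True)[:C_r])

def updated_distribution(p, cached):
    # Requests for contents served from the RRH cache never cross the
    # fronthaul, so their probabilities are zeroed out in p'.
    return [0.0 if n in cached else pn for n, pn in enumerate(p)]

p_r1 = [0.5, 0.3, 0.1, 0.1]
p_r2 = [0.4, 0.2, 0.3, 0.1]
cached = rrh_cache_contents([p_r1, p_r2], C_r=2)
assert cached == {0, 1}
assert updated_distribution(p_r1, cached) == [0.0, 0.0, 0.1, 0.1]
```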
Based on the updated content request distributions, the BBUs can compute the average percentage of each content over all content request distributions. For example, let $\boldsymbol{p}'= {\sum\nolimits_{\tau = 1}^{{T }}{\sum\nolimits_{i = 1}^U {{\boldsymbol{p}'_{\tau,i}E_{k,i}(\theta _{i,k}^A)}} }}/{TU}$ be the average updated content request probability during $T$, where $\boldsymbol{p}'_{\tau,i}$ is the updated content request distribution of user $i$ during time slot $\tau$. Consequently, the BBUs select $C_c$ contents to store in the cloud cache according to the ranking of the average updated content request percentage $\boldsymbol{p}'$, which is: $$\label{eq:Cc} \setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} \mathcal{C}_{c}=\mathop {\arg \max} \limits_{\mathcal{C}_{c}} \sum\limits_{n \in \mathcal{C}_{c} } {{p'_{n}}}.$$ However, within a period $T$, the BBUs cannot record the updated content request distributions of all users, as this would lead to a massive amount of data, equal to $N\cdot U\cdot T$ values. The sublinear approach can use only a few updated content request distributions to approximate the actual average updated content request percentage. Moreover, the sublinear approach can control the deviation from the actual average updated content request percentage, i.e., the approximation error. Since the percentage of each content is calculated independently and in the same way for every content, we describe the percentage calculation for one given content. We define $\epsilon$ as the allowed error, which captures the deviation from the actual percentage of each content request. Let $\delta$ be the confidence parameter, which denotes the probability that the result of the sublinear approach exceeds the allowed error interval. To clarify the idea, we present an illustrative example.
For instance, assume that the actual percentage of content $n$ is $\alpha=70\%$, with $\epsilon=0.03$ and $\delta=0.05$. This means that using a sublinear algorithm to calculate the request percentage of content $n$ yields a result between $67\%$ and $73\%$ with $95\%$ probability. Then, the relationship between the number of updated content request distributions $N_n$ that the sublinear approach needs to calculate the percentage of content $n$, $\epsilon$, and $\delta$ is given by [@Sublinear]: $$\label{eq:sublinear} \setlength{\abovedisplayskip}{0 pt} \setlength{\belowdisplayskip}{3 pt} N_{n}=- \frac{{\ln \delta }}{{2{\epsilon ^2}}}.$$ From (\[eq:sublinear\]), we can see that a sublinear algorithm transforms a statistical estimation of the expected value into a bound with error deviation $\epsilon$ and confidence parameter $\delta$. After setting $\epsilon$ and $\delta$, the sublinear algorithm only needs to scan $N_n$ updated content request distributions to calculate the average percentage of each content. Based on the average updated content request percentages, the BBUs store the contents that have the highest percentages.

Proposed Framework based on ESN and Sublinear Approaches
--------------------------------------------------------

In this subsection, we present the proposed algorithm for solving the problem in (\[eq:sum\]). First, the BBUs run the ESN algorithms to predict the content request distribution and mobility pattern of each user, as per Section \[section2\], and determine which content to store in each RRH cache based on the average content request percentage of its associated users at each time slot. Then, based on the content request distribution of each user, the BBUs cluster the RRHs and sample the updated content request distributions to calculate the percentage of each content based on (\[eq:sublinear\]).
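The sample-count rule in (\[eq:sublinear\]) and the resulting sampling step can be sketched as follows (a sketch with our own function names; rounding $N_n$ up to an integer is our choice). With the simulation settings $\epsilon=\delta=0.05$ used later, the rule gives the $600$ samples quoted in the results section:

```python
import math
import random

def sublinear_sample_size(epsilon, delta):
    # N_n = -ln(delta) / (2 * epsilon^2), rounded up to an integer.
    return math.ceil(-math.log(delta) / (2 * epsilon ** 2))

def estimate_content_percentage(request_probs, epsilon, delta, seed=0):
    """Estimate the average request probability of one content by
    scanning only N_n randomly sampled user distributions."""
    n_samples = min(sublinear_sample_size(epsilon, delta), len(request_probs))
    sample = random.Random(seed).sample(request_probs, n_samples)
    return sum(sample) / n_samples

assert sublinear_sample_size(0.05, 0.05) == 600
```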
Finally, the BBUs use the approximated average updated content request percentages to select the contents with the highest percentages to cache at the cloud. Based on the above formulations, the complete procedure based on ESNs and sublinear algorithms is shown in Algorithm \[al2\]. Note that, in step 8 of Algorithm 1, a single RRH may belong to more than one cluster, since its associated users may have different content request distributions. As an illustrative example, consider a system having three RRHs: RRH $a$ serves two users with content request distributions $\boldsymbol{p}_{a,1}$ and $\boldsymbol{p}_{a,2}$, RRH $b$ serves two users with content request distributions $\boldsymbol{p}_{b,1}$ and $\boldsymbol{p}_{b,3}$, and RRH $c$ serves one user with content request distribution $\boldsymbol{p}_{c,2}$. If $\boldsymbol{p}_{a,1}=\boldsymbol{p}_{b,1}$ and $\boldsymbol{p}_{a,2}=\boldsymbol{p}_{c,2}$, the BBUs will group RRH $a$ and RRH $b$ into one cluster ($\boldsymbol{p}_{a,1}=\boldsymbol{p}_{b,1}$) and RRH $a$ and RRH $c$ into another cluster ($\boldsymbol{p}_{a,2}=\boldsymbol{p}_{c,2}$). In this case, the RRHs that are grouped into one cluster have the highest probability of requesting the same contents. In essence, caching the contents with the highest percentages means that the BBUs encourage more users to receive the contents from the cache. From (\[eq:vBU\]), we can see that storing the contents in the RRH and cloud caches reduces the backhaul and fronthaul traffic of each content transmitted from the content server and the BBUs to the users. Consequently, caching increases the backhaul and fronthaul rates $v_{BU}$ and $v_{FU}$, which naturally results in a reduction of $\theta$ and an improvement in the effective capacity. We will show next that the proposed caching Algorithm 1 yields an optimal solution to the problem.
**Algorithm 1:** Content caching based on ESNs and the sublinear approach.

1. **Input:** the set of users’ contexts, $\boldsymbol{x}_{t}$ and $\boldsymbol{m}_{t}$.
2. **Initialize:** $\boldsymbol{W}_j^{\alpha,in}$, $\boldsymbol{W}_j^{\alpha}$, $\boldsymbol{W}_j^{\alpha,out}$, $\boldsymbol{W}_j^{in}$, $\boldsymbol{W}_j$, $\boldsymbol{W}_j^{out}$, $\boldsymbol{y}_{j}=0$, $\boldsymbol{s}_{j}=0$, $\epsilon$, and $\delta$.
3. Update the output weight matrix $\boldsymbol{W}_{T_\tau+1,j}^{out}$ based on (\[eq:w2\]).
4. Obtain the prediction $\boldsymbol{s}_{T_\tau+1,j}$ based on (\[eq:y2\]).
5. Obtain the prediction $\boldsymbol{y}_{\tau+1,j}$ based on (\[eq:es\]).
6. Update the output weight matrix $\boldsymbol{W}_{\tau+1,j}^{\alpha,out}$ based on (\[eq:updatew\]).
7. Determine which content to cache at each RRH based on (\[eq:Cr\]).
8. Cluster the RRHs.
9. Calculate the request percentage of each content based on (\[eq:sublinear\]).
10. Determine which content to cache at the cloud based on (\[eq:Cc\]).

For the purpose of evaluating the performance of the proposed Algorithm 1, we assume that the ESNs can predict the content request distribution and mobility of each user accurately, which means that the BBUs have complete knowledge of the location and content request distribution of each user. Consequently, we can state the following theorem: \[theorem2\] *Given accurate ESN predictions of the mobility and content request distribution of each user, the proposed Algorithm \[al2\] reaches an optimal solution of the optimization problem in (\[eq:sum\])*. See Appendix D.

Complexity and Overhead of the Proposed Approaches
--------------------------------------------------

In terms of complexity, for each RRH cache replacement action, the cloud needs to run $U$ ESN algorithms to predict the users’ content request distributions. For each cloud cache update, the cloud needs to run $U$ ESN algorithms to predict the users’ mobility patterns.
During each time duration for cached content replacement, $T_\tau$, the contents stored in an RRH cache are replaced $\frac{T_\tau}{\tau}$ times. Therefore, the complexity of Algorithm 1 is $O(U \times \frac{T_\tau}{\tau})$. However, Algorithm 1 is a learning algorithm that builds a relationship between the users’ contexts and their behavior. Once the ESN-based algorithm has built this relationship, it can directly output the prediction of the users’ behavior without any additional training. Hence, the running time of the approach decreases once the training process is completed. Next, we investigate the computational overhead of Algorithm 1, which is summarized as follows:

a) [*Overhead of user information transmission between the users and the content server:*]{} The BBUs collect all of the users’ behavior information, and the content server handles the users’ content requests at each time slot. However, this transmission incurs no notable overhead because, in each time slot, the BBUs need only input the users’ information to the ESNs, and the cloud has to deal with only one content request per user.

b) [*Overhead of content transmission for RRH cache and cloud cache updates:*]{} The content servers need to transmit the most popular contents to the RRHs and BBUs. However, the contents stored in the RRH and cloud caches are all updated during off-peak hours. At such off-peak hours, the fronthaul and backhaul traffic loads are already low and, thus, the cache updates will not significantly increase the traffic load of the content transmission for caching.

c) [*Overhead of the proposed algorithm:*]{} As mentioned earlier, the total complexity of Algorithm 1 is $O(U \times \frac{T_\tau}{\tau})$. Since the entire algorithm is implemented at the BBUs, which have high-performance processing capacity, the overhead of Algorithm 1 will not be significant.
Simulation Results
==================

For our simulations, the content request data that the ESN uses to train and predict the content request distribution is obtained from *Youku* of the *China network video index*[^1]. The detailed parameters are listed in the table below. The mobility data is measured from real data generated at the *Beijing University of Posts and Telecommunications*. [Note that the content request data and mobility data sets are independent. To map the data, we record the students’ locations during each day and arbitrarily map each student’s locations to one user’s content request activity from Youku.]{} The results are compared to three schemes [@Content]: a) optimal caching strategy with complete information, b) random caching with clustering, and c) random caching without clustering. All statistical results are averaged over 5000 independent runs. Note that the benchmark algorithm a) is based on the assumption that the CRAN already knows the entire content request distribution and mobility pattern. Hereinafter, we use the term “error" to refer to the total deviation of the estimated content request distribution from the real distribution.

  **Parameters**   **Values**   **Parameters**     **Values**
  ---------------- ------------ ------------------ ------------
  $r$              1000 m       $P$                20 dBm
  $R$              1000         $\beta$            4
  $B$              1 MHz        $\lambda^\alpha$   0.01
  $L$              10 Mbit      $S$                25
  $\theta _s^O$    0.05         $T$                300
  $N_w$            1000         $\sigma ^2$        -95 dBm
  $C_c$,$C_r$      6,3          $D_{\max }$        1
  $K$              7            $N_s$              10
  $\delta$         0.05         $\epsilon$         0.05
  $H$              3            $\lambda$          0.5
  $T_\tau$         30           $\chi$             0.85

  : SYSTEM PARAMETERS

![\[Fig3\] Error as the number of iterations varies.](figure1.eps){width="7cm"}

Fig. \[Fig3\] shows how the error of the ESN-based estimation changes as the number of iterations varies. In Fig. \[Fig3\], we can see that, as the number of iterations increases, the error of the ESN-based estimation decreases. Fig.
\[Fig3\] also shows that the ESN approach needs fewer than 50 iterations to estimate the content request distribution of each user. This is due to the fact that ESNs need to train only the output weight matrix. Fig. \[Fig3\] also shows that the learning rates $\lambda^\alpha=0.01,0.001$, and $0.03$ result, respectively, in errors of $0.2\%, 0.1\%$, and $0.43\%$. Clearly, adjusting the learning rate at each iteration can affect the accuracy of the ESNs’ predictions. Figs. \[fig4\] and \[fig5\] evaluate the accuracy of using the ESN to predict the users’ mobility patterns. First, in Fig. \[fig4\], we show how the ESN predicts the users’ mobility patterns as the size of the training dataset $N_{tr}$ (the number of training samples used to train $\boldsymbol{W}^{out}$) varies. The training data consists of the users’ contexts during a period. In Fig. \[fig4\], we can see that, as the size of the training dataset increases, the proposed ESN approach achieves a higher prediction accuracy. Fig. \[fig5\] shows how the ESN predicts the users’ mobility as the number of ESN reservoir units $W$ varies. In Fig. \[fig5\], we can see that the proposed ESN approach achieves a higher prediction accuracy as the number of reservoir units $W$ increases. This is because $W$ directly affects the ESN memory capacity, which determines the number of user positions that the ESN algorithm can record. Therefore, we can conclude that choosing an appropriate size of the training dataset and an appropriate number of reservoir units are two important factors that affect the ESN prediction accuracy for the users’ mobility patterns. Fig. \[learningfigure\] shows how the prediction accuracy for a user in a period changes as the number of hidden units varies. Here, the hidden units of the ESN represent the size of the reservoir. From Fig.
\[learningfigure\], we can see that the prediction of the ESN-based learning algorithm is more accurate than that of the deep learning algorithm, and that this accuracy improves as the number of hidden units increases. In particular, the ESN-based algorithm can yield up to 14.7% improvement in prediction accuracy compared with a deep learning algorithm. This is due to the fact that the ESN-based algorithm builds a relationship between the prediction and the positions that the user has visited, which is different from the deep learning algorithm that just records the properties of each user’s locations. Therefore, the ESN-based algorithm can predict the users’ mobility patterns more accurately.

![\[learningfigure\]Prediction accuracy of mobility patterns as the number of hidden units varies. [Here, we use the deep learning algorithm in [@nguyen2012extracting] as a benchmark. The total number of hidden units in deep learning is the same as the number of reservoir units in the ESN.]{}](learningfigure.eps){width="7cm"}

![\[Fig6\] Error and failure as the confidence and allowable error exponents vary.](sublinear.eps){width="7cm"}

In Fig. \[Fig6\], we show how the failure and error of the content request distribution of each user vary with the confidence exponent $\delta$ and the allowable error exponent $\epsilon$. Here, the error corresponds to the difference between the result of the sublinear algorithm and the actual content request distribution, while the failure pertains to the probability that the result of our sublinear approach exceeds the allowable error $\epsilon$. From Fig. \[Fig6\], we can see that, as $\delta$ and $\epsilon$ increase, the failure probability and the error of the content request distribution also increase. This is due to the fact that, as $\delta$ and $\epsilon$ increase, the number of content request distribution samples that the sublinear approach uses to calculate the content percentages decreases. Fig.
\[Fig6\] also shows that, even for a fixed $\epsilon$, the error increases as $\delta$ increases. This is because, as $\delta$ changes, the number of content request distribution samples also changes, which increases the error.

![\[Fig8\] Sum effective capacity vs. the number of storage units of the cloud cache.](figure8.eps){width="7cm"}

Fig. \[Fig8\] shows how the sum of the effective capacities of all users in a period changes as the number of storage units of the cloud cache varies. In Fig. \[Fig8\], we can see that, as the number of storage units increases, the effective capacities of all considered algorithms increase, since having more storage allows offloading more contents from the content server, which, in turn, increases the effective capacity of each content. From Fig. \[Fig8\], we can also see that the proposed algorithm can yield up to $27.8\%$ and $30.7\%$ improvements in the sum effective capacity compared with random caching with clustering and random caching without clustering, respectively, for the case with one cloud cache storage unit. These gains are due to the fact that the proposed approach stores the contents based on the ranking of the average updated content request percentage of all users, as computed by the proposed ESN and sublinear algorithms.

![\[Fig10\] Sum effective capacity vs. the number of RRHs.](figure10.eps){width="7cm"}

Fig. \[Fig10\] shows how the sum of the effective capacities of all users in a period changes as the number of RRHs varies. In Fig. \[Fig10\], we can see that, as the number of RRHs increases, the effective capacities of all algorithms increase, since having more RRHs reduces the distance from each user to its associated RRH. In Fig. \[Fig10\], we can also see that the proposed approach can yield up to 21.6% and 24.4% improvements in effective capacity compared to random caching with clustering and random caching without clustering, respectively, for a network with 512 RRHs.
Fig. \[Fig10\] also shows that the sum effective capacity of the proposed algorithm is only $0.7\%$ below that of the optimal caching scheme, which has complete knowledge of the content request distributions, mobility patterns, and real content request percentages. Clearly, the proposed algorithm reduces the running time by up to $34\%$ and needs only 600 content request samples to compute the content percentages, while sacrificing only $0.7\%$ of the network performance.

![\[Fig11\] Sum effective capacity vs. the number of users.](figure11.eps){width="7cm"}

Fig. \[Fig11\] shows how the sum of the effective capacities of all users in a period changes as the number of users varies. In Fig. \[Fig11\], we can see that, as the number of users increases, the effective capacities of all considered algorithms increase, since caching can offload more users from the backhaul and fronthaul links as the number of users grows. In Fig. \[Fig11\], we can also see that the proposed approach can yield up to 21.4% and 25% improvements in effective capacity compared, respectively, with random caching with clustering and random caching without clustering for a network with 960 users. This implies that the proposed ESN-based algorithm can effectively use the predictions of the ESNs to determine which content to cache. In Fig. \[Fig11\], we can also see that the gap between the proposed algorithm and optimal caching increases slightly as the number of users varies. This is due to the fact that the number of content request distributions that the proposed algorithm uses to compute the content percentages remains fixed while the total number of content request distributions increases, which affects the accuracy of the sublinear approximation.

Conclusion
==========

In this paper, we have proposed a novel caching framework for offloading the backhaul and fronthaul loads in a CRAN system. We have formulated an optimization problem that seeks to maximize the users’ average effective capacities.
To solve this problem, we have developed a novel algorithm that combines the machine learning tools of echo state networks with a sublinear caching approach. The proposed algorithm enables the BBUs to predict the content request distribution of each user with limited information on the network state and user context. The proposed algorithm also enables the BBUs to calculate the content request percentages using only a few samples. Simulation results have shown that the proposed approach yields significant performance gains in terms of sum effective capacity compared to conventional approaches.

Appendix {#appendix .unnumbered}
========

Proof of Proposition \[pro1\] {#Ap:a}
-----------------------------

Based on (\[eq:thetaD\]), the relationship between $\theta_{i,n}^S$ and $\theta_{i,n}^O$ is: $$\label{eq:1theta} \setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} \frac{1}{{\theta _{i,n}^S}} = \frac{1}{{\theta _{i,n}^O}} - \frac{{{N_h}L}/{v}}{{ - \log {\ensuremath{\operatorname{Pr}}}\left( {D > {D_{\max }}} \right)}}.$$ Substituting (\[eq:PrD\]) into (\[eq:1theta\]), we obtain: $$\setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} \frac{1}{{\theta _{i,n}^S}} = \frac{1}{{\theta _{i,n}^O}}\left(1 - \frac{{{N_h}L}}{{{D_{\max }}v}}\right).$$ Based on Proposition 5 in [@Effectivecapacity], for transmission link a), we can take the backhaul transmission rate $v_{BU}$ as the external rate; consequently, the link hops $N_h$ consist of the link from the BBUs to the RRHs and the link from the RRHs to the users ($N_h=2$). This completes the proof for link a). For links b) and d), we ignore the delay and QoS losses of the transmission rates from the caches to the BBUs and RRHs; consequently, the numbers of link hops for b) and d) are given as $N_h=1$ and $N_h=2$, respectively. The remaining proofs follow in the same way.
Proof of Theorem \[theorem1\] {#ap:b}
-----------------------------

Given an input stream $\boldsymbol{m}(\ldots t)=\ldots m_{t-1}m_t$, where $m_t$ follows the same distribution as $m_{t-W}$, we substitute the input stream $\boldsymbol{m}(\ldots t)$ into (\[eq:reservoirstate\]) and obtain the states of the reservoir units at time $t$: $$\small\nonumber \setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} \begin{split} {v_{t,1}} =\; & w_1^{in}{m_t} + w_W^{in}{m_{t - 1}}w + \cdots + w_2^{in}{m_{t - (W -1)}}{w^{W - 1}} + \cdots\\ &+ w_1^{in}{m_{t - W}}{w^W} + \cdots + w_2^{in}{m_{t - (2W - 1)}}{w^{2W - 1}}\\ &+ w_1^{in}{m_{t - 2W}}{w^{2W}} + \cdots\\ {v_{t,2}} =\; & w_2^{in}{m_t} + w_1^{in}{m_{t - 1}}w + \cdots + w_{3}^{in}{m_{t - (W -1)}}{w^{W - 1}} + \cdots\\ &+ w_2^{in}{m_{t - W}}{w^W} + \cdots + w_3^{in}{m_{t - (2W - 1)}}{w^{2W - 1}}\\ &+ w_2^{in}{m_{t - 2W}}{w^{2W}} + \cdots\\ \end{split}$$ Here, we note that the ESN having the ability to record the location that the user visited at time $t-k$ means that the ESN can output this location at time $t$. Therefore, in order to output $m_{t-k}$ at time $t$, the optimal output matrix $\boldsymbol{W}_j^{out}$ is given as [@Short]: $$\setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} {\boldsymbol{W}_j^{out}} =\left(\mathbb{E}{\left[ {\boldsymbol{v}_{t,j}{\boldsymbol{v}_{t,j}^{\rm T}}}\right]^{ - 1}}\mathbb{E}\left[ {\boldsymbol{v}_{t,j}m_{t - k} } \right]\right)^{\rm T},$$ where $\mathbb{E}{\left[ {\boldsymbol{v}_{t,j}{\boldsymbol{v}_{t,j}^{\rm T}}}\right]}$ is the covariance matrix of $\boldsymbol{v}_{t,j}$.
Since the input stream is periodic with zero mean, each element $\mathbb{E}{\left[ {v_{t,i}{v_{t,j}}}\right]}$ of this matrix will be: $$\setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} \begin{split} \mathbb{E}\left[ {{v_{t,i}}{v_{t,j}}}\right]& = w_i^{in}w_j^{in}\sigma _t^2 + w_{i-1(\bmod)W}^{in}w_{j-1(\bmod)W}^{in}\sigma _{t - 1}^2{w^2} \\ &\;\;\;\;\;+ \cdots + w_i^{in}w_j^{in}\sigma _{t - W}^2{w^{2W}} + \cdots \\ &= w_i^{in}w_j^{in}\sigma _t^2\sum\limits_{j = 0}^\infty {{w^{2Wj}}} + \cdots \\ &+w_{i-(W-1)}^{in}w_{j-(W-1)}^{in}\sigma _{t - (W-1)}^2\sum\limits_{j = 0}^\infty {{w^{2Wj + 2(W - 1)}}} \\ &={\boldsymbol{\Omega} _i}\boldsymbol{\Gamma} {\boldsymbol{\Omega}_j^{\rm T}}, \end{split}$$ where $$\small\nonumber \setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} \boldsymbol{\Gamma}=\left[\!\! {\begin{array}{*{20}{c}} {\sigma _t^2\sum\limits_{j = 0}^\infty {{\mathbb{E}\left[w^{2Wj}\right]}} }&0&0\\ 0& \ddots &0\\ 0&0&{\sigma _{t - (W - 1)}^2\sum\limits_{j = 0}^\infty {{\mathbb{E}[w^{2Wj + 2(W - 1)}]}} } \end{array}} \!\right],$$ $\boldsymbol{\Omega}_j$ indicates row $j$ of $\boldsymbol{\Omega}$, $v_{t,j}$ is an element of $\boldsymbol{v}_{t,j}$, and $\sigma _{t -k}^2$ is the variance of $m_{t-k}$. Consequently, $\mathbb{E}{\left[ {\boldsymbol{v}_{t,j}{\boldsymbol{v}_{t,j}^{\rm T}}}\right]}={\boldsymbol{\Omega}}\boldsymbol{\Gamma} {\boldsymbol{\Omega}^{\rm T}}$, $\mathbb{E}\left[ {\boldsymbol{v}_{t,j}m_{t - k} }\right]=\mathbb{E}\left[{w^k}\right]\sigma _{t - k}^2{\boldsymbol{\Omega}_{k+1(\bmod)W}^{\rm T}}$ and ${\boldsymbol{W}^{out}} = \mathbb{E}\left[{w^k}\right]\sigma _{t - k}^2\boldsymbol{\Omega} _{k + 1(\bmod)W}{({\boldsymbol{\Omega}}\boldsymbol{\Gamma} \boldsymbol{\Omega}^{\rm T} )^{ - 1}}$.
Based on these formulations and (\[eq:y2\]), the ESN output at time $t$ will be ${s_{t,j}} = \boldsymbol{W}^{out}{\boldsymbol{v}_{t,j}}=\mathbb{E}\left[{w^k}\right]\sigma _{t - k}^2\boldsymbol{\Omega} _{k + 1(\bmod)W}{\left({\boldsymbol{\Omega}}\boldsymbol{\Gamma} \boldsymbol{\Omega}^{\rm T} \right)^{ - 1}}{\boldsymbol{v}_{t,j}}$. Consequently, the covariance of ESN output $s_{t,j}$ with the actual input $m_{t-k,j}$ is given as: $$\small\nonumber \setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} \begin{split} &\textrm{Cov}\left({s_{t,j}},{m_{t - k,j}}\right) \\ &=\mathbb{E}\left[{w^k}\right]\sigma _{t - k}^2\boldsymbol{\Omega} _{k + 1(\bmod)W}{\left({\boldsymbol{\Omega}}\boldsymbol{\Gamma} \!\!\boldsymbol{\Omega}^{\rm T} \right)^{ - 1}}\mathbb{E}\left[{\boldsymbol{v}_{t,j}},{m_{t - k}}\right],\\ &=\mathbb{E}\left[{w^k}\right]^2\!\!\sigma _{t - k}^4\!\left(\!\boldsymbol{\Omega} _{k + 1(\!\bmod\!)W}{\left(\!\boldsymbol{\Omega}^{\rm T}\!\right)^{\!-1}}\right)\!\boldsymbol{\Gamma}^{-1}\!\!\left(\boldsymbol{\Omega}^{-1}\boldsymbol{\Omega} _{k + 1(\!\bmod\!)W}^{\rm T}\!\right),\\ &\mathop = \limits^{(a)} \mathbb{E}\left[{w^k}\right]^2\sigma _{t - k}^2{\left(\sum\limits_{j = 0}^\infty {\mathbb{E}\left[{w^{2Wj + 2k(\bmod)W}}\right]} \right)^{ - 1}}, \end{split}$$ where $(a)$ follows from the fact that ${\boldsymbol{\Omega} _{k + 1(\bmod )W}} = \boldsymbol{e}_{k + 1}^{\rm T}\boldsymbol{\Omega}^{\rm T}$ and ${\boldsymbol{e}_{k + 1}} = (0, \ldots ,{1_{k + 1}},0 \ldots 0)^{\rm T} \in {\mathbb{R}^{W}}$. Therefore, the memory capacity of this ESN is given as [@Minimum]: $$\small\nonumber \setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} \begin{split} M &\!= \sum\limits_{k =0}^\infty {\mathbb{E}\!{{\left[{w^k}\right]}^2}{{\!\left(\sum\limits_{j = 0}^\infty {\mathbb{E}\!\left[{w^{2Wj + 2k(\!\bmod\!)W}}\right]}\! 
\right)^{\!\!-1}}}}\!\!\!\!\!\!-\!{{\left(\sum\limits_{j = 0}^\infty {\mathbb{E}\!\left[{w^{2Wj}}\right]} \!\right)^{\!\!-1}}}\!\!\!\!,\\ &\!\!\!\!\!\!\!=\sum\limits_{k = 0}^{W - 1} \!{\mathbb{E}{{\left[{w^k}\right]}^2}{{\left(\sum\limits_{j = 0}^\infty \! {\mathbb{E}\left[{w^{2Wj + 2k}}\right]} \right)^{ - 1}}}}\!\!\!\!\\ &\!\!\!\!\!\!\!+\!\!\!\!\!\sum\limits_{k = W}^{2W - 1} \!\!{\mathbb{E}{{\left[{w^k}\right]}^2}{{\!\left(\sum\limits_{j = 0}^\infty{\mathbb{E}\!\left[{w^{2Wj + 2k(\bmod)W}}\right]} \!\right)^{ \!\!\!- 1}}}} \!\!\!\!\!+ \! \cdots\!-\!{{\left(\sum\limits_{j = 0}^\infty \! {\mathbb{E}\!\left[{w^{2Wj}}\right]}\!\!\right )^{\!\!-1}}}\!\!\!\!\!,\\ &\!\!\!\!\!\!\!=\!\!\sum\limits_{k = 0}^{W - 1} {{{\!\!\left(\sum\limits_{j = 0}^\infty {\mathbb{E}\!\!\left[{w^{2Wj + 2k}}\!\right]}\!\! \right)^{ \!\!\!- 1}}}\!\!\sum\limits_{j = 0}^\infty {\mathbb{E}{{\left[{w^{Wj + k}}\right]}^2}} }\! \!-\!{{\left(\sum\limits_{j = 0}^\infty {\mathbb{E}\left[{w^{2Wj}}\right]} \right)^{\!\!-1}}}\!\!. \end{split}$$ This completes the proof. Proof of Proposition \[pro3\] {#Ap:c} ----------------------------- For 1), we first use the distribution that $P\left(w = a \right) = 0.5$ and $P\left(w = -a\right) = 0.5$ to formulate the memory capacity, where $a\in (0,1)$. Then, we discuss the upper bound. Based on the distribution property of $w$, we can obtain that $\mathbb{E}\left[w^{2W}\right]=a^{2W}$ and $\mathbb{E}\left[w^{2W+1}\right]=0$. The memory capacity is given as: $$\small\label{eq:M} \setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} \begin{split} M &\!=\! \sum\limits_{k = 0}^{W - 1} {{{\!\!\left(\sum\limits_{j = 0}^\infty {\mathbb{E}\!\left[{w^{2Wj + 2k}}\right]}\! \right)^{\!\!\! 
- 1}}}\!\!\!\sum\limits_{j = 0}^\infty {\mathbb{E}{{\left[{w^{Wj + k}}\right]}^2}} }\!\!\!-\!{{\left(\sum\limits_{j = 0}^\infty {\mathbb{E}\!\left[{w^{2Wj}}\right]} \!\!\right)^{\!\!\!-1}}}\!\!,\\ &\!\!\!\!\!\!\!\!=\!\!\!\sum\limits_{k = 0}^{W - 1} {{{\!\!\left(\sum\limits_{j = 0}^\infty {{a^{2Wj + 2k}}} \!\right)^{\!\!\! - 1}}}\!\!\sum\limits_{j = 0}^\infty {{{{a^{2Wj + 2k}}}}} }\! \!-\!{{\left(\sum\limits_{j = 0}^\infty {{a^{2Wj}}} \!\right)^{\!\!\!-1}}}\!\!\!, \left(\!\text{$k$ even}\right),\\ &\!\!\!\!\!\!\!\!=\!\sum\limits_{k = 0}^{\left\lfloor {\frac{W}{2}} \right\rfloor } 1 - \left(1 - {a^{2W}}\right)={\left\lfloor {\frac{W}{2}} \right\rfloor }+a^{2W} <{\left\lfloor {\frac{W}{2}} \right\rfloor }+1. \end{split}$$ From (\[eq:M\]), we can also see that the memory capacity $M$ increases as both the moment $\mathbb{E}\left[w^k\right]$ and $a$ increase, $k \in \mathbb{Z}^{+}$. This completes the proof of 1). For case 2), we can use a similar method to derive the memory capacity, exploiting the distribution $P\left(w = a\right) = 1$ and, consequently, $\mathbb{E}\left[w^{k}\right]=a^k$, thus yielding $M=W-1+a^{2W}$. Since $a \in (0,1)$, we have $M<W$, which is also consistent with the existing work [@Short]. Proof of Theorem \[theorem2\] {#ap:d} ----------------------------- The problem based on (\[eq:sum\]) for each time slot can be rewritten as: $$\label{proofEs} \setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} {\bar E} =\frac{1}{{{T }}}\sum\limits_{k = 1}^{{T }} \sum\limits_{i \in \mathcal{U}} { {E_{k,i}\left(\theta _{{i,n_{ik},k}}^j\right)} },$$ where $j \in \left\{ {O,A,S,G} \right\}$. Denoting by $\boldsymbol{p}_{k,i}=\left[p_{ki1},p_{ki2},\dots,p_{kiN}\right]$ the content request distribution of user $i$ at time slot $k$, the average effective capacity of the users is given by: $$\small\label{proofEsn} \begin{split} \setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} {\widetilde E_{k}} &=\!\!
\sum\limits_{i \in \mathcal{U}} \!\!\left(\sum\limits_{n_{ik} \in {\mathcal{C}_{i}}} \!{{\!p_{kin_{ik}}}} E_{k,i}\!\left(\theta _{i,n_{ik},k}^O\right) \!\!+ \!\! \!\! \!\!\sum\limits_{n_{ik} \in {{{\mathcal{C}_c}} \mathord{\left/ {\vphantom {{{C_c}} {{C_k}}}} \right. \kern-\nulldelimiterspace} {{\mathcal{C}_i}}}} \!\!\!\!\!{{\!p_{kin_{ik}}}} E_{k,i}\!\left(\theta _{i,n_{ik},k}^A\right)\right) \!\! \\&+\sum\limits_{i \in \mathcal{U}} \!\!\left(\! \sum\limits_{n_{ik} \in \mathcal{N}'}\!\!{{p_{kin_{ik}}}} E_{k,i}\!\left(\theta _{i,n_{ik},k}^S\right) \!\!+\!\!\!\! \!\!\sum\limits_{n_{ik} \in {{\mathcal{C}'_i}}} \!{{p_{kin_{ik}}}} E_{k,i}\!\left(\theta _{i,n_{ik},k}^G\right)\!\right) , \end{split}$$ where $\mathcal{C}_i$ is the set of contents in the RRH cache associated with user $i$, and $\mathcal{N}'$ and $\mathcal{C}'_i$ represent, respectively, the contents that the BBUs arrange to transmit from the content server and from the remote RRH caches. Since the transmissions from the content server and the remote RRH caches can be scheduled by the BBUs based on Proposition \[pro1\], we only need to focus on the transmissions from the cloud cache and the RRH caches to the users, which results in the average effective capacity of the users at time slot $k$ as follows: $$\small\label{proofEsnA} \setlength{\abovedisplayskip}{5 pt} \setlength{\belowdisplayskip}{5 pt} \begin{split} {\widetilde E_{k}} &=\sum\limits_{i \in \mathcal{U}} \sum\limits_{n_{ik} \in {\mathcal{C}_{i}}} {{p_{kin_{ik}}}} E_{k,i}\left(\theta _{i,n_{ik},k}^O\right) \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+\sum\limits_{i \in \mathcal{U}} \sum\limits_{n_{ik} \in {{{\mathcal{C}_c}} \mathord{\left/ {\vphantom {{{C_c}} {{C_k}}}} \right.
\kern-\nulldelimiterspace} {{\mathcal{C}_i}}}} {{p_{kin_{ik}}}} E_{k,i}\left(\theta _{i,n_{ik},k}^A\right)+F,\\ &= { \sum\limits_{r \in \mathcal{ R}}\sum\limits_{i \in \mathcal{U}_r}{\sum\limits_{n_{ik} \in {\mathcal{C}_{r}}} {{p_{kin_{ik}}}} E_{k,i}\left(\theta _{i,n_{ik},k}^O\right)} } \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+\sum\limits_{i \in \mathcal{U}} \sum\limits_{n_{ik} \in {\mathcal{C}_c}} {{{p_{kin_{ik}}}E_{k,i}\left(\theta _{i,n_{ik},k}^A\right)} } +F, \end{split}$$ where $$\small\nonumber \!F\!=\!\sum\limits_{i \in \mathcal{U}}\!\sum\limits_{n_{ik} \in \mathcal{N}'}\!\! {{\!\!p_{kin_{ik}}}} E_{k,i}(\theta _{i,n_{ik},k}^S)+\! \!\sum\limits_{i \in \mathcal{U}}\sum\limits_{n_{ik} \in {{\mathcal{C}'_i}}} \!\!{{\!p_{kin_{ik}}}} E_{k,i}(\theta _{i,n_{ik},k}^G).$$ Since $E_{k,i}(\theta _{i,n_{ik},k}^O)$ depends only on $\theta _{i,n_{ik},k}^O$, we can consider it as a constant during time slot $k$ and, consequently, we only need to optimize ${\sum\limits_{i \in \mathcal{U}_r}\! {\sum\limits_{n_{ik} \in {\mathcal{C}_{r}}} \!\!\!{{p_{kin_{ik}}}} E_{k,i}(\theta _{i,n_{ik},k}^O)} }$ for each RRH. Therefore, we can select the content that has the maximal value of ${\sum\limits_{i \in \mathcal{U}_r}\! {{{p_{kin_{ik}}}} E_{k,i}(\theta _{i,n_{ik},k}^O)} }$, which corresponds to the proposed RRH caching method in Section \[al:sub\]. Since the contents that are stored in the cloud cache are updated during a period $T$, the optimization of the cloud cache based on (\[proofEs\]) and (\[proofEsnA\]) is given as: $$\setlength{\abovedisplayskip}{4 pt} \setlength{\belowdisplayskip}{4 pt} E_{c}=\max\frac{1}{{{T }}}\sum\limits_{k = 1}^{{T }}\sum\limits_{i \in \mathcal{U}}\sum\limits_{n_{ik} \in {{{\mathcal{C}_c}} \mathord{\left/ {\vphantom {{{C_c}} {{C_k}}}} \right. \kern-\nulldelimiterspace} {{\mathcal{C}_i}}}} {{{p_{kin_{ik}}}E_{k,i}(\theta _{i,n_{ik},k}^A)} }.$$ Here, the effective capacity is averaged over the transmission of different contents.
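The argument above amounts to a greedy rule: each RRH caches the content maximizing the request-weighted sum of effective capacities over its associated users. A minimal sketch (hypothetical request probabilities and capacity values; the time index $k$ is dropped for readability):

```python
# Greedy RRH content selection: pick argmax over contents n of
# sum over associated users i of p[i][n] * E[i][n].
def select_content(users, contents, p, E):
    """p[i][n]: request probability of content n by user i;
    E[i][n]: effective capacity of serving content n to user i."""
    return max(contents, key=lambda n: sum(p[i][n] * E[i][n] for i in users))

# Hypothetical example: two users associated with one RRH, three contents.
p = {0: {"a": 0.5, "b": 0.3, "c": 0.2},      # request probabilities
     1: {"a": 0.1, "b": 0.6, "c": 0.3}}
E = {0: {"a": 2.0, "b": 1.0, "c": 1.0},      # effective capacities
     1: {"a": 1.0, "b": 2.0, "c": 1.0}}
best = select_content([0, 1], ["a", "b", "c"], p, E)
# "b" wins: 0.3*1.0 + 0.6*2.0 = 1.5, versus 1.1 for "a" and 0.5 for "c".
```

The cloud cache optimization works analogously, with the sum taken over all users and the candidate set restricted to contents not already cached at the RRHs.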
After obtaining the updated content request distribution of each user, we can use the same method to prove that the proposed algorithm reaches the optimal performance. [^1]: The data is available at <http://index.youku.com/>.
--- abstract: 'Parametric Markov chains have been introduced as a model for families of stochastic systems that rely on the same graph structure, but differ in the concrete transition probabilities. The latter are specified by polynomial constraints for the parameters. Among the tasks typically addressed in the analysis of parametric Markov chains are (1) the computation of closed-form solutions for reachability probabilities and other quantitative measures and (2) finding symbolic representations of the set of parameter valuations for which a given temporal logic formula holds as well as (3) the decision variant of (2) that asks whether there exists a parameter valuation where a temporal logic formula holds. Our contribution to (1) is to show that existing implementations for computing rational functions for reachability probabilities or expected costs in parametric Markov chains can be improved by using fraction-free Gaussian elimination, a long-known technique for linear equation systems with parametric coefficients. Our contribution to (2) and (3) is a complexity-theoretic discussion of the model checking problem for parametric Markov chains and probabilistic computation tree logic (PCTL) formulas. We present an exponential-time algorithm for (2) and a PSPACE upper bound for (3). Moreover, we identify fragments of PCTL and subclasses of parametric Markov chains where (1) and (3) are solvable in polynomial time and establish NP-hardness for other PCTL fragments.' author: - Lisa Hutschenreiter - Christel Baier - Joachim Klein bibliography: - 'lit.bib' title: 'Parametric Markov Chains: PCTL Complexity and Fraction-free Gaussian Elimination[^1] ' --- Introduction ============ Finite-state Markovian models are widely used as an operational model for the quantitative analysis of systems with probabilistic behaviour. In many cases, only estimates of the transition probabilities are available.
This, for instance, applies to fault-tolerant systems where the transition probabilities are derived from error models obtained using statistical methods. Other examples are systems operating with resource-management protocols that depend on stochastic assumptions on the future workload, or cyber-physical systems where the interaction with its environment is represented stochastically. Furthermore, often the transition probabilities of Markovian models depend on configurable system parameters that can be adjusted at design-time. The task of the designer is to find a parameter setting that is optimal with respect to a given objective. This motivated the investigation of *interval Markov chains* (IMCs) [@JonLar91] specifying intervals for the transition probabilities (rather than concrete values). More general is the model of *parametric Markov chains* (pMCs), which has been introduced independently by Daws [@Daws05] and Lanotte et al. [@LanMagSchTroina07], where the transition probabilities are given by polynomials with rational coefficients over a fixed set of real-valued parameters $x_1,\ldots,x_k$. These concepts can be further generalized to accommodate rational functions, i.e., quotients of polynomials, as transition probabilities (see, e.g., [@HHZ-STTT11]). It is well-known that the probabilities $p_s$ for reachability conditions $\Diamond {\mathit{Goal}}$ in parametric Markov chains with a finite state space $S$ can be characterized as the unique solution of a linear equation system $A \cdot p = b$ where $p=(p_s)_{s\in S}$ is the solution vector, and $A = A(x_1,\ldots,x_k)$ is a matrix where the coefficients are rational functions. Likewise, $b= b(x_1,\ldots,x_k)$ is a vector whose coefficients are rational functions. Note that it is no limitation to assume that the entries in $A$ and $b$ are polynomials, as rational function entries can be converted to a common denominator, which can then be removed. 
Now, $A \cdot p = b$ can be viewed as a linear equation system over the field ${\mathbb{Q}}(x_1,\ldots,x_k)$ of rational functions with rational coefficients. As a consequence, the probabilities for reachability conditions are rational functions. This has been observed independently by Daws [@Daws05] and Lanotte et al. [@LanMagSchTroina07] for pMCs. Daws [@Daws05] describes a computation scheme that relies on a state-elimination algorithm inspired by the state-elimination algorithm for computing regular expressions for nondeterministic finite automata. This, however, is essentially the same as Gaussian elimination for matrices over the field of rational functions. As observed by Hahn et al. [@HHZ-STTT11], the naïve implementation of Gaussian elimination for pMCs, which treats the polynomials in $A$ and $b$ as syntactic atoms, leads to a representation of the rational functions $p_s=p_s(x_1,\ldots,x_k)$ as the quotient of extremely (exponentially) large polynomials. In their implementation PARAM [@PARAM-HHWZ10] (as well as in the re-implementation within the tool PRISM [@KwiatkowskaNP11]), the authors of [@HHZ-STTT11] use computer-algebra tools to simplify rational functions in each step of Gaussian elimination by identifying the greatest common divisor (gcd) of the numerator and the denominator polynomial. Together with polynomial-time algorithms for the gcd-computation of univariate polynomials, this approach yields a polynomial-time algorithm for computing the rational functions for reachability probabilities in pMCs with a single parameter. Unfortunately, gcd-computations are known to be expensive for the multivariate case (i.e., $k \geqslant 2$) [@GeCzLa93]. To mitigate the cost of the gcd-computations, the tool Storm [@DJKV-CAV17] successfully uses techniques proposed in [@JansenCVWAKB14] such as caching and the representation of the polynomials in partially factorized form during the elimination steps.
However, it is possible to completely avoid gcd-computations by using *one-step fraction-free Gaussian elimination*. Surprisingly, this has not yet been investigated in the context of pMCs, although it is a well-known technique in mathematics. According to Bareiss [@Bareiss72], this variant of Gaussian elimination probably goes back to Camille Jordan (1838–1922), and has been rediscovered several times since. Like standard Gaussian elimination it relies on the triangulation of the matrix, and finally obtains the solution by back substitution. Applied to matrices over polynomial rings the approach generates matrices with polynomial coefficients (rather than rational functions) and ensures that the degree of the polynomials in all intermediate matrices grows at most linearly. This is achieved by dividing, in each elimination step, by a factor known by construction. Thus, when applied to a pMC with linear expressions for the transition probabilities, the degree of all polynomials in the solution vector is bounded by the number of states. For the univariate case ($k=1$), this yields an alternative polynomial-time algorithm for the computation of the rational functions for reachability probabilities. Analogous statements hold for expectations of random variables that are computable via linear equation systems. This applies to expected accumulated weights until reaching a goal, and to the expected mean payoff. **Contribution.** The purpose of the paper is to study the complexity of the model checking problem for pMCs and probabilistic computation tree logic (PCTL) [@HaJo94], and its extensions by expectation operators for pMCs augmented by weights for its states. In the first part of the paper (Section \[sec:gauss\]), we discuss the use of Bareiss’ one-step fraction-free Gaussian elimination for the computation of reachability probabilities. 
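To make the elimination scheme concrete, here is a minimal sketch of one-step fraction-free (Bareiss) elimination, shown over the integers for brevity; in the pMC setting the same recurrence is applied to polynomial entries, and the exact divisions are polynomial divisions. The example system is our own illustration:

```python
from fractions import Fraction  # exact arithmetic for the back substitution

def bareiss_triangulate(M):
    """One-step fraction-free (Bareiss) elimination (no pivoting; assumes
    nonzero pivots): a[i][j] <- (a[k][k]*a[i][j] - a[i][k]*a[k][j]) // prev,
    where the division by the previous pivot `prev` is exact, so all
    intermediate entries stay in the ring (integers here, polynomials for pMCs).
    """
    a = [row[:] for row in M]
    n = len(a)
    prev = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, len(a[i])):
                q, r = divmod(a[k][k] * a[i][j] - a[i][k] * a[k][j], prev)
                assert r == 0  # exact by construction
                a[i][j] = q
            a[i][k] = 0
        prev = a[k][k]
    return a

# Augmented system [A | b]: 2x+y-z=8, -3x-y+2z=-11, -2x+y+2z=-3.
U = bareiss_triangulate([[2, 1, -1, 8],
                         [-3, -1, 2, -11],
                         [-2, 1, 2, -3]])
n = 3
# The last pivot U[n-1][n-1] equals det(A); back substitution yields x.
x = [Fraction(0)] * n
for i in range(n - 1, -1, -1):
    x[i] = (Fraction(U[i][n]) - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
```

No fractions (and hence no gcd-computations) arise during triangulation; divisions only appear in the final back substitution, mirroring the division-free growth bound discussed above.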
The second part of the paper (Section \[sec:theory\]) presents complexity-theoretic results for the PCTL model checking problem in pMCs. We describe an exponential-time algorithm for computing a symbolic representation of all parameter valuations under which a given PCTL formula holds, and provide a PSPACE upper bound for the decision variants that ask whether a given PCTL formula holds for some or all admissible parameter valuations. The known NP-/coNP-hardness results for IMCs [@SeViAg06; @ChatSenHen08] carry over to the parametric case. We strengthen this result by showing that the existential PCTL model checking problem remains NP-hard even for acyclic pMCs and PCTL formulas with a single probability operator. For the univariate case, we prove NP-completeness for the existential PCTL model checking problem, and identify two fragments of PCTL where the model checking is solvable in polynomial time. The first fragment consists of Boolean combinations of threshold constraints for reachability probabilities, expected accumulated weights until reaching a goal, and expected mean payoffs. The second fragment consists of PCTL formulas in positive normal form with lower probability thresholds interpreted over pMCs satisfying some monotonicity properties. Furthermore, we observe that the model checking problem for PCTL with expectation operators for reasoning about expected costs until reaching a goal is in P for Markov chains where the weights of the states are given as polynomials over a single parameter, when restricting to Boolean combinations of the expectation operators. Proofs and further details on the experiments omitted in the main part due to space constraints can be found in the extended version [@GandALF-extended]. **Related work.** Fraction-free Gaussian elimination is well-known in mathematics, and has been further investigated in various directions for matrices over unique factorization domains (such as polynomial rings), see e.g.
[@McClellan73; @Kannan85; @Sit92; @NakTurWil97]. To the best of our knowledge, fraction-free Gaussian elimination has not yet been studied in the context of parametric Markovian models. Besides the above-mentioned work [@Daws05; @PARAM-HHWZ10; @HHZ-STTT11; @JansenCVWAKB14; @DJJCVBKA-CAV15] on the computation of the rational functions for reachability probabilities in pMCs, [@LanMagSchTroina07] identifies instances where the parameter synthesis problem for pMCs with 1 or 2 parameters and probabilistic reachability constraints is solvable in polynomial time. These rely on the fact that there are closed-form representations of the (complex) zeros for univariate polynomials up to degree 4 and rather strong syntactic characterizations of pMCs. In Section \[sec:gauss\] we will provide an example to illustrate that the number of monomials in the numerators of the rational functions for reachability probabilities can grow exponentially in the number of states. We hereby reveal a flaw in [@LanMagSchTroina07] where the polynomial-time computability of the rational functions for reachability probabilities has been stated even for the multivariate case. [@FilieriGT11] considers an approach for solving the parametric linear equation system obtained from sparse pMCs via Laplace expansion. Model checking problems for IMCs and temporal logics have been studied by several authors. Most in the spirit of our work on the complexity of the PCTL model checking problem for pMCs is the paper [@SeViAg06] which studies the complexity of PCTL model checking in IMCs. Further complexity-theoretic results of the model checking problem for IMCs and temporal logics have been established in [@ChatSenHen08] for omega-PCTL (extending PCTL by Boolean combinations of Büchi and co-Büchi conditions), and in [@BLW-TACAS13] for linear temporal logic (LTL). Our results of the second part can be seen as an extension of the work [@SeViAg06; @ChatSenHen08] for the case of pMCs.
The NP lower bound for the multivariate case and a single threshold constraint for reachability probabilities strengthens the NP-hardness results of [@SeViAg06]. There exist several approaches to obtain regions of parameter valuations of a pMC in which PCTL formulas are satisfied or not, resulting in an approximate covering of the parameter space. PARAM [@HHZ-STTT11; @PARAM-HHWZ10] employs a heuristic, sampling-based approach, while PROPhESY [@DJJCVBKA-CAV15] relies on SMT solving via the existential theory of the reals to determine whether a given formula holds for all valuations in a subregion. For the same problem, [@QuatmannD0JK16] uses a parameter lifting technique that avoids having to solve the parametric equation system by obtaining lower and upper bounds for the values in a given region by a reduction to non-parametric Markov decision processes. Preliminaries {#sec:prelim} ============= The definitions in this section require a general understanding of Markov models, standard model checking, and temporal logics. More details can be found, e.g., in [@Ku95; @BaKa08]. **Discrete-time Markov chain.** A *(discrete-time) Markov chain* (MC) ${\mathcal{M}}$ is a tuple $( S , {s_{\textit{\tiny init}}}, E, P)$ where $S$ is a non-empty, finite set of *states* with the *initial state* ${s_{\textit{\tiny init}}}\in S$, $E \subseteq S \times S$ is a transition relation, and $P \colon S \times S \to [0,1]$ is the *transition probability function* satisfying $P(s,t)=0$ if and only if $(s,t)\notin E$, and $\sum _{t\in S } P(s,t) = 1$ for all $s\in S $ with ${\mathit{Post}}(s){\mathrel{\stackrel{\text{\tiny def}}{=}}}\{t\in S : (s,t)\in E\}$ nonempty. We refer to $G_{{\mathcal{M}}}=(S,E)$ as the *graph* of ${\mathcal{M}}$. A state $s \in S$ in which ${\mathit{Post}}(s) = \varnothing$ is called a *trap (state)* of ${\mathcal{M}}$.
An *infinite path* in ${\mathcal{M}}$ is an infinite sequence $s_0s_1 \ldots \in S^{\omega}$ of states such that $(s_i,s_{i+1})\in E$ for $i\in{\mathbb{N}}$. Analogously, a *finite path* in ${\mathcal{M}}$ is a finite sequence $s_0s_1 \ldots s_m \in S^{*}$ of states in ${\mathcal{M}}$ such that $(s_i,s_{i+1})\in E$ for $i=0,1,\ldots,m{-}1$. A path is called *maximal* if it is infinite or ends in a trap. $\operatorname{Paths}(s)$ denotes the set of all maximal paths in ${\mathcal{M}}$ starting in $s$. Relying on standard techniques, every MC induces a unique probability measure ${{\mathrm{Pr}}}^{{\mathcal{M}}}_s$ on the set of all paths. **Parameters, polynomials, and rational functions.** Let $x_1,\ldots,x_k$ be parameters that can assume any real value, $\overline{x}=(x_1,\ldots,x_k)$. We write ${\mathbb{Q}}[\overline{x}]$ for the *polynomial ring* over the rationals with variables $x_1,\ldots,x_k$. Each $f\in {\mathbb{Q}}[\overline{x}]$ can be written as a sum of monomials, i.e., $f=\sum_{(i_1,\ldots,i_k) \in I} \alpha_{i_1,\ldots,i_k} \cdot x_1^{i_1} \cdot x_2^{i_2} \cdot \ldots \cdot x_k^{i_k}$ where $I$ is a finite subset of ${\mathbb{N}}^k$ and $\alpha_{i_1,\ldots,i_k}\in {\mathbb{Q}}$. If $I$ is empty, or $\alpha_{i_1,\ldots,i_k}=0$ for all tuples $(i_1,\ldots,i_k)\in I$, then $f$ is the *null function*, generally denoted by 0. The *degree* of $f$ is $\operatorname{deg}(f) = \max \bigl\{\, i_1 + \ldots + i_k :\allowbreak (i_1,\ldots,i_k)\in I, \alpha_{i_1,\ldots,i_k}\not= 0\, \bigr\}$ where $\max ( \varnothing ) = 0$. A *linear function* is a function $f \in {\mathbb{Q}}{}[\overline{x}]$ with $\deg(f) \leqslant 1$. A *rational function* is a function of the form $f/g$ with $f,g\in {\mathbb{Q}}[\overline{x}]$, $g \neq 0$. The field of all rational functions is denoted by ${\mathbb{Q}}(\overline{x})$. 
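The sum-of-monomials representation and the degree can be made concrete with a sparse dictionary mapping exponent tuples to coefficients (an illustrative encoding of our own, not a construction from the paper):

```python
from fractions import Fraction

# A polynomial in Q[x1,...,xk] as a dict: exponent tuple (i1,...,ik) -> coeff.
def degree(f):
    """deg(f) = max of i1+...+ik over monomials with nonzero coefficient;
    with the convention max(emptyset) = 0, the null function has degree 0."""
    return max((sum(e) for e, a in f.items() if a != 0), default=0)

# f = 3*x1^2*x2 - x2 + 1/2 over k = 2 variables, so deg(f) = 3.
f = {(2, 1): Fraction(3), (0, 1): Fraction(-1), (0, 0): Fraction(1, 2)}
# g = x1 - 1 is a linear function: deg(g) <= 1.
g = {(1, 0): Fraction(1), (0, 0): Fraction(-1)}
```

The same encoding is the natural data structure for the polynomial transition probability functions of pMCs considered below.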
We write ${\mathit{Constr}}[\overline{x}]$ for the set of all *polynomial constraints* of the form $f \bowtie g$ where $f,g\in {\mathbb{Q}}[\overline{x}]$, and $\bowtie \in \{<,\leqslant, >,\geqslant, = \}$. **Parametric Markov chain.** A *(plain) parametric Markov chain* on $\overline{x}$, pMC for short, is a tuple ${\mathfrak{M}}= ( S , {s_{\textit{\tiny init}}}, E,{\mathbf{P}})$ where $S$, ${s_{\textit{\tiny init}}}$, and $E$ are defined as for MCs, and ${\mathbf{P}}\colon S\times S \to {\mathbb{Q}}(\overline{x})$ is the transition probability function with ${\mathbf{P}}(s,t) = 0$, i.e., the null function, iff $(s,t)\notin E$. Intuitively, a pMC defines the family of Markov chains arising by plugging in concrete values for the parameters. A parameter valuation $\overline{\xi} = (\xi_1,\ldots,\xi_k)\in {\mathbb{R}}^k$ is said to be *admissible* for ${\mathfrak{M}}$ if for each state $s\in S$ we have $\sum_{t\in S} P_{\overline{\xi}}(s,t) =1$ if ${\mathit{Post}}(s)$ nonempty, and $P_{\overline{\xi}}(s,t) >0$ iff $(s,t)\in E$, where $P_{\overline{\xi}}(s,t) = {\mathbf{P}}(s,t)(\overline{\xi})$ for all $(s,t)\in S\times S$. Let $X_{{\mathfrak{M}}}$, or briefly $X$, denote the set of admissible parameter valuations for ${\mathfrak{M}}$. Given $\overline{\xi}\in X$ the Markov chain associated with $\overline{\xi}$ is ${\mathcal{M}}_{\overline{\xi}} = {\mathfrak{M}}(\overline{\xi}) = (S,{s_{\textit{\tiny init}}},E,P_{\overline{\xi}})$. The semantics of the pMC ${\mathfrak{M}}$ is then defined as the family of Markov chains induced by admissible parameter valuations, i.e., $\llbracket {\mathfrak{M}}\rrbracket = \bigl\{\, {\mathfrak{M}}(\overline{\xi}) : \overline{\xi} \in X \,\bigr\} $. 
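As a small worked example (our own, not taken from the paper): consider the pMC with states $s_0, s_1, \mathit{goal}, \mathit{fail}$ and ${\mathbf{P}}(s_0,s_1)=x$, ${\mathbf{P}}(s_0,\mathit{goal})=1-x$, ${\mathbf{P}}(s_1,\mathit{goal})=x$, ${\mathbf{P}}(s_1,\mathit{fail})=1-x$. The reachability probabilities satisfy $p_{s_0} = (1-x) + x\,p_{s_1}$ and $p_{s_1} = x$, hence $p_{s_0} = 1 - x + x^2$ as a rational (here even polynomial) function of $x$. The sketch below checks this at an admissible valuation by solving the instantiated linear system with exact arithmetic:

```python
from fractions import Fraction

def reach_prob(P, transient, goal):
    """Probability of reaching `goal` from transient[0]: solve (I - A) p = b,
    where A is the transient-to-transient part of P and b_s = P(s, goal),
    by exact Gaussian elimination over the rationals (no pivoting)."""
    n = len(transient)
    M = [[Fraction(1 if i == j else 0) - Fraction(P[s].get(t, 0))
          for j, t in enumerate(transient)] + [Fraction(P[s].get(goal, 0))]
         for i, s in enumerate(transient)]
    for k in range(n):                       # forward elimination
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    p = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):           # back substitution
        p[i] = (M[i][n] - sum(M[i][j] * p[j] for j in range(i + 1, n))) / M[i][i]
    return p[0]

x = Fraction(1, 3)                           # an admissible parameter valuation
P = {"s0": {"s1": x, "goal": 1 - x},
     "s1": {"goal": x, "fail": 1 - x}}
p0 = reach_prob(P, ["s0", "s1"], "goal")     # equals 1 - x + x^2 = 7/9 here
```

Evaluating the rational function at the valuation and solving the instantiated system agree, which is exactly the correspondence exploited throughout the paper.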
An *augmented pMC* is a tuple ${\mathfrak{M}}= ( S, {s_{\textit{\tiny init}}}, E, {\mathbf{P}}, {\mathfrak{C}})$ where $S$, ${s_{\textit{\tiny init}}}$, $E$, and ${\mathbf{P}}$ are defined as for plain pMCs, and ${\mathfrak{C}}\subset {\mathit{Constr}}[\overline{x}]$ is a finite set of polynomial constraints. A parameter valuation $\overline{\xi}$ is *admissible* for an augmented pMC if it is admissible for the induced plain pMC $(S, {s_{\textit{\tiny init}}}, E, {\mathbf{P}})$, and satisfies all polynomial constraints in ${\mathfrak{C}}$. As for plain pMC, we denote the set of admissible parameter valuations of an augmented pMC by $X_{{\mathfrak{M}}}$, or briefly $X$. A, possibly augmented, pMC ${\mathfrak{M}}$ is called *linear*, or *polynomial*, if all transition probability functions and constraints are linear functions in $\overline{x}$, or polynomials in $\overline{x}$, respectively. **Interval Markov chain.** An *interval Markov chain* (IMC) [@SeViAg06] can be seen as a special case of a linear augmented pMC with one parameter $x_{s,t}$ for each edge $(s,t)\in E$, and linear constraints $\alpha_{s,t} \unlhd_1 x_{s,t} \unlhd_2 \beta_{s,t}$ for each edge with $\alpha_{s,t},\beta_{s,t}\in {\mathbb{Q}}\cap [0,1]$ and $\unlhd_1,\unlhd_2\in \{<,\leqslant\}$. According to the terminology introduced in [@SeViAg06], this corresponds to the semantics of IMC as an “uncertain Markov chain”. The alternative semantics of IMC as a Markov decision process will not be considered in this paper. **Labellings and weights.** Each of these types of Markov chain, whether MC, plain or augmented pMC, or IMC, can be equipped with a *labelling function* ${\mathcal{L}}\colon S \to 2^{\mathrm{AP}}$, where ${\mathrm{AP}}$ is a finite set of *atomic propositions*. If not explicitly stated, we assume the implicit labelling of the Markov chain defined by using the state names as atomic propositions and assigning each name to the respective state. 
Furthermore, we can extend any Markov chain with a *weight function* ${\mathit{wgt}}\colon S\to {\mathbb{Q}}$. The value assigned to a specific state $s\in S$ is called the weight of $s$. It is sometimes also referred to as the *reward* of $s$. In addition to assigning rational values we also consider parametric weight functions ${\mathit{wgt}}\colon S\to {\mathbb{Q}}( \overline{x})$. **Probabilistic computation tree logic.** We augment the standard notion of probabilistic computation tree logic with operators for the expected accumulated weight and mean payoff, and for comparison. Let ${\mathrm{AP}}$ be a finite set of atomic propositions. $\bowtie$ stands for $\leqslant, \geqslant, <, >$, or $=$, $c\in [0,1]$, $r\in\mathbb{Q}$. Then $$\begin{aligned} \begin{array}{lcll} \Phi &::= & {\texttt{true}}\mid a \mid \Phi \wedge \Phi \mid \neg \Phi \mid {\operatorname{\mathbb{P}}_{\bowtie c}}\bigl(\varphi\bigr) \mid {\operatorname{\mathbb{E}}_{\bowtie r}}\bigl(\rho\bigr) \mid {\operatorname{\mathbb{C}}_{{\mathrm{Pr}}}}(\varphi,\bowtie,\varphi) \mid {\operatorname{\mathbb{C}}_{{\mathrm{E}}}}(\rho,\bowtie,\rho) & \text{\footnotesize\emph{state formula}} \\[1ex] \varphi &::= & \operatorname{\bigcirc}\Phi \mid \Phi{\mathbin{\mathsf{U}}}\Phi \hspace*{0.5cm}\text{\footnotesize\emph{path formula}} \hspace*{1.5cm} \rho \ ::=\ \operatorname{\tikz[baseline = -0.65ex]{\node [inner sep = 0pt] {$\Diamond$}; \draw (0,-0.9ex) -- (0,0.9ex) (-0.6ex,0) -- (0.6ex,0);}}\Phi \mid \operatorname{mp}(\Phi) & \multicolumn{1}{r}{\hspace*{-2cm} \text{\footnotesize\emph{terms for random variables}}} \end{array}\end{aligned}$$ where $a\in {\mathrm{AP}}$. The basic temporal modalities are $\operatorname{\bigcirc}$ (*next*) and ${\mathbin{\mathsf{U}}}$ (*until*). 
The usual derived temporal modalities $\Diamond$ (*eventually*), ${\mathbin{\mathsf{R}}}$ (*release*) and $\Box$ (*always*) are defined by $\operatorname{\Diamond}\Phi {\mathrel{\stackrel{\text{\tiny def}}{=}}}{\texttt{true}}{\mathbin{\mathsf{U}}}\Phi$, and ${\operatorname{\mathbb{P}}_{\bowtie c}}(\Phi_1 {\mathbin{\mathsf{R}}}\Phi_2) {\mathrel{\stackrel{\text{\tiny def}}{=}}}{\operatorname{\mathbb{P}}_{\overline{\bowtie} 1 {-} c}} ((\neg \Phi_1) {\mathbin{\mathsf{U}}}(\neg \Phi_2))$, where, e.g., $\overline{\leqslant}$ is $\geqslant$ and $\overline{<}$ is $>$, and $\Box \Phi {\mathrel{\stackrel{\text{\tiny def}}{=}}}{\texttt{false}}{\mathbin{\mathsf{R}}}\Phi$. For an MC ${\mathcal{M}}$ with states labelled by ${\mathcal{L}}\colon S \to {\mathrm{AP}}$ we use the standard semantics. We only state the semantics of the probability, expectation, and comparison operators here. For each state $s\in S$, $ s \models_{{\mathcal{M}}} {\operatorname{\mathbb{P}}_{\bowtie c}}(\varphi) $ iff ${\mathrm{Pr}}^{{\mathcal{M}}}_s(\varphi) \bowtie c$, and $s \models_{{\mathcal{M}}} {\operatorname{\mathbb{C}}_{{\mathrm{Pr}}}}(\varphi_1,\bowtie, \varphi_2)$ iff ${\mathrm{Pr}}^{{\mathcal{M}}}_s (\varphi_1) \bowtie {\mathrm{Pr}}^{{\mathcal{M}}}_s (\varphi_2)$. Here ${\mathrm{Pr}}^{{\mathcal{M}}}_s(\varphi)$ is short for ${{\mathrm{Pr}}}^{{\mathcal{M}}}_s\{\, \pi\in\operatorname{Paths}(s) : \pi\models_{{\mathcal{M}}} \varphi \,\}$. Furthermore, $s \models_{{\mathcal{M}}} {\operatorname{\mathbb{E}}_{\bowtie r}}(\rho)$ iff ${\mathrm{E}}^{{\mathcal{M}}}_s \bigl( \rho^{{\mathcal{M}}} \bigr) \bowtie r$, and $s \models_{{\mathcal{M}}} {\operatorname{\mathbb{C}}_{{\mathrm{E}}}}(\rho_1,\bowtie, \rho_2)$ iff ${\mathrm{E}}^{{\mathcal{M}}}_s \bigl(\rho_1^{{\mathcal{M}}} \bigr) \bowtie {\mathrm{E}}^{{\mathcal{M}}}_s \bigl(\rho_2^{{\mathcal{M}}} \bigr)$, where ${\mathrm{E}}^{{\mathcal{M}}}_s(\cdot)$ denotes the expected value of the respective random variable. 
For detailed semantics of the expectation operators, see [@GandALF-extended]. We write ${\mathcal{M}}\models \Phi$ iff ${s_{\textit{\tiny init}}}\models_{{\mathcal{M}}} \Phi$. **Notation: PCTL+EC and sublogics.** We use PCTL to refer to unaugmented probabilistic computation tree logic. If we add only the expectation operator we write PCTL+E, and, analogously, PCTL+C if we only add the comparison operator for probabilities. PCTL+EC denotes the full logic defined above. **DAG-representation and length of formulas.** We consider for any PCTL+EC state formula the *directed acyclic graph* (DAG) representing its syntactic structure. Each node of the DAG represents one of the sub-state formulas. The use of a DAG rather than the syntax tree allows the representation of subformulas that occur several times in the formula $\Phi$ by a single node. The leaves of the DAG can be the Boolean constant ${\texttt{true}}$, and atomic propositions. The inner nodes of the DAG, e.g., of a PCTL formula, are labelled with one of the operators $\wedge$, $\neg$, ${\operatorname{\mathbb{P}}_{\bowtie c}}(\,\cdot {\mathbin{\mathsf{U}}}\cdot\,)$, ${\operatorname{\mathbb{P}}_{\bowtie c}}(\operatorname{\bigcirc}\,\cdot\,)$. Nodes labelled with $\neg$ and ${\operatorname{\mathbb{P}}_{\bowtie c}}(\operatorname{\bigcirc}\,\cdot\,)$ have a single outgoing edge, while nodes labelled with $\wedge$ or ${\operatorname{\mathbb{P}}_{\bowtie c}}(\,\cdot {\mathbin{\mathsf{U}}}\cdot\,)$ have two outgoing edges. For the above-mentioned extensions of PCTL the set of possible inner node labels is extended accordingly. For example, a node $v$ representing the PCTL+C formula ${\operatorname{\mathbb{C}}_{{{\mathrm{Pr}}}}}(\operatorname{\bigcirc}\Phi_1, \bowtie, \Phi_2 {\mathbin{\mathsf{U}}}\Phi_3)$ has three outgoing edges. If $\Phi_1=\Phi_2$ then there are two parallel edges from $v$ to a node representing $\Phi_1$. The length of a PCTL+EC formula is defined as the number of nodes in its DAG.
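The DAG representation with sharing can be obtained by hash-consing formula nodes: structurally identical subformulas are interned once, so the formula length (number of DAG nodes) can be strictly smaller than the syntax-tree size. A minimal sketch (our own ad-hoc encoding of formulas as tuples):

```python
# Hash-consing of formula nodes: structurally equal subformulas share one node.
_interned = {}

def mk(op, *children):
    key = (op,) + children            # children are already-interned nodes
    return _interned.setdefault(key, key)

def dag_size(phi, seen=None):
    """Number of distinct nodes reachable from phi (the formula length)."""
    seen = set() if seen is None else seen
    if id(phi) in seen:
        return 0
    seen.add(id(phi))
    return 1 + sum(dag_size(c, seen) for c in phi[1:] if isinstance(c, tuple))

a = mk("ap", "a")
# Phi = P_{>=1/2}(a U a) /\ (not a): the three occurrences of `a` share a
# node, so the DAG has 4 nodes while the syntax tree has 6.
phi = mk("and", mk("P>=1/2 U", a, a), mk("not", a))
```

Repeated calls of `mk` with equal arguments return the very same node object, which is what makes the identity-based DAG traversal above count shared subformulas only once.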
Fraction-free Gaussian elimination {#sec:gauss}
==================================

Given a pMC ${\mathfrak{M}}$ as in Section \[sec:prelim\], the probabilities ${\mathrm{Pr}}^{{\mathfrak{M}}(\overline{x})}_s(\Diamond a)$ for reachability conditions are rational functions and computable via Gaussian elimination. As stated in the introduction, this was originally observed in [@Daws05; @LanMagSchTroina07] and realized, e.g., in the tools PARAM [@PARAM-HHWZ10] and Storm [@DJJCVBKA-CAV15; @DJKV-CAV17] together with techniques based on gcd-computations on multivariate polynomials. In this section, we discuss the potential of fraction-free Gaussian elimination as an alternative; the method is well-known in mathematics [@Bareiss72; @GeCzLa93], but to the best of our knowledge has not yet been considered in the context of pMCs. While the given definitions allow for rational functions in the transition probability functions of (augmented) pMCs, we will focus on polynomial (augmented) pMCs throughout the remainder of the paper. Generally, a linear equation system with rational-function coefficients can be rearranged into one containing only polynomials by multiplying each equation by the common denominator of the respective rational functions. These multiplications involve the risk of a blow-up in the coefficient size. To avoid this we add variables in the following way. Let ${\mathfrak{M}}= (S, {s_{\textit{\tiny init}}}, E, {\mathbf{P}}, {\mathfrak{C}})$ be an (augmented) pMC. For all $(s, t)\in E$ introduce a fresh variable $x_{s,t}$. By definition ${\mathbf{P}}(s,t) = \frac{f_{s,t}}{g_{s,t}}$ for some $f_{s,t},g_{s,t}\in {\mathbb{Q}}[\overline{x}]$. Let ${\mathbf{P}}'(s,t) = f_{s,t}\cdot x_{s,t}$ if $(s,t)\in E$, ${\mathbf{P}}'(s,t) = 0$ if $(s,t)\notin E$, and ${\mathfrak{C}}' = {\mathfrak{C}}\cup \{ g_{s,t}\cdot x_{s,t} = 1 : (s,t)\in E \}$.
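The rewriting into a polynomial augmented pMC can be sketched directly on a dictionary representation; the polynomials are kept as opaque strings here, and the concrete example edges and rational functions are invented.

```python
# A pMC fragment with rational-function transition probabilities
# P(s,t) = f_{s,t} / g_{s,t}; polynomials are opaque strings for the
# purpose of this sketch (example data is made up).
P = {
    ('s0', 's1'): ('x',     '1 + x'),   # pair (f, g)
    ('s0', 's2'): ('1',     '1 + x'),
    ('s1', 's2'): ('1 - y', '1'),
    ('s1', 's0'): ('y',     '1'),
}

# Introduce a fresh variable x_{s,t} per edge, replace P(s,t) by the
# polynomial f_{s,t} * x_{s,t}, and record g_{s,t} * x_{s,t} = 1 as an
# additional polynomial constraint.
P_poly = {}
constraints = []
for (s, t), (f, g) in P.items():
    fresh = f'x_{s}_{t}'
    P_poly[(s, t)] = f'({f}) * {fresh}'
    constraints.append(f'({g}) * {fresh} = 1')

for edge, p in sorted(P_poly.items()):
    print(edge, '->', p)
for c in sorted(constraints):
    print(c)
```

The constraint set grows by exactly one equation per edge, which keeps the rearranged system polynomial without multiplying rows by common denominators.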
Then ${\mathfrak{M}}' = (S, {s_{\textit{\tiny init}}}, E, {\mathbf{P}}', {\mathfrak{C}}')$ is a polynomial augmented pMC. **Linear equation systems with polynomial coefficients.** Let $x_1,\ldots,x_k$ be parameters, $\overline{x}=(x_1,\ldots,x_k)$. We consider linear equation systems of the form $A\cdot p = b$, where $A = (a_{i,j})_{i,j = 1,\ldots,n}$ is a non-singular $n\times n$-matrix with $a_{i,j}=a_{i,j}(\overline{x}) \in {\mathbb{Q}}[\overline{x}]$. Likewise, $b = (b_i)_{i = 1,\ldots,n}$ is a vector of length $n$ with $b_i = b_i(\overline{x}) \in {\mathbb{Q}}[\overline{x}]$. The solution vector $p = (p_i)_{i=1,\ldots,n}$ is a vector of rational functions $p_i=f_i/g_i$ with $f_i,g_i \in {\mathbb{Q}}[\overline{x}]$. By Cramer’s rule we obtain $p_i = \frac{\det(A_i)}{\det(A)}$, where $\det(A)$ is the determinant of $A$, and $\det(A_i)$ the determinant of the matrix obtained when substituting the $i$-th column of $A$ by $b$. If the coefficients of $A$ and $b$ have at most degree $d$, the Leibniz formula implies that $f_i$ and $g_i$ have at most degree $n\cdot d$. \[lemma:exp-many-monomials\] There is a family $({\mathfrak{M}}_k)_{k\geqslant 2}$ of acyclic linear pMCs where ${\mathfrak{M}}_k$ has $k$ parameters and $n=k{+}3$ states, including distinguished states $s_0$ and ${\mathit{goal}}$, such that ${{\mathrm{Pr}}}_{s_0}^{{\mathfrak{M}}_k(\overline{x})}(\Diamond {\mathit{goal}})$ is a polynomial for which even the shortest sum-of-monomial representation has $2^k$ monomials.
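A family in the spirit of Lemma \[lemma:exp-many-monomials\] — our own guess at a witnessing construction, which may differ from the one used in the proof — is the acyclic chain in which state $s_{i-1}$ moves to $s_i$ with probability $(1+x_i)/2$ and to a fail trap otherwise, so that ${{\mathrm{Pr}}}_{s_0}(\Diamond {\mathit{goal}}) = \prod_{i=1}^k (1+x_i)/2$, whose expansion has $2^k$ monomials. A sketch expanding this product symbolically:

```python
from fractions import Fraction as F

def times_binomial(poly, i):
    """Multiply a multilinear polynomial (dict: frozenset of variable
    indices -> coefficient) by (1 + x_i)/2."""
    out = {}
    for mono, c in poly.items():
        for m in (mono, mono | {i}):
            out[m] = out.get(m, F(0)) + c / 2
    return out

def reach_poly(k):
    """Pr_{s0}(◇ goal) = Π_{i=1..k} (1 + x_i)/2 for the sketched chain."""
    poly = {frozenset(): F(1)}
    for i in range(1, k + 1):
        poly = times_binomial(poly, i)
    return poly

k = 5
p = reach_poly(k)
print(len(p))                  # number of monomials in the expansion
print(sum(p.values()))         # value at x ≡ 1: always reach goal
print(p[frozenset()])          # value at x ≡ 0: (1/2)^k
```

Every transition probability is constant or linear in a single parameter, matching the "acyclic linear" requirement of the lemma, while the sum-of-monomial representation is exponentially large.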
\[algo:gauss\]
$a_{0,0} = 1$
\[step:div1\] $a_{i,j} = \bigl( a_{m,m} \cdot a_{i,j} - a_{i,m} \cdot a_{m,j}\bigr) / a_{m-1,m-1}$ [exploit exact divisibility by $a_{m-1,m-1}$]{}
\[step:div2\] $b_{i} = \bigl( a_{m,m} \cdot b_{i} - a_{i,m} \cdot b_{m}\bigr) / a_{m-1,m-1}$ [exploit exact divisibility by $a_{m-1,m-1}$]{}
$a_{i,m} = 0$
\[step:div3\] $b_m = \bigl( a_{n,n} \cdot b_m - \sum_{i = m +1}^n a_{m,i} \cdot b_{i} \bigr) / a_{m,m}$ [exploit exact divisibility by $a_{m,m}$]{}
**return** $ \bigl( b_i / a_{n,n} \bigr) _{i = 1,\ldots, n}$ [rational solution functions]{}

**One-step fraction-free Gaussian elimination** is a variant of fraction-free Gaussian elimination that allows for divisions which are known to be exact at the respective point in the algorithm. When using *naïve fraction-free Gaussian elimination* the new coefficients after the $m$-th step, $m = 1,\ldots ,n-1$, are computed as $a^{(m)}_{i,j} = a^{(m-1)}_{i,j} a^{(m-1)}_{m,m} - a^{(m-1)}_{i,m} a^{(m-1)}_{m,j} $ for $i,j = m + 1,\ldots, n$, where $a^{(0)}_{i,j} = a_{i,j}$. When applied to systems with polynomial coefficients this results in the degree doubling after each step, so the degree grows exponentially. In a step of *one-step fraction-free Gaussian elimination* (see Algorithm \[algo:gauss\]), the computation of the coefficients changes to $a^{(m)}_{i,j} = \bigl(\, a^{(m-1)}_{i,j} a^{(m-1)}_{m,m} - a^{(m-1)}_{i,m} a^{(m-1)}_{m,j} \,\bigr) / a^{(m-1)}_{m-1,m-1}$ with $a^{(0)}_{0,0} = 1$. Using Sylvester’s identity one can prove that $a^{(m)}_{i,j}$ is again a polynomial, and that $a^{(m-1)}_{m-1,m-1}$ is in general the maximal possible divisor. The $b_i$ are updated analogously. If the maximal degree of the initial coefficients of $A$ and $b$ is $d$, this technique therefore guarantees that after $m$ steps the degree of the coefficients is at most $(m{+}1)\cdot d$, i.e., it grows only linearly in the number of elimination steps.
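A compact sketch of one-step fraction-free (Bareiss) elimination over the integers, including the back substitution of Algorithm \[algo:gauss\]; the `assert r == 0` lines make the claimed exact divisibility visible, and the input system is an arbitrary invented example.

```python
from fractions import Fraction

def bareiss_solve(A, b):
    """One-step fraction-free Gaussian elimination for integer A, b.
    Returns the rational solution (b_i / a_{n,n})_i."""
    A = [row[:] for row in A]
    b = b[:]
    n = len(A)
    prev = 1                                    # a_{0,0} = 1
    for m in range(n - 1):
        for i in range(m + 1, n):
            for j in range(m + 1, n):
                num = A[m][m] * A[i][j] - A[i][m] * A[m][j]
                q, r = divmod(num, prev)
                assert r == 0                   # exact divisibility
                A[i][j] = q
            num = A[m][m] * b[i] - A[i][m] * b[m]
            q, r = divmod(num, prev)
            assert r == 0
            b[i] = q
            A[i][m] = 0
        prev = A[m][m]
    det = A[n - 1][n - 1]                       # last pivot = det(A)
    for m in reversed(range(n - 1)):            # back substitution
        num = det * b[m] - sum(A[m][i] * b[i] for i in range(m + 1, n))
        q, r = divmod(num, A[m][m])
        assert r == 0
        b[m] = q                                # b_m becomes det(A_m)
    return [Fraction(bi, det) for bi in b]      # Cramer: det(A_i)/det(A)

sol = bareiss_solve([[2, 1, 1], [1, 3, 2], [1, 0, 0]], [4, 5, 6])
print(sol)
```

All intermediate values stay integral; only the final return introduces fractions, mirroring how the polynomial variant keeps all intermediate coefficients polynomial.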
For polynomials the division by $ a^{(m-1)}_{m-1,m-1} $ can be done using standard polynomial division. The time-complexity of the exact multivariate polynomial division is in each step ${\mathcal{O}}\bigl( \operatorname{poly}(md)^k \bigr)$, so for the full one-step fraction-free Gaussian elimination it is ${\mathcal{O}}\bigl( \operatorname{poly}(n,d)^k \bigr)$. Proposition 4.3 in [@LanMagSchTroina07] states that the rational functions $p_i=f_i/g_i$ for reachability probabilities in pMCs, with a representation of the polynomials $f_i$, $g_i$ as sums of monomials (called normal form in [@LanMagSchTroina07]), are computable in polynomial time. This contradicts Lemma \[lemma:exp-many-monomials\], which shows that the number of monomials in the representation of a reachability probability as a sum of monomials can be exponential in the number of parameters. However, the statement is correct for the univariate case. \[lemma:reach-prob-univariate\] Let ${\mathfrak{M}}$ be a polynomial pMC over a single parameter and $T$ a set of states. Then, the rational functions for the reachability probabilities ${{\mathrm{Pr}}}^{{\mathfrak{M}}}_s(\Diamond T)$ are computable in polynomial time. Analogously, rational functions for the expected accumulated weight until reaching $T$ or the expected mean payoff are computable in polynomial time. Note that the degrees of the polynomials $a_{i,j}^{(m)}$ and $b_j^{(m)}$ computed by one-step fraction-free Gaussian elimination for reachability probabilities are bounded by $(m{+}1)\cdot d$, where $d=\max_{s,t\in S} \operatorname{deg}({\mathbf{P}}(s,t))$, so these polynomials have representations as sums of at most $(m{+}1)d{+}1$ monomials. In particular, the degree and representation size of the final polynomials $f_s=b_s^{(n)}$ and $g_s=a_{s,s}^{(n)}$ for the rational functions ${{\mathrm{Pr}}}^{{\mathfrak{M}}(x)}_s(\Diamond {\mathit{goal}})=f_s/g_s$ are in ${\mathcal{O}}(n d)$ where $n$ is the number of states of ${\mathfrak{M}}$.
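For the univariate case the same elimination can be run with polynomial coefficients. The sketch below (with an invented $3\times 3$ matrix of degree-1 coefficients, so $d=1$) performs the forward phase with exact polynomial division and checks the degree bound $(m{+}1)\cdot d$ after each step.

```python
from fractions import Fraction as F

# Univariate polynomials as coefficient lists, constant term first.
def deg(p):
    d = -1
    for i, c in enumerate(p):
        if c != 0:
            d = i
    return d

def pmul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def psub(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else F(0)) - (q[i] if i < len(q) else F(0))
            for i in range(n)]

def pdiv_exact(p, d):
    """Exact polynomial long division p / d; asserts zero remainder."""
    p = p[:]
    quot = [F(0)] * (max(deg(p) - deg(d), -1) + 1)
    while deg(p) >= deg(d):
        k = deg(p) - deg(d)
        c = p[deg(p)] / d[deg(d)]
        quot[k] = c
        for i, ci in enumerate(d):
            p[i + k] -= c * ci
    assert all(c == 0 for c in p), 'division was not exact'
    return quot

# Invented 3x3 system with degree-1 polynomial coefficients (d = 1).
x = lambda a, b: [F(a), F(b)]                  # the polynomial a + b·x
A = [[x(1, 1), x(0, 1), x(1, 0)],
     [x(1, 2), x(2, 1), x(0, 1)],
     [x(0, 1), x(1, 0), x(3, 1)]]
n, d = 3, 1
prev = [F(1)]
for m in range(n - 1):
    for i in range(m + 1, n):
        for j in range(m + 1, n):
            A[i][j] = pdiv_exact(psub(pmul(A[m][m], A[i][j]),
                                      pmul(A[i][m], A[m][j])), prev)
            assert deg(A[i][j]) <= (m + 2) * d  # bound (m+1)·d (1-based m)
        A[i][m] = [F(0)]
    prev = A[m][m]
print(A[2][2])                                  # final pivot polynomial
```

The degree of the final pivot stays well below the naïve doubling bound, in line with the linear growth guarantee.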
Another observation concerns the case where only the right-hand side of the linear equation system is parametric. Systems of this form occur, e.g., when considering expectation properties for MCs with parametric weights. \[lemma:right-hand-param\] Let $A \cdot p = b$ be a parametric linear equation system as defined above where $A$ is parameter-free. Then the solution vector $p=(p_i)_{i=1,\ldots,n}$ consists of polynomials of the form $p_i = \sum_{j = 1}^n \beta_{i,j} \cdot b_j$ with $\beta_{i,j}\in {\mathbb{Q}}$ and can be computed in polynomial time. **Stratification via SCC-decomposition.** It is well known (e.g., [@CBGK08; @JansenCVWAKB14]) that for probabilistic/parametric model checking a decomposition into strongly-connected components (SCCs) can yield significant performance benefits due to the structure of the underlying models. We have adapted the one-step fraction-free Gaussian elimination approach by a preprocessing step that permutes the matrix according to the topological ordering of the SCCs. This results in the coefficient matrix already having a stair-like form at the start of the algorithm. In the triangulation part of the algorithm, each SCC can now be considered separately, as non-zero entries below the main diagonal only occur within each SCC. While the back-substitution in the general one-step fraction-free elimination will result in each entry on the main diagonal being equal to the last, this property is now only maintained within the SCCs.
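The SCC-based preprocessing can be sketched with Tarjan's algorithm, which emits SCCs in reverse topological order of the condensation; reversing that list yields the permutation that gives the coefficient matrix its stair-like form (the example graph is invented).

```python
def sccs_topological(n, adj):
    """Tarjan's SCC algorithm; returns the SCCs in topological order
    of the condensation (sources first)."""
    index = [None] * n
    low = [0] * n
    on_stack = [False] * n
    stack, sccs, counter = [], [], [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack[v] = True
        for w in adj.get(v, []):
            if index[w] is None:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif on_stack[w]:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                  # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack[w] = False
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in range(n):
        if index[v] is None:
            dfs(v)
    return list(reversed(sccs))                 # Tarjan emits sinks first

# Two SCCs {0,1} and {2,3}, with an edge from the first into the second.
adj = {0: [1], 1: [0, 2], 2: [3], 3: [2]}
order = sccs_topological(4, adj)
print(order)        # states grouped SCC-by-SCC, topologically sorted
```

Permuting rows and columns of the coefficient matrix by this ordering leaves non-zero subdiagonal entries only inside the diagonal blocks belonging to individual SCCs.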
Formally, this means that the back substitution step in Algorithm \[algo:gauss\] is replaced by the following: $$\begin{aligned} b_m &= & \Bigl(a^*\!(\text{current SCC}) \cdot b_m - \sum\limits_{i = m + 1}^n a_{m,i} \cdot b_i \cdot \dfrac{a^*\!(\text{current SCC})} {a^*\!(\text{SCC at $i$})}\Bigr) \; / \; a_{m,m}\end{aligned}$$ where $a^*\!(\text{SCC at $n$}) = a_{n,n}$, and, for $i = 1,\ldots, n{-}1$, $a^*\!(\text{SCC at $i$}) = a^*\!(\text{SCC at $i {+} 1$})$ if the $i$-th and $(i{+}1)$-st state belong to the same SCC and $a^*\!(\text{SCC at $i$}) = a_{i,i} \cdot a^*\!(\text{SCC at $i {+} 1$})$ otherwise. Intuitively, $a^*\!(\text{SCC at $i$})$ is the product of the $a$’s on the diagonal corresponding to the last states in the current SCC and the SCCs below. Of course, the return statement also has to be adjusted accordingly. The advantage of this approach is that the polynomials in the rational functions outside the first strongly connected component have an even lower degree. **Implementation and experiments.** For a first experimental evaluation of the one-step fraction-free Gaussian elimination approach ([*GE-ff*]{}) in the context of probabilistic model checking, we have implemented this method (including the SCC decomposition and topological ordering described above) as an alternative solver for parametric linear equation systems in the state-of-the-art probabilistic model checker Storm [@DJKV-CAV17]. We compare [*GE-ff*]{} against the two solvers provided by Storm (v1.0.1) for solving parametric equation systems, i.e., the solver based on the [*eigen*]{} linear algebra library[^2] and on state elimination ([*state-elim*]{}) [@HHZ-STTT11]. Both of Storm’s solvers use partially factorized representations of the rational functions provided by the CArL library[^3]. This approach, together with caching, was shown [@JansenCVWAKB14] to be beneficial due to improved performance of the gcd-computations during the simplification steps.
It should be noted that our implementation is intended to provide first results that allow one to gauge whether the fraction-free method, by avoiding gcd-computations, can be beneficial in practice, and is thus rather naïve in certain aspects. As an example, it currently relies on a dense matrix representation, with performance improvements for larger models to be expected from switching to sparse representations as used in Storm’s [*eigen*]{} and [*state-elim*]{} solvers. In addition to the fraction-free approach, our solver can also be instantiated to perform a straightforward Gaussian elimination, using any of the representations for rational functions provided by the CArL library. In all our experiments, we have compared the solutions obtained by the different solvers and verified that they are the same. **Experimental studies.** The source code of our extension of Storm and the artifacts of the experiments are available online.[^4] As our [*GE-ff*]{} implementation is embedded as an alternative solver in Storm, we mainly report the time actually spent for solving the parametric equation system, as the other parts of model checking (model building, precomputations) are independent of the chosen solver. For benchmarking, we used a machine with two Intel Xeon E5-2680 8-core CPUs at 2.70GHz and with 384GB RAM, a time out of 30 minutes and a memory limit of 30GB. All the considered solvers run single-threaded. We have considered three different classes of case studies for experiments. **Complete pMC.** As a first experiment to gauge the efficiency in the presence of a high ratio of parameters to states, we considered a family of pMCs with a complete graph structure (over $n$ states) and one parameter per transition, resulting in $n\cdot(n+1)$ parameters (for details see [@GandALF-extended]). $n$ rows param. 
[*eigen*]{} [*state-elim*]{} [*GE-ff*]{} [**]{} ----- ------ -------- -------------- ------------------ ------------- -------- 4 4 20 5 5 30 6 6 42 [time-out]{} [time-out]{} : Statistics for “complete pMC”. Matrix rows and number of distinct parameters, as well as time for solving the parametric equation system per solver. For $n=7$, all solvers timed out (30min). \[tab:complete-pdtmc\] Table \[tab:complete-pdtmc\] depicts statistics for the corresponding computations, for the two standard solvers in Storm ([*eigen*]{} and [*state-elim*]{}), as well as our fraction-free implementation ([*GE-ff*]{}). For [*state-elim*]{}, we always use the default elimination order (forward). The time for [*GE-ff*]{} corresponds to the time until a solution rational function (for all states) is obtained. As the numerator and denominator of these rational functions are not necessarily coprime, for comparison purposes we also list the time needed for simplification ([**]{}) via division by the gcd. As can be seen, the fraction-free approach significantly outperforms Storm’s standard solvers here and scales to a higher number of parameters. We confirmed using profiling that the standard solvers indeed spend most of the time in gcd-computations. **Multi-parameter Israeli-Jalfon self-stabilizing.** The benchmarks used to evaluate parametric model checking implementations in previous papers tend to be scalable in the number of components but use a fixed number of parameters, usually 2. To allow further experiments with an increasing number of parameters, we considered a pMC-variant of the Israeli-Jalfon self-stabilizing protocol with $n$ processes, $k$ initial tokens and $n$ parameters (for details see [@GandALF-extended]). $n$ $k$ rows param. 
[*eigen*]{} [*state-elim*]{} [*GE-fac*]{} [*GE-ff*]{} [**]{} ----- ----- ------ -------- ------------- ------------------ -------------- ------------- -------- 4 3 21 4 4 4 15 4 5 2 16 5 5 3 36 5 5 4 51 5 5 5 31 5 : Statistics for “Israeli-Jalfon”, with strong bisimulation quotienting. Matrix rows and number of distinct parameters, as well as time for solving the parametric equation system per solver. \[tab:ij\] Table \[tab:ij\] depicts the time spent for computing the rational functions for several instances. As can be seen, the fraction-free approach is competitive for the smaller instances, with performance between the [*eigen*]{} and [*state-elim*]{} solvers for the larger instances. We have also included running times for [*GE-fac*]{}, i.e., for our naïve implementation of Gaussian elimination using the representation for rational functions as used by Storm for the standard solvers, including automatic gcd-based simplification after each step to ensure that numerator and denominator are coprime. [*GE-fac*]{} operates on the same, topologically sorted matrix as the fraction-free [*GE-ff*]{}. Curiously, [*GE-fac*]{} is able to outperform the [*eigen*]{} solver for some of the larger instances. We believe this is mainly due to differences in the matrix permutation and their effect on the elimination order, which is known to have a large impact on performance (e.g., [@DJJCVBKA-CAV15]). **Benchmark case studies from [@DJJCVBKA-CAV15].** Furthermore, we considered several case study instances that were used in [@DJJCVBKA-CAV15] to benchmark parametric model checkers, namely the *brp*, *crowds*, *egl*, *nand*, *zeroconf* models. Table \[tab:bench-selected\] depicts statistics for selected instances; for further details see [@GandALF-extended]. The application of bisimulation quotienting often has a large impact on the size of the linear equation system, so we performed experiments with and without quotienting. 
For *crowds*, bisimulation quotienting was particularly effective, with all considered instances having a very small state space and negligible solving times. For the non-quotiented instances, Storm’s standard solvers outperform [*GE-ff*]{}. For the *zeroconf* instance in Table \[tab:bench-selected\], [*GE-ff*]{} is competitive. Note that the models in the *brp*, *egl* and *nand* case studies are acyclic and that the parametric transition probabilities and rewards are polynomial. As a consequence, the gcd-computations used in Storm’s solvers do not impose a significant overhead, as the rational functions during the computation all have denominator polynomials of degree zero. model rows param. [*eigen*]{} [*state-elim*]{} [*GE-ff*]{} [**]{} --------------------------- ------- -------- ------------- ------------------ -------------- -------------- Crowds (3,5), weak-bisim 40 2 Crowds (5,5), weak-bisim 40 2 Crowds (10,5), weak-bisim 40 2 Crowds (3,5) 715 2 Crowds (5,5) 2928 2 [time-out]{} Crowds (10,5) 25103 2 [time-out]{} — Zeroconf (1000) 1002 2 : Selected statistics for the benchmarks of [@DJJCVBKA-CAV15]. Matrix rows and number of distinct parameters, as well as time for solving the parametric equation system per solver.[]{data-label="tab:bench-selected"} Overall, the experiments have shown that there are instances where the fraction-free approach can indeed have a positive impact on performance. Keeping in mind that our implementation has not yet been significantly optimized, we believe that the fraction-free approach is an interesting addition to the gcd-based solver approaches. In particular, the application of better heuristics for the order of processing (i.e., the permutation of the matrix) could still lead to significant performance increases. Complexity of the PCTL+EC model checking problem {#sec:theory} ================================================ We now study the complexity of the following variants of the PCTL+EC model checking problem. 
Given an augmented pMC ${\mathfrak{M}}= ( S , {s_{\textit{\tiny init}}}, E, {\mathbf{P}}, {\mathfrak{C}})$ and a PCTL+EC (state) formula $\Phi$:

- (All) Compute a representation of the set of all satisfying parameter valuations, i.e., the set of all admissible parameter valuations $\overline{\xi}\in X$ such that ${\mathfrak{M}}(\overline{\xi}) \models \Phi$.
- (MC-E) Does there exist a valuation $\overline{\xi}\in X$ such that ${\mathfrak{M}}(\overline{\xi}) \models \Phi$?
- (MC-U) Does ${\mathfrak{M}}(\overline{\xi}) \models \Phi$ hold for all admissible valuations $\overline{\xi}\in X$?

(MC-E) and (MC-U) are essentially duals of each other. Note that the answer for the universal variant (MC-U) is obtained by the negation of the answer for (MC-E) with formula $\neg\Phi$, and vice versa. In what follows, we shall concentrate on (All) and the existential model checking problem (MC-E). **Computing all satisfying parameter valuations.** As before, $X=X_{{\mathfrak{M}}}$ denotes the set of admissible valuations. In what follows, let $\chi$ be the conjunction of the polynomial constraints in ${\mathfrak{C}}$ as well as the constraints $\sum_{t\in S} {\mathbf{P}}(s,t)=1$ for each non-trap state $s\in S$, and $0 < {\mathbf{P}}(s,t)$ for each edge $(s,t)\in E$. We then have $\overline{\xi}\models \chi$ if and only if $\overline{\xi}$ is admissible, i.e., $\overline{\xi}\in X$. Let $\Phi$ be a PCTL+EC formula. 
The *satisfaction function* ${\mathrm{Sat}}_{{\mathfrak{M}}}(\Phi) \colon X \to 2^{S}$ is defined by: $$\begin{aligned} {\mathrm{Sat}}_{{\mathfrak{M}}}(\Phi)(\overline{\xi}) & \stackrel{\text{\tiny def}}{=} \bigl\{ \, s\in S : s\models_{{\mathfrak{M}}(\overline{\xi})} \Phi \, \bigr\} = {\mathrm{Sat}}_{{\mathfrak{M}}(\overline{\xi})} (\Phi) \end{aligned}$$ We now present an algorithm to compute a symbolic representation of the satisfaction function that groups valuations with the same corresponding satisfaction set together. More precisely, we deal with a representation of the satisfaction function ${\mathrm{Sat}}_{{\mathfrak{M}}}(\Phi)$ by a finite set ${\texttt{Sat}}_{{\mathfrak{M}}}(\Phi)$ of pairs $(\gamma,T)$ where $\gamma$ is a Boolean combination of constraints and $T \subseteq S$ such that (i) $(\gamma,T)\in {\texttt{Sat}}_{{\mathfrak{M}}}(\Phi)$ and $\overline{\xi}\models \gamma$ implies $T = {\mathrm{Sat}}_{{\mathfrak{M}}}(\Phi)(\overline{\xi})$, and (ii) whenever $T = {\mathrm{Sat}}_{{\mathfrak{M}}}(\Phi)(\overline{\xi})$ then there is a pair $(\gamma,T)\in {\texttt{Sat}}_{{\mathfrak{M}}}(\Phi)$ such that $\overline{\xi}\models \gamma$. Given the DAG representation of the PCTL formula $\Phi$, we follow the standard model checking procedure for CTL-like branching-time logics and compute ${\texttt{Sat}}_{{\mathfrak{M}}}(\Psi)$ for the subformulas $\Psi$ assigned to the nodes in the DAG for $\Phi$ in a bottom-up manner. As the leaves of the DAG can be atomic propositions $a$ or the formula ${\texttt{true}}$, the base cases are ${\texttt{Sat}}_{{\mathfrak{M}}}({\texttt{true}}) = \bigl\{ (\chi,S) \bigr\}$ and ${\texttt{Sat}}_{{\mathfrak{M}}}(a) = \bigl\{ \bigl(\chi, \{\, s\in S : a \in {\mathcal{L}}(s) \,\}\bigr) \bigr\}$ for atomic propositions $a$. Consider now the inner node $v$ of the DAG for $\Phi$ labelled by the outermost operator of the subformula $\Psi$. Suppose that the children of $v$ have already been treated, so when computing ${\texttt{Sat}}_{{\mathfrak{M}}}(\Psi)$ the satisfaction sets of the proper subformulas of $\Psi$ are known. 
If $v$ is labelled by $\neg$ or $\wedge$, i.e., $\Psi = \neg\Psi'$ or $\Psi = \Psi_1 \wedge \Psi_2$, then ${\texttt{Sat}}_{{\mathfrak{M}}}(\Psi) = \bigl\{ \, (\gamma,S \setminus T) : (\gamma,T)\in {\texttt{Sat}}_{{\mathfrak{M}}}(\Psi')\, \bigr\}$ respectively ${\texttt{Sat}}_{{\mathfrak{M}}}(\Psi) = \bigl\{ \ (\gamma_1 \wedge \gamma_2,T_1 \cap T_2) \ : \ (\gamma_i,T_i)\in {\texttt{Sat}}_{{\mathfrak{M}}}(\Psi_i),\ i=1,2 \ \bigr\}$. If $\Psi = {\mathbb{P}}_{\bowtie c}(\Psi_1 {\mathbin{\mathsf{U}}}\Psi_2)$, then $$\begin{aligned} {\texttt{Sat}}_{{\mathfrak{M}}}(\Psi) & \ = \ \bigl\{ \ (\gamma_1 \wedge \gamma_2 \wedge \delta_{\gamma_1,T_1,\gamma_2,T_2,R},\ R) \ : \ (\gamma_1,T_1)\in {\texttt{Sat}}_{{\mathfrak{M}}}(\Psi_1), \ (\gamma_2,T_2)\in {\texttt{Sat}}_{{\mathfrak{M}}}(\Psi_2), \ R \subseteq S \ \bigr\}\end{aligned}$$ where $\delta_{\gamma_1,T_1,\gamma_2,T_2,R}$ is the conjunction of the constraints ${{\mathrm{Pr}}}^{{\mathfrak{M}}}_{s}(T_1 {\mathbin{\mathsf{U}}}T_2) \bowtie c$ for each state $s\in R$, and the negated constraints $\neg \bigl( {{\mathrm{Pr}}}^{{\mathfrak{M}}}_{s}(T_1 {\mathbin{\mathsf{U}}}T_2) \bowtie c \bigr)$ for each state $s\in S \setminus R$. Here, ${{\mathrm{Pr}}}^{{\mathfrak{M}}}_{s}(T_1 {\mathbin{\mathsf{U}}}T_2)$ is the rational function that has been computed using (i) a graph analysis to determine the set $U$ of states $s$ with $s \models \exists (T_1 {\mathbin{\mathsf{U}}}T_2)$ and (ii) fraction-free Gaussian elimination (Section \[sec:gauss\]) to compute the rational functions ${{\mathrm{Pr}}}_s^{{\mathfrak{N}}}(\Diamond T_2)$ in the pMC ${\mathfrak{N}}$ resulting from ${\mathfrak{M}}$ by turning the states in $(S \setminus U) \cup T_2$ into traps. If $f_s$ and $g_s$ are polynomials computed by fraction-free Gaussian elimination such that ${{\mathrm{Pr}}}^{{\mathfrak{M}}}_{s}(T_1 {\mathbin{\mathsf{U}}}T_2) = f_s/g_s$ then ${{\mathrm{Pr}}}^{{\mathfrak{M}}}_{s}(T_1 {\mathbin{\mathsf{U}}}T_2) \bowtie c$ is a shortform notation for $f_s - c \cdot g_s \bowtie 0$. 
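The Boolean cases of this bottom-up computation can be sketched with the constraint sets represented symbolically; a conjunction $\gamma$ is modelled as a frozenset of constraint strings, and all concrete data below is invented for illustration.

```python
# A symbolic Sat-set is a list of pairs (gamma, T): gamma a conjunction
# of constraints (frozenset of strings), T the set of states satisfying
# the formula under every valuation fulfilling gamma.
S = frozenset({'s0', 's1', 's2'})

def sat_neg(sat):
    """Sat(¬Ψ') = { (γ, S \\ T) : (γ, T) ∈ Sat(Ψ') }."""
    return [(g, S - T) for g, T in sat]

def sat_and(sat1, sat2):
    """Sat(Ψ1 ∧ Ψ2) = { (γ1 ∧ γ2, T1 ∩ T2) }.  Unsatisfiable
    conjunctions would be pruned by a real-arithmetic solver,
    which this sketch omits."""
    return [(g1 | g2, T1 & T2) for g1, T1 in sat1 for g2, T2 in sat2]

sat_a = [(frozenset({'x > 0'}),  frozenset({'s0', 's1'})),
         (frozenset({'x <= 0'}), frozenset({'s0'}))]
sat_b = [(frozenset(),           frozenset({'s1', 's2'}))]

print(sorted((sorted(g), sorted(T)) for g, T in sat_and(sat_a, sat_b)))
print(sorted((sorted(g), sorted(T)) for g, T in sat_neg(sat_a)))
```

The conjunction case shows why the number of pairs can grow multiplicatively, motivating the satisfiability pruning and the merging of pairs with equal $T$-component described in the text.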
The treatment of ${\mathbb{P}}_{\bowtie c}(\operatorname{\bigcirc}\Psi)$ and the expectation operators is similar, and can be found in [@GandALF-extended]. After treating a node of the DAG, we can simplify the set ${\texttt{Sat}}_{{\mathfrak{M}}}(\Psi)$ by first removing all pairs $(\gamma,T)$ where $\gamma$ is not satisfiable (using algorithms for the existential theory of the reals), and afterwards combining all pairs with the same $T$-component, that is, instead of $m$ pairs $(\gamma_1,T),\ldots,(\gamma_m,T)\in {\texttt{Sat}}_{{\mathfrak{M}}}(\Psi)$, we consider a single pair $(\gamma_1 \vee \ldots \vee \gamma_m,T)$. To answer question (All), the algorithm finally returns the disjunction of all formulas $\gamma$ with ${s_{\textit{\tiny init}}}\in T$ for $(\gamma,T)\in{\texttt{Sat}}_{{\mathfrak{M}}}(\Phi)$. **Complexity bounds of (All) and (MC-E).** The existential theory of the reals is known to be in PSPACE and NP-hard, and there is an upper bound on the time-complexity, namely $\ell^{k+1}\cdot d^{{\mathcal{O}}(k)}$ where $\ell$ is the number of constraints, $d$ the maximum degree of the polynomials in the constraints, and $k$ the number of parameters [@BaPoRo08]. Recall from Section \[sec:gauss\] that a known upper bound on the time-complexity of one-step fraction-free Gaussian elimination is ${\mathcal{O}}\bigl( \operatorname{poly}(n,d)^k \bigr)$, where $n$ is the number of equations, $d$ the maximum degree of the initial coefficient polynomials, and $k$ the number of parameters. Combining both approaches, the one-step fraction-free Gaussian elimination for solving linear equation systems with polynomial coefficients, and the existential theory of the reals for treating satisfiability of conjunctions of polynomial constraints, one directly obtains the following bound for the computational complexity of PCTL+EC model checking on augmented polynomial pMCs. 
Note that this assumes that the number of constraints in ${\mathfrak{C}}$ is at most polynomial in the size of $S$. \[thm:complexity-PCTL-multivariate\] Let $\Phi$ be a PCTL+EC formula. Given an augmented polynomial pMC ${\mathfrak{M}}$, where the maximum degree of transition probabilities ${\mathbf{P}}(s,t)$, and polynomials in the constraints in ${\mathfrak{C}}$ is $d$, a symbolic representation of the satisfaction function ${\mathrm{Sat}}_{{\mathfrak{M}}}(\Phi)$ is computable in time ${\mathcal{O}}\bigl( |\Phi|\cdot \operatorname{poly}\bigl( \text{size}({\mathfrak{M}}), d \bigr)^{k\cdot |\Phi|_{{\operatorname{\mathbb{P}}_{}},{\operatorname{\mathbb{E}}_{}},{\mathbb{C}}}} \bigr)$, where $|\Phi|_{{\operatorname{\mathbb{P}}_{}},{\operatorname{\mathbb{E}}_{}},{\mathbb{C}}}$ is the number of probability, expectation and comparison operators in $\Phi$. \[PSPACE-multivariate\] The existential PCTL+EC model checking problem (MC-E) for augmented pMC is in PSPACE. The main idea of a polynomially space-bounded algorithm is to guess nondeterministically sets $T_{\Psi}$ of states for the subformulas $\Psi$ where the outermost operator is a probability, expectation or comparison operator, and then apply a polynomially space-bounded algorithm for the existential theory of the reals [@BaPoRo08] to check whether there is a parameter valuation $\overline{\xi}$ such that $T_{\Psi}={\mathrm{Sat}}_{{\mathfrak{M}}}(\Psi)(\overline{\xi})$ for all $\Psi$. NP- and coNP-hardness of (MC-E) follow from results for IMCs [@SeViAg06; @ChatSenHen08]. More precisely, [@ChatSenHen08] provides a polynomial reduction from SAT to the (existential and universal) PCTL model checking problem for IMCs. In fact, the reduction of [@ChatSenHen08] does not require full PCTL; Boolean combinations of simple probabilistic constraints ${\mathbb{P}}_{\geqslant c_i}(\operatorname{\bigcirc}a_i)$ without nesting of the probability operators suffice. 
The following theorem strengthens this result by stating NP-hardness of (MC-E) even for formulas ${\operatorname{\mathbb{P}}_{>c}}(\Diamond a)$ consisting of a single probability constraint for a reachability condition. \[thm:NP-hard-multi\] Given an augmented polynomial pMC ${\mathfrak{M}}$ on parameters $\overline{x}$ with initial state ${s_{\textit{\tiny init}}}$ and an atomic proposition $a$, and a probability threshold $c\in {\mathbb{Q}}\,\cap\, ]0,1[$, the problem to decide whether there exists $\overline{\xi} \in X$ such that ${{\mathrm{Pr}}}^{{\mathfrak{M}}(\overline{\xi})}_{{s_{\textit{\tiny init}}}}(\Diamond a) > c$ is NP-hard, even for acyclic pMCs with the assigned transition probabilities being either constant, or linear in one parameter, i.e., ${\mathbf{P}}(s,t) \in \bigcup_{i = 1}^k {\mathbb{Q}}[x_i]$, $\deg({\mathbf{P}}(s,t))\leq 1$, for all $(s,t)\in E$, and where the polynomial constraints for the parameters $x_1,\ldots,x_k$ are of the form $f(x_i) \geqslant 0$ with $f\in {\mathbb{Q}}[x_i]$, $\deg(f) \leq 2$. **Univariate pMCs.** In many scenarios, the number of variables has a fixed bound instead of increasing with the model size. We consider here the case of *univariate* pMC, i.e., pMC with a single parameter. \[thm:upMC-nonnest\] Let $\Phi$ be a PCTL+EC formula without nested probability, expectation or comparison operators, and let ${\mathfrak{M}}$ be a polynomial pMC on the single parameter $x$. The problem to decide whether there exists an admissible parameter valuation $\xi \in X$ such that ${\mathfrak{M}}(\xi) \models \Phi$ is in P. 
If we restrict PCTL+EC to Boolean combinations of probability, expectation, and comparison operators, (MC-E) can be dealt with by first computing polynomial constraints for ${s_{\textit{\tiny init}}}$ for each probability, expectation, and comparison operator independently (this can be done in polynomial time by Lemma \[lemma:reach-prob-univariate\]), and afterwards applying a polynomial-time algorithm for the univariate existential theory of the reals [@BeKoRe86] once to the appropriate Boolean combination of the constraints. \[thm:np-complete-uni\] Let $\Phi$ be a PCTL+EC formula, and let ${\mathfrak{M}}$ be a polynomial pMC on the single parameter $x$. The PCTL+EC model checking problem to decide whether there exists an admissible parameter valuation $\xi \in X$ such that ${\mathfrak{M}}(\xi) \models \Phi$ is NP-complete. NP-hardness even holds for acyclic polynomial pMCs and the fragment of PCTL+C that uses the comparison operator ${\operatorname{\mathbb{C}}_{{{\mathrm{Pr}}}}}$, but not the probability operator ${\mathbb{P}}$, as well as for (cyclic) polynomial pMC in combination with PCTL. **(MC-E) for monotonic PCTL on univariate pMCs.** The parameters in pMC typically have a fixed meaning, e.g., probability for the occurrence of an error, in which case the probability to reach a state where an error has occurred is increasing in $x$. This motivates the consideration of univariate pMCs and PCTL formulas that are monotonic in the following sense. Given a univariate polynomial pMC ${\mathfrak{M}}=(S,{s_{\textit{\tiny init}}},E,{\mathbf{P}})$, let $E_+$ denote the set of edges $(s,t)\in E$ such that the polynomial ${\mathbf{P}}(s,t)$ is monotonically increasing in $X$, i.e., whenever $\xi_1,\xi_2\in X$ and $\xi_1 < \xi_2$ then ${\mathbf{P}}(s,t)(\xi_1)\leqslant {\mathbf{P}}(s,t)(\xi_2)$. Let $S_+$ denote the set of states $s$ such that for each finite path $\pi = s_0\, s_1 \ldots s_m$ with $s_m=s$ we have $(s_i,s_{i+1})\in E_+$ for $i=0,1,\ldots,m{-}1$. 
As $(s,t)\in E_+$ iff there is no value $\xi \in {\mathbb{R}}$ such that $\xi \models \chi \wedge ({\mathbf{P}}(s,t)'<0)$, the set $E_+$ is computable in polynomial time using a polynomial-time algorithm for the univariate theory of the reals [@BeKoRe86]. Here, $\chi$ is as before the Boolean combination of polynomial constraints characterizing the set $X$ of admissible parameter values, and ${\mathbf{P}}(s,t)'$ is the first derivative of the polynomial ${\mathbf{P}}(s,t)$. Thus, the set $S_+$ is computable in polynomial time. Let ${\mathfrak{M}}$ be a univariate polynomial pMC and $\Psi$ a monotonic PCTL formula, that is, $\Psi$ is in the PCTL fragment obtained by the following grammar: $$\begin{aligned} \Phi & \ ::= \ & a \in S_+ \ \mid \ \Phi\wedge\Phi \ \mid \ \Phi\vee\Phi \ \mid \ {\operatorname{\mathbb{P}}_{\geqslant c}}(\varphi) \ \mid \ {\operatorname{\mathbb{P}}_{> c}}(\varphi) \\ \varphi &::= & \operatorname{\bigcirc}\Phi \ \mid \ \Phi {\mathbin{\mathsf{U}}}\Phi \ \mid \ \Phi {\mathbin{\mathsf{R}}}\Phi \ \mid \ \ \Diamond \Phi \ \mid \ \ \Box \Phi\end{aligned}$$ where $c \in {\mathbb{Q}}_{>0}$. Then, ${\mathrm{Sat}}_{{\mathfrak{M}}(\xi_1)}(\Psi) \subseteq {\mathrm{Sat}}_{{\mathfrak{M}}(\xi_2)}(\Psi)$ for any two valuations $\xi_1$ and $\xi_2$ of $x$ with $\xi_1 < \xi_2$. Hence, if $\Psi$ is monotonic then the satisfaction function $X \to 2^S$, $\xi \mapsto {\mathrm{Sat}}_{{\mathfrak{M}}}(\Psi)(\xi) = {\mathrm{Sat}}_{{\mathfrak{M}}(\xi)}(\Psi)$ is monotonic. For each monotonic PCTL formula $\Psi$ there exist $S_{\Psi} \subseteq S$ and $\xi_{\Psi} \in X$ such that ${\mathrm{Sat}}_{{\mathfrak{M}}(\xi)}(\Psi) = S_{\Psi}$ for all $\xi \geqslant \xi_{\Psi}$ and ${\mathrm{Sat}}_{{\mathfrak{M}}(\xi')}(\Psi) \subseteq S_{\Psi}$ for all $\xi' < \xi_{\Psi}$. To decide (MC-E) for a given monotonic formula $\Phi$, it suffices to determine the sets $S_{\Psi}$ for the sub-state formulas $\Psi$ of $\Phi$. This can be done in polynomial time. 
Using this observation, we obtain: \[thm:mon-uni\] Let ${\mathfrak{M}}= (S, {s_{\textit{\tiny init}}}, E, {\mathbf{P}}, {\mathfrak{C}})$ be a univariate polynomial pMC on $x$, and $\Phi$ a monotonic PCTL formula. Then the model checking problem to decide whether there exists an admissible parameter valuation $\xi$ for $x$ such that ${\mathfrak{M}}(\xi)\models \Phi$ is in P. **Model checking PCTL+EC on MCs with parametric weights.** We now consider the case where ${\mathcal{M}}$ is an ordinary Markov chain augmented with a parametric weight function ${\mathit{wgt}}\colon S \to {\mathbb{Q}}[\overline{x}]$. Given a set $T \subseteq S$ such that ${{\mathrm{Pr}}}^{{\mathcal{M}}}_s(\Diamond T)=1$ for all states $s \in S$, the vector of the expected accumulated weights $e = ({\mathrm{E}}_s^{{\mathcal{M}}}(\operatorname{\tikz[baseline = -0.65ex]{\node [inner sep = 0pt] {$\Diamond$}; \draw (0,-0.9ex) -- (0,0.9ex) (-0.6ex,0) -- (0.6ex,0);}}T))_{s\in S}$ is computable as the unique solution of a linear equation system of the form $A \cdot e = b$, where the matrix $A$ is non-parametric, and only the vector $b$ depends on $\overline{x}$. By Lemma \[lemma:right-hand-param\], ${\mathrm{E}}_s^{{\mathcal{M}}}(\operatorname{\tikz[baseline = -0.65ex]{\node [inner sep = 0pt] {$\Diamond$}; \draw (0,-0.9ex) -- (0,0.9ex) (-0.6ex,0) -- (0.6ex,0);}}T)$ is a polynomial of the form $\sum _{t\in S} \beta_{s,t}\cdot {\mathit{wgt}}(t)$ with $\beta_{s,t}\in {\mathbb{Q}}$ for all $s\in S$, and can be computed in polynomial time. 
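The linearity of the expected accumulated weight in the state weights can be made concrete with a small numerical sketch. The three-state chain below is hypothetical, and we adopt the convention that the weight of a state is collected on each visit before $T$ is reached; the paper's exact weight convention may differ, but the linear structure is the same.

```python
import numpy as np

# Hypothetical 3-state MC: states 0 and 1 are transient, state 2 = T is
# absorbing.  P below is the transition matrix restricted to {0, 1}.
P = np.array([[0.0, 0.5],
              [0.2, 0.0]])

# On the transient states the expected accumulated weight until reaching T
# satisfies  e = wgt + P e  (with e = 0 on T), so  e = (I - P)^{-1} wgt.
# The rows of (I - P)^{-1} are the coefficients beta_{s,t}: each e_s is a
# linear function of the (possibly parametric) weights.
beta = np.linalg.inv(np.eye(2) - P)

wgt = np.array([3.0, 1.0])        # one concrete weight valuation
e = beta @ wgt

# Linearity: scaling every weight scales every expectation accordingly.
e_doubled = beta @ (2 * wgt)
```

Here the non-parametric matrix plays the role of $A$ in the linear system $A \cdot e = b$, and only the right-hand side carries the weights.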
The expected mean payoff for a given set $T$ is given by ${\mathrm{E}}_s^{{\mathcal{M}}}(\operatorname{mp}(T)) = \sum_{\text{BSCC $B$}} {{\mathrm{Pr}}}_s^{{\mathcal{M}}}(\Diamond B)\cdot \operatorname{mp}(B)(T)$ where $\operatorname{mp}(B)(T) = \sum_{t\in T} \zeta_t \cdot {\mathit{wgt}}_T(t)$ with $\zeta_t$ being the steady-state probability for state $t$ inside $B$ (viewed as a strongly connected Markov chain), and ${\mathit{wgt}}_T(t)=0$ if $t \notin T$, ${\mathit{wgt}}_T(t)={\mathit{wgt}}(t)$ for $t\in T$. As the transition probabilities are non-parametric, the steady-state probabilities are obtained as the unique solution of a non-parametric linear equation system. So both types of expectations can be computed in polynomial time. Unfortunately, the treatment of formulas with nested expectation operators is more involved. Using the standard computation scheme that processes the DAG-representation of the given PCTL+EC formula in a bottom-up manner to treat inner subformulas first, the combination of polynomial constraints after the consideration of an inner node is still as problematic as in the pMC-case. Using known algorithms for the existential theory of the reals yields the following bound. \[PCTL+EC-weights\] Let ${\mathcal{M}}$ be an MC with parametric weights over $k$ parameters, and $\Phi$ a PCTL+EC formula. The problem (MC-E) is solvable in time ${\mathcal{O}}\bigl(\, |\Phi|\cdot \operatorname{poly}\bigl(\text{size}({\mathcal{M}}),d\bigr)^{k \cdot |\Phi|_{{\operatorname{\mathbb{E}}_{}}, {\operatorname{\mathbb{C}}_{{\mathrm{E}}}}}} \,\bigr)$, where $|\Phi|_{{\operatorname{\mathbb{E}}_{}}, {\operatorname{\mathbb{C}}_{{\mathrm{E}}}}}$ is the number of expectation and expectation comparison operators in the formula, and $d$ the maximum degree of the polynomials assigned as weights. If there is only one parameter, the model checking for MCs with parametric weights is solvable in polynomial time for the fragment of PCTL+EC without nested formulas (cf. 
Theorem \[thm:upMC-nonnest\]). Conclusion {#sec:conc} ========== In this paper we revisited the model checking problem for pMCs and PCTL-like formulas. The purpose of the first part was to draw attention to fraction-free Gaussian elimination for computing rational functions for reachability probabilities, expected accumulated weights and expected mean payoffs as an alternative to the gcd-based algorithms that have been considered before and are known to suffer from the high complexity of gcd-computations for multivariate polynomials. The experiments with our (not yet optimized) implementation indicate that such an approach can indeed be feasible and beneficial in practice. We thus intend to refine this implementation in future work, including research into further structural heuristics and the potential of a combination with gcd-based simplifications at opportune moments. In the second part of the paper we studied the complexity of the model checking problem for pMCs and PCTL and its extension PCTL+EC by expectation and comparison operators. We identified instances where the model checking problem is NP-hard as well as fragments of PCTL+EC where the model checking problem is solvable in polynomial time. The latter includes the model checking problem for Boolean combinations of probability or expectation conditions for univariate pMCs. This result has been obtained using fraction-free Gaussian elimination to compute rational functions for reachability probabilities, expected accumulated weights and expected mean payoffs, together with polynomial-time algorithms for the theory of the reals over a fixed number of variables. As the time complexity of fraction-free Gaussian elimination is also polynomial for matrices and vectors with a fixed number of parameters, and the polynomial-time decidability of the theory of the reals also holds when the number of variables is fixed [@BeKoRe86], Theorem \[thm:upMC-nonnest\] also holds for pMCs with a fixed number of parameters. 
[^1]: The authors are supported by the DFG through the Collaborative Research Center SFB 912 – HAEC, the Excellence Initiative by the German Federal and State Governments (cluster of excellence cfaed), the Research Training Group QuantLA (GRK 1763) and the DFG-projects BA-1679/11-1 and BA-1679/12-1. [^2]: <http://eigen.tuxfamily.org/> [^3]: <https://github.com/smtrat/carl> [^4]: <http://wwwtcs.inf.tu-dresden.de/ALGI/PUB/GandALF17/>
--- author: - | T. W. Allen and C. J. Burden\ [Department of Theoretical Physics,]{}\ [Research School of Physical Sciences and Engineering,]{}\ [Australian National University, Canberra, ACT 2601, Australia]{}\ title: Positronium States in QED3 --- Introduction ------------ The similarities between Quantum Electrodynamics in three space-time dimensions (QED3) and Quantum Chromodynamics in four space-time dimensions (QCD4) and the simplicity of the theory make QED3 attractive for the study of non-perturbative methods. QED3 is an abelian theory and provides a logarithmic confining $e^-$-$e^+$ potential [@BPR92]. Our approach to positronium states in QED3 is via a solution to the homogeneous Bethe-Salpeter equation with fermion propagator input from the Schwinger-Dyson equation. The full Schwinger-Dyson and Bethe-Salpeter equations are intractable. Here we consider a solvable system of integral equations within the quenched, ladder approximation. This crude truncation of the full equations does break gauge covariance but has very attractive features and has been employed extensively in QCD4 spectrum calculations [@BSpapers; @SC94]. This study continues a previous one [@Bu92], which uses a four-component fermion version of QED3. In this version, the massless case exhibits a chiral-like $U(2)$ symmetry broken into a $U(1) \times U(1)$ symmetry by the generation of a dynamical fermion mass, resulting in a doublet of Goldstone bosons. This pion-like solution is important for drawing similarities between QED3 and QCD4. The four component version of QED3 is also preferred to the two component version because the Dirac action in the two component version is not parity invariant for massive fermions. QCD4 is parity invariant and we aim to have as much in common with that theory as possible. 
The previous work was restricted to zero bare fermion mass, while in this study the bare mass is increased from zero to large values in order to compare with results in the non-relativistic limit. This study also takes a closer look at the choice of fermion propagator input. Knowledge of the analytic properties of the fermion propagator is important for determining the approximation’s ability to provide confinement and whether or not any singularities will interfere with a Bethe-Salpeter solution. Based on the work of Maris [@Ma93; @Ma95] the occurrence of mass-like complex singularities is expected which have the potential to influence our calculations. In section 2 we look at the Bethe-Salpeter and Schwinger-Dyson approximations used in this work and the method used to find the bound state masses. A brief review of transformation properties in QED3 is given in the appendix. These transformation properties are of vital importance for an understanding of the structure of the $e^-$-$e^+$ vertex function and the classification of the bound states. Section 3 describes the non-relativistic limit and the connection between the Bethe-Salpeter and Schrödinger equations for QED3. In section 4 the approximation to the fermion propagator is detailed. The structure of the propagators will be analysed in the complex plane where we attempt to locate the expected mass-like singularities. In section 5 the Bethe-Salpeter solutions are reported and comparisons are made with non-relativistic limit calculations. The results are discussed and conclusions given in section 6. Solving the Bethe-Salpeter Equation ----------------------------------- The Bethe-Salpeter (BS) kernel for this work is a simple one-photon exchange (ladder approximation) which is a commonly used starting point. For convenience we use the quenched approximation, work in Feynman gauge and work only with the Euclidean metric. Fig. 1 shows the Bethe-Salpeter equation in the quenched ladder approximation. 
The corresponding integral equation is $$\Gamma(p,P) = -e^2 \int \mbox{$ \, \frac{d^3q}{(2\pi)^3} \,$} D(p-q) \gamma_\mu S(\mbox{$ \, \frac{1}{2} \,$} P+q) \Gamma(q,P)S(-\mbox{$ \, \frac{1}{2} \,$} P+q) \gamma_\mu, \label{eq:BS}$$ where $\Gamma(p,P)$ is the one fermion irreducible positronium-fermion-antifermion vertex with external legs amputated. The photon propagator $D(p-q)$ in Feynman gauge is $1/(p-q)^2$. The fermion propagator $S$ is the solution to a truncated Schwinger-Dyson (SD) equation. A fermion propagator has been chosen which supports spontaneous mass generation necessary for the formation of the Goldstone bosons. The truncated SD equation for a fermion of bare mass $m$ is $$\Sigma(p)= S(p)^{-1} - (i\!\not \! p + m) = e^2 \int \mbox{$ \, \frac{d^3q}{(2\pi)^3} \,$} D(p-q) \gamma_\mu S_F(q) \gamma_\mu. \label{eq:SD}$$ This approximation is the quenched, rainbow approximation named so because the photon propagator has been replaced by the bare photon propagator and the vertex function $\Gamma$ has been replaced by the bare vertex $\gamma$, resulting in a series of Feynman diagrams which resemble rainbows. In the quenched approximation the SD and BS equations can be recast in terms of a dimensionless momentum $p/e^2$ and bare fermion mass $m/e^2$. From here on we work in dimensionless units and set $e^2 = 1$. We use either of the two following equivalent representations of the fermion propagator, $$S(p)= -i \not\! p \sigma_V(p^2)+\sigma_S(p^2) \label{eq:SIGDEF}$$ or $$S(p)= \frac{1}{ i \not\! p A(p^2)+B(p^2) }. \label{eq:ABDEF}$$ The generation of a dynamical fermion mass and the breaking of chiral symmetry is signalled (in the massless limit) by non-zero $B(p^2)$. The vector and scalar parts $\sigma_V$ and $\sigma_S$ of the propagator are related to the functions $A$ and $B$ simply by dividing these functions ($A$, $B$) by a quantity $p^2 A^2(p^2) + B^2(p^2)$. 
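The equivalence of the two representations (\[eq:SIGDEF\]) and (\[eq:ABDEF\]) is easy to verify: with $\sigma_V = A/(p^2A^2+B^2)$ and $\sigma_S = B/(p^2A^2+B^2)$, and using $(\not\! p)^2 = p^2$ in Euclidean space, the product $S(p)^{-1}S(p)$ splits into a scalar part that must equal one and a $\not\! p$ part that must vanish. A quick numerical sanity check of this algebra, at an arbitrary sample point:

```python
import numpy as np

rng = np.random.default_rng(0)
p2, A, B = rng.uniform(0.1, 2.0, size=3)   # arbitrary p^2, A(p^2), B(p^2)

# sigma_V and sigma_S are A and B divided by p^2 A^2 + B^2:
den = p2 * A**2 + B**2
sV, sS = A / den, B / den

# Expand (i pslash A + B)(-i pslash sV + sS) using pslash^2 = p^2:
scalar_part = p2 * A * sV + B * sS   # must equal 1
pslash_part = A * sS - B * sV        # must vanish
```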
Note that a substitution of this fermion propagator into the Ward-Takahashi identity shows that the bare vertex approximation breaks gauge covariance. However, this model is simple and does meet the requirement that the appropriate Goldstone bosons are formed [@DS79]. It is not difficult to derive a zero mass solution to our BSE analytically. A vertex proportional to the matrix $\gamma_4$ or $\gamma_5$, defined in the appendix, will reduce the quenched ladder BSE to the quenched ladder (rainbow) SDE in the case of zero bound state mass thus forming a doublet of massless states. According to the terminology used in the appendix this is an axi-scalar doublet. These solutions will be seen in section 5. Once the photon and fermion propagators are supplied, the BS equation can be written as a set of numerically tractable integral equations. To do this, we write the BS amplitude $\Gamma$ in its most general form consistent with the parity and charge conjugation of the required bound state, and then project out the coefficient functions for the individual Dirac components. It is convenient to work in the rest frame of the bound state by setting $P_{\mu}=(0,0,iM)$. Then the scalar and axi-scalar vertices given in the appendix by eqs. 
(\[eq:GS\]) and (\[eq:GAS\]) can be written as $$\begin{aligned} \lefteqn{\Gamma^S(q,P) = f(q_3,\mbox{$\left| {\bf q} \right|$};M) - \frac{i q_j \gamma_j} {\mbox{$\left| {\bf q} \right|$}} U(q_3,\mbox{$\left| {\bf q} \right|$};M) } \nonumber \\ & & + \frac{i q_j^{\perp} \gamma_j}{\mbox{$\left| {\bf q} \right|$}}\gamma_{45} V(q_3,\mbox{$\left| {\bf q} \right|$};M) + i \gamma_3 W(q_3,\mbox{$\left| {\bf q} \right|$};M), \label{eq:GSNEW}\end{aligned}$$ $$\begin{aligned} \Gamma^{AS}(q,P) & = & \left(\begin{array}{c} \gamma_4 \\ \gamma_5\end{array} \right) f(q_3,\mbox{$\left| {\bf q} \right|$};M) - \frac{i q_j \gamma_j}{\mbox{$\left| {\bf q} \right|$}} \left(\begin{array}{c} \gamma_4 \\ \gamma_5\end{array} \right) U(q_3,\mbox{$\left| {\bf q} \right|$};M) \nonumber \\ & & + \frac{i q_j^{\perp} \gamma_j}{\mbox{$\left| {\bf q} \right|$}} \left(\begin{array}{c} \gamma_5 \\ -\gamma_4\end{array} \right) V(q_3,\mbox{$\left| {\bf q} \right|$};M) - \gamma_3 \left( \begin{array}{c} \gamma_{4} \\ \gamma_{5}\end{array} \right) W(q_3,\mbox{$\left| {\bf q} \right|$};M), \label{eq:GASNEW}\end{aligned}$$ where the index $j$ takes on values 1 and 2 only, $\mbox{$\left| {\bf q} \right|$}=(q_1^2+q_2^2)^{\frac{1}{2}}$ and $\mbox{$ \, {\bf q} \,$}^{\perp}=(-q_2,q_1)$. The pseudoscalar and axi-pseudoscalar vertices are obtained from the scalar and axi-scalar vertices by multiplication by the matrix $\gamma_{45}$. It is found that the same coupled integral equations result when a vertex is multiplied by the matrix $\gamma_{45}$ and so (scalar, pseudoscalar) and (axi-scalar, axi-pseudoscalar) form two pairs of degenerate states. 
The four equations derived from the BSE, after some manipulation including an angular integration, are [@Bu92]: $$\begin{aligned} f(p) & = & \frac{3}{(2\pi)^2} \int^{\infty}_{-\infty}dq_3 \, \int^{\infty}_0 \mbox{$\left| {\bf q} \right|$} d\mbox{$\left| {\bf q} \right|$} \, \frac{1}{(\alpha^2-\beta^2)^{\frac{1}{2}}} \times\nonumber \\ & & (T_{ff}f(q)+T_{fU}U(q)+T_{fV}V(q)+T_{fW}W(q)) \nonumber \\ U(p) & = & \frac{1}{(2\pi)^2} \int^{\infty}_{-\infty}dq_3 \, \int^{\infty}_0 \mbox{$\left| {\bf q} \right|$} d\mbox{$\left| {\bf q} \right|$} \, \frac{(\alpha^2-\beta^2)^{\frac{1}{2}}-\alpha} {\beta(\alpha^2-\beta^2)^{\frac{1}{2}}} \times \nonumber \\ & & (T_{Uf}f(q)+T_{UU}U(q)+T_{UV}V(q)+T_{UW}W(q)) \nonumber \\ V(p) & = & \frac{1}{(2\pi)^2} \int^{\infty}_{-\infty}dq_3 \, \int^{\infty}_0 \mbox{$\left| {\bf q} \right|$} d\mbox{$\left| {\bf q} \right|$} \, \frac{(\alpha^2-\beta^2)^{\frac{1}{2}}-\alpha} {\beta(\alpha^2-\beta^2)^{\frac{1}{2}}} \times\nonumber \\ & & (T_{Vf}f(q)+T_{VU}U(q)+T_{VV}V(q)+T_{VW}W(q)) \nonumber \\ W(p) & = & \frac{1}{(2\pi)^2} \int^{\infty}_{-\infty}dq_3 \, \int^{\infty}_0 \mbox{$\left| {\bf q} \right|$} d\mbox{$\left| {\bf q} \right|$} \, \frac{1}{(\alpha^2-\beta^2)^{\frac{1}{2}}} \times\nonumber \\ & & (T_{Wf}f(q)+T_{WU}U(q)+T_{WV}V(q)+T_{WW}W(q)) , \label{eq:IE}\end{aligned}$$ where $$\alpha=(p_3-q_3)^2 + \mbox{$\left| {\bf p} \right|$}^2 + \mbox{ $\left| {\bf q} \right|$}^2, \;\;\;\;\;\; \beta=-2 \mbox{$\left| {\bf p} \right|$} \mbox{$\left| {\bf q} \right|$}.$$ Now define the momentum Q by $$Q^2=q_3^2 + \mbox{$\left| {\bf q} \right|$}^2 -\frac{1}{4}M^2 + iMq_3 , \label{eq:QDEF}$$ and use the abbreviations $\sigma_V=\sigma_V(Q^2)$ and $\sigma_S=\sigma_S(Q^2)$ for use in the definition of the functions $T_{ff},T_{fU},\ldots$ which are analytic functions of $q_3, \mbox{$\left| {\bf q} \right|$}$, and $M$. 
The diagonal $T$’s are given by $$\begin{aligned} T_{ff}&=&(\frac{1}{4}M^2 + q_3^{\,2} + \mbox{$\left| {\bf q} \right|$}^2) |\sigma_V|^2 \mp |\sigma_S|^2, \nonumber \\ T_{UU}&=&(\frac{1}{4}M^2 + q_3^{\,2} - \mbox{$\left| {\bf q} \right|$}^2) |\sigma_V|^2 \pm |\sigma_S|^2, \nonumber \\ T_{VV}&=&(\frac{1}{4}M^2 + q_3^{\,2} + \mbox{$\left| {\bf q} \right|$}^2) |\sigma_V|^2 \pm |\sigma_S|^2, \nonumber \\ T_{WW}&=&-(\frac{1}{4}M^2 + q_3^{\,2} - \mbox{$\left| {\bf q} \right|$}^2) |\sigma_V|^2 \pm |\sigma_S|^2,\end{aligned}$$ where the upper sign applies to the scalar equations and the lower sign to the axi-scalar equations. The off-diagonal $T$’s are, for the scalar positronium states: $$\begin{array}{ccccl} T_{fU}&=&T_{Uf}&=&(\sigma_V^{\ast}\sigma_S+\sigma_S^{\ast}\sigma_V) \mbox{$\left| {\bf q} \right|$}, \\ T_{fV}&=&T_{Vf}&=&-M\mbox{$\left| {\bf q} \right|$}|\sigma_V|^2, \nonumber \\ T_{fW}&=&T_{Wf}&=&-(\sigma_V^{\ast}\sigma_S+\sigma_S^{\ast}\sigma_V)q_3 +\frac{i}{2}(\sigma_V^{\ast}\sigma_S-\sigma_S^{\ast}\sigma_V)M, \\ T_{UV}&=&T_{VU}&=&-[\frac{1}{2}(\sigma_V^{\ast}\sigma_S+ \sigma_S^{\ast}\sigma_V)M +iq_3(\sigma_V^{\ast}\sigma_S-\sigma_S^{\ast}\sigma_V)], \\ T_{UW}&=&T_{WU}&=&2q_3\mbox{$\left| {\bf q} \right|$}|\sigma_V|^2, \\ T_{VW}&=&T_{WV}&=&-i(\sigma_V^{\ast}\sigma_S-\sigma_S^{\ast}\sigma_V) \mbox{$\left| {\bf q} \right|$}, \end{array}$$ and for the axi-scalar states: $$\begin{array}{ccccl} T_{fU}&=&T_{Uf}&=&i(\sigma_V^{\ast}\sigma_S-\sigma_S^{\ast}\sigma_V) \mbox{$\left| {\bf q} \right|$}, \\ T_{fV}&=&-T_{Vf}&=&M\mbox{$\left| {\bf q} \right|$}|\sigma_V|^2, \\ T_{fW}&=&-T_{Wf}&=&-i(\sigma_V^{\ast}\sigma_S-\sigma_S^{\ast}\sigma_V)q_3 -\frac{1}{2}(\sigma_V^{\ast}\sigma_S+\sigma_S^{\ast}\sigma_V)M, \\ T_{UV}&=&T_{VU}&=&- [\frac{i}{2}(\sigma_V^{\ast}\sigma_S-\sigma_S^{\ast}\sigma_V)M -q_3(\sigma_V^{\ast}\sigma_S+\sigma_S^{\ast}\sigma_V)], \\ T_{UW}&=&-T_{WU}&=&-2q_3\mbox{$\left| {\bf q} \right|$}|\sigma_V|^2, \\ 
T_{VW}&=&-T_{WV}&=&(\sigma_V^{\ast}\sigma_S+\sigma_S^{\ast}\sigma_V) \mbox{$\left| {\bf q} \right|$}. \end{array}$$ This is the same set of equations solved in Ref. [@Bu92] with only the fermion propagator input altered. The bare fermion mass $m$ only comes into the calculation through this input. The solution to the BSE involves iteration of the coupled integral equations in Eq. (\[eq:IE\]). These equations may be rewritten as $${\bf f}(\mbox{$\left| {\bf p} \right|$},p_3;M) = \int dq_3 \int d\mbox{$\left| {\bf q} \right|$} \, K(\mbox{$\left| {\bf p} \right|$},p_3;\mbox{$\left| {\bf q} \right|$},q_3;M) {\bf f} (\mbox{$\left| {\bf q} \right|$},q_3;M), \label{eq:BSI}$$ where ${\bf f}=(f,U,V,W)^{\rm T}$. For each symmetry case and each fermion mass this is solved as an eigenvalue problem of the form $$\int dq\,K(p,q;M) {\bf f}(q) = \Lambda(M) {\bf f}(p), \label{eq:EV}$$ for a given test mass $M$. This is repeated for different test bound state masses until an eigenvalue $\Lambda(M)=1$ is obtained. Non-Relativistic Limit ---------------------- We consider now the non-relativistic limit $m\rightarrow \infty$ of our BS formalism in order to enable comparisons with existing numerical calculations [@YH91; @THY95] of the Schrödinger equation for QED3, and with the large $m$ solution of Eq. (\[eq:IE\]) The Schrödinger equation with a confining logarithmic potential is an interesting problem in its own right. Initially one is faced with the problem of setting the scale of the potential, or equivalently, setting the zero of energy of the confined bound states. A solution to this problem was proposed by Sen [@S90] and Cornwall [@C80] in terms of cancellation of infrared divergences in perturbation theory. They introduce a regulating photon mass $\mu$ in order to set the potential as the 2-dimensional Fourier transform of the photon propagator $1/(k^2 + \mu^2)$, leading to a potential proportional to $\ln (\mu r)$. 
They further interpret the sum of the bare fermion mass and the fermion self energy evaluated at the bare fermion mass shell as a renormalised fermion mass, leading to a mass renormalisation $\delta m \propto \ln (m/\mu)$. The logarithmic divergences in the photon potential and the fermion self energy then conspire to cancel, leaving a finite positronium mass. The first numerical treatment of the Schrödinger equation for QED3 using this line of argument was carried out by Yung and Hamer [@YH91]. In a subsequent, improved calculation by Tam, Hamer and Yung [@THY95], the formalism was shown to be consistent with an analysis of QED3 from the point of view of discrete light cone quantisation. Their resulting expression for the bound state energy, obtained as a solution to the differential equation $$\left\{ -\frac{1}{m} \nabla^2 + \frac{1}{2\pi} \left(C + \ln(mr) \right)\right\} \phi({\bf r}) = (E - 2m) \phi({\bf r}), \label{eq:THYDE}$$ where $C$ is Euler’s constant, is given in terms of the bare fermion mass $m$ as $$E = 2m + \frac{1}{4\pi} \ln m + \frac{1}{2\pi} \left(\lambda - \frac{1}{2} \ln\frac{2}{\pi} \right). \label{eq:THY1}$$ The lightest s-wave positronium state and first excited state are given by $\lambda_0 = 1.7969$ and $\lambda_1 = 2.9316$ respectively. The first five states are provided in Ref. [@THY95]. Here we present a treatment of the non-relativistic limit of the QED3 positronium spectrum in terms of our SD–BS equation formalism. We begin with the fermion propagator in the limit $m\rightarrow \infty$. For large fermion mass we expect the residual effect of the chiral symmetry breaking contribution to the fermion self energy to be small compared with the contribution from the perturbative loop expansion. We shall therefore assume to begin with that the self energy is reasonably well approximated by the one-loop result. The validity of this approximation for spacelike momenta will be demonstrated numerically in the next section. 
The 1-loop fermion self energy, with the functions $A$ and $B$ defined in Eq. (\[eq:ABDEF\]), is given by $$A = 1 + \frac{\Sigma_A(\frac{p^2}{m^2})}{m}, \label{eq:A1LOOP}$$ where $$\Sigma_A(x^2)=\frac{1}{8\pi x^2} \left[ 1 - \frac{1-x^2}{2x} \arccos\left( \frac{1-x^2}{1+x^2} \right) \right], \label{eq:SIGA1}$$ and $$B = m \left( 1 + \frac{\Sigma_B(\frac{p^2}{m^2})}{m} \right), \label{eq:B1LOOP}$$ where $$\Sigma_B(x^2)=\frac{3}{8\pi x} \arccos\left( \frac{1-x^2}{1+x^2} \right) . \label{eq:SIGB1}$$ This result is valid for (Euclidean) spacelike momenta $p^2 > 0$. An analytic continuation of the $\Sigma$ functions valid for $|p|^2 < m^2$, or $\left| x \right| <1$, is $$\Sigma_A(x^2)=\frac{1}{8\pi i x} \left[ \frac{x^2-1}{2x^2} \ln\left( \frac{1+ix}{1-ix} \right) + \frac{i}{x} \right],$$ $$\Sigma_B(x^2)=\frac{3}{8\pi i x} \ln\left( \frac{1+ix}{1-ix} \right) .$$ Note that this representation exposes a logarithmic infinity in the self energy at the bare fermion mass pole $p^2=-m^2$. This is the infrared divergence in the renormalised fermion self energy as defined by Sen [@S90] referred to above. However, in our formalism, this singularity does not lead to a pole in the propagator functions $\sigma_V$ and $\sigma_S$ defined in Eq. (\[eq:SIGDEF\]), which would signal the propagation of a free fermion [@RW94], but a logarithmic zero. Using Eqs. (\[eq:A1LOOP\]) and (\[eq:B1LOOP\]) we obtain $$\sigma_V(p^2) = \frac{1}{m^2}\frac{1 + \frac{\Sigma_A}{m}} {\epsilon\left(1 + \frac{\Sigma_A}{m}\right)^2 + 2\left(1 + \frac{\Sigma_A + \Sigma_B}{2m}\right) \frac{\Sigma_B - \Sigma_A}{m}},$$ $$\sigma_S(p^2) = \frac{1}{m}\frac{1 + \frac{\Sigma_B}{m}} {\epsilon\left(1 + \frac{\Sigma_A}{m}\right)^2 + 2\left(1 + \frac{\Sigma_A + \Sigma_B}{2m}\right) \frac{\Sigma_B - \Sigma_A}{m}},$$ where we have defined $$\epsilon = \frac{p^2 + m^2}{m^2}.$$ The functions $\sigma_V$ and $\sigma_S$ are plotted in Figs. 2a and 2b for $m=$ 1, 2, 4, 8 and $\infty$, the final curve being the bare propagator. 
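The one-loop self-energy functions can be checked numerically against their limiting behaviour. Using $\arccos\big((1-x^2)/(1+x^2)\big) = 2\arctan x = 2x - \frac{2}{3}x^3 + \ldots$, one finds $\Sigma_A \to 1/6\pi$ and $\Sigma_B \to 3/4\pi$ as $x \to 0$, while $\Sigma_B \to 3/8x$ as $x \to \infty$. A small sketch, restricted to spacelike momenta where Eqs. (\[eq:SIGA1\]) and (\[eq:SIGB1\]) apply directly:

```python
import numpy as np

def Sigma_A(x):
    # Eq. (eq:SIGA1) with x = |p|/m, valid for spacelike momenta
    t = np.arccos((1 - x**2) / (1 + x**2))
    return (1 - (1 - x**2) / (2 * x) * t) / (8 * np.pi * x**2)

def Sigma_B(x):
    # Eq. (eq:SIGB1)
    return 3 * np.arccos((1 - x**2) / (1 + x**2)) / (8 * np.pi * x)
```

Evaluating at small and large $x$ reproduces the limits quoted above, which provides a quick consistency check on the expressions before they are fed into the propagator functions $\sigma_V$ and $\sigma_S$.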
From these plots we see that for large $m$, the deviation from the bare propagator due to the 1-loop self energy is dominated by the logarithmic contribution near the bare fermion mass shell, $\epsilon = 0$. With this in mind, we shall use the approximation $$\Sigma_A(x^2) \approx -\frac{1}{8\pi} \ln\frac{\epsilon}{4}, \label{eq:SIGAP1}$$ $$\Sigma_B(x^2) \approx -\frac{3}{8\pi} \ln\frac{\epsilon}{4}. \label{eq:SIGAPX}$$ Taking $\epsilon$ to be of order $1/m$ for the purposes of the BS equation (see Eq. (\[eq:EPSAPX\]) below) these approximations give $$S(p) = \frac{-i \! \not \! p + m}{m} \cdot \frac{1}{m\epsilon - \frac{1}{2\pi} \ln \frac{\epsilon}{4}} \left(1 + O\left(\frac{\ln m}{m}\right)\right). \label{eq:SAPX}$$ The vector and scalar parts of this approximate propagator (without the $O(\ln m /m)$ corrections) are also plotted in Figs. 2a and 2b for comparison. Turning now to the BS equation, we set the bound state momentum in Eq. (\[eq:BS\]) equal to $P_{\mu}=(2m+\delta)iv_{\mu}$, where $v_{\mu}=(0,0,1)$ and $-\delta$ is a “binding energy”. This gives (according to the momentum distribution in Fig. 1) $$\Gamma(p) = - \int \frac{d^3q}{(2\pi)^3} D(p-q) \, \gamma_\mu \, S\left[-(m+\frac{\delta}{2})iv_\mu+q_\mu\right] \Gamma(q) \, S\left[(m+\frac{\delta}{2})iv_\mu+q_\mu\right] \gamma_\mu. \label{eq:NRBSE}$$ Setting $$\epsilon \, = \, \frac{2}{m}\left(-\frac{1}{2}\delta+iq_3+ \frac{\left|{\bf q}\right|^2}{2m}\right) \, + \, O\left(\frac{1}{m^2}\right) \label{eq:EPSAPX}$$ in Eq. (\[eq:SAPX\]) gives $$S\left[(m+\frac{\delta}{2})iv_\mu+q_\mu \right] = \frac{1+\gamma_3}{2}\, \frac{1}{(\mbox{$ \, -\frac{1}{2} \,$} \delta+iq_3+\frac{\left|{\bf q}\right|^2}{2m})- \frac{1}{4\pi} \ln \frac{\epsilon}{4}} \, + \, O\left(\frac{\ln m}{m}\right). 
\label{eq:SPLUS}$$ Similarly $$S\left[-(m+\frac{\delta}{2})iv_\mu+q_\mu \right] = \frac{1-\gamma_3}{2}\, \frac{1}{(\mbox{$ \, -\frac{1}{2} \,$} \delta-iq_3+\frac{\left|{\bf q}\right|^2}{2m})- \frac{1}{4\pi} \ln \frac{\epsilon^*}{4}} \, + \, O\left(\frac{\ln m}{m}\right). \label{eq:SMINUS}$$ The $\left|{\bf q}\right|^2/2m$ term has been retained here to ensure convergence of the $\left|{\bf q}\right|$ integral in the BS equation below. Since the vertex $\Gamma$ is defined with the fermion legs truncated, and $S \propto \frac{1}{2}(1 \pm \gamma_3)$, the only relevant part of $\Gamma$ is the projection $\mbox{$ \, \frac{1}{2} \,$}(1-\gamma_3) \, \Gamma \, \mbox{$ \, \frac{1}{2} \,$}(1+\gamma_3)$. With this in mind, the general forms in eqs. (\[eq:GSNEW\]) and (\[eq:GASNEW\]) become, $$\mbox{$ \, \frac{1}{2} \,$}(1-\gamma_3) \, \Gamma^S \, \mbox{$ \, \frac{1}{2} \,$}(1+\gamma_3) = \mbox{$ \, \frac{1}{2} \,$}(1-\gamma_3) \frac{q_j \gamma_j}{\mbox{$\left| {\bf q} \right|$}} g(q_3,\mbox{$\left| {\bf q} \right|$})$$ $$\mbox{$ \, \frac{1}{2} \,$}(1-\gamma_3) \, \Gamma^{AS} \, \mbox{$ \, \frac{1}{2} \,$}(1+\gamma_3) = \mbox{$ \, \frac{1}{2} \,$}(1-\gamma_3) \left(\begin{array}{c} \gamma_4 \\ \gamma_5\end{array} \right) g(q_3,\mbox{$\left| {\bf q} \right|$}). \label{eq:SASEQ}$$ Substituting eqs. (\[eq:SPLUS\]), (\[eq:SMINUS\]) and (\[eq:SASEQ\]) into Eq. 
(\[eq:NRBSE\]) one obtains for the scalar states the single integral equation $$g(p) \approx\int \frac{d^3q}{(2\pi)^3} \, \frac{1}{(p - q)^2} \frac{\bf{p.q}}{\left|{\bf p}\right|\left|{\bf q}\right|} \frac{g(q)}{ \left|-\frac{1}{2}\delta+ iq_3 + \frac{\left|{\bf q}\right|^2}{2m} - \frac{1}{4\pi} \ln \frac{1}{2m} (-\frac{1}{2}\delta+iq_3+\frac{\left|{\bf q}\right|^2}{2m}) \right|^2 }, \label{eq:NREQ1}$$ and for the axi-scalar states the single equation $$g(p) \approx \int \frac{d^3q}{(2\pi)^3}\, \frac{1}{(p - q)^2} \frac{g(q)}{ \left|-\frac{1}{2}\delta+ iq_3 + \frac{\left|{\bf q}\right|^2}{2m} - \frac{1}{4\pi} \ln \frac{1}{2m} (-\frac{1}{2}\delta+iq_3+\frac{\left|{\bf q}\right|^2}{2m}) \right|^2 }. \label{eq:NREQ1A}$$ Note that, without the $\left|{\bf q}\right|^2/2m$ term in the denominator, translation invariance of the integrand implies that $g$ is independent of $\left|{\bf q}\right|$. In reality, $g$ is a slowly varying function of $\left|{\bf q}\right|$, and this extra $O(1/m^2)$ term must be retained in $\epsilon$ to account for the fact that the relevant region of integration in Eqs. (\[eq:NREQ1\]) and (\[eq:NREQ1A\]) extends out to $O(\sqrt{m})$ in the $\left|{\bf q}\right|$ direction, but only $O(1)$ in the $q_3$ direction. Numerical solutions of Eqs. (\[eq:NREQ1\]) and (\[eq:NREQ1A\]) will be given in section 5. The function $g$ is an even or odd function of $q_3$ corresponding to positronium states which are even or odd respectively under charge conjugation. In order to obtain a Schrödinger equation, we now rewrite the axi-scalar equation in the form $$\begin{aligned} g(p) & = & \int \frac{d^2{\bf }q}{(2\pi)^2} \int_{-\infty}^\infty \frac{dq_3}{2\pi}\, \frac{1}{(p_3 - q_3)^2 + \left|{\bf p - q}\right|^2} \times \nonumber \\ & & \!\!\!\!\! 
\frac{g(q)}{ \left(-\frac{1}{2}\delta+ iq_3 + \frac{\left|{\bf q}\right|^2}{2m} + \Sigma_+(q_3,\left|{\bf q}\right|)\right) \left(-\frac{1}{2}\delta - iq_3 + \frac{\left|{\bf q}\right|^2}{2m} + \Sigma_-(q_3,\left|{\bf q}\right|)\right)}, \label{eq:NREQ2}\end{aligned}$$ where $$\Sigma_\pm(q_3,\left|{\bf q}\right|) = - \frac{1}{4\pi} \ln \frac{1}{2m} \left(-\frac{1}{2}\delta \pm iq_3 +\frac{\left|{\bf q}\right|^2}{2m}\right). \label{eq:SIGPM}$$ Assuming the integrand dies off sufficiently rapidly as $q_3 \rightarrow -i\infty$, we deform the contour of integration around the pole at $$q_3^{\rm pole} = -i \left(-\frac{1}{2}\delta + \frac{\left|{\bf q}\right|^2}{2m} + \Sigma_-(q_3^{\rm pole},\left|{\bf q}\right|) \right), \label{eq:Q3POLE}$$ to obtain $$g(p) = \int \frac{d^2{\bf q}}{(2\pi)^2} \, \frac{1}{(p_3 - q_3^{\rm pole})^2 + \left|{\bf p - q}\right|^2} \, \frac{g(q_3^{\rm pole},\left|{\bf q}\right|)} {-\delta + \frac{\left|{\bf q}\right|^2}{m} + 2\Re\Sigma_-(q_3^{\rm pole},\left|{\bf q}\right|)}. \label{eq:NREQ3}$$ (We could equally well deform the contour round the pole at $(q_3^{\rm pole})^*$ if the integrand decays in the opposite direction, without affecting our final result.) Defining $$\Phi(p_3,\left|{\bf p}\right|) = \frac{g(p_3,\left|{\bf p}\right|)} {-\delta + \frac{\left|{\bf p}\right|^2}{m} + 2\Re\Sigma_-(p_3,\left|{\bf p}\right|)},$$ gives $$\begin{aligned} \lefteqn{\left\{-\delta + \frac{\left|{\bf p}\right|^2}{m} + 2\Re\Sigma_-(p_3,\left|{\bf p}\right|)\right\} \Phi(p_3,\left|{\bf p}\right|) } \hspace{30 mm}\nonumber \\ & = & \int \frac{d^2{\bf q}}{(2\pi)^2} \, \frac{1}{(p_3 - q_3^{\rm pole})^2 + \left|{\bf p - q}\right|^2} \Phi(q_3^{\rm pole},\left|{\bf q}\right|). \label{eq:NRPHI1}\end{aligned}$$ In order to isolate the logarithmic infrared divergence we set $$p_3 = p_3^{\rm pole} + \mu, \label{eq:P3MU}$$ with $\mu$ small and real, and $p_3^{\rm pole}$ defined by analogy with Eq. (\[eq:Q3POLE\]). The right hand side of Eq. 
(\[eq:NRPHI1\]) then becomes $$\begin{aligned} \mbox{r.h.s.} & = & \int \frac{d^2{\bf q}}{(2\pi)^2} \, \frac{1}{\mu^2 + O(\mu(\left|{\bf p}\right| - \left|{\bf q}\right|)) + \left|{\bf p - q}\right|^2} \phi({\bf q}) \nonumber \\ & = & \mbox{F.T. of } \frac{-1}{2\pi} \left[C + \ln\left(\frac{\mu r}{2}\right)\right] \phi({\bf r}) \hspace{5 mm} \mbox{as $\mu \rightarrow 0$}, \label{eq:RHS}\end{aligned}$$ where $\phi({\bf q}) = \Phi(q_3^{\rm pole},\left|{\bf q}\right|)$. Following the reasoning of refs. [@S90] and [@C80], this logarithmic divergence should be cancelled by the fermion self energy contribution $2\Re\Sigma_-(p_3,\left|{\bf p}\right|)$. However, from Eq. (\[eq:SIGPM\]), we see that the logarithmic divergence in the self energy occurs at the bare fermion mass pole $p_3^{\rm bare} = -i (-\delta/2 + \left|{\bf p}\right|^2/2m)$, and not the dressed pole $p_3^{\rm pole}$. The problem lies in the use of the 1-loop approximation. If instead the fermion self energy is calculated to all orders in rainbow approximation, the self energy feeds back into the loop integral via the propagator to replace Eq. (\[eq:SIGPM\]) by $$\Sigma_-(p_3,\left|{\bf p}\right|) = - \frac{1}{4\pi} \ln \frac{1}{2m} \left(-\frac{1}{2}\delta - ip_3 +\frac{\left|{\bf p}\right|^2}{2m} + \Sigma_-(p_3,\left|{\bf p}\right|)\right), \label{eq:SIGM}$$ which provides a rainbow SD equation for $\Sigma_-$ in the non-relativistic limit. Then using Eqs. (\[eq:NRPHI1\]), (\[eq:Q3POLE\]), (\[eq:P3MU\]), (\[eq:RHS\]) and (\[eq:SIGM\]) and Fourier transforming we finally obtain $$\left\{ -\frac{1}{m} \nabla^2 + \frac{1}{2\pi} \left(C + \ln(mr) \right)\right\} \phi({\bf r}) = \delta \phi({\bf r}),$$ agreeing with Eq. (\[eq:THYDE\]). Had we started from the scalar equation (\[eq:NREQ1\]) in place of the axi-scalar equation, the same result would have been obtained at Eq. (\[eq:RHS\]), leading to an identical Schrödinger equation. 
The important point to notice in this derivation is the significance of a non-perturbative solution to the SD equation in cancelling the infrared divergences. In the massless fermion limit, it is well known that chiral symmetry breaking plays a pivotal role in determining the bound state spectrum. It appears also that, even in the non-relativistic limit, the remnant effects of chiral symmetry breaking, via a non-perturbative solution to the SD equation, have a role to play.

The Fermion Propagator
----------------------

The BSE described in Section 2 requires a fermion propagator input in the form of Eq. (\[eq:SIGDEF\]) or Eq. (\[eq:ABDEF\]), and this needs to be available over a region in the complex $p^2$ plane defined by Eq. (\[eq:QDEF\]) for $q_3$ and $\mbox{$\left| {\bf q} \right|$}$ real. This is the region [@SC92; @SC94] $$\Omega = \left\{ Q^2 = X + iY \left| X > \frac{Y^2}{M^2} - \frac{1}{4} M^2 \right. \right\}. \label{eq:REGION}$$ In this section we investigate ways of obtaining a solution to the SDE over $\Omega$. The fermion propagator, and thus the functions $\sigma_V$ and $\sigma_S$, must be well behaved over this region. The solution to the SDE (\[eq:SD\]) is quite simple along the positive real (spacelike) $p^2$ axis. Substitution of the general expression for the fermion propagator (\[eq:ABDEF\]) into the SDE gives an integral equation involving the $A$ and $B$ functions which can be split into two coupled integral equations by simple projections. Angular integrals can be performed to leave one-dimensional integrals (over the modulus of the $q$ vector), $$A(p^2) - 1 = \frac{1}{4\pi^2p^2} \int^{\infty}_{0}dq \frac{q A(q^2)}{q^2 A^2(q^2)+B^2(q^2)} \left({\frac{p^2+q^2}{4 p} \ln\left({\frac{p+q}{p-q}}\right)^2 - q}\right),$$ $$B(p^2) - m = \frac{3}{8\pi^2p} \int^{\infty}_{0}dq \frac{q B(q^2)}{q^2 A^2(q^2)+B^2(q^2)} \ln\left({\frac{p+q}{p-q}}\right)^2. \label{eq:ABINTS}$$ The integrations range from 0 to some UV cutoff along the positive real axis.
This theory is super-renormalisable and thus has no ultraviolet divergences, so the cutoff is merely a numerical limit, taken large enough that it has no bearing on the results. For a set of points $p$ corresponding to the set of $q$ points in the integration, the equations are iterated until convergence to leave the solution along the positive real axis. However, the solution is required for complex $p^2$. We see three possibilities. The first is to use the converged functions $A(q^2)$ and $B(q^2)$ in the integrals over the same contour (positive real $q^2$) and supply the complex point $p$ desired. The integrals should provide the solution at that point $p$. However, the analytic structure of the integrands in Eq. (\[eq:ABINTS\]) will not allow an analytic continuation by this method, because a pinch singularity in the integrand forces us to integrate through the point $p$ [@Ma95]. The second possibility is to rotate the contour through an angle $2\phi$ in the $p^2$ plane so that it passes through the desired point $p$ [@SC92]. In this way a cancellation of the complex parts within the logarithms occurs. Fig. 3 shows the first and second contours ($C_1$ and $C_2$ respectively). It can be seen from Eq. (\[eq:ABINTS\]) that the logarithms will have real arguments along the radial portion of $C_2$, while the arc portion contributes nothing to the integral because the integrand falls off sufficiently quickly in the ultraviolet [@Ma95]. Based on the Landau gauge calculations of Maris [@Ma93; @Ma95] we expect conjugate singularities to occur in the second and third quadrants of the $p^2$ plane, away from the negative real (timelike) axis. Thus, as $2\phi$ increases towards $\pi$ from zero (and the negative real $p^2$ axis is approached), a singularity interferes and we may have convergence problems.
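The real-axis iteration of Eq. (\[eq:ABINTS\]) can be sketched as follows. This is not the paper's actual 51-point scheme; the grid, the quadrature, the seed for $B$, and the treatment of the integrable $p=q$ logarithmic singularity are all illustrative choices (Python with numpy, in units $e^2 = 1$):

```python
import numpy as np

def solve_sde_real_axis(m=0.1, n=200, tol=1e-5, itmax=500):
    """Iterate Eq. (eq:ABINTS) for A(p^2), B(p^2) along the positive real
    (spacelike) axis.  Sketch-level quadrature: a log grid with trapezoid-like
    weights, dropping the integrable p = q point (an O(h ln h) sliver)."""
    q = np.logspace(-3, 3, n)                # momentum moduli up to the UV cutoff
    w = np.gradient(q)                       # simple quadrature weights
    P, Q = np.meshgrid(q, q, indexing="ij")  # P: external momentum, Q: loop momentum
    D = np.abs(P - Q)
    L = np.where(D > 0, np.log(((P + Q) / np.where(D > 0, D, 1.0)) ** 2), 0.0)
    KA = (P**2 + Q**2) / (4.0 * P) * L - Q   # kernel of the A equation
    A, B = np.ones(n), np.full(n, m + 0.1)   # nonzero seed admits a chirally broken solution
    for _ in range(itmax):
        common = Q * w / (q**2 * A**2 + B**2)[None, :]
        Anew = 1.0 + (KA * common * A[None, :]).sum(axis=1) / (4.0 * np.pi**2 * q**2)
        Bnew = m + 3.0 * (L * common * B[None, :]).sum(axis=1) / (8.0 * np.pi**2 * q)
        if max(abs(Anew - A).max(), abs(Bnew - B).max()) < tol:
            A, B = Anew, Bnew
            break
        A, B = Anew, Bnew
    return q, A, B
```

Because both kernels are non-negative, $B(p^2) > m$ everywhere, with $B \rightarrow m$ and $A \rightarrow 1$ in the ultraviolet, mirroring the behaviour described in the text.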
It will be seen that these singularities can lie a fair way from $\phi=\frac{\pi}{2}$, and convergence problems can occur for $\phi$ not much more than $\frac{\pi}{4}$ (i.e., barely reaching into the second quadrant of $p^2$). For the case $m=0$ to be considered shortly, no solution could be found for $\phi$ greater than 0.90 radians (with a reasonable convergence criterion $\frac{\Delta B}{B}<0.001$). For this solution to be applied to the BSE we need to know the value of the fermion propagator for $\phi$ from $0$ to $\frac{\pi}{2}$, and so this method is not practical. However, although a slowing of convergence as $\phi$ increases prevents a solution being attained in all of $\Omega$, it does provide an accurate solution in a large portion of $\Omega$. We therefore have a test for any $A$ and $B$ functions we wish to use in the BSE. The third possibility, and the one employed here and in ref. [@Bu92], is to find a good analytic fit along the positive real $p^2$ axis and extend the solution into the complex plane by analytic continuation. These fits may be for $A$ and $B$ or for the functions $\sigma_V$ and $\sigma_S$. The work of Maris [@Ma95] suggests that it is not necessary for $\sigma_V$ and $\sigma_S$ to be entire functions for the fermions to be confined, only that there be no poles on the timelike $p^2$ axis. Fits to functions $A$ and $B$ used in previous work [@Bu92], based on the known asymptotic infrared and ultraviolet behaviour of these functions, were tested by comparing them with the direct solution for various angles $\phi$. The fits, adjusted to allow variable fermion mass $m$, are given by $$A_{\rm fit}(p^2)= \frac{a_1}{(a_2^{\,2} +p^2)^{\frac{1}{2}}} +a_3 e^{-a_4p^2} +1,$$ $$B_{\rm fit}(p^2)= \frac{b_1}{b_2 +p^2}+b_3 e^{-b_4p^2} + m. \label{eq:FIT}$$ The parameters $a_n$, $b_n$ are functions of the fermion mass.
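For reference, the fit forms of Eq. (\[eq:FIT\]) and the $\sigma$ functions built from them translate directly into code. The parameter values below are hypothetical placeholders (the fitted $a_n$, $b_n$, which depend on $m$, are not reproduced in the text), and the decomposition $\sigma_V = A/(p^2A^2+B^2)$, $\sigma_S = B/(p^2A^2+B^2)$ is the standard one for a propagator of the form (\[eq:ABDEF\]):

```python
import numpy as np

# Hypothetical parameter values for illustration only; the paper's fitted
# a_n, b_n are not quoted in the text.
a1, a2, a3, a4 = 0.05, 0.3, 0.1, 2.0
b1, b2, b3, b4 = 0.05, 0.1, 0.02, 2.0
m = 0.0

def A_fit(p2):
    # p2 may be real (spacelike axis) or complex (analytic continuation)
    return a1 / np.sqrt(a2**2 + p2) + a3 * np.exp(-a4 * p2) + 1.0

def B_fit(p2):
    return b1 / (b2 + p2) + b3 * np.exp(-b4 * p2) + m

def sigma_V(p2):
    return A_fit(p2) / (p2 * A_fit(p2)**2 + B_fit(p2)**2)

def sigma_S(p2):
    return B_fit(p2) / (p2 * A_fit(p2)**2 + B_fit(p2)**2)
```

On the spacelike axis the denominator $p^2A^2+B^2$ is strictly positive, so both $\sigma$ functions are smooth there; conjugate poles can only appear once the functions are continued off the axis, which is where the fit (rather than the direct solution) is being trusted.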
The numerical solution to which these functions were fitted is an iterative solution to the SDE using a non-uniform 51-point grid along the positive real axis up to a momentum cutoff $p = 1000$, with a $0.1\%$ tolerance in the integration routine. Plots of the numerical solutions and function fits for various $m$ values are given in Fig. 4. Note that it is the $\sigma$ functions that are important in Eq. (\[eq:BS\]) and not $A$ and $B$, and thus the effect of the fit on the denominator $p^2 A^2 + B^2$ relating these must be considered. Conjugate poles exist where the factor $p^2 A^2 + B^2$ appearing in the denominator of the BSE integrand is zero. Table 1 lists the conjugate poles arising from the fits for each fermion mass and the corresponding maximum bound state masses allowed. The maximum $M$ allowed is the value for which the boundary of $\Omega$ in Eq. (\[eq:REGION\]) coincides with the conjugate poles. No comment about the viability of our model BSE can be made until solutions are attempted, because the integration region depends on the solution mass $M$. The location of the conjugate singularities for the $m=0$ case in Table 1 is slightly different to that reported in Ref. [@Bu92], where it is $-0.00400 \pm i0.00666$. This is because of the flexibility of the fitting functions. The fit in this work and that in Ref. [@Bu92] for the zero fermion mass case had similar accuracy along the positive real $p^2$ axis but had the freedom to take on slightly different forms throughout the complex plane. This is because along the positive real $p^2$ axis the non-asymptotic form-fixing parameters ($a_4$ and $b_4$) are only loosely determined. Despite the difference in the two results, the BSE calculation for bound state masses should show close agreement, as each fit adequately models the direct solution throughout the complex plane. The singularities in the $\sigma$ fits for fermion masses greater than or equal to $0.1$ lie on the negative real $p^2$-axis.
This suggests that free propagation occurs at these masses and the model is not confining. An accurate location of the singularities in the SDE solution would be needed before it can be said whether this result is due to the fits or the rainbow approximation used in the SDE solution. According to Ref. [@Ma95] the rainbow approximation SDE solution is expected to be confining even for large fermion mass. Thus we assume our result is due to the lack of accuracy in our fits near the negative real $p^2$ axis, and that it is likely that the singularities move close to that axis as $m$ increases but never actually lie on that axis. Figs. 5a and 5b show plots of the $\sigma_V$ and $\sigma_S$ moduli respectively for zero fermion mass and angles $\phi=0$, $\phi=\frac{\pi}{8}$ and $\phi=\frac{\pi}{4}$ against the $p$ modulus (with a range far smaller than the UV cutoff used in our calculations). The direct solutions to the SDE and the fits are compared. It can be seen that the functions are very good fits along the positive real axis ($\phi=0$), where both $\sigma_V$ and $\sigma_S$ are real. The fit is also good for $\phi=\frac{\pi}{8}$. Real and imaginary components have not been given separately as they show similar agreement. In the case $\phi=\frac{\pi}{4}$ the fitting function has begun to deviate from the SDE solution. This is mostly due to the apparent difference in the location of a spike. Based on the largest bound state mass for $m=0$ reported in the next section, the BSE integration region $\Omega$ extends along the direction $\phi=\frac{\pi}{4}$ out to a modulus of approximately $0.083$. In this range the small-angle solutions are very accurate, but for larger $\phi$ much of the error due to the difference in the location of the spike will be experienced. As the angle is increased further, convergence problems occur until eventually no solution can be found at all ($\phi>0.90$).
The spike forming in these plots signals that, as $\phi$ is increased, the contour of integration approaches a singularity. In fact, the conjugate poles which lie just off (or on, as is the case for larger $m$) the negative real $p^2$ axis ($\phi=\frac{\pi}{2}$) are approached. It is important that both the direct solution to the SDE and the fits used in this work have this feature. This spike was not seen in any other fits which we attempted. Based on Fig. 5 it seems clear that the direct solution to the SDE must have singularities close to those in the fitting functions. Because the spikes are not in exactly the same places, some error will be introduced in the contributions from the large $\phi$ part of $\Omega$. When the bound state mass becomes large, the large $\phi$ contributions will become more important, and thus we expect the error in the position of the spikes to result in some noise in the solutions to the BS equation for large fermion mass. The $\sigma$ functions were studied for all fermion masses used in this work in the same fashion. The results were similar to the $m=0$ case and need not be shown here. In each case, when $\phi$ was increased far enough, a spike was observed in both the fit and solution, after which lack of convergence prevented an SDE solution. However, for very large fermion masses, the accuracy of the fits decreases as $m$ increases, and with good reason. As $m$ tends to infinity, the functions $A$ and $B$ approach constants ($1$ and $m$ respectively). For the moderately large fermion masses encountered in this work, these functions become almost constant along the positive real $p^2$ axis while having a singularity near the negative real $p^2$-axis. It is too much to ask for simple four-parameter fits along the positive real $p^2$-axis to reproduce accurately complex behaviour deep along the timelike (negative real) $p^2$-axis. The 1-loop propagators of Fig. 2, described in Section 3, illustrate this well.
There one can see how smooth and level the $\sigma$ functions are along the positive real $p^2$ axis, and also how steep the functions become back along the negative real $p^2$ axis. Before moving on to the next section, we return briefly to the 1-loop approximation to the fermion propagator necessary for the non-relativistic approximations described in Section 3. Fig. 6a compares our rainbow approximation solution $A$ to the 1-loop result given in Eqs. (\[eq:A1LOOP\]) and (\[eq:SIGA1\]). Fig. 6b compares $B$ from our rainbow approximation solution and the result in Eqs. (\[eq:B1LOOP\]) and (\[eq:SIGB1\]). Both of these comparisons were made at a large fermion mass ($m=5$). It can be seen that the curves in each case are in reasonable agreement, at least for spacelike momenta.

Numerical Solution of the Bethe-Salpeter Equation
-------------------------------------------------

The fits given by Eq. (\[eq:FIT\]) to the fermion propagator for a range of fermion masses were used in the solution of the Bethe-Salpeter coupled integral equations Eq. (\[eq:IE\]). This problem was restated in Eq. (\[eq:EV\]) as an eigenvalue problem. A grid of $25 \times 25$ ($\mbox{$\left| {\bf q} \right|$}$,$q_3$) tiles was used for the iterative procedure, with linear interpolation on each of those tiles used for the sums ($T_{ij}f_j$), which are supplied at the corners of the tiles from the previous iteration. The tiles were non-uniform in size and an upper limit to the momentum components ($\mbox{$\left| {\bf q} \right|$}$ and $q_3$) of between $3.0$ and $9.0$ was used. The equations were iterated to convergence each time to determine eigenvalues for a given test bound state mass $M$. The bound state masses were located by repetitive linear interpolation or extrapolation to search for the point where the eigenvalue $\Lambda$ of Eq. (\[eq:EV\]) is 1. This was repeated for each of the fermion masses ranging from 0 to 5.0.
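The search loop just described — compute the leading eigenvalue $\Lambda(M)$ of the discretised kernel, then interpolate in $M$ until $\Lambda = 1$ — can be sketched generically. The toy $2\times2$ kernel used in the test is purely hypothetical (the actual $T_{ij}$ comes from discretising Eq. (\[eq:EV\]) on the tiles above), and the helper `M_max` encodes the Table 1 bound from Eq. (\[eq:REGION\]), i.e., the largest $M$ for which $\Omega$ avoids a conjugate pole at $p^2 = X \pm iY$:

```python
import math
import numpy as np

def leading_eigenvalue(T, iters=200):
    """Largest eigenvalue of a (discretised) BSE kernel by power iteration."""
    v = np.ones(T.shape[0])
    for _ in range(iters):
        w = T @ v
        v = w / np.linalg.norm(w)
    return v @ (T @ v) / (v @ v)

def bound_state_mass(T_of_M, M0, M1, tol=1e-10, itmax=100):
    """Secant search for the bound state mass M at which Lambda(M) = 1."""
    f0 = leading_eigenvalue(T_of_M(M0)) - 1.0
    f1 = leading_eigenvalue(T_of_M(M1)) - 1.0
    for _ in range(itmax):
        M0, f0, M1 = M1, f1, M1 - f1 * (M1 - M0) / (f1 - f0)
        f1 = leading_eigenvalue(T_of_M(M1)) - 1.0
        if abs(f1) < tol:
            break
    return M1

def M_max(X, Y):
    """Largest M for which Omega of Eq. (eq:REGION) excludes a pole at
    p^2 = X + iY: solve X = Y^2/M^2 - M^2/4 for M."""
    return math.sqrt(2.0 * (math.hypot(X, Y) - X))
```

Any solution $M$ returned by the search is physically usable only if $M < M_{max}$ for the pole positions of the propagator fit in use, which is the consistency check applied to Tables 1 and 2.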
This procedure was used for each of the four non-degenerate bound state symmetries described in the appendix. Table 2 shows the bound state masses for each of the four symmetries (scalar ${\cal C}=+1$, scalar ${\cal C}=-1$, axi-scalar ${\cal C}=+1$ and axi-scalar ${\cal C}=-1$) for all fermion masses considered. Fig. 7a displays the solutions $M$ for fermion mass 0–0.1. Fig. 7b shows $M-2m$ over the greater range of 0–5. The axi-scalar ${\cal C}=+1$ solution is a degenerate axi-scalar/axi-pseudoscalar pair of Goldstone bosons for the case $m=0$, as seen in previous work [@Bu92]. Minor differences between Ref. [@Bu92] and the current work at $m=0$ are due to small differences in the propagator fits, as explained in Section 4. For small $m$ the bound state masses rise rapidly with increasing fermion mass. The mass of the “Goldstone” axi-scalar ${\cal C} = +1$ state scales roughly with the square root of the fermion mass, in agreement with the Gell-Mann–Okubo mass formula [@FS82]. In fact, for fermion masses 0 to 0.1 a linear regression against $\sqrt{m}$ has correlation coefficient 0.9964, with the mass growing as approximately $1.27 \times \sqrt{m}$. (The accuracy of the solution at $m = 0.001$, which comes out with an anomalously low bound state mass, is severely affected by numerical inaccuracy arising from the sensitivity of the bound state mass to the eigenvalue $\Lambda$ in Eq. (\[eq:EV\]).) For large fermion masses, the bound state mass rises predominantly as twice the fermion mass plus possible logarithmic corrections. However, there appears to be a good deal of noise in the large $m$ solutions, reflecting the difficulty in accurately modelling the fermion propagator deep into the timelike region from spacelike fits. No solutions corresponding to states of negative charge parity were found for $m>1.0$.
Numerical solutions to the integral equations (\[eq:NREQ1\]), (\[eq:NREQ1A\]) arising from our non-relativistic treatment are listed in Table 3 and plotted in Fig. 7b. Solutions with positive $\delta$ were found for fermion masses $m \geq 1.0$ in the positive charge parity sector. We were unable to locate any solutions to Eqs. (\[eq:NREQ1\]), (\[eq:NREQ1A\]) corresponding to negative charge parity states over a broad range of $\delta$. Also given in Table 3 and Fig. 7b are the two lowest lying s-wave solutions to the Schrödinger equation from the numerical work of Tam et al. [@THY95], given by Eq. (\[eq:THY1\]). The lack of exact agreement between the non-relativistic 1-loop approximations, Eqs. (\[eq:NREQ1\]) and (\[eq:NREQ1A\]), and the Schrödinger equation result, Eq. (\[eq:THY1\]), is to be expected. As pointed out in Section 3, a complete cancellation of infrared divergences can only occur if the fermion self energy is calculated non-perturbatively to all orders. From Table 3, we see that at very high fermion masses the accuracy of the 1-loop approximation is significantly affected, as the conjugate poles in the propagator, measured in momenta scaled by the fermion mass, move closer to the bare fermion mass pole (see Fig. 2). At more moderate fermion masses, $m\approx 5$, the 1-loop approximation is more respectable. We see no clear agreement between the numerical results of Eq. (\[eq:IE\]) and either non-relativistic approximation, Eqs. (\[eq:NREQ1\]), (\[eq:NREQ1A\]), or the Schrödinger equation result, Eq. (\[eq:THY1\]). Our analysis of the non-relativistic limit of the BS equation exposes the importance of the analytic structure of the fermion propagator in the vicinity of the bare fermion mass pole $p^2 = -m^2$. The uneven nature of the lower two curves in Fig. 7b indicates that the determination of the timelike fermion propagator by an analytic fit to the spacelike propagator is inadequate for fermion masses $m \geq 1$.
It is clear that a more careful analysis of the timelike nature of the fermion propagator, possibly involving a fully non-perturbative treatment of the SD equation to include remnant chiral symmetry breaking, is necessary for determining the bound state spectrum for even moderately large fermion masses. It is important to note that the poles in the fermion propagator fits listed in Table 1 lie outside the BS integration region $\Omega$ for all solutions obtained. This can be verified by observing that all masses in Table 2 are lower than the values $M_{max}$ listed in Table 1. A similar situation arises for the non-relativistic limit calculations. Listed in Table 3 are the maximum allowed $\delta$ values if the integration region sampled by Eqs. (\[eq:NREQ1\]) and (\[eq:NREQ1A\]) is not to impinge on the conjugate propagator poles $q_3^{\rm pole}$ and $(q_3^{\rm pole})^*$ defined in Eq. (\[eq:Q3POLE\]). In all cases the numerical results lie within the permitted region. This requirement is equivalent to demanding that $q_3^{\rm pole}$ should not cross the real $q_3$ axis as $\left|{\bf q}\right|$ ranges from $0$ to $\infty$. Interestingly, such a crossing would entail a more careful evaluation of residues than that carried out in Section 3 leading to the Schrödinger equation. We note that the Schrödinger equation results of Ref. [@THY95] include the first five s-wave states. It would certainly be of interest to locate the excited states within the framework of our BS treatment of QED3. We have searched for solutions to the eigenvalue equation (\[eq:EV\]) corresponding to excited states, and find in general no solutions within the mass ranges allowed by the values $M_{max}$ in Table 1. Since there is no reason to assume that the s-wave spectrum should be bounded above, it seems likely that there will be solutions to the BS equation for which the region of integration $\Omega$ does include the conjugate propagator poles discussed in Section 4.
It follows that the functions $f$, $U$, $V$ and $W$ in the BS amplitudes of these states should have compensating zeros, in order that the right hand side of the BS equation be integrable. We conjecture that, if the fermion propagator has an infinite set of poles, there will be a sequence of excited states, the $n$th excited state having $n$ pairs of zeros in its BS amplitude. This conjecture is consistent with the first excited state of the Schrödinger equation, also listed in Table 3, for which the wave function has a single zero. Although we are unable to determine accurately the spectrum in the large fermion mass limit, our calculations strongly suggest that there are no scalar or axi-scalar states with negative charge parity in this limit. This is consistent with the non-relativistic quark model in four dimensions, in which negative charge parity scalar and pseudoscalar states are forbidden by the generalised Pauli exclusion principle [@FS82]. We note, however, that there is nothing to exclude such states in a fully relativistic BS treatment [@LS69], and indeed, negative charge parity scalar and axi-scalar states are found within the current model for light fermions.

Conclusions
-----------

In this paper we have solved the combination of rainbow Schwinger-Dyson and homogeneous Bethe-Salpeter equations in the quenched ladder approximation for three-dimensional QED with massive fermions. QED3 was chosen because, like QCD, it is confining but without the complications of being non-abelian. A four-component version of this theory is used because, also like QCD, it provides a parity-invariant action with a spontaneously broken chiral-like symmetry in the massless limit. The approximation is amenable to numerical solution, and should help assess the limitations of a technique frequently employed in models of QCD [@BSpapers]. The work in this paper carries on from a previous study of the same subject [@Bu92], but with the following extensions.
Firstly, non-zero fermion masses are considered. Secondly, an analysis of the fermion propagator in the complex plane is carried out in order to assess the appropriateness of the approximations involved. Thirdly, an analysis of the non-relativistic limit, i.e., large bare fermion mass, is made in an attempt to compare with existing Schrödinger equation studies of QED3. The rainbow SD equation was solved in Euclidean space to give a fermion propagator for spacelike momenta, Euclidean $p^2 > 0$. The propagator is chirally asymmetric and, in the massless fermion limit, gives rise to a doublet of massless Goldstone positronium states analogous to the pion. Solution of the BS equation for massive positronium states requires knowledge of the fermion propagator $S(p)$ in the complex $p^2$-plane extending away from the spacelike axis, and a finite distance into the timelike axis $p^2 < 0$. By rotating the contour of integration we were able to extend the spacelike solution into part of the complex plane. However, the occurrence of complex conjugate poles in the fermion propagator prevented a numerical solution to the SD equation throughout the complete region of the complex plane sampled by the BS equation. This forced us to apply analytic fits to the propagator along the positive real $p^2$-axis for use over the required part of the complex plane. Our propagator fits were found to have conjugate poles located close to those of the direct solution for small to moderate fermion masses. This, combined with the accuracy of the fits throughout much of the complex $p^2$ plane, made our choice of propagator very attractive. The singularities in the fits were found to move onto the negative real $p^2$-axis as the fermion mass increased. This was not interpreted as a loss of confinement but instead attributed to a lack of accuracy in the fits deep into the timelike region as the fermion mass became large.
This reduction in accuracy of the fits for large $m$ was due to the nature of the functions along the positive real $p^2$-axis where the fits were made, and the presence of a singularity near the negative real $p^2$-axis in the vicinity of the bare fermion mass pole $p^2 = -m^2$, but off the timelike axis. BS solutions were found for four pairs of parity degenerate states. These pairs were the scalar/pseudoscalar ${\cal C}=+1$ and ${\cal C}=-1$ and the axi-scalar/axi-pseudoscalar ${\cal C}=+1$ and ${\cal C}=-1$ states. For small to moderate fermion mass the bound state mass was found to increase smoothly with $m$. The axi-scalar ${\cal C}=+1$ doublet, analogous to the pion, was the lowest in energy, with a mass rising roughly with the square root of the bare fermion mass. For moderately large bare fermion masses ($m/e^2$ greater than unity) the positronium masses rise as twice the bare fermion mass, plus a possible logarithmic correction. However, an unacceptable level of noise was found to develop in our results for these larger masses, which we attribute to inaccuracies in the analytically continued fermion propagators in the important region near the bare fermion mass pole. No negative charge parity (${\cal C}=-1$) solutions were found for bare fermion masses above $m/e^2 \approx 1.0$, consistent with the generalised Pauli exclusion principle of non-relativistic QCD4. The conjugate poles in the fermion propagators were found to keep clear of the integration regions required for the BS solutions for the lowest state in each of the four space parity/charge parity sectors considered. However, it appeared that this would not be so for any excited states. We therefore conjecture that the excited positronium states have zeros in their BS amplitudes positioned so as to cancel the poles in the propagators encountered within the integral in the BS equation (\[eq:BS\]).
This requirement of compensating zeros was too demanding on our current numerical code, and as a result, no excited states were found. In vector calculations under way at present, where the bound state masses are expected to be larger, the conjugate poles in the fermion propagator seen in this work may interfere. Since the fits used in this work appear to have their singularities close to those in the actual Schwinger-Dyson solution, we may find that the rainbow approximation and the resulting propagator fits will be inadequate for a study of vector states in QED3. This is a very challenging problem and we hope to report on our results in the near future. A non-relativistic analysis of the BS equation was also carried out assuming, in the first instance, a 1-loop approximation to the fermion propagator. However, it was shown that, in order to cancel infrared divergences completely between the photon propagator and fermion self energy, as proposed by Sen [@S90] and Cornwall [@C80], it is necessary to evaluate the fermion self energy non-perturbatively. Only if this is done can the Schrödinger equation be rigorously obtained in the large fermion mass limit. In spite of this, numerical solutions of the 1-loop equations give reasonable agreement with the Schrödinger equation for moderately large fermion masses $m/e^2 \approx 5$. In summary, we were able to carry out an acceptable analysis of the bound state spectrum of QED3 near the chiral limit $m \rightarrow 0$ by using analytic fits to the spacelike fermion propagators in the Bethe-Salpeter equation, and in the non-relativistic limit $m \rightarrow \infty$ by expanding to lowest order in inverse powers of the fermion mass to obtain a Schrödinger equation. However, there remains an intermediate mass range $m/e^2 \approx 1$ for which neither of these techniques is adequate.
It is clear that a more careful non-perturbative analysis of the fermion propagator in the vicinity of the bare fermion mass pole is necessary before an accurate determination of the QED3 positronium spectrum at intermediate fermion masses can be made. If a direct analogy with QCD models based on the Bethe-Salpeter equations is made, we conclude that particular care must be taken in modelling quark propagators for quarks whose mass is close to the mass scale of the theory, namely charm quarks. Appendix - Transformation Properties in QED3 {#appendix---transformation-properties-in-qed3 .unnumbered} -------------------------------------------- The four-component QED3 action in Minkowski space [@Pi84] $$S[A,\overline{\psi},\psi ] = \int \mbox{$ \, d^3x \,$} [ -\frac{1}{4} F_{\mu \nu} F^{\mu \nu} +\overline{\psi} \gamma_{\mu} (i\partial^{\mu} +eA^{\mu})\psi + m \overline{\psi} \psi ], \label{eq:ACT}$$ involves $4 \times 4$ matrices $\gamma_{\mu}$ which satisfy $\{\gamma_{\mu},\gamma_{\nu}\}=2\eta_{\mu \nu}$ where $\eta_{\mu \nu}={\rm diag}(1,-1,-1)$ with $\mu$ = 0, 1 and 2. 
These three matrices belong to a complete set of 16 matrices $\{\gamma_A\}=\{I,\gamma_{4},\gamma_{5},\gamma_{45},\gamma_{\mu}, \gamma_{\mu 4},\gamma_{\mu 5},\gamma_{\mu 45} \}$ satisfying $\frac{1}{4} {\rm tr}(\gamma_A \gamma^B) = \delta^B_A$; $$\gamma_0= \left( \begin{array}{cc} \sigma_3 & 0 \\ 0 & -\sigma_3 \end{array}\right),\;\;\; \gamma_{1,2}= -i\left( \begin{array}{cc} \sigma_{1,2} & 0 \\ 0 & -\sigma_{1,2} \end{array}\right),$$ $$\gamma_4=\gamma^4= \left( \begin{array}{cc} 0 & I \\ I & 0 \end{array}\right),\;\;\; \gamma_5=\gamma^5= \left( \begin{array}{cc} 0 & -iI \\ iI & 0 \end{array}\right),\;\;\; \gamma_{45}=\gamma^{45}= -i \gamma_4 \gamma_5,$$ $$\gamma_{\mu 4}=i \gamma_{\mu} \gamma_4,\;\;\; \gamma_{\mu 5}=i \gamma_{\mu} \gamma_5,\;\;\; \gamma_{\mu 45}=-i \gamma_{\mu} \gamma_4 \gamma_5,\;\;\; \gamma^{\mu 4,\mu 5\,{\rm or} \,\mu 45} = \eta^{\mu \nu} \gamma_{\nu 4,\nu 5\,{\rm or} \,\nu 45}$$ The three $\gamma_{\mu}$, and $\gamma_4$ and $\gamma_5$ are five mutually anti-commuting matrices. This is unlike the 4-dimensional case where no analogue of $\gamma_4$ exists. The action Eq. (\[eq:ACT\]) in the massless case $m=0$ exhibits global $U(2)$ symmetry with generators $\{I,\gamma_{4},\gamma_{5},\gamma_{45}\}$ which is broken by the generation of a dynamical fermion mass [@DK89; @Pi84] to a $U(1)\times U(1)$ symmetry $\{I,\gamma_{45}\}$. The action is also invariant with respect to discrete parity and charge conjugation symmetries, which for the fermion fields are given by $$\mbox{$\psi(x)$} \rightarrow \psi^\prime(x^\prime) = \Pi \mbox{$\psi(x)$}, \;\;\; \mbox{$\overline{\psi}(x)$} \rightarrow \overline{\psi}^\prime (x^\prime) = \mbox{$\overline{\psi}(x)$} \Pi^{-1}, \label{eq:PAR}$$ $$\mbox{$\psi(x)$} \rightarrow \psi^\prime(x) = C \overline{\psi}(x)^{\rm T}, \;\;\; \mbox{$\overline{\psi}(x)$} \rightarrow \overline{\psi}^\prime(x) = -\mbox{$\psi(x)$}^{\rm T} C^{\dagger}, \label{eq:CH}$$ where $x^{\prime}=(x^0,-x^1,x^2)$. 
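The algebra quoted above — five mutually anticommuting matrices with $\{\gamma_A,\gamma_B\} = 2\,{\rm diag}(1,-1,-1,1,1)_{AB}$ for $A,B \in \{0,1,2,4,5\}$, and $\gamma_{45}$ commuting with the $\gamma_\mu$ — can be verified directly in this representation. A minimal numerical check (Python with numpy):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def blk(a, b, c, d):
    """Assemble a 4x4 matrix from 2x2 blocks."""
    return np.block([[a, b], [c, d]])

# the representation given in this appendix
g0 = blk(sz, Z2, Z2, -sz)
g1 = blk(-1j * sx, Z2, Z2, 1j * sx)
g2 = blk(-1j * sy, Z2, Z2, 1j * sy)
g4 = blk(Z2, I2, I2, Z2)
g5 = blk(Z2, -1j * I2, 1j * I2, Z2)
g45 = -1j * g4 @ g5   # generator of the unbroken U(1); equals diag(1, 1, -1, -1)

def acomm(a, b):
    return a @ b + b @ a
```

Since $\gamma_4$ and $\gamma_5$ each anticommute with every $\gamma_\mu$, their product $\gamma_{45}$ commutes with the $\gamma_\mu$, which is why $\{I,\gamma_{45}\}$ survives dynamical mass generation.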
The matrices $\Pi$ and $C$ are each determined only up to an arbitrary phase by the condition that the action Eq. (\[eq:ACT\]) be invariant [@Bu92]: $$\Pi=\gamma_{14} e^{i\phi_P \gamma_{45}}, \;\;\; C=\gamma_{2} e^{i\phi_C \gamma_{45}}, (0\leq \phi_P,\phi_C < 2\pi)$$ Scalars, pseudoscalars, axi-scalars and axi-pseudoscalars are defined by the following transformation properties under parity transformations $$\begin{aligned} \Phi^{S}(x) & \rightarrow & \Phi^{S\prime}(x^{\prime}) = \Phi^{S}(x),\nonumber \\ \Phi^{PS}(x) & \rightarrow & \Phi^{PS\prime}(x^{\prime}) = -\Phi^{PS}(x), \nonumber \\ \Phi^{AS}(x) & \rightarrow & \Phi^{AS\prime}(x^{\prime}) = R_P\Phi^{AS}(x),\nonumber \\ \Phi^{APS}(x) & \rightarrow & \Phi^{APS\prime}(x^{\prime}) = -R_P\Phi^{APS}(x), \label{eq:PTY} \end{aligned}$$ where $\Phi^{AS}$ and $\Phi^{APS}$ are doublet states $\Phi=(\Phi_4,\Phi_5)^T$, and $$\begin{aligned} R_P & = & \left(\begin{array}{cc} -\cos 2\phi_P & -\sin 2\phi_P \\ -\sin 2\phi_P & \cos 2\phi_P \end{array} \right). \label{eq:RL}\end{aligned}$$ Similar transformation properties exist for charge conjugation. The most general forms of the Bethe-Salpeter amplitudes [@LS69] for bound scalar and pseudoscalar states are $$\begin{aligned} \Gamma^S(q,P) & = & If+\not \! qg +\not \! Ph +\epsilon_{\mu \nu \rho} P^{\mu}q^{\nu}\gamma^{\rho 45} k , \label{eq:GS}\\ \Gamma^{PS}(q,P) & = & \gamma_{45} \Gamma^S(q,P), \label{eq:GP}\end{aligned}$$ where $f,g,h$ and $k$ are functions only of $q^2,P^2$ and $q\cdot P$. 
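As a quick consistency check on Eq. (\[eq:RL\]): for any $\phi_P$, the matrix $R_P$ is symmetric and orthogonal with determinant $-1$, i.e., a reflection in the $(\Phi_4,\Phi_5)$ doublet space, and it squares to the identity as a parity operation must. A minimal sketch (Python with numpy):

```python
import numpy as np

def R_P(phi_P):
    """Parity action on the (Phi_4, Phi_5) axi-scalar doublet, Eq. (eq:RL)."""
    c, s = np.cos(2.0 * phi_P), np.sin(2.0 * phi_P)
    return np.array([[-c, -s], [-s, c]])
```

The free phase $\phi_P$ rotates which combination of $\Phi_4$ and $\Phi_5$ is reflected, consistent with $\Pi$ being defined only up to the $e^{i\phi_P\gamma_{45}}$ factor.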
BS amplitudes corresponding to the components $\Phi_4$ and $\Phi_5$ of axi-scalars and axi-pseudoscalars take the general form $$\left(\begin{array}{c} \Gamma^{(4)}(q,P) \\ \Gamma^{(5)}(q,P) \end{array} \right)^{AS} = \left(\begin{array}{c} \gamma_4 \\ \gamma_5\end{array} \right)f + \left(\begin{array}{c} \gamma_{\mu 4} \\ \gamma_{\mu 5}\end{array} \right) (q^{\mu}g+P^{\mu}h) +\epsilon_{\mu \nu \rho} P^{\mu} q^{\nu} \left(\begin{array}{c} \gamma^{\rho 5} \\ - \gamma^{\rho 4}\end{array} \right)k, \label{eq:GAS}$$ and $$\left(\begin{array}{c} \Gamma^{(4)}(q,P) \\ \Gamma^{(5)}(q,P) \end{array} \right)^{APS} = \gamma_{45} \left(\begin{array}{c} \Gamma^{(4)}(q,P) \\ \Gamma^{(5)}(q,P) \end{array} \right)^{AS}. \label{eq:GAPS}$$ Furthermore, the charge parity ${\cal C}=\pm 1$ of the bound states is determined by the parity of the functions $f,g,h$ and $k$ under the transformation $q\cdot P \rightarrow -q\cdot P$. The quantity $q\cdot P$ is the only Lorentz invariant which changes sign under charge conjugation and thus determines the charge parity of those functions. Our conventions for Euclidean space quantities are summarised in Appendix A of Ref. [@Bu92]. In particular, Euclidean momenta and Dirac matrices are defined by $$P_3^{({\rm E})} = -iP_0^{({\rm M})},\hspace{5 mm} P_{1,2}^{({\rm E})} = P_{1,2}^{({\rm M})},\hspace{5 mm} \gamma_3^{({\rm E})} = \gamma_0^{({\rm M})}, \hspace{5 mm} \gamma_{1,2}^{({\rm E})} = i\gamma_{1,2}^{({\rm M})}.$$

Acknowledgments {#acknowledgments .unnumbered}
---------------

We are grateful to C. J. Hamer, A. Tam and P. Maris for helpful discussions, and the National Centre for Theoretical Physics at the Australian National University for hosting the Workshop on Non-Perturbative Methods in Field Theory where part of this work was completed.

[99]{} C. J. Burden, J. Praschifka and C. D. Roberts, Phys. Rev. [**D46**]{} (1992) 2695. J. Praschifka, C. D. Roberts and R. T. Cahill, Int. J. Mod. Phys. [**A4**]{} (1989) 4929; Y.-b. Dai, C.-s.
Huang and D.-s. Liu, Phys. Rev. [**D43**]{} (1991) 1717; K.-I. Aoki, T. Kugo and M. G. Mitchard, Phys. Lett. [**B266**]{} (1991) 467; H. J. Munczek and P. Jain, Phys. Rev. [**D46**]{} (1992) 438; P. Jain and H. J. Munczek, Phys. Rev. [**D48**]{} (1993) 5403; C. J. Burden et al., [*Separable approximation to the Bethe-Salpeter in QCD*]{}, proceedings of the Lattice ’95 Conference, 1995 (to appear); R. T. Cahill and S. T. Gunner, [*Quark and gluon propagators from meson data*]{}, Flinders University preprint, 1995. S. J. Stainsby and R. T. Cahill, Mod. Phys. Lett. [**A9**]{} (1994) 3551. C. J. Burden, Nucl. Phys. [**B387**]{} (1992) 419. P. Maris, [*Nonperturbative Analysis of the Fermion Propagator: Complex Singularities and Dynamical Mass Generation*]{}, PhD thesis, 1993. P. Maris, [*Confinement and complex singularities in QED3*]{}, University of Nagoya preprint, 1995. R. Delbourgo and M. Scadron, J. Phys. [**G5**]{} (1979) 1621. C. M. Yung and C. J. Hamer, Phys. Rev. [**D44**]{} (1991) 2595. A. Tam, C. J. Hamer and C. M. Yung, [*Light-cone quantisation approach to quantum electrodynamics in (2+1) dimensions*]{}, UNSW preprint PRINT-94-0182, 1994; see also V. G. Koures, [*Solving the Coulomb Schrödinger equation in $d = 2+1$ via sinc collocation*]{}, Univ. of Utah preprint UTAH-IDR-CP-05. D. Sen, Phys. Rev. [**D41**]{} (1990) 1227. J. M. Cornwall, Phys. Rev. [**D22**]{} (1980) 1452. C. D. Roberts and A. G. Williams, Prog. Part. Nucl. Phys. [**33**]{} (1994) 475. S. J. Stainsby and R. T. Cahill, Int. J. Mod. Phys. [**A7**]{} (1992) 7541. D. Flamm and F. Schöberl, [*Introduction to the Quark Model of Elementary Particles*]{}, Gordon and Breach, 1982. C. H. Llewellyn Smith, Ann. Phys. [**53**]{} (1969) 521; Llewellyn Smith’s vertex $\chi$ is related to our vertex $\Gamma$ via $\chi(\frac{1}{2}P,q) = S(\frac{1}{2}P+q) \Gamma(q,P) S(\frac{1}{2}P-q)$. R. D. Pisarski, Phys. Rev. [**D29**]{} (1984) 2423. E. Dagotto, J. B. Kogut and A. Kocic, Phys. Rev. Lett.
[**62**]{} (1989) 1083; Nucl. Phys. [**B334**]{} (1990) 279.

[|c|c|c|]{} [$m$]{} & [$p^2$]{} & [$M_{max}$]{}\
0.000 & $-$0.0034 $\pm i $ 0.0057 & 0.142\
0.001 & $-$0.0041 $\pm i $ 0.0064 & 0.153\
0.004 & $-$0.0060 $\pm i $ 0.0086 & 0.182\
0.009 & $-$0.0081 $\pm i $ 0.0140 & 0.206\
0.016 & $-$0.0121 $\pm i $ 0.0192 & 0.247\
0.025 & $-$0.0216 $\pm i $ 0.0260 & 0.325\
0.036 & $-$0.0314 $\pm i $ 0.0345 & 0.386\
0.049 & $-$0.0468 $\pm i $ 0.0387 & 0.464\
0.064 & $-$0.0618 $\pm i $ 0.0417 & 0.522\
0.081 & $-$0.0815 $\pm i $ 0.0440 & 0.590\
0.1 & $-$0.0647 $\pm i $ 0.0000 & 0.509\
0.5 & $-$0.4894 $\pm i $ 0.0000 & 1.399\
1 & $-$1.4260 $\pm i $ 0.0000 & 2.388\
2 & $-$4.8925 $\pm i $ 0.0000 & 4.424\
3 & $-$10.3776 $\pm i $ 0.0000 & 6.443\
4 & $-$17.9613 $\pm i $ 0.0000 & 8.476\
5 & $-$27.3616 $\pm i $ 0.0000 & 10.462\

[|c|c|c|c|c|]{} [$m$]{} & [Scalar ${\cal C}=+1$]{} & [Scalar ${\cal C}=-1$]{} & [Axi-scalar ${\cal C}=+1$]{} & [Axi-scalar ${\cal C}=-1$]{}\
0 \[Ref. [@Bu92]\] & 0.080 $\pm$ 0.001 & 0.123 $\pm$ 0.002 & 0 & 0.111 $\pm$ 0.002\
0 & 0.077 & 0.118 & 0 & 0.108\
0.001 & 0.087 & 0.126 & 0.004 & 0.116\
0.004 & 0.110 & 0.151 & 0.054 & 0.140\
0.009 & 0.140 & 0.178 & 0.090 & 0.167\
0.016 & 0.175 & 0.217 & 0.127 & 0.204\
0.025 & 0.215 & 0.269 & 0.167 & 0.254\
0.036 & 0.256 & 0.316 & 0.208 & 0.300\
0.049 & 0.298 & 0.367 & 0.248 & 0.350\
0.064 & 0.343 & 0.411 & 0.293 & 0.390\
0.081 & 0.389 & 0.456 & 0.340 & 0.431\
0.1 & 0.439 & 0.496 & 0.391 & 0.479\
0.5 & 1.311 & 1.388 & 1.261 & 1.352\
1 & 2.297 & 2.387 & 2.243 & 2.336\
2 & 4.330 & - & 4.233 & -\
3 & 6.348 & - & 6.227 & -\
4 & 8.379 & - & 8.243 & -\
5 & 10.365 & - & 10.219 & -\

[|c|c|c|c|c|c|]{} [$m$]{} & [$\delta_{max}$]{} & [Scalar]{} & [Axi-scalar]{} & [Eq.(\[eq:THY1\])]{} with & [Eq.(\[eq:THY1\])]{} with\
& & [Eq.(\[eq:NREQ1\])]{} & [Eq.(\[eq:NREQ1A\])]{} & [$\lambda=\lambda_{0}$]{} & [$\lambda=\lambda_{1}$]{}\
1 & 0.332 & 0.285 & 0.262 & 0.322 & 0.503\
2 & 0.421 & 0.371 & 0.338 & 0.377 & 0.558\
3 & 0.473 & 0.419 & 0.381 & 0.409 & 0.590\
4 & 0.511 & 0.452 & 0.410 & 0.432 & 0.613\
5 & 0.540 & 0.476 & 0.433 & 0.450 & 0.631\
100 & 0.947 & 0.792 & 0.734 & 0.688 & 0.869\
1000 & 1.272 & 1.030 & 0.968 & 0.872 & 1.052\

Figures {#figures .unnumbered}
=======

Figure 1:
: Diagrammatic representation of Eq. (\[eq:BS\]).

Figure 2:
: Figures 2a and 2b show 1-loop approximations using Eqs. (\[eq:SIGDEF\]), (\[eq:ABDEF\]) and (\[eq:A1LOOP\])–(\[eq:SIGB1\]) for $m^2\sigma_V$ and $m\sigma_S$ respectively (solid lines). These are compared with the vector and scalar parts of the approximation Eq.
(\[eq:SAPX\]) (dashed lines). The curves are drawn for fermion masses (from bottom to top) $1$, $2$, $4$, $8$, and $\infty$.

Figure 3:
: The first and second (deformed) contours of integration $C_1$ and $C_2$ for the solution of Eq. (\[eq:ABINTS\]).

Figure 4:
: Figure 4a compares the SDE solutions and fitting functions for $A-1$ for fermion masses (from top to bottom) $m=0$, 0.025, 0.1, 1 and 5. Figure 4b shows $B-m$ for fermion masses (from bottom to top) $m=0$, 0.025, 0.1, 1 and 5.

Figure 5:
: Figures 5a and 5b show SDE solutions and function fits for $\sigma_V$ and $\sigma_S$ respectively for fermion mass $0$ with angles $\phi=0$ ($\Diamond$), $\frac{\pi}{8}$ ($+$) and $\frac{\pi}{4}$ ($\Box$).

Figure 6:
: Figure 6a compares the function $A(p^2)$ from the rainbow SDE calculation (solid curve) with the 1-loop result (dashed curve), and Figure 6b compares $B(p^2)$ results along the positive real $p$ axis for fermion mass $m=5.0$.

Figure 7:
: Figure 7a shows bound state masses ($M$) against fermion mass $m=0$–$0.1$. Figure 7b is a plot of $M-2m$ for $m=0$–$5$. In each plot the scalar ${\cal C}=+1$ ($\Diamond$), scalar ${\cal C}=-1$ ($+$), axi-scalar ${\cal C}=+1$ ($\Box$) and axi-scalar ${\cal C}=-1$ ($\times$) states are drawn with solid curves. The non-relativistic predictions of Eq. (\[eq:NREQ1\]) and Eq. (\[eq:NREQ1A\]) are the scalar ${\cal C}=+1$ ($\Diamond$) and axi-scalar ${\cal C}=+1$ ($\Box$) states respectively and are drawn with dashed lines. Eq. (\[eq:THY1\]) with $\lambda=\lambda_0$ (lower solid curve with no symbols) and with $\lambda_1$ (upper solid curve with no symbols) are also plotted in Figure 7b.
--- abstract: | Person re-identification (re-ID) is the task of matching person images across camera views, which plays an important role in surveillance and security applications. Inspired by the great progress of deep learning, deep re-ID models became popular and achieved state-of-the-art performance. However, recent works found that deep neural networks (DNNs) are vulnerable to adversarial examples, posing potential threats to DNNs based applications. This phenomenon raises a serious question of whether deep re-ID based systems are vulnerable to adversarial attacks. In this paper, we make the first attempt to implement robust physical-world attacks against deep re-ID. We propose a novel attack algorithm, called advPattern, for generating adversarial patterns on clothes, which learns the variations of image pairs across cameras to pull closer the image features from the same camera, while pushing features from different cameras farther. By wearing our crafted “invisible cloak”, an adversary can evade person search or impersonate a target person to fool deep re-ID models in the physical world. We evaluate the effectiveness of our transformable patterns on adversaries’ clothes with Market1501 and our established PRCS dataset. The experimental results show that the rank-1 accuracy of re-ID models for matching the adversary decreases from 87.9% to 27.1% under Evading Attack. Furthermore, the adversary can impersonate a target person with 47.1% rank-1 accuracy and 67.9% mAP under Impersonation Attack. The results demonstrate that deep re-ID systems are vulnerable to our physical attacks. author: - | Zhibo Wang$^{\dagger}$, Siyan Zheng$^{\dagger}$, Mengkai Song$^{\dagger}$, Qian Wang$^{\dagger,\ast}$, Alireza Rahimpour$^{\ddagger}$, Hairong Qi$^{\ddagger}$\ $^{\dagger}$Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education,\ School of Cyber Science and Engineering, Wuhan University, P. R. China\ $^{\ddagger}$Dept.
of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, USA\ [{zbwang, zhengsy, mksong, qianwang}@whu.edu.cn, {arahimpo, hqi}@utk.edu]{} bibliography: - 'paperbib.bib' title: 'advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns' --- Introduction ============ [^1] Person re-identification (re-ID) [@gong2014re] is an image retrieval problem that aims at matching a person of interest across multiple non-overlapping camera views. It has become an increasingly popular research area and has broad applications in video surveillance and security, such as searching for suspects and missing people [@wang2013intelligent], cross-camera pedestrian tracking [@yu2013harry], and activity analysis [@loy2009multi]. Recently, inspired by the success of deep learning in various vision tasks [@he2016deep; @krizhevsky2012imagenet; @simonyan2014very; @szegedy2015going; @zhang2017age; @zhang2019image], deep neural networks (DNNs) based re-ID models [@ahmed2015improved; @chen2016deep; @chen2017multi; @cheng2016person; @ding2015deep; @li2014deepreid; @wang2016joint; @xiao2016learning; @yi2014deep] started to become a prevailing trend and have achieved state-of-the-art performance. Existing deep re-ID methods usually solve re-ID as a classification task [@ahmed2015improved; @li2014deepreid; @yi2014deep], or a ranking task [@chen2016deep; @cheng2016person; @ding2015deep], or both [@chen2017multi; @wang2016joint]. ![The illustration of Impersonation Attack on re-ID models. The adversary with the adversarial patterns lures re-ID models into mismatching herself as the target person.[]{data-label="fig:example_advPattern"}](fig1.pdf){width="0.6\columnwidth"} Recent studies found that DNNs are vulnerable to adversarial attacks [@carlini2017towards; @goodfellow6572explaining; @kos2018adversarial; @li2014feature; @li2015scalable; @moosavi2016deepfool; @papernot2016limitations; @szegedy2013intriguing].
These carefully modified inputs, generated by adding visually imperceptible perturbations and called adversarial examples, can lure DNNs into working in abnormal ways, posing potential threats to DNNs based applications, e.g., face recognition [@sharif2016accessorize], autonomous driving [@eykholt2018robust], and malware classification [@grosse2016adversarial]. The broad deployment of deep re-ID in security related systems makes it critical to figure out whether such adversarial examples also exist for deep re-ID models. Serious consequences would follow if deep re-ID systems proved vulnerable to adversarial attacks; for example, a suspect who exploits this vulnerability could escape the person search of re-ID based surveillance systems. To the best of our knowledge, we are the first to investigate robust physical-world attacks on deep re-ID. In this paper, we propose a novel attack algorithm, called advPattern, to generate adversarially transformable patterns across camera views that cause image mismatch in deep re-ID systems. By printing the adversarial pattern on his clothes, an adversary can no longer be correctly matched by deep re-ID models, as if wearing an “invisible cloak”. We present two different kinds of attacks in this paper: Evading Attack and Impersonation Attack. The former can be viewed as an untargeted attack in which the adversary attempts to fool re-ID systems into matching him as an arbitrary person other than himself. The latter is a targeted attack which goes further than Evading Attack: the adversary seeks to lure re-ID systems into mismatching him as a target person. Figure \[fig:example\_advPattern\] gives an illustration of Impersonation Attack on deep re-ID models. The main challenge in generating adversarial patterns is *how to cause deep re-ID systems to fail to correctly match the adversary’s images across camera views with the same pattern on clothes*.
Furthermore, the adversary might be captured by re-ID systems in any position, *but an adversarial pattern generated specifically for one shooting position can hardly remain effective at other, varying positions*. In addition, other challenges with physically realizing attacks also exist: (1) *How to allow cameras to perceive the adversarial patterns while avoiding the suspicion of human supervisors*? (2) *How to make the generated adversarial patterns survive various physical conditions, such as the printing process, dynamic environments and the shooting distortion of cameras*? To address these challenges, we propose advPattern, which formulates the problem of generating adversarial patterns against deep re-ID models as an optimization problem of minimizing the similarity scores of the adversary’s images across camera views. The key idea behind advPattern is to amplify the difference of person images across camera views in the process of extracting image features by re-ID models. To achieve the scalability of adversarial patterns, we approximate the distribution of viewing transformations with a multi-position sampling strategy. We further improve the adversarial patterns’ robustness by modeling physical dynamics (e.g., weather changes, shooting distortion), to ensure that they survive in physical-world scenarios. Figure \[fig:example\_impersonate\] shows an example of our physical-world attacks on deep re-ID systems. To demonstrate the effectiveness of advPattern, we first establish a new dataset, PRCS, which consists of 10,800 cropped images of 30 identities, and then evaluate the attack ability of adversarial patterns on two deep re-ID models using the PRCS dataset and the publicly available Market1501 dataset.
We show that our adversarially transformable patterns generated by advPattern achieve high success rates under both Evading Attack and Impersonation Attack: the rank-1 accuracy of re-ID models for matching the adversary decreases from 87.9% to 27.1% under Evading Attack, while the adversary can impersonate a target person with 47.1% rank-1 accuracy and 67.9% mAP under Impersonation Attack. The results demonstrate that deep re-ID models are indeed vulnerable to our proposed physical-world attacks. In summary, our main contributions are three-fold: - To the best of our knowledge, we are the first to implement physical-world attacks on deep re-ID systems, and reveal the vulnerability of deep re-ID models. - We design two different attacks, Evading Attack and Impersonation Attack, and propose a novel attack algorithm, advPattern, for generating adversarially transformable patterns, to realize adversary mismatch and target person impersonation, respectively. - We evaluate our attacks with two state-of-the-art deep re-ID models and demonstrate the effectiveness of the generated patterns for attacking deep re-ID in both the digital domain and the physical world with high success rates. The remainder of this paper is organized as follows: we review some related works in Section \[sec:related\] and introduce the system model in Section \[sec:system\]. In Section \[sec:attack\], we present the attack methods for implementing physical-world attacks on deep re-ID models. We evaluate the proposed attacks and demonstrate the effectiveness of our generated patterns in Section \[sec:experiments\] and conclude in Section \[sec:conclusion\]. ![An example of Impersonation Attack in the physical world.
Left: the digital adversarial pattern; Middle: the adversary wearing clothes with the physical adversarial pattern; Right: the target person randomly chosen from the Market1501 dataset.[]{data-label="fig:example_impersonate"}](fig2.pdf){width="0.7\columnwidth"} Related Work {#sec:related} ============ [**Deep Re-ID Models.**]{} With the development of deep learning and increasing volumes of available datasets, deep re-ID models have been adopted to automatically learn better feature representations and similarity metrics [@ahmed2015improved; @chen2016deep; @chen2017multi; @cheng2016person; @ding2015deep; @li2014deepreid; @wang2016joint; @xiao2016learning; @yi2014deep], achieving state-of-the-art performance. Some methods treat re-ID as a classification task: Li et al. [@li2014deepreid] proposed a filter pairing neural network to automatically learn feature representations. Yi et al. [@yi2014deep] used a siamese deep neural network to solve the re-ID problem. Ahmed et al. [@ahmed2015improved] added a different matching layer to improve original deep architectures. Xiao et al. [@xiao2016learning] utilized a multi-class classification loss to train the model with data from multiple domains. Other approaches solve re-ID as a ranking task: Ding et al. [@ding2015deep] trained the network with the proposed triplet loss. Cheng et al. [@cheng2016person] introduced a new term to the original triplet loss to improve model performance. In addition, two recent works [@chen2017multi; @wang2016joint] considered both tasks simultaneously and built networks to jointly learn representations from classification loss and ranking loss during training. [**Adversarial Examples.**]{} Szegedy et al. [@szegedy2013intriguing] discovered that neural networks are vulnerable to adversarial examples.
Given a DNNs based classifier $f(\cdot)$ and an input $x$ with ground truth label $y$, an adversarial example $x'$ is generated by adding small perturbations to $x$ such that the classifier makes a wrong prediction, i.e., $f(x')\neq y$, or $f(x')=y^{*}$ for a specific target $y^{*}\neq y$. Existing attack methods generate adversarial examples either by one-step methods, like the Fast Gradient Sign Method (FGSM) [@goodfellow6572explaining], or by solving optimization problems iteratively, such as L-BFGS [@szegedy2013intriguing], the Basic Iterative Method (BIM) [@kurakin2016adversarial], DeepFool [@moosavi2016deepfool], and the Carlini and Wagner attack (C&W) [@carlini2017towards]. Kurakin et al. [@kurakin2016adversarial] explored adversarial attacks in the physical world by printing adversarial examples on paper to cause misclassification when photographed by a cellphone camera. Sharif et al. [@sharif2016accessorize] designed an eyeglass frame with adversarial perturbations printed on it to attack face recognition systems. Evtimov et al. [@eykholt2018robust] created adversarial road signs to attack road sign classifiers under different physical conditions. Athalye et al. [@athalye2017synthesizing] constructed physical 3D-printed adversarial objects to fool a classifier when photographed over a variety of viewpoints. In this paper, to the best of our knowledge, we are the first to investigate physical-world attacks on deep re-ID models, which differs from prior works targeting classifiers as follows: (1) Existing works on classification tasks cannot generate transformable patterns across camera views to attack image retrieval problems. (2) Attacking re-ID systems in the physical world faces more complex physical conditions; for instance, adversarial patterns should survive the printing process, dynamic environments and shooting distortion under any camera view.
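The one-step FGSM update mentioned above can be illustrated with a minimal sketch on a toy logistic classifier (the model, weights and $\epsilon$ below are invented for illustration and are not taken from any of the cited attacks):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM: x' = x + eps * sign(grad_x L(f(x), y)).

    Toy logistic model f(x) = sigmoid(w.x + b) with binary
    cross-entropy loss; the input gradient is (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad_x = (p - y) * w              # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.1])              # w.x + b > 0: predicted as class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
# The signed step moves x against the true-class score, so the logit
# w.x_adv + b drops relative to w.x + b.
```

Even this two-dimensional toy shows the key property exploited by the physical attacks above: a small, bounded perturbation suffices to change the model's decision.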
These differences make it impossible to directly apply existing physically realizable methods for classifiers to attack re-ID models. System Model {#sec:system} ============ In this section, we first present the threat model and then introduce our design objectives. Threat Model {#sec:threat} ------------ Our work focuses on physically realizable attacks against DNNs based re-ID systems, which capture pedestrians in real time and automatically search for a person of interest across non-overlapping cameras. By comparing the extracted features of a probe (the queried image) with features from a set of continuously updated gallery images collected from other cameras in real time, a re-ID system outputs the images from the gallery which are considered to be the most similar to the queried image. We choose re-ID systems as our target model because of the wide deployment of deep re-ID in security-critical settings, where successful physical-world attacks on re-ID models would pose serious threats. For instance, a criminal could easily escape the search of re-ID based surveillance systems by physically deceiving deep re-ID models. We assume the adversary has *white-box access* to well-trained deep re-ID models, so that he has knowledge of model structure and parameters, and *only* implements attacks on re-ID models in the inference phase. The adversary is not allowed to manipulate either the digital queried image or the gallery images gathered from cameras. Moreover, the adversary is not allowed to change his physical appearance while attacking re-ID systems, in order to avoid arousing a human supervisor’s suspicion. These reasonable assumptions make it challenging to successfully realize physical-world attacks on re-ID systems.
Considering that the stored video recorded by cameras will be copied and re-ID models will be applied for person search only when something happens, the adversary has no idea of when he will be treated as the person of interest and which images will be picked for image matching, which means that the queried image and gallery images are completely unknown to the adversary. However, with the white-box access assumption, the adversary is allowed to construct a generating set $X$ by taking images at each different camera view, which can be realized by stealthily placing cameras at the same positions as the surveillance cameras to capture images before implementing attacks. Design Objectives {#sec:objective} ----------------- We propose two attack scenarios, Evading Attack and Impersonation Attack, to deceive deep re-ID models. [**Evading Attack.**]{} An Evading Attack is an *untargeted attack*: *re-ID models are fooled into matching the adversary as an arbitrary person other than himself, as if the adversary wore an “invisible cloak”*. Formally, a re-ID model ${f_\theta }\left( { \cdot ,\left. \cdot \right)} \right.$ outputs a similarity score for an image pair, where $\theta$ is the model parameter. Given a probe image $p_{adv}$ of an adversary, and an image $g_{adv}$ belonging to the adversary in the gallery $G_t$ at time $t$, we attempt to find an adversarial pattern $\delta$, attached to the adversary’s clothes, that makes deep re-ID models fail in person search, by solving the following optimization problem: $$\max D(\delta ),\;\;\;s.t.\;Rank({f_\theta }\left( {{p_{adv + \delta }},\left. {{g_{adv + \delta }}} \right))} \right. > K$$ where $D(\cdot)$ is used to measure the realism of the generated pattern. Unlike previous works aiming at generating visually inconspicuous perturbations, we attempt to generate *visible patterns* for camera sensing, while *making the generated patterns indistinguishable from natural decorative patterns on clothes*.
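The ranking constraint in the problem above can be checked with a short sketch (the gallery, its similarity scores, and the indices are invented for illustration; in the actual system the scores come from $f_\theta$):

```python
import numpy as np

def rank(scores, idx):
    """1-based position of gallery entry `idx` after sorting the
    probe-vs-gallery similarity scores in decreasing order."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    return int(np.where(order == idx)[0][0]) + 1

def evading_succeeds(scores, adv_idx, K):
    """The constraint Rank(f(p_adv+delta, g_adv+delta)) > K: the
    adversary's own gallery image must fall outside the top-K matches."""
    return rank(scores, adv_idx) > K

# Toy gallery: entry 0 is the adversary's own image.
scores = [0.21, 0.90, 0.75, 0.83]          # probe-vs-gallery similarities
evading_succeeds(scores, adv_idx=0, K=3)   # entry 0 ranks 4th: attack succeeds
```

This is only the success test; the hard part, addressed in the next section, is driving the scores themselves down via $\delta$.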
$Rank(\cdot)$ is a sort function which ranks the similarity scores of all gallery images with $p_{adv}$ in decreasing order. An adversarial pattern is successfully crafted only if the image pair $(p_{adv + \delta }, g_{adv + \delta })$ ranks behind the top-$K$ results, which means that the re-ID systems cannot realize a cross-camera image match of the adversary. [**Impersonation Attack.**]{} An Impersonation Attack is a *targeted attack* which can be viewed as an extension of Evading Attack: the adversary attempts to *deceive re-ID models into mismatching himself as a target person*. Given our target’s image ${I_t}$, we formulate Impersonation Attack as the following optimization problem: $$\max D(\delta ), \;\; s.t.\left\{\begin{array}{l} \hspace{-2mm} Rank ({f_\theta }\left( {{p_{adv + \delta }},\left. {{g_{adv + \delta }}} \right))} \right. > K\\ \hspace{-2mm} Rank ({f_\theta }\left( {{p_{adv + \delta }},\left. {{I_t}} \right))} \right. < K \end{array} \right. \hspace{-2mm}$$ We can see that, besides the evading constraint, the optimization problem for an Impersonation Attack includes another constraint that the image pair $(p_{adv + \delta }, {I_t})$ should be within the top-$K$ results, which implies that the adversary successfully induces the re-ID systems into matching him to the target person. Since the adversary has no knowledge of the queried image and the gallery, it is impossible for the adversary to solve the above optimization problems directly. In the following section, we present a solution that approximately solves the above optimization problems. Adversarial Pattern Generation {#sec:attack} ============================== ![Overview of the attack pipeline.[]{data-label="fig3"}](fig3.pdf){width="0.9\columnwidth"} In this section, we present a novel attack algorithm, called advPattern, to generate adversarial patterns for attacking deep re-ID systems in the real world.
Figure \[fig3\] shows an overview of the pipeline to implement an Impersonation Attack in the physical world. Specifically, we first generate transformable patterns across camera views for attacking the image retrieval problem as described in Section \[sec:transformable\]. To implement position-irrelevant and physical-world attacks, we further improve the scalability and robustness of adversarial patterns in Section \[sec:scalable\] and Section \[sec:robust\]. Transformable Patterns across Camera Views {#sec:transformable} ------------------------------------------ Existing works [@cheng2016person; @zhong2018camera] found that there exists a common image style within a certain camera view, while dramatic variations exist across different camera views. To ensure that the same pattern can cause cross-camera image mismatch in deep re-ID models, we propose an adversarial pattern generation algorithm to generate transformable patterns that amplify the distinction of the adversary’s images across camera views in the process of extracting image features by re-ID models. For the [**Evading Attack**]{}, let $X =(x_{1},x_{2},...,x_{m})$ be the generating set constructed by the adversary, which consists of the adversary’s images captured from $m$ different camera views. For each image $x_i$ from $X$, we compute the adversarial image ${x_i}^\prime = o({x_i},{T_i}(\delta))$. $o({x_i},{T_i}(\delta))$ denotes overlaying the corresponding areas of $x_i$ after transformation $T_{i}(\cdot)$ with the generated pattern $\delta$. Here $T_{i}(\delta)$ is a perspective transformation of the generated pattern $\delta$, which ensures that the generated pattern is consistent with the transformations of person images across camera views.
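The overlay $o(x_i, T_i(\delta))$ can be sketched as a masked paste (a minimal numpy illustration; the perspective warp $T_i$ is abstracted away, and the image, pattern and mask values are invented for illustration):

```python
import numpy as np

def overlay(x, warped_pattern, mask):
    """o(x_i, T_i(delta)): paste the pattern, assumed already warped by
    the view-specific perspective transform T_i, onto the region of
    image x_i selected by a binary mask."""
    x = np.asarray(x, dtype=float)
    return np.where(mask.astype(bool), warped_pattern, x)

img = np.zeros((4, 4))                 # stand-in for a person image
pattern = np.full((4, 4), 0.8)         # stand-in for T_i(delta)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                   # clothes region as seen from view i
adv_img = overlay(img, pattern, mask)  # pattern appears only inside the mask
```

In practice the warp would be computed from the camera geometry (e.g., a homography fitted per view); here it is simply assumed as an input.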
We generate the transformable adversarial pattern $\delta$ by solving the following optimization problem: $$\label{eq:3} \mathop {\arg \min }\limits_\delta \sum\limits_{{\rm{i}} = 1}^m {\sum\limits_{j = 1}^m {{f_\theta }({x_i}^\prime ,{x_j}^\prime )} } , \;\;s.t.\;\;\;i \ne j$$ We iteratively minimize the similarity scores of the adversary’s images from different cameras, so that the generated adversarial pattern gradually pulls the extracted features of the adversary’s images from different cameras farther apart. For the [**Impersonation Attack**]{}, given a target person’s image ${I_t}$, we optimize the following problem: $$\begin{split} \mathop {\arg \min }\limits_\delta &\sum\limits_{{\rm{i}} = 1}^m {\sum\limits_{j = 1}^m {{f_\theta }({x_i}^\prime ,{x_j}^\prime )} } \\ &- \alpha ({f_\theta }({x_i}^\prime ,{I_{\rm{t}}}) + {f_\theta }({x_j}^\prime ,{I_{\rm{t}}})) , \;\;s.t.\;\;\;i \ne j \end{split}\label{eq:4}$$ where $\alpha$ controls the strength of the different objective terms. By adding the second term in Eq. \[eq:4\], we additionally maximize the similarity scores of the adversary’s images with the target person’s image, to generate a more powerful adversarial pattern that pulls closer the extracted features of the adversary’s images and the target person’s image. Scalable Patterns in Varying Positions {#sec:scalable} -------------------------------------- The adversarial patterns should be capable of implementing successful attacks at any position, which means our attacks should be position-irrelevant. To realize this objective, we further improve the scalability of the adversarial pattern with respect to varying positions. Since we cannot capture the exact distribution of viewing transformations, we augment the volume of the generating set with a multi-position sampling strategy to approximate the distribution of images for generating scalable adversarial patterns.
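The cross-view objectives of Eqs. (\[eq:3\]) and (\[eq:4\]) can be sketched as plain loss functions (a numpy toy; cosine similarity stands in for the re-ID score $f_\theta$, which is an assumption, and the feature vectors are invented):

```python
import numpy as np

def sim(a, b):
    # Cosine similarity as a stand-in for the re-ID score f_theta(., .)
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def evading_loss(feats):
    """Eq. (3): sum of f(x_i', x_j') over ordered pairs i != j of camera
    views; minimizing it pushes the cross-view features apart."""
    m = len(feats)
    return sum(sim(feats[i], feats[j])
               for i in range(m) for j in range(m) if i != j)

def impersonation_loss(feats, target, alpha):
    """Eq. (4): additionally reward similarity to the target's feature."""
    m = len(feats)
    total = 0.0
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            total += sim(feats[i], feats[j])
            total -= alpha * (sim(feats[i], target) + sim(feats[j], target))
    return total

# Three camera views: two identical features and one orthogonal one.
feats = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
evading_loss(feats)   # identical views contribute 1.0 per ordered pair
```

In the actual attack these losses are minimized over $\delta$ by backpropagating through the re-ID network; the sketch only shows the shape of the objective.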
The augmented generating set $X^{C}$ for an adversary is built by collecting the adversary’s images at various distances and angles from each camera view, together with synthesized instances generated by image transformations, such as translation and scaling, on the originally collected images. For the [**Evading Attack**]{}, given a triplet ${tri_k} = < x_k^o,x_k^ + ,x_k^ - >$ from $X^{C}$, where $x_k^o$ and $x_k^ +$ are person images from the same camera, while $x_k^ -$ is a person image from a different camera, for each image $x_k$ from ${tri_k}$ we compute the adversarial image ${x_k}^\prime$ as $o({x_k},{T_k}(\delta))$. We randomly choose a triplet at each iteration for solving the following optimization problem: $$\label{eq:5} \mathop {\arg \min }\limits_\delta {\mathbb{E}_{_{{tri_k} \sim {X^C}}}}{f_\theta }((x_k^o)',(x_k^ - )') - \beta {f_\theta }((x_k^o)',(x_k^ + )')$$ where $\beta$ is a hyperparameter that balances the different objectives during optimization. The objective of Eq.(\[eq:5\]) is to minimize the similarity score of $x_k^o$ with $x_k^ -$ to discriminate person images across camera views, while maximizing the similarity score of $x_k^o$ with $x_k^ +$ to preserve the similarity of person images from the same camera view. During optimization, the generated pattern learns scalability from the augmented generating set $X^{C}$, pulling closer the extracted features of person images from the same camera while pushing features from different cameras farther, as shown in Figure \[fig4\]. ![The illustration of how scalable adversarial patterns work. By adding the generated adversarial pattern, the adversarial images from the same camera view are clustered together in the feature space.
Meanwhile, the adversarial images from different cameras are pushed farther apart.[]{data-label="fig4"}](fig4.pdf){width="0.6\columnwidth"} For the [**Impersonation Attack**]{}, given an image set ${I^t}$ of the target person and a quadruplet ${quad_k} = < x_k^o,x_k^ + ,x_k^ - ,{t_k}>$ consisting of a triplet ${tri_k}$ and a person image $t_k$ from ${I^t}$, we randomly choose a quadruplet at each iteration and iteratively solve the following optimization problem: $$\begin{split} \mathop {\arg \max }\limits_\delta &{\mathbb{E}_{_{{quad_k} \sim \{ {X^C},{I^t}\} }}}{f_\theta }((x_k^o)',{t_k}) \\ &+ {\lambda _1}{f_\theta }((x_k^o)',(x_k^ + )') - {\lambda _2}{f_\theta }((x_k^o)',(x_k^ - )') \end{split}$$ where $\lambda _1$ and $\lambda _2$ are hyperparameters that control the strength of the different objectives. The additional objective maximizes the similarity score of $x_k^o$ with $t_k$ to pull the extracted features of the adversary’s images closer to those of the target person’s images.

Robust Patterns for Physically Realizable Attack {#sec:robust}
------------------------------------------------

Our goal is to implement physically realizable attacks on deep re-ID systems by generating physically robust patterns on adversaries’ clothes. To ensure that the adversarial patterns can be perceived by cameras, we generate patterns of large magnitude, without constraining them during optimization. However, introducing conspicuous patterns will in turn make adversaries noticeable and arouse the suspicion of human supervisors. To tackle this problem, we design unobtrusive adversarial patterns that are visible but difficult for humans to distinguish from decorative patterns on clothes. Specifically, we choose a mask $M_x$ to project the generated pattern to a shape that looks like a decorative pattern on clothes (e.g., commonplace logos or creative graffiti).
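The mask projection can be sketched as follows. This is a minimal illustration: the paper applies $o({x_k},{T_k}(M_x\cdot\delta))$, where $T_k$ warps the pattern to the adversary’s pose; here the transform is taken to be the identity and the overlay simply replaces the masked pixels, both simplifying assumptions.

```python
import numpy as np

def overlay_masked_pattern(image, delta, mask):
    # Apply the masked pattern M_x * delta onto the image: pixels where
    # the mask is 1 are replaced by the pattern, all others are untouched.
    # (The position-dependent transform T_k is assumed to be the identity.)
    mask = mask.astype(bool)
    out = image.copy()
    out[mask] = delta[mask]
    return out
```

Because only the masked region changes, the pattern is confined to the chosen logo-like shape regardless of the pattern values produced by the optimizer.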
In addition, to generate smooth and consistent patches in our pattern (i.e., patches within which colors change only gradually), we follow Sharif et al. [@sharif2016accessorize] and add a total variation ($TV$) term [@mahendran2015understanding] to the objective function: $$TV(\delta) = \sum\limits_{p,q} {{{({{({\delta _{p,q}} - {\delta _{p + 1,q}})}^2} + {{({\delta _{p,q}} - {\delta _{p,q + 1}})}^2})}^{\frac{1}{2}}}}$$ where $\delta_{p,q}$ is the pixel value of the pattern $\delta$ at coordinates $(p,q)$; $TV(\delta)$ is high when there are large variations between adjacent pixel values, and low otherwise. By minimizing $TV(\delta)$, adjacent pixel values are encouraged to be close to each other, improving the smoothness of the generated pattern. Implementing physical-world attacks on deep re-ID systems requires adversarial patterns to survive various environmental conditions. To deal with this problem, we design a degradation function $\varphi(\cdot)$ that randomly changes the brightness of, or blurs, the adversary’s images from the augmented generating set $X^{C}$. During optimization, we replace $x_{i}$ with the degraded image $\varphi(x_{i})$ to improve the robustness of the generated pattern against physical dynamics and shooting distortion. Recently, the non-printability score (NPS) was utilized in [@eykholt2018robust; @sharif2016accessorize] to account for printing error. We introduced NPS into our objective function but found it hard to balance the NPS term against the other objectives. Instead, we constrain the search space of the generated pattern $\delta$ to a narrower interval $\mathbb{P}$ to avoid unprintable colors (e.g., high brightness and high saturation).
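The three robustness ingredients above can be sketched in numpy: the $TV$ penalty, a brightness-only degradation $\varphi(\cdot)$ (blurring omitted), and the projection of $\delta$ onto a narrower printable interval $\mathbb{P}$. The interval bounds and degradation range are illustrative assumptions, not values from the paper, and the $TV$ sum is taken over pixels that have both a lower and a right neighbor.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_variation(delta):
    # TV(delta) = sum_{p,q} sqrt((d[p,q]-d[p+1,q])^2 + (d[p,q]-d[p,q+1])^2)
    dv = delta[:-1, :] - delta[1:, :]   # vertical neighbor differences
    dh = delta[:, :-1] - delta[:, 1:]   # horizontal neighbor differences
    return float(np.sqrt(dv[:, :-1] ** 2 + dh[:-1, :] ** 2).sum())

def degrade(image):
    # phi(.): random brightness change (assumed range; blurring omitted)
    return np.clip(image * rng.uniform(0.7, 1.3), 0.0, 1.0)

def project_printable(delta, low=0.1, high=0.8):
    # Constrain the pattern to a narrower interval P of printable colors
    # (bounds are illustrative, not the paper's)
    return np.clip(delta, low, high)
```

During each iteration, $\delta$ would be projected back into $\mathbb{P}$ after the gradient step, and the loss would be evaluated on degraded images.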
Thus, for each image $x_k$ from ${tri_k}$, we use $o({x_k},{T_k}(M_x\cdot\delta))$ to compute the adversarial image ${x_k}^\prime$, and generate robust adversarial patterns for physical-world attacks by solving the following optimization problem: $$\begin{split} &\mathop {\arg \min }\limits_\delta {\mathbb{E}_{_{{tri_k} \sim {X^C}}}} {f_\theta }(\varphi (x_k^o)',\varphi (x_k^ - )') \\ &- \beta {f_\theta }(\varphi (x_k^o)',\varphi (x_k^ + )') + \kappa \cdot TV(\delta), \;\;s.t.\;\;\;\delta \in \mathbb{P} \end{split}$$ where $\beta$ and $\kappa$ are hyperparameters that control the strength of the different objectives. The formulation of the Impersonation Attack is analogous to that of the Evading Attack: $$\begin{split} &\mathop {\arg \max }\limits_\delta {\mathbb{E}_{_{{quad_k} \sim \{ {X^C},{I^t}\} }}}{f_\theta }(\varphi(x_k^o)',{t_k}) \\ &+ {\lambda _1}{f_\theta }(\varphi(x_k^o)',\varphi(x_k^ + )') - {\lambda _2}{f_\theta }(\varphi(x_k^o)',\varphi(x_k^ - )') \\ &+ \kappa \cdot TV(\delta ), \;\;s.t.\;\;\;\delta \in \mathbb{P} \end{split}$$ Finally, we print the generated pattern on the adversary’s clothes to deceive re-ID systems into either failing to match him across cameras or matching him as a target person.

Experiments {#sec:experiments}
===========

In this section, we first introduce the datasets and the target deep re-ID models used for evaluation in Section \[sec:data\]. We then evaluate the proposed advPattern for attacking deep re-ID models both in the digital environment (Section \[sec:digital\]) and in the physical world (Section \[sec:physical\]). We finally discuss the implications and limitations of advPattern in Section \[sec:discuss\]. ![The scene setting of physical-world tests.
We choose 14 testing points under each camera, varying in distance and angle.[]{data-label="fig5"}](fig5.pdf){width="0.4\columnwidth"}

Datasets and re-ID Models {#sec:data}
-------------------------

[**Market1501 Dataset.**]{} Market1501 contains 32,668 annotated bounding boxes of 1501 identities, divided into two non-overlapping subsets: the training set contains 12,936 cropped images of 751 identities, while the testing set contains 19,732 cropped images of 750 identities. [**PRCS Dataset.**]{} We built a Person Re-identification in Campus Streets (PRCS) dataset for evaluating the attack method. PRCS contains 10,800 cropped images of 30 identities. During dataset collection, three cameras were deployed to capture pedestrians in different campus streets. Each identity in PRCS was captured by all three cameras and has at least 100 cropped images per camera. We chose 30 images of each identity per camera to construct the testing dataset for evaluating the performance of the trained re-ID models and our attack method.

(Table \[tab1\]: performance of the target re-ID models; columns: Model, Dataset, ….)

(Tables \[tab2\], \[tab:digital\]: attack results in the digital environment; columns: Model, Dataset, ….)

[**Target Re-ID Models.**]{} We evaluated the proposed attack method on two different types of deep re-ID models: model A is a siamese network proposed by Zheng et al. [@zheng2017discriminatively], trained by combining a verification loss and an identification loss; model B utilizes a classification model to learn discriminative embeddings of identities, as introduced in [@zheng2016person]. We choose these two models as targets because classification networks and siamese networks are widely used in the re-ID community; the effectiveness of our attacks on them suggests effectiveness on other models as well.
Both models achieve state-of-the-art performance (i.e., rank-$k$ accuracy and mAP) on the Market1501 dataset, and also work well on the PRCS dataset. The results are given in Table \[tab1\]. We use the ADAM optimizer to generate adversarial patterns with the following parameter settings: learning rate $=0.01$, $\beta_1 = 0.9$, $\beta_2 = 0.999$. We set the maximum number of iterations to 700.

Digital-Environment Tests {#sec:digital}
-------------------------

We first evaluate our attack method in the digital domain, where the adversary’s images are directly modified by digital adversarial patterns$\footnote{The code is available at \url{https://github.com/whuAdv/AdvPattern}}$. It is worth noting that attacking in the digital domain is not a realistic attack in itself, but a necessary evaluation step before implementing real physical-world attacks. [**Experiment Setup.**]{} We first craft the adversarial pattern over a generating set for each adversary, which consists of real images from varying positions and viewing angles, together with synthesized samples. Then we attach the generated adversarial pattern to the adversary’s images in the digital domain to evaluate the attack performance on the target re-ID models. We choose every identity from PRCS as an adversary to attack deep re-ID. In each query, we choose one of the adversarial images as the probe image, and construct a gallery by combining 12 adversarial images from other cameras with images of the other 29 identities in PRCS and the 750 identities in Market1501. For the Impersonation Attack, we take two identities as targets for each adversary: one randomly chosen from Market1501, and the other from PRCS. We ran 100 queries for each attack. [**Experiment Results.**]{} Table \[tab:digital\] shows the attack results on the two re-ID models under the Evading Attack in the digital environment.
We can see that the matching probability and mAP drop significantly for both re-ID models, which demonstrates the high success rate of the Evading Attack. The similarity score of the adversary’s images decreases to less than 0.5, making it hard for the deep re-ID models to correctly match images of the adversary in the large gallery. Note that the attack performance on the testing set is close to that on the generating set, e.g., rank-1 accuracy from 4.2% to 0% and mAP from 7.3% to 4.4%, which demonstrates the scalability of the digital adversarial patterns when implementing attacks with unseen images. Table \[tab:digital\_impersonate\] shows the attack results on the two re-ID models under the Impersonation Attack in the digital environment. On PRCS, the average rank-1 accuracy is above 85% when matching adversarial images from the generating set as the target person, which demonstrates the effectiveness of the targeted attack. The patterns are less effective when targeting an identity from Market1501: the rank-1 accuracy of model A decreases to 68.0% on the generating set and 41.7% on the testing set. We attribute this to the large variations in physical appearance and image style between the two datasets. Nevertheless, the high rank-5 accuracy and mAP demonstrate the strong capability of the digital patterns to deceive the target models. Note that the rank-$k$ accuracy and mAP decrease significantly when matching the adversary’s images across cameras, which means the generated patterns also cause cross-camera mismatches in targeted attacks. Again, the fact that the attack performance on the testing set is close to that on the generating set demonstrates the scalability of the adversarial patterns to unseen images. ![Examples of physically realizable attacks. Top row: an Evading Attack (the adversary: ID1 from PRCS). Middle row: an Impersonation Attack targeting an identity from PRCS (the adversary: ID2 in PRCS, the target: ID12 from PRCS).
Bottom row: an Impersonation Attack targeting an identity from Market1501 (the adversary: ID3 in PRCS, the target: ID728 from Market1501)[]{data-label="fig6"}](fig6.pdf){width="0.8\columnwidth"}

Physical-World Evaluation {#sec:physical}
-------------------------

On the basis of the digital-environment tests, we further evaluate our attack method in the physical world. We print the adversarial pattern and attach it to the adversary’s clothes to implement physical-world attacks. [**Experiment Setup.**]{} The scene setting of the physical-world tests is shown in Figure \[fig5\], where we take images of the adversary with/without the adversarial pattern at 14 testing points with varying distances and angles from the cameras. These 14 points are sampled at a fixed interval in the field of the cameras’ views. We omit the top-left point in our experiment due to shooting constraints. The distance between the cameras and the adversary ranges from about $5m$ to $10m$, so that the cameras can better perceive the adversarial pattern. We choose 5 identities from PRCS as adversaries to implement physically realizable attacks. In each query, we randomly choose an image of the adversary at a testing point as the probe image, while adding 12 adversarial images from other cameras into the gallery. Two identities are randomly chosen from Market1501 and PRCS, respectively, to serve as target persons. We perform 100 queries for each testing point. We evaluate the Evading Attack with model A and the Impersonation Attack with model B. [**Experiment Results.**]{} Table \[tab4\] shows the physical-world attack results at the 14 testing positions with varying distances and angles from the cameras. Note that $\Delta$rank-1 denotes the drop in matching probability due to the adversarial patterns; $\Delta$mAP and $\Delta$ss are defined analogously. For the Evading Attack, the crafted adversarial pattern significantly decreases the probability of matching the adversary.
The average $\Delta$rank-1 and $\Delta$mAP are 62.2% and 61.1%, respectively. Under the Impersonation Attack, the average rank-1 accuracy and mAP are 47.1% and 67.9%, respectively. These results demonstrate the effectiveness of the adversarial patterns in implementing physical-world attacks at varying positions with a considerable success rate. For the Evading Attack, the average rank-1 accuracy drops to 11.1% in 9 of the 14 positions, which demonstrates that the generated adversarial patterns can physically attack deep re-ID systems with a high success rate. Note that the adversarial patterns are less effective at some testing points, e.g., P12 and P13. We attribute this to the larger angles and greater distances between these points and the cameras, which make it more difficult for the cameras to perceive the patterns. For the Impersonation Attack, the rank-1 accuracy for matching the adversary as the target person is 56.4% in 11 of the 14 positions, which is close to the result of the digital patterns targeting Market1501. The high mAP and similarity scores when matching the adversary as the target person demonstrate the effectiveness of the adversarial patterns for targeted attacks in the physical world. Still, there are a few points (P3, P5, P14) where the adversary has trouble implementing a successful attack with the adversarial patterns. Figure \[fig6\] shows examples of physical-world attacks on deep re-ID systems.

Discussion {#sec:discuss}
----------

[**Black Box Attacks.**]{} In this paper, we adopt the white-box assumption to investigate the vulnerability of deep re-ID models. Nevertheless, it would be more meaningful to realize adversarial patterns in a black-box setting. Prior works [@liu2016delving; @papernot2016transferability] demonstrated successful attacks without any knowledge of a model’s internals by exploiting the transferability of adversarial examples. We leave black-box attacks as future work. [**AdvPattern vs.
Other Approaches.**]{} AdvPattern allows the adversary to deceive deep re-ID systems without any digital modification of person images or any change of physical appearance. Although there are simpler ways to attack re-ID systems, e.g., direct object removal in the digital domain, or changing physical appearance across camera views, we argue that our adversarial pattern is the most reasonable method because: (1) for object-removal methods, it is unrealistic for the adversary to control the queried image and the gallery images; (2) changing physical appearance makes adversaries conspicuous to human supervisors.

Conclusion {#sec:conclusion}
==========

This paper designed the Evading Attack and the Impersonation Attack against deep re-ID systems, and proposed advPattern for generating adversarially transformable patterns that realize adversary mismatch and target-person impersonation in the physical world. Our extensive evaluations demonstrate the vulnerability of deep re-ID systems to these attacks.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work was supported in part by the National Natural Science Foundation of China (Grants No. 61872274, 61822207 and U1636219), the Equipment Pre-Research Joint Fund of the Ministry of Education of China (Youth Talent) (Grant No. 6141A02033327), the Natural Science Foundation of Hubei Province (Grants No. 2017CFB503, 2017CFA047), and the Fundamental Research Funds for the Central Universities (Grants No. 2042019gf0098, 2042018gf0043).

[^1]: This work was accepted by IEEE ICCV 2019.
---
abstract: 'Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them often adopt a single modality or stack multiple modalities as different input channels. To better leverage the multi-modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM to model a sequence of 2D slices, and jointly learn the multi-modalities and convolutional LSTM in an end-to-end manner. To avoid converging to certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 [@info:doi/10.2196/jmir.2930] show that our method outperforms state-of-the-art biomedical segmentation approaches.'
author:
- 'Kuan-Lun Tseng'
- 'Yen-Liang Lin'
- Winston Hsu
- 'Chung-Yang Huang'
bibliography:
- 'egbib.bib'
title: 'Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation'
---

Introduction
============

3D image segmentation plays a vital role in biomedical analysis. Brain tumors like gliomas and glioblastomas have many different shapes and can appear anywhere in the brain, which makes it challenging to localize tumors precisely. Four modalities of magnetic resonance imaging (MRI) images are commonly referenced in brain tumor surgery: T1 (spin-lattice relaxation), T1C (T1-contrasted), T2 (spin-spin relaxation), and FLAIR (fluid attenuation inversion recovery). Each modality has distinct responses for different tumor tissues. We leverage multiple modalities to automatically discriminate tumor tissues, assisting doctors in their treatment planning. ![image](modf.pdf){height="0.45\linewidth"} \[fig:onecol\] Recently, deep learning methods have been adopted in biomedical analysis and achieve state-of-the-art performance.
Patch-based methods [@brats; @havaei2016brain] extract small patches of an image (in a sliding-window fashion) and predict the label of each central pixel. These methods suffer from slow training, as features from overlapping patches are re-computed. Besides, they only take a small region into the network, which ignores global structure information (e.g., image contents and label correlations). Some methods apply 2D segmentation to 3D biomedical data [@UNet] [@chen2016dcan]. They slice a 3D medical image into several 2D planes and apply 2D segmentation to each plane; a 3D segmentation is then generated by concatenating the 2D results. However, these 2D approaches ignore the sequential information between consecutive slices. For example, there may be rapid shape changes across consecutive depths. 3D-based approaches [@lai2015deep] use 3D convolution to exploit different views of a 3D image. However, they often require a larger number of parameters and are prone to overfitting on small training datasets. The above methods often stack modalities as different input channels for deep learning models, which does not explicitly consider the correlations between different modalities. To address these problems, we propose a new deep encoder-decoder structure that incorporates spatial and sequential information between slices and leverages the responses from multiple modalities for 3D biomedical segmentation. Figure \[fig:fig1\] shows the system overview of our method. Given a sequence of slices of multi-modal MRI data, our method accurately predicts the different types of tumor tissues for each pixel. Our model consists of three main parts: a multi-modal encoder, cross-modality convolution and convolutional LSTM. The slices from different modalities are stacked together by their depth values (b). Then, they pass through different CNNs in the multi-modal encoder (one CNN per modality) to obtain a semantic latent feature representation (c).
Latent features from multiple modalities are effectively aggregated by the proposed cross-modality convolution layer (d). Then, we leverage convolutional LSTM to better exploit the spatial and sequential correlations of consecutive slices (e). A 3D image segmentation is generated (h) by concatenating a sequence of 2D prediction results (g). Our model jointly optimizes the slice-sequence learning and multi-modality fusion in an end-to-end manner. The main contributions of this paper are summarized as follows:

- We propose an end-to-end deep encoder-decoder network for 3D biomedical segmentation. Experimental results demonstrate that we outperform state-of-the-art 3D biomedical segmentation methods.

- We propose a new cross-modality convolution to effectively aggregate the multiple resolutions and modalities of MRI images.

- We leverage convolutional LSTM to model the spatial and sequential correlations between slices, and jointly optimize the multi-modal fusion and convolutional LSTM in an end-to-end manner.

Related Work
============

[**Image Semantic Segmentation.**]{} Various deep methods have been developed and have achieved significant progress in image segmentation [@long2015fully; @badrinarayanan2015segnet; @chen2014semantic; @noh2015learning]. These methods use convolutional neural networks (CNNs) to extract deep representations and up-sample the low-resolution feature maps to produce dense prediction results. SegNet [@badrinarayanan2015segnet] adopts an encoder-decoder structure to further improve the performance while requiring fewer model parameters. We adopt the encoder-decoder structure for 3D biomedical segmentation and further incorporate cross-modality convolution and convolutional LSTM to better exploit the multi-modal data and the sequential information between consecutive slices. [**3D Biomedical Image Segmentation.**]{} There has been much research adopting deep methods for biomedical segmentation. Havaei et al.
[@havaei2016brain] split 3D MRI data into 2D slices and crop small patches from the 2D planes. They combine the results from different-sized patches and stack multiple modalities as different channels for label prediction. Some methods utilize the fully convolutional network (FCN) structure [@long2015fully] for 3D biomedical image segmentation. U-Net [@UNet] consists of a contracting path that contains multiple convolutions for downsampling, and an expansive path that has several deconvolution layers to up-sample the features and concatenate the cropped feature maps from the contracting path. However, depth information is ignored by these 2D-based approaches. To better use the depth information, Lai et al. [@lai2015deep] utilize 3D convolution to model the correlations between slices. However, 3D convolutional networks often require a larger number of parameters and are prone to overfitting on small datasets. kU-Net [@kUNet] is the most related to our work. They adopt U-Net as their encoder and decoder and use a recurrent neural network (RNN) to capture the temporal information. Different from kU-Net, we further propose a cross-modality convolution to better combine the information from multi-modal MRI data, and jointly optimize the slice-sequence learning and cross-modality convolution in an end-to-end manner. [**Multi-Modal Images.**]{} In brain tumor segmentation, multi-modal images are used to identify the boundaries between the tumor, edema and normal brain tissue. Cai et al. [@cai2007probabilistic] combine MRI images with diffusion tensor imaging data to create an integrated multi-modality profile for brain tumors. Their brain tissue classification framework incorporates intensities from each modality into an appearance signature of each voxel to train the classifiers. Menze et al. [@menze2010generative] propose a generative probabilistic model for reflecting the differences in tumor appearance across different modalities.
In the process of manually segmenting a brain tumor, different modalities are often cross-checked to better distinguish the different types of brain tissue. For example, according to Menze et al. [@menze2015multimodal], the edema is segmented primarily from T2 images, and FLAIR is used to cross-check the extension of the edema. Also, enhancing and non-enhancing structures are segmented by evaluating the hyper-intensities in T1C. Existing CNN-based methods (e.g., [@brats; @havaei2016brain]) often treat modalities as different channels of the input data. However, the correlations between them are not well utilized. To the best of our knowledge, we are the first to jointly exploit the correlations between different modalities together with the spatial and sequential dependencies between consecutive slices. ![image](modality-fuse.pdf){height="0.5\linewidth"}

Method
======

Our method is composed of three parts: a multi-modal encoder and decoder, cross-modality convolution, and convolutional LSTM. The encoder extracts a deep representation of each modality. The decoder up-samples the feature maps to the original resolution for predicting the dense results. The cross-modality convolution performs 3D convolution to effectively combine the information from different modalities. Finally, the convolutional LSTM further exploits the sequential information between consecutive slices.

Encoder and Decoder
-------------------

Due to the small size of the BRATS-2015 training dataset [@info:doi/10.2196/jmir.2930], we keep the parameter space of our multi-modal encoder and decoder relatively small to avoid overfitting. Figure \[fig:fusion\] shows our multi-modal encoder and decoder structure. We adopt a similar architecture to SegNet [@badrinarayanan2015segnet] for our encoder, which comprises four convolution layers and four max pooling layers.
Each convolution layer uses a kernel size of $3\times3$ to produce a set of feature maps, which are then passed through a batch normalization layer [@ioffe2015batch] and an element-wise rectified-linear non-linearity (ReLU). The batch normalization layer is critical for training our network, as the distributions of tumor and non-tumor tissues can vary from one slice to another, even within the same brain. Then, a max pooling layer with size 2 and stride 2 is applied, and the output feature maps are down-sampled by a factor of 2. In the decoder network, each deconvolution layer performs a transposed convolution, followed by a convolution and batch normalization. After up-sampling the feature maps to the original resolution, we pass the output of the decoder to a multi-class soft-max classifier to produce the class probabilities of each pixel.

Multi-Resolution Fusion (MRF)
-----------------------------

Recent image segmentation models [@eigen2015predicting; @long2015fully; @hariharan2015hypercolumns; @UNet] fuse multi-resolution feature maps by concatenation. Feature concatenation often requires additional learnable weights because of the increase in channel size. In our method, we use feature multiplication instead of concatenation. The multiplication does not increase the feature map size, and therefore no additional weights are required. We combine the feature maps from the encoder and decoder networks, and train the whole network end-to-end. The overall structure of the cross-modality convolution (CMC) and multi-resolution fusion is shown in Figure \[fig:fig2\]. We perform CMC after each pooling layer in the multi-modal encoder, and multiply the result with the up-sampled feature maps from the decoder to combine multi-resolution and multi-modality information. We explain the details of the CMC layer in the next section.
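The fusion step can be sketched in numpy as follows. The cross-modality convolution, detailed in the next section, is modeled here as a single learned weighted sum over the modality axis (one $4\times1\times1$ kernel); shapes are illustrative.

```python
import numpy as np

def cross_modality_conv(stack, weights):
    # stack: (C, 4, h, w) feature maps, one slice per modality
    # weights: (4,) -- a single 4x1x1 kernel, i.e., a per-channel
    # weighted sum over the four modalities, giving a (C, h, w) output
    return np.einsum('m,cmhw->chw', weights, stack)

def fuse_multiply(cmc_feat, upsampled_feat):
    # Multi-resolution fusion by element-wise multiplication: the channel
    # count is unchanged, so no extra learnable weights are needed
    # (concatenation would instead double the channel count)
    assert cmc_feat.shape == upsampled_feat.shape
    return cmc_feat * upsampled_feat
```

This makes the stated advantage concrete: the multiplicative fusion output has the same shape as each input, whereas `np.concatenate` along the channel axis would require the next layer to carry twice as many input weights.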
Cross-Modality Convolution (CMC)
--------------------------------

We propose a cross-modality convolution (CMC) to aggregate the responses from all modalities. After the multi-modal encoder, each modality is encoded into a feature map of size $h \times w \times C$, where $h$ and $w$ are the spatial dimensions and $C$ is the number of channels. We stack the features of the same channel from the four modalities into one stack. After reshaping, we have $C \times 4 \times h \times w$ feature maps. Our cross-modality convolution performs 3D convolution with kernel size $4 \times 1 \times 1$, where 4 is the number of modalities. As the 3D kernel convolves across the different stacks, it assigns different weights to each modality and sums the weighted feature values into the output feature maps. The proposed cross-modality convolution combines the spatial information of each feature map and models the correlations between different modalities.

Slice Sequence Learning
-----------------------

We propose an end-to-end slice-sequence learning architecture to capture the sequential dependencies. We treat the image depth dimension as a sequence of slices and leverage convolutional LSTM [@xingjian2015convolutional] (convLSTM) to model the slice dependencies. Different from the traditional LSTM [@hochreiter1997long], convLSTM replaces the matrix multiplications by convolution operators in the state-to-state and input-to-state transitions, which preserves the spatial information for long-term sequences. [**Convolutional LSTM (convLSTM).**]{} The mechanism is similar to the traditional LSTM, except that the matrix multiplication is replaced by a convolution operator ${*}$.
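Before the formal gate equations, a minimal numpy sketch of one convLSTM step. For brevity it assumes $1\times1$ kernels, so each convolution $*$ reduces to a per-pixel channel mixing; real implementations use larger spatial kernels. Weight and bias names follow the equations ($W_{xi}, W_{hi}, \dots$).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1x1(w, t):
    # '*' with a 1x1 kernel: mix channels independently at every pixel
    # w: (out_ch, in_ch), t: (in_ch, h, w)
    return np.einsum('oc,chw->ohw', w, t)

def convlstm_step(x, h_prev, c_prev, W, b):
    # One convLSTM update with input, forget and output gates i_t, f_t, o_t
    i = sigmoid(conv1x1(W['xi'], x) + conv1x1(W['hi'], h_prev) + b['i'])
    f = sigmoid(conv1x1(W['xf'], x) + conv1x1(W['hf'], h_prev) + b['f'])
    c = c_prev * f + i * np.tanh(
        conv1x1(W['xc'], x) + conv1x1(W['hc'], h_prev) + b['c'])
    o = sigmoid(conv1x1(W['xo'], x) + conv1x1(W['ho'], h_prev) + b['o'])
    h = o * np.tanh(c)
    return h, c
```

Iterating this step over the encoded slices, with the same `W` and `b` reused at every depth, is what keeps the parameter count independent of the sequence length.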
The overall network is defined as follows: $$\begin{array}{l} {i_t} = \sigma ({x_t}*{W_{xi}} + {h_{t - 1}}*{W_{hi}} + {b_i})\\ {f_t} = \sigma ({x_t}*{W_{xf}} + {h_{t - 1}}*{W_{hf}} + {b_f})\\ {c_t} = {c_{t - 1}} \circ {f_t} + {i_t} \circ \tanh ({x_t}*{W_{xc}} + {h_{t - 1}}*{W_{hc}} + {b_c})\\ {o_t} = \sigma ({x_t}*{W_{xo}} + {h_{t - 1}}*{W_{ho}} + {b_o})\\ {h_t} = {o_t} \circ \tanh ({c_t}) \end{array}$$ where $\sigma$ is the sigmoid function and $\tanh$ is the hyperbolic tangent. There are three gates: the input gate ${i_t}$, the forget gate ${f_t}$ and the output gate ${o_t}$. The forget gate controls whether to remember the previous cell state ${c_{t-1}}$ via the output of the activation function $\sigma$. Similarly, the input gate controls how much of the new candidate value is added to the new cell state ${c_t}$. Finally, the output gate controls which parts of the cell state are emitted. The output size of the feature maps depends on the kernel size and the padding method. Our slice-sequence learning architecture combines the encoder-decoder network with a convLSTM to better model a sequence of slices. The convLSTM takes a sequence of consecutive brain slices encoded by the multi-modal encoder and CMC (Figure \[fig:fig1\] (e)). The weights in the convLSTM are shared across slices; therefore, the parameter size does not increase linearly as the length of the slice sequence grows. The output feature maps of the convLSTM are up-sampled by the decoder network (Figure \[fig:fig1\] (f)). ![ System overview of our multi-resolution fusion strategy. In the multi-modal encoder, the feature maps generated after each pooling layer are passed through the cross-modality convolution to aggregate the information between modalities. Following that, those feature maps are multiplied with the up-sampled feature maps from the decoder to combine the multi-resolution information.
[]{data-label="fig:fig2"}](multi-res.pdf){height="0.6\linewidth"} \[fig:onecol\]

Experiments
===========

We conduct experiments on two datasets to show the utility of our model. We compare our cross-modality convolution with the traditional approach of stacking modalities as different channels. We evaluate our sequence learning scheme on a typical video dataset to verify our method for modeling temporal dependencies. We also evaluate our method on a 3D biomedical image segmentation dataset.

Dataset
-------

[**CamVid dataset.**]{} The Cambridge-driving Labeled Video Database (CamVid) [@BrostowSFC:ECCV08] is captured from the perspective of a driving automobile with fixed-position CCTV-style cameras. CamVid provides videos with object-class semantic labels: ground-truth labels associate each pixel with one of 32 semantic classes. We split the CamVid dataset into 367 training, 100 validation and 233 testing images. The evaluation criterion is the mean intersection over union (Mean IU), a commonly used segmentation measure that calculates the ratio of the area of intersection to the area of union. Selected images are sampled from the original videos and down-sampled to $640\times480$; two consecutive selected images are 30 frames apart. [**BRATS-2015 dataset.**]{} The BRATS-2015 training dataset comprises 220 subjects with high-grade gliomas and 54 subjects with low-grade gliomas. The size of each MRI image is $155\times240\times240$. We use 195 high-grade and 49 low-grade glioma subjects for training, and the remaining 30 subjects for testing. We also conduct a five-fold evaluation using the BRATS-2015 online judging system to avoid overfitting. All brains in the dataset have the same orientation, and the four modalities are synchronized. The label image contains five labels: non-tumor, necrosis, edema, non-enhancing tumor and enhancing tumor.
The evaluation system separates the tumor structure into three regions for practical clinical applications. - Complete score: it considers all tumor areas and evaluates labels 1, 2, 3, 4 (0 for normal tissue, 1 for necrosis, 2 for edema, 3 for non-enhancing tumor, and 4 for enhancing tumor). - Core score: it only takes the tumor core region into account and measures labels 1, 3, 4. - Enhancing score: it represents the active tumor region, i.e., only containing the enhancing core (label 4) structures for high-grade cases. There are three evaluation criteria: Dice, Positive Predicted Value and Sensitivity. $$Dice = \frac{|P_1 \cap T_1|}{(|P_1| + |T_1|)/2}$$ $$PPV = \frac{|P_1 \cap T_1|}{|P_1|}$$ $$Sensitivity = \frac{|P_1 \cap T_1|}{|T_1|},$$ where $T$ is the ground truth label and $P$ is the predicted result; $T_1$ is the set of voxels labeled as lesion in the ground truth and $P_1$ is the set of voxels predicted as positive for the tumor region.

  Method                                   label 0         label 1         label 2         label 3         label 4         MeanIU
  ---------------------------------------- --------------- --------------- --------------- --------------- --------------- ---------------
  U-Net [@UNet]                            92.3            42.9            73.6            45.3            62.0            54.2
  U-Net + two phase                        98.6            43.8            67.4            24.0            60.5            59.3
  MME + MRF + CMC                          98.2            47.0            72.2            41.0            72.2            61.8
  MME + MRF + CMC + two-phase              [**99.1**]{}    48.8            63.8            36.2            76.9            64.0
  MME + MRF + CMC + convLSTM               96.6            [**94.3**]{}    71.2            32.8            [**96.0**]{}    62.5
  MME + MRF + CMC + convLSTM + two-phase   98.5            92.1            [**77.3**]{}    [**55.9**]{}    78.6            [**73.5**]{}
  ---------------------------------------- --------------- --------------- --------------- --------------- --------------- ---------------

  ------------------------------------ ---------------
  Method                               MeanIU
  SegNet [@badrinarayanan2015segnet]   47.85
  Encoder-Decoder + convLSTM           48.16
  Encoder-Decoder + MRF                49.91
  Encoder-Decoder + convLSTM + MRF     [**52.13**]{}
  ------------------------------------ ---------------

  : MeanIU on the CamVid test set. Our encoder-decoder model with convolutional LSTM and multi-resolution fusion achieves the best results. 
[]{data-label="t3"}

  ---------------------------------------- ---------------
  Method                                   MeanIU
  U-Net [@UNet]                            54.3
  U-Net + two-phase                        59.3
  Encoder-Decoder                          44.14
  MME + CMC                                45.80
  Encoder-Decoder + MRF                    55.37
  MME + CMC + MRF                          61.83
  MME + CMC + MRF + two-phase              64.02
  MME + CMC + MRF + convLSTM               62.15
  MME + CMC + MRF + convLSTM + two phase   [**73.52**]{}
  ---------------------------------------- ---------------

  : Segmentation results of our models on the BRATS-2015 testing set with 30 unseen patients.[]{data-label="t4"}

Training -------- [**Single Slice Training.**]{} The critical problem in training a fully convolutional network on the BRATS-2015 dataset is that the label distribution is highly imbalanced. As a consequence, the model easily converges to a local minimum, i.e., predicting every pixel as non-tumor tissue. We use *median frequency balancing* [@eigen2015predicting] to handle the data imbalance, where the weight assigned to a class in the cross-entropy loss function is defined as: $${\alpha _c} = median\_freq/freq(c),$$ where $freq(c)$ is the number of pixels of class $c$ divided by the total number of pixels in images where $c$ is present, and $median\_freq$ is the median of all class frequencies. The dominant labels are therefore assigned the lowest weights, which balances the training process. In our experiments, we find that this weighted loss formulation can overly suppress the learning of dominant labels (e.g., label 0 for normal tissue) and lead to wrong predictions. [**Two-Phase Training.**]{} In the first phase, we only sample the slices that contain tumor tissues and use *median frequency balancing* to de-weight the losses of the dominant classes (e.g., normal tissue and background). In the second phase, we remove the *median frequency balancing* and use a lower learning rate ($10^{-6}$ in our experiments) to train the model. 
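The *median frequency balancing* weights used in the first phase can be sketched as follows (an illustrative numpy implementation; the function and variable names are ours):

```python
import numpy as np

def median_freq_weights(label_images, num_classes):
    """Class weights alpha_c = median_freq / freq(c), where freq(c)
    is the number of pixels of class c divided by the total number of
    pixels in images where c is present."""
    pix = np.zeros(num_classes)  # pixels of class c over the dataset
    tot = np.zeros(num_classes)  # pixels in images that contain c
    for img in label_images:
        counts = np.bincount(img.ravel(), minlength=num_classes)
        pix += counts
        tot[counts > 0] += img.size
    freq = np.where(tot > 0, pix / np.maximum(tot, 1), 0.0)
    median_freq = np.median(freq[freq > 0])
    return np.where(freq > 0, median_freq / np.maximum(freq, 1e-12), 0.0)
```

Dominant classes end up with the smallest weights, which is exactly the balancing effect described above.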
With the true class distribution reflected in the loss function, we can train our model in a more balanced way that preserves diversity and adjusts the output prediction to be closer to the real distribution of our data. Two-phase training alleviates the problem of overly suppressing the dominant classes and yields much better results. [**Slice Sequence Training.**]{} We avoid sampling empty sequences (in which all slices are normal brain tissue) in the first training phase to prevent the model from getting trapped in a local minimum, and apply the two-phase training scheme to slice sequence learning. For training the convolutional LSTM, we adopt the orthogonal initialization [@saxe2013exact] to mitigate the vanishing gradient issue. For the CamVid dataset, we use a batch size of 5 and a sequence length of 3. For the BRATS-2015 dataset, we use a batch size of 3 and a sequence length of 3. The initial learning rate is $10^{-4}$ and we use the Adam optimizer for training the network. ![image](visual.pdf){height="0.6\linewidth"} \[fig:onecol\] Baseline -------- The works most relevant to our method are kU-Net [@kUNet] and U-Net [@UNet]. Both models achieve state-of-the-art results in 3D biomedical image segmentation. However, kU-Net was not originally designed for brain tumor segmentation, and its source code is not publicly available. Therefore, we compare our method with U-Net, which shows competitive performance with kU-Net. The original implementation of U-Net does not adopt batch normalization; however, we find that without it the model cannot converge when trained on the BRATS-2015 dataset. Thus, we re-implement the model in Tensorflow [@tensorflow2015-whitepaper] and incorporate a batch normalization layer before every non-linearity in the contracting and expansive paths of the U-Net model. We use orthogonal initialization and set the batch size to 10. The input for U-Net consists of 4-channel MRI slices that stack the four modalities into different channels. 
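The orthogonal initialization used above can be sketched with a QR decomposition (an illustrative numpy version; deep learning frameworks ship their own variants of this initializer):

```python
import numpy as np

def orthogonal_init(shape, gain=1.0, rng=None):
    """Orthogonal weight initialization: draw a Gaussian matrix and
    orthogonalize it with a QR decomposition, so the resulting weight
    matrix has orthonormal rows or columns."""
    rng = np.random.default_rng(rng)
    rows, cols = shape[0], int(np.prod(shape[1:]))
    a = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(a)
    q = q * np.sign(np.diag(r))  # fix signs to make the factorization unique
    if rows < cols:
        q = q.T
    return gain * q[:rows, :cols].reshape(shape)
```

Because the columns (or rows) of the returned matrix are orthonormal, repeated application neither amplifies nor shrinks activations, which helps gradients propagate through deep or recurrent networks.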
We also investigate two-phase training and re-weighting for U-Net.

  --------------- ---------- -------- -------- ---------- -------- -------- ------------- -------- --------
                  Dice                         PPV                          Sensitivity
                  Complete   Core     Enh.     Complete   Core     Enh.     Complete      Core     Enh.
  U-Net [@UNet]   0.8504     0.6174   0.6793   0.8727     0.5296   0.7229   0.8376        0.7876   0.7082
  Ours            0.8522     0.6835   0.6877   0.8741     0.6545   0.7735   0.9117        0.7945   0.7212
  --------------- ---------- -------- -------- ---------- -------- -------- ------------- -------- --------

  : Comparison with U-Net on the BRATS-2015 online evaluation system (Dice, PPV and Sensitivity for the complete, core and enhancing regions).[]{data-label="t2"}

Experimental Results -------------------- We conduct experiments to evaluate the cross-modality convolution (CMC). We compare the performance of our multi-modal encoder and CMC layers with an encoder-decoder model (see Table \[t4\]). The encoder-decoder model refers to a single encoder and decoder network without fusion. The input of the encoder-decoder model stacks the different modalities as different channels, while the input of our MME+CMC is illustrated in Figure \[fig:fig1\](b). Our MME+CMC outperforms the basic encoder-decoder structure by approximately two percent in Mean IU. Currently, the feature maps extracted by the MME are down-sampled to a lower resolution, so a certain amount of spatial information is lost. We conjecture that the performance of our CMC could be further improved by incorporating higher resolution feature maps. We conduct another experiment using multi-resolution feature maps with CMC to verify whether multiple resolutions help. In Table \[t4\], we can observe that MRF significantly improves the performance; e.g., the encoder-decoder with MRF improves over the basic encoder-decoder by 10 percent. We also evaluate our feature multiplication against the feature concatenation used by U-Net, and find that they achieve similar performance. Table \[t1\] shows that MME+CMC+MRF outperforms U-Net (which is similar to our encoder-decoder+MRF) in Mean IU and on almost every label except label 0 (normal tissue). Because the number of normal-tissue pixels is much larger than that of the other labels, the accuracy on label 0 is critical for the Mean IU metric. 
As a result, we use two-phase training to refine our final prediction. After the second phase of training, the accuracy for label 0 is improved and our model produces much cleaner prediction results (cf. Figure \[fig:fig3\]). To verify the generalizability of our sequence learning method, we further perform the slice-to-sequence experiments on the CamVid dataset. The sequence length used in the CamVid experiment is three and the settings for the encoder-decoder are the same as for the BRATS dataset. We incorporate the convolutional LSTM with both the basic encoder-decoder model and the encoder-decoder+MRF. The results show that the convolutional LSTM consistently improves the performance in both settings (cf. Table \[t3\]) and outperforms SegNet [@badrinarayanan2015segnet]. Owing to the ability of the convolutional LSTM to handle long and short term dependencies while preserving spatial information, the dependencies between slices are well learned. Our slice-to-sequence approach also improves the results on the BRATS dataset. In Table \[t1\], we can see that the accuracy on labels 1 to 4 (rows 5 and 6) is much better than with single slice training (rows 3 and 4). After the second training phase, the accuracy on label 0 is improved and the model achieves a Mean IU of 73.52, which outperforms the single slice training model by a large margin. In Table \[t2\], we compare our slice-to-sequence model with U-Net on the BRATS-2015 online system (we upload the five-fold cross validation results to the BRATS-2015 online evaluation system). Two-phase training is applied to both methods, and both are trained for the same number of epochs (without post-processing). Our slice-to-sequence model outperforms U-Net in the different measurements. The visualized results also show that sequential information improves the predictions for detailed structures (cf. Figure \[fig:fig3\]). Experimental results show that the proposed cross-modality convolution can effectively aggregate the information between modalities and seamlessly works with multi-resolution fusion. 
The two components are combined to achieve the best results. The slice-to-sequence architecture further utilizes the sequential dependencies to refine the segmentation results. This end-to-end trainable architecture shows great potential, as it provides consistent improvements in every configuration of our model. Conclusions =========== In this paper, we introduce a new deep encoder-decoder architecture for 3D biomedical image segmentation. We present a new cross-modality convolution to better exploit the multiple modalities, and a sequence learning method to integrate the information from consecutive slices. We jointly optimize sequence learning and cross-modality convolution in an end-to-end manner. Experimental results on the BRATS-2015 dataset demonstrate that our method improves over state-of-the-art methods. Acknowledgement =============== This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grant MOST 104-2622-8-002-002 and MOST 105-2218-E-002 -032, and in part by MediaTek Inc, and grants from NVIDIA and the NVIDIA DGX-1 AI Supercomputer.
--- abstract: 'The general idea to modify Einstein’s field equations by promoting Newton’s constant $G$ to a covariant differential operator $G_\Lambda(\Box_g)$ was apparently outlined for the first time in [@Dvali1; @Dvali2; @Barvinsky1; @Barvinsky2]. The modification itself originates from the quest of finding a mechanism which is able to [*degravitate*]{} the vacuum energy on cosmological scales. We present in this article a precise covariant coupling model which acts like a high-pass filter with a macroscopic distance filter scale $\sqrt{\Lambda}$. In the context of this particular theory of gravity we work out the effective relaxed Einstein equations as well as the effective 1.5 post-Newtonian total near-zone mass of a many body system. We observe that at every step of the computation we recover, in the limit of vanishing modification parameters, the corresponding general relativistic result.' author: - Alain Dirkes title: Degravitation and the relaxed Einstein equations --- Introduction: ============= In this chapter we will introduce the nonlocally modified Einstein field equations and outline how the vacuum energy is effectively degravitated on cosmological scales. In the second chapter we will briefly review the standard relaxed Einstein equations and their solutions in terms of a post-Newtonian expansion. In the third chapter we will work out the effective wave equation and provide a formal solution for a far away wave-zone field point. Chapter four is devoted to the study of the nonlocally modified effective energy-momentum pseudotensor. In the penultimate chapter we combine the results worked out in the previous chapters in order to compute the effective total near-zone mass. It should be noted that each chapter has a separate appendix section in which we present additional computational details. 
The nonlocally modified Einstein equations: ------------------------------------------- It is well known that the essence of Einstein’s field equations [@Einstein1] can be elegantly summarized by John A. Wheeler’s famous words: matter tells spacetime how to curve and spacetime tells matter how to move. They relate indeed, by means of the Einstein curvature tensor $G^{\alpha\beta}$ and the total energy-momentum tensor $T^{\alpha\beta}$, the curvature of spacetime to the distribution of energy within spacetime, $G_{\alpha\beta}\,=\,\frac{8\pi}{c^4} \ G \ T_{\alpha\beta}$. Long before Albert Einstein published his theory of general relativity the relation between matter $\rho$ and the gravitational field $U$ had already been discovered and concisely summarized by the famous Poisson equation $\Delta U=-4\pi G \rho$. This law is purely phenomenological whereas Einstein’s theory provides, via the concept of spacetime curvature, a deeper understanding of the true nature of gravity. One year after the final formulation of the theory of general relativity, Albert Einstein predicted the existence of gravitational radiation. He realized that the linearised weak-field equations admit solutions in the form of gravitational waves travelling at the speed of light. He also recognized that the direct experimental detection of these waves, which are generated by time variations of the mass quadrupole moment of the source, would be extremely challenging because of their remarkably small amplitude [@Einstein2; @Einstein3]. However, gravitational radiation has been detected indirectly since the mid-1970s in the context of binary systems [@Taylor1; @Burgay1; @Stairs1; @Stairs2; @Taylor2]. Precisely one century after Einstein’s theoretical prediction, an international collaboration of scientists (LIGO Scientific Collaboration and Virgo Collaboration) reported the first direct observation of gravitational waves [@LIGO1; @LIGO2; @LIGO3]. 
The wave signal GW150914 was detected independently by the two LIGO detectors and its basic features point to the coalescence of two stellar black holes. Despite the great experimental success of Einstein’s theory, some issues, like the missing mass problem or the dark energy problem, the physical interpretation of black hole curvature singularities or the question of how a possible unification with quantum mechanics could be achieved, remain unsolved. In this regard many potentially viable alternative theories of gravity have been developed over the past decades. The literature on theories of modified gravity is rather vast and we content ourselves here with providing an incomplete list of papers addressing this subject [@Will1; @Esposito1; @Clifton1; @Tsujikawa1; @Woodard1; @BertiBuonannoWill]. In this article we aim to outline a particular model of a nonlocally modified theory of general relativity. The main difference between the standard field equations and the modified theory of gravity is that we promote the gravitational constant to a covariant differential operator, $$\label{NonlocalEinstein} G_{\alpha\beta}\,=\, \frac{8\pi}{c^4} \ G_\Lambda(\Box_g) \ T_{\alpha\beta},$$ where $\Box_g=\nabla^{\alpha}\nabla_\alpha$ is the covariant d’Alembert operator and $\sqrt{\Lambda}$ is the scale at which infrared modifications become important. The general idea of a differential coupling was apparently formulated for the first time in [@Dvali1; @Barvinsky1; @Dvali2; @Barvinsky2] in order to address the cosmological constant problem [@Weinberg1]. However, the idea of a varying coupling constant of gravitation dates back to early works of Dirac [@Dirac1] and Jordan [@Jordan1; @Jordan2]. Inspired by these considerations, Brans and Dicke published in the early sixties a theory in which the gravitational constant is replaced by the reciprocal of a scalar field [@Brans1]. Further developments going in the same direction can be inferred from [@Narlikar1; @Isern1; @Uzan2]. 
Although we are going to present a purely bottom-up constructed model, it is worth mentioning that many theoretical approaches, such as models with extra dimensions, string theory or scalar–tensor models of quintessence [@Peebles1; @Steinhardt1; @Lykkas1], contain a built–in mechanism for a possible time variation of the couplings [@Dvali3; @Dvali4; @Dvali5; @Parikh1; @Damour1; @Uzan1; @Lykkas1]. The main difference between the standard general relativistic theory and our nonlocally modified theory is how the energy-momentum tensor source term is translated into spacetime curvature. In the usual theory of gravity this translation is assured by the gravitational coupling constant $G$, whereas in our modified approach the coupling between the energy source term and the gravitational field will be, in the truest sense of the word, more differentiated. The covariant d’Alembert operator is sensitive to the characteristic wavelength of the gravitating system under consideration, $1/\sqrt{-\Box_g} \sim \lambda_c$. We will see that our precise model is constructed in such a way that the long-distance modification is almost inessential for processes varying in spacetime faster than $1/\sqrt{\Lambda}$ and large for slower phenomena at wavelengths $\sim \sqrt{\Lambda}$ and larger. In this regard spatially extended processes varying very slowly in time, with a small characteristic frequency $\nu_c\sim 1/\lambda_c$, will produce a weaker gravitational field than smaller fast moving objects like solar-system planets or even earth sized objects. The latter possess rather small characteristic wavelengths and will therefore couple to the gravitational field in the usual way. Cosmologically extended processes with a small characteristic frequency will effectively decouple from the gravitational field. 
In this regard it is of course understood that John Wheeler’s famous statement about the mutual influence of matter and spacetime curvature remains essentially true; however, the precise form of the coupling differs according to the dynamical nature of the gravitating object under consideration. Indeed promoting Newton’s constant $G$ to a differential operator $G_{\Lambda}(\Box_g)$ allows for an interpolation between the Planckian value of the gravitational constant and its long distance magnitude [@Barvinsky1; @Barvinsky2], $$\begin{aligned} G_P>G_\Lambda(\Box_g)>G_{L}.\end{aligned}$$ Thus the differential operator acts like a high-pass filter with a macroscopic distance filter scale $\sqrt{\Lambda}$. In this way sources characterized by characteristic wavelengths much smaller than the filter scale ($\lambda_c\ll\sqrt{\Lambda}$) pass undisturbed through the filter and gravitate normally, whereas sources characterized by wavelengths larger than the filter scale are effectively filtered out [@Dvali1; @Dvali2]. In a more quantitative way we can see how this filter mechanism works by introducing the dimensionless parameter $z\,=\,-\Lambda \Box_g\sim \Lambda/\lambda_c^2$, $$\begin{aligned} G(z)\rightarrow G, \ |z|\gg 1 \ (\lambda_c \ll \sqrt{\Lambda}),\quad \quad G(z)\rightarrow 0, \ \ |z|\ll 1 \ (\lambda_c \gg \sqrt{\Lambda}).\end{aligned}$$ For small and fast moving objects with large values of $|z|$ (small characteristic wavelengths) the covariant coupling operator will essentially reduce to Newton’s constant $G$, whereas for slowly varying processes characterized by small values of $|z|$ (large characteristic wavelengths) the coupling will be much smaller. Although the equations of motion $\eqref{NonlocalEinstein}$ are themselves generally covariant, they cannot, for nontrivial $G_\Lambda(\Box_g)$, be represented as a metric variational derivative of a diffeomorphism invariant action. 
The solution of this problem was suggested in [@Barvinsky1; @Barvinsky2; @Modesto1] by viewing equation $\eqref{NonlocalEinstein}$ only as a first, linear in the curvature, approximation for the correct equations of motion. Their covariant action can be constructed as a weak-field expansion in powers of the curvature with nonlocal coefficients. The nonlocally modified action $S_{NL}[g_{\mu\nu}]$ should be derived from the variational equation, $$\label{Variational} \frac{\delta S_{NL}[g_{\mu\nu}]}{\delta g_{\mu\nu}(x)}\,=\,\frac{c^3}{16 \pi G_\Lambda(\Box_g)}\sqrt{-g}\ G^{\mu\nu}+\mathcal{O}[R^2_{\mu\nu}],$$ where we remind the reader that $G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R$ is the Einstein tensor and $R_{\mu\nu}$ the Riemannian curvature tensor. In order to obtain the leading term of $S_{NL}$, the equation above can be functionally integrated with the aid of the covariant curvature expansion technique presented in [@Barvinsky1; @Barvinsky2; @Barvinsky3; @Barvinsky4; @Barvinsky5]. The essence of this technique consists in the possibility to convert noncovariant series in powers of gravitational potentials $h_{\mu\nu}$ into series of spacetime curvature and its derivatives with covariant nonlocal coefficients [@Barvinsky1; @Barvinsky2; @Modesto1]. The resulting nonlocal action generating equation $\eqref{Variational}$ begins with the quadratic order in the curvature, $$S_{NL}[g_{\mu\nu}]\,=\,-\frac{c^3}{16\pi } \int \ d^4x\ \sqrt{-g} \Big\{G^{\mu\nu}\frac{G_\Lambda^{-1}(\Box_g)}{\Box_g}R_{\mu\nu}+\mathcal{O}[R^3_{\mu\nu}]\Big\}.$$ It can be shown that in the simplest case of constant $G(\Box_g)$ the nonlocal action outlined above reproduces the Einstein-Hilbert action [@Barvinsky1; @Barvinsky2]. 
In the context of the cosmological constant problem we aim to present in this article a precise differential coupling model which contains the degravitation properties mentioned above, $$\begin{aligned} G_{\Lambda}(\Box)\,=\,\mathcal{G}_{\kappa}(\Box_g) \ \mathcal{F}_\Lambda(\Box_g),\end{aligned}$$ where $\mathcal{G}_{\kappa}=\frac{G}{1-\sigma e^{\kappa\Box_g}}$ is a purely ultraviolet (UV) modification term and $\mathcal{F}_\Lambda=\frac{\Lambda \Box_g}{\Lambda \Box_g-1}$ is the nonlocal infrared (IR) contribution. We remind the reader that $\Box_g=\nabla^{\alpha}\nabla_\alpha$ is the covariant d’Alembert operator and $G$ the Newtonian coupling constant. In the limit of infinitely large frequencies (vanishing wavelengths) we recover Einstein gravity, as the UV-term $\lim_{z\rightarrow +\infty} \mathcal{G}_{\kappa}(z)=G$ reduces to the Newtonian coupling constant and the IR-term $\lim_{z\rightarrow +\infty}\mathcal{F}_\Lambda=1$ goes to one. The IR-degravitation essentially comes from $\lim_{z\rightarrow 0}\mathcal{F}_\Lambda(z)=0$, while the UV-term $\lim_{z\rightarrow 0}\mathcal{G}_\kappa(z)=\frac{G}{1-\sigma}$ taken alone does not vanish in this limit. The dimensionless UV-parameter $\sigma$ is a priori not fixed; however, in order to make the infrared degravitation mechanism work properly $\sigma$ should be different from one. We will see in the next chapter that we restrict the general theory by assuming that $|\sigma|<1$ is rather small. The second UV-parameter $\kappa$ and the IR-degravitation parameter $\Lambda$ are of dimension length squared. The constant factor $\sqrt{\Lambda}$ is the cosmological scale at which the infrared degravitation process sets in. In the context of the cosmological constant problem this parameter typically needs to be of the order of the horizon size of the present visible Universe, $\sqrt{\Lambda} \sim 10^{30}$ m [@Dvali1; @Barvinsky1; @Barvinsky2; @Dvali2]. 
In addition we assume that $\sqrt{\kappa}\ll\sqrt{\Lambda}$, so that we can perform a formal series-expansion $G_\kappa(z)=\sum_{n=0}^{+\infty}\sigma^n e^{n\frac{\kappa}{\Lambda}z} $ in the UV-regime ($|z|\ll1$). The parameter $\kappa$, although named differently, was encountered in the context of various nonlocally modified theories of gravity which originate from the pursuit of constructing a UV-complete theory of quantum gravity or coming from models of noncommutative geometry [@Modesto1; @Modesto2; @Spallucci1; @Sakellariadou1]. To conclude this subsection we would like to point out that in the limit of vanishing UV parameters and infinitely large IR parameter, $\lim_{\sigma,\kappa\rightarrow 0}\lim_{\Lambda\rightarrow +\infty}G_{\Lambda}(\Box_g)=G$, we recover the usual Einstein field equations. Degravitation of the vacuum energy: ----------------------------------- We intend to briefly outline the basic features of the cosmological constant problem before we return to our precise nonlocal coupling model. In the quest of generating a static universe Einstein originally introduced an additional term on the right hand side of his field equations, the famous cosmological constant. Later he dismissed this term by arguing that it was nothing more than an unnecessary complication to the field equations [@Einstein5; @Weinberg1; @Weinberg2]. However from a microscopic point of view it is not so straightforward to discard such a term, because anything that contributes to the energy density of the vacuum acts just like a cosmological constant. Indeed from a quantum point of view the vacuum is a very complex state in the sense that it is constantly permeated by fluctuating quantum fields of different origins. In accordance with Heisenberg’s energy-time uncertainty principle $\Delta E \, \Delta t\geq \hbar/2$, one important contribution to the vacuum energy comes from the spontaneous creation of virtual particle-antiparticle pairs which annihilate shortly after [@Weinberg1]. 
Although there is some freedom in the precise computation of the vacuum energy, the most reasonable estimates range around a value of $\rho_{th}\approx 10^{111} \ \mathrm{J/m^3}$ [@Carroll1]. Towards the end of the past century two independent research groups, the [*High-Z Supernova Team*]{} and the [*Supernova Cosmology Project*]{}, searched for distant type Ia supernovae in order to determine parameters that were supposed to provide information about the cosmological dynamics of the Universe. The two research groups were able to obtain a deeper understanding of the expansion history of the Universe by observing how the brightness of these supernovae varies with redshift. They initially expected to find signs that the expansion of the Universe is slowing down, as the expansion rate is essentially determined by the energy-momentum density of the Universe. However in 1998 they published their results in two separate papers and both came, independently of each other, to the astonishing result that the opposite is true: the expansion of the Universe is accelerating. The supernovae results in combination with the Cosmic Microwave Background data [@Planck1], interpreted in terms of the Standard Model of Cosmology ($\Lambda$CDM-model), allow for a precise determination of the matter and vacuum energy density parameters of the present Universe: $\Omega_m\approx 0.3$ and $\Omega_\Lambda\approx 0.7$. This corresponds to an observational vacuum energy density of the order of $\rho_{ob}\sim 10^{-9} \ \mathrm{J/m^3}$. Thus the supernova studies have provided direct evidence for a nonzero value of the cosmological constant. These investigations together with the theoretically computed value for the vacuum energy $\rho_{th}$ lead to the famous 120-orders-of-magnitude discrepancy which makes the cosmological constant problem such a glaring embarrassment [@Carroll1], $$\rho_{th} \sim 10^{120} \rho_{ob}.$$ Most efforts in solving this problem have focused on the question why the vacuum energy is so small. 
However, since nobody has ever measured the energy of the vacuum by any means other than gravity, perhaps the right question to ask is why the vacuum energy gravitates so little [@Dvali1; @Barvinsky1; @Dvali2; @Barvinsky2]. In this regard our aim is not to question the theoretically computed value of the vacuum energy density, but we will rather try to see if we can find a mechanism by which the vacuum energy is effectively degravitated at cosmological scales. In order to demonstrate how the degravitation mechanism works in the context of our precise model we introduce an effective but very illustrative macroscopic description of the vacuum energy on cosmological scales. In good agreement with cosmological observations [@Planck1], we will assume that the Universe is essentially flat, so that the differential coupling operator can be approximated by its flat spacetime counterpart. We further assume that the quantum vacuum energy can be modelled, on macroscopic scales, by an almost time-independent Lorentz-invariant energy process, $\langle T_{\alpha\beta}\rangle_{v}\,\simeq\, T_v \ \cos(\textbf{k}_c \cdot \textbf{x}) \ \eta_{\alpha\beta}$, where $T_v$ is the average vacuum energy density and $\textbf{k}_c=1/\mathbf{\lambda}_c$ is the three-dimensional characteristic wave-vector $(|\mathbf{\lambda}_c|\gg 1)$. Moreover we suppose that the vacuum energy is homogeneously distributed throughout the whole universe so that the components of the wave-vector $k_x=k_y=k_z\,\sim 1/\lambda_c$ are the same in all three spatial directions, $$G_\Lambda(\Box) \ \langle T_{\alpha\beta} \rangle_{v}\, =\,\mathcal{G}(\kappa/\lambda_c^2) \ \mathcal{F}(\Lambda/\lambda_c^2) \ \langle T_{\alpha\beta} \rangle_{v},$$ where $\mathcal{G}(\kappa/\lambda_c^2)=\frac{G}{1-\sigma e^{-\kappa/\lambda_c^2} }$ and $\mathcal{F}(\Lambda/\lambda_c^2)=\frac{\Lambda /\lambda_c^2}{1+\ \Lambda/\lambda_c^2}$ [@Kragler1]. 
We observe that energy processes with a characteristic wavelength much larger than the macroscopic filter scale $\lambda_c \gg \sqrt{\Lambda}$ effectively decouple from the gravitational field, $\lim_{\lambda_c \rightarrow+\infty} G_\Lambda(\Box) \langle T_{\alpha\beta} \rangle_{v}\,=\, 0$. In the extreme but unlikely limit of energy processes with infinitely large frequencies, $\lim_{\lambda_c\rightarrow 0}\mathcal{G}(\kappa/\lambda^2_c)=G$, $\lim_{\lambda_c\rightarrow 0}\mathcal{F}(\Lambda/\lambda^2_c)=1$, we would recover the Newtonian coupling. ![\[Degravitation1\]The function $\frac{G_\Lambda(\Box)}{G}=\frac{\mathcal{G}(\kappa/\lambda_c^2)\mathcal{F}(\Lambda/\lambda^2_c)}{G}$ is plotted against the characteristic wavelength $\lambda_c$ (m) for $\sigma=2\times 10^{-4}$, $\kappa=5\times 10^{-3}$ m$^2$ and $\Lambda=10^{60}$ m$^2$. A strong degravitational effect is observed for energy processes with a characteristic wavelength larger or equal to $\lambda_c=10^{29}$ m.](Degravitation1.pdf){width="8cm" height="4.2cm"} This situation is illustrated in FIG. \[Degravitation1\], where we plotted the function $\big[\frac{G_\Lambda(\Box)}{G}\langle T_{\alpha\beta}\rangle_v\big][\langle T_{\alpha\beta}\rangle_v]^{-1}=\frac{\mathcal{G}(\kappa/\lambda_c^2)\mathcal{F}(\Lambda/\lambda^2_c)}{G}$ for the following UV and IR parameters, $\sigma=2\times 10^{-4}$, $\kappa=5\times 10^{-3}$ m$^2$ and $\Lambda=10^{60}$ m$^2$, against the characteristic wavelength. We infer from FIG. \[Degravitation1\] that in the context of our vacuum energy model we have for small characteristic wavelengths $G_\Lambda(\Box)\sim G$, while for large wavelengths of the order $\lambda_c=10^{29}$ m we observe a strong degravitational effect. In the remaining chapters of this article we will investigate how much the relaxed Einstein equations are affected by the nonlocal UV-term $\mathcal{G}_\kappa(\Box_g)$. 
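The degravitation behaviour plotted in FIG. \[Degravitation1\] is easy to reproduce numerically. The following sketch (ours, for illustration only) evaluates $\mathcal{G}(\kappa/\lambda_c^2)\,\mathcal{F}(\Lambda/\lambda_c^2)/G$ with the parameter values quoted above:

```python
import numpy as np

# Illustrative parameter values from FIG. [Degravitation1]
SIGMA = 2e-4          # dimensionless UV parameter
KAPPA = 5e-3          # UV parameter, m^2
LAMBDA = 1e60         # IR filter scale squared, m^2

def coupling_ratio(lam_c):
    """G_Lambda(Box)/G for the macroscopic vacuum-energy model:
    UV factor G(kappa/lam^2) times IR filter F(Lambda/lam^2)."""
    uv = 1.0 / (1.0 - SIGMA * np.exp(-KAPPA / lam_c**2))
    ir = (LAMBDA / lam_c**2) / (1.0 + LAMBDA / lam_c**2)
    return uv * ir

print(coupling_ratio(1.0))    # short wavelengths gravitate normally (ratio ~ 1)
print(coupling_ratio(1e31))   # cosmological wavelengths are strongly degravitated
```

For $\lambda_c=1$ m the ratio is indistinguishable from one, while for $\lambda_c=10^{31}$ m the IR filter suppresses the coupling by roughly two orders of magnitude, in line with the figure.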
In particular, we will examine in the penultimate chapter to what extent the total mass of an N-body system deviates from the purely general relativistic result. However, before we embark on these computations we will briefly review the standard relaxed Einstein equations and their solutions in the context of the post-Newtonian theory. The relaxed Einstein equations: =============================== The purpose of this chapter is to work out the relaxed Einstein equations and related quantities by using the very elegant Landau-Lifshitz formulation of the Einstein field equations [@LandauLifshitz; @MisnerThroneWheeler; @Will2; @PatiWill1; @Blanchet1; @Buonanno1; @Poisson; @PatiWill2], $$\partial_{\mu\nu} H^{\alpha\mu\beta\nu}\,=\, \frac{16\pi G}{c^4}(-g)\ \big(T^{\alpha\beta}+t^{\alpha\beta}_{LL}\big),$$ where $H^{\alpha\mu\beta\nu}\,\equiv\,\mathfrak{g}^{\alpha\beta} \mathfrak{g}^{\mu\nu}-\mathfrak{g}^{\alpha\nu}\mathfrak{g}^{\beta\mu}$ is a tensor density which possesses the same symmetries as the Riemann tensor. In the Landau-Lifshitz formulation of gravity the main variables are not the components of the metric tensor $g_{\alpha\beta}$ but those of the gothic inverse metric, $\mathfrak{g}^{\alpha\beta}\,\equiv\, \sqrt{-g} \ g^{\alpha\beta}$, where $g^{\alpha\beta}$ is the inverse metric and $g$ the metric determinant [@LandauLifshitz; @MisnerThroneWheeler; @Will1; @PatiWill1; @PatiWill2; @Blanchet1; @Buonanno1; @Poisson]. 
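The Riemann-like symmetries of $H^{\alpha\mu\beta\nu}$ can be checked numerically. A small sketch (ours), using a random symmetric matrix as a stand-in for the gothic inverse metric, verifies antisymmetry in each index pair and symmetry under pair exchange:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4))
g = a + a.T  # symmetric stand-in for the gothic inverse metric

# H^{alpha mu beta nu} = g^{ab} g^{mn} - g^{an} g^{bm}
H = np.einsum('ab,mn->ambn', g, g) - np.einsum('an,bm->ambn', g, g)

# Antisymmetric in the first pair (alpha <-> mu) and in the last pair
# (beta <-> nu), symmetric under exchange of the two pairs:
assert np.allclose(H, -H.transpose(1, 0, 2, 3))
assert np.allclose(H, -H.transpose(0, 1, 3, 2))
assert np.allclose(H, H.transpose(2, 3, 0, 1))
```

The antisymmetry in the last pair is precisely what makes $\partial_{\beta\mu\nu} H^{\alpha\mu\beta\nu}=0$ hold identically below.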
$T^{\alpha\beta}$ is the energy-momentum tensor of the matter source term, and the Landau-Lifshitz pseudotensor, $$\begin{split} (-g)t^{\alpha\beta}_{LL}\,=\,& \frac{c^4}{16 \pi G} \big[\partial_\lambda \mathfrak{g}^{\alpha\beta} \partial_\mu \mathfrak{g}^{\lambda\mu}-\partial_\lambda \mathfrak{g}^{\alpha\lambda}\partial_\mu \mathfrak{g}^{\beta\mu}+ \frac{1}{2} g^{\alpha\beta} g_{\lambda\mu} \partial_\rho \mathfrak{g}^{\lambda\nu}\partial_\nu \mathfrak{g}^{\mu\rho} -g^{\alpha\lambda}g_{\mu\nu} \partial_\rho \mathfrak{g}^{\beta\nu}\partial_\lambda \mathfrak{g}^{\mu\rho}- g^{\beta\lambda}g_{\mu\nu}\partial_\rho \mathfrak{g}^{\alpha\nu}\partial_\lambda \mathfrak{g}^{\mu\rho}\\ &\quad\quad\quad\quad\quad +g_{\lambda\mu}g^{\nu\rho}\partial_\nu \mathfrak{g}^{\alpha\lambda} \partial_\rho \mathfrak{g}^{\beta\mu}+ \frac{1}{8} (2g^{\alpha\lambda} g^{\beta \mu}-g^{\alpha\beta}g^{\lambda\mu})(2 g_{\nu\rho} g_{\sigma\tau}-g_{\rho\sigma}g_{\nu\tau}) \partial_\lambda \mathfrak{g}^{\nu\tau}\partial_\mu \mathfrak{g}^{\rho\sigma}\big], \end{split}$$ can be interpreted as an energy-momentum (pseudo)tensor for the gravitational field. Although this interpretation should not be taken literally (after all, it is based on a very specific formulation of the Einstein field equations), it is supported by the fact that $(-g)t^{\alpha\beta}_{LL}$ is quadratic in $\partial_\mu \mathfrak{g}^{\alpha\beta}$, just as the energy-momentum tensor of the electromagnetic field is quadratic in the derivative of the electromagnetic potential $\partial_\mu A^{\alpha}$. By virtue of the antisymmetry of $H^{\alpha\mu\beta\nu}$ in the last pair of indices, the equation $\partial_{\beta\mu\nu} H^{\alpha\mu\beta\nu}=0$ holds as an identity. This, together with the field equation of the Landau-Lifshitz formulation of general relativity, implies that $\partial_\beta \big[(-g)\big(T^{\alpha\beta}+t^{\alpha\beta}_{LL}\big)\big]=0$.
These are conservation equations for the total energy-momentum pseudotensor, expressed in terms of the partial-derivative operator. The latter are equivalent to the energy-momentum conservation $\nabla_\beta T^{\alpha\beta}=0$ involving only the matter energy-momentum tensor and the covariant derivative operator. However, there is an important conceptual difference between the two conservation relations. $\nabla_\beta T^{\alpha\beta}=0$ is a direct consequence of the local conservation of energy-momentum, as observed in a local inertial frame, and is valid whether or not general relativity is the correct theory of gravity. The second conservation equation is a consequence of Einstein’s field equations. If Einstein’s equations are satisfied, then either equation may be adopted to express energy-momentum conservation, and the two statements are equivalent in this sense. It is advantageous to impose the four conditions $\partial_\beta \mathfrak{g}^{\alpha\beta}=0$ on the gothic inverse metric, known as the harmonic coordinate conditions. It is also useful to introduce the gravitational potentials defined by $h^{\alpha\beta}:=\eta^{\alpha\beta}-\mathfrak{g}^{\alpha\beta}$, where $\eta^{\alpha\beta}=\mathrm{diag}(-1,+1,+1,+1)$ is the Minkowski metric expressed in Lorentzian coordinates [@Blanchet1; @Blanchet3; @Blanchet4; @Will2; @PatiWill1; @PatiWill2; @Buonanno1]. In terms of the potentials the harmonic coordinate conditions read $\partial_\beta h^{\alpha\beta}=0$, and in this context they are usually referred to as the harmonic gauge conditions. It is straightforward to verify that the left-hand side of the Landau-Lifshitz formulation of the Einstein field equations reduces to $\partial_{\mu\nu}H^{\alpha\mu\beta\nu}=-\Box h^{\alpha\beta}+h^{\mu\nu}\partial_{\mu\nu}h^{\alpha\beta}-\partial_\mu h^{\alpha\nu}\partial_\nu h^{\beta\mu}$, where $\Box=\eta^{\mu\nu}\partial_{\mu\nu}$ is the flat-spacetime d’Alembert operator.
The right-hand side of the field equations remains essentially unchanged, but the harmonic conditions do slightly simplify the form of the Landau-Lifshitz pseudotensor, namely the first two terms in $(-g)t^{\alpha\beta}_{LL}$ vanish. Isolating the wave operator on the left-hand side and putting the remaining terms on the other side gives rise to the formal wave equation [@Poisson; @Will2; @PatiWill1; @PatiWill2; @Blanchet1; @Blanchet3; @Blanchet4; @Maggiore1; @Buonanno1], $$\Box h^{\alpha\beta}\,=\,-\frac{16 \pi G}{c^4} \tau^{\alpha\beta},$$ where $\tau^{\alpha\beta}:=\tau^{\alpha\beta}_m+\tau^{\alpha\beta}_{LL}+\tau^{\alpha\beta}_H$ is defined as the effective energy-momentum pseudotensor, composed of a matter contribution $\tau_m^{\alpha\beta}=(-g) T^{\alpha\beta}$, the Landau-Lifshitz contribution $\tau^{\alpha\beta}_{LL}=(-g)t^{\alpha\beta}_{LL}$ and the harmonic gauge contribution, $\tau^{\alpha\beta}_H=(-g)t^{\alpha\beta}_H=\frac{c^4}{16\pi G} \big(\partial_\mu h^{\alpha\nu}\partial_\nu h^{\beta\mu}-h^{\mu\nu}\partial_{\mu\nu}h^{\alpha\beta}\big)$. It is easy to verify that, because of the harmonic gauge condition, this additional contribution is separately conserved, $\partial_\beta\big[(-g)t^{\alpha\beta}_H\big]=0$. This, together with the conservation relation introduced previously, leads to a conservation relation for the effective energy-momentum tensor, $\partial_\beta \tau^{\alpha\beta}=0$. It should be noticed that so far no approximations have been introduced, so that the wave equation, together with the harmonic gauge conditions, is an exact formulation of the Einstein field equations. It is the union of these two sets of equations that is equivalent to the standard Einstein equations outlined in the previous chapter. The wave equation taken by itself, independently of the harmonic gauge condition or the conservation condition, is known as the relaxed Einstein field equation [@Poisson; @Will2; @PatiWill1; @PatiWill2].
It is well known that the wave equation can be solved by the following ansatz, $h^{\alpha\beta}(x)\,=\,-\frac{16\pi G}{c^4} \int d^4y \ G(x,y) \ \tau^{\alpha\beta}(y)$, where $\Box G(x,y)= \delta(x-y)$ is the condition for the Green function, $x=(ct,\textbf{x})$ is a field point and $y=(ct',\textbf{y})$ a source point. Inserting the retarded Green function solution $G(x,y)=\frac{-1}{4\pi}\frac{\delta(ct-ct'-|\textbf{x}-\textbf{y}|)}{|\textbf{x}-\textbf{y}|}$ into the ansatz outlined above and integrating over $y^0$ yields the formal retarded solution to the gravitational wave equation [@Poisson; @Will2; @PatiWill1; @PatiWill2; @Blanchet1; @Blanchet3; @Blanchet4; @Maggiore1; @Buonanno1; @Maggiore2], $$h^{\alpha\beta}(t,\textbf{x})\,=\,\frac{4G}{c^4} \int d\textbf{y} \ \frac{\tau^{\alpha\beta}(x^0-|\textbf{x}-\textbf{y}|,\textbf{y})}{|\textbf{x}-\textbf{y}|},$$ where the domain of integration extends over the past light cone of the field point $x=(ct,\textbf{x})$. In order to work out this integral we need to present the important notions of near and wave zones in the general context of the wave equation and its formal solution. To do so we need to introduce the characteristic length scale of the source $r_c$, which is defined such that the matter variables vanish outside a sphere of radius $r_c$. The characteristic time scale $t_c$ is the time required for noticeable changes to occur within the source. These two important scaling quantities are related through the characteristic velocity within the source, $v_c=\frac{r_c}{t_c}$. The characteristic wavelength of the radiation $\lambda_c$ produced by the source is directly related to the source’s characteristic time scale, $\lambda_c=c\ t_c$.
This finally allows us to define the near and wave zone domains [@Poisson; @Will2; @PatiWill1; @PatiWill2; @Blanchet1; @Maggiore1], $$\text{near-zone:} \quad r\,\ll\, \lambda_c,\quad\ \ \text{wave-zone:} \quad r\,\gg\, \lambda_c.$$ Thus the near zone is the region of three-dimensional space in which $r=|\textbf{x}|$ is small compared with a characteristic wavelength $\lambda_c$, while the wave zone is the region in which $r$ is large compared with this length scale. We introduce the arbitrarily selected radius $\mathcal{R}\lesssim\lambda_c$ to define the near-zone domain $\mathcal{M}:|\textbf{x}|<\mathcal{R}$. The near-zone and wave-zone domains ($\mathcal{W}:|\textbf{x}|>\mathcal{R}$) join together to form the complete light cone of some field point $y$, $\mathcal{C}(y)=\mathcal{M}(y)+\mathcal{W}(y)$. Although $\mathcal{R}$ is typically of the same order of magnitude as the characteristic wavelength of the gravitational radiation, it was shown in [@Poisson; @Will2; @PatiWill1; @PatiWill2] that the precise choice of $\mathcal{R}$ is irrelevant because we observe a mutual cancellation between terms proportional to $\mathcal{R}$ coming from the near and wave zones. While the gravitational potentials originating from the two different integration domains will individually depend on the cutoff radius, their sum is guaranteed to be $\mathcal{R}$-independent and we will therefore discard such terms in the remaining part of this article [@Poisson; @Will2; @PatiWill1]. The gravitational potentials behave very differently in the two zones: in the near zone the difference between the retarded time $\tau=t-r/c$ and $t$ is small, so that the field retardation is unimportant. In the wave zone the difference between $\tau$ and $t$ is large and time derivatives are comparable to spatial derivatives.
The post-Minkowskian theory is an approximation method that will not only reproduce the predictions of Newtonian theory but can also be pushed systematically to higher and higher order to produce an increasingly accurate description of a weak gravitational field $||h^{\alpha\beta}||<1$. In this sense the metric of the spacetime will be constructed by considering a formal expansion of the form $h^{\alpha\beta}=Gk_1^{\alpha\beta}+G^2k_2^{\alpha\beta}+G^3k_3^{\alpha\beta}+...$ for the gravitational potentials. Such an approximation in powers of $G$ is known as the post-Minkowskian expansion, with the aim of obtaining, at least in a useful portion of spacetime, an acceptable approximation to the true metric [@Poisson]. The spacetime deviates only moderately from Minkowski spacetime and we can construct the spacetime metric $g_{\alpha\beta}$ from the gravitational potentials, $$g_{\alpha\beta}=\eta_{\alpha\beta}+h_{\alpha\beta}-\frac{1}{2}h\eta_{\alpha\beta}+h_{\alpha\mu}h^{\mu}_\beta-\frac{1}{2}hh_{\alpha\beta}+\Big(\frac{1}{8}h^2-\frac{1}{4}h^{\mu\nu}h_{\mu\nu}\Big)\eta_{\alpha\beta}+\mathcal{O}(G^3),$$ where the indices on $h^{\alpha\beta}$ are lowered with the Minkowski metric, $h_{\alpha\beta}=\eta_{\alpha\mu}\eta_{\beta\nu}h^{\mu\nu}$, and $h=\eta_{\mu\nu}h^{\mu\nu}$. The method is actually so successful that it can handle fields that are not so weak at all and can therefore be employed for a description of gravity at a safe distance from neutron stars or even binary black-hole systems. The link between the spacetime metric $g_{\alpha\beta}$ and the gravitational potentials is provided by the gothic inverse metric $\mathfrak{g}^{\alpha\beta}=\eta^{\alpha\beta}-h^{\alpha\beta}$ [@Poisson; @Will2; @PatiWill1; @PatiWill2; @Blanchet1; @Maggiore1; @Buonanno1] and the metric determinant is given by $(-g)=1-h+\frac{1}{2}h^2-\frac{1}{2}h^{\mu\nu}h_{\mu\nu}+\mathcal{O}(G^3)$.
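The second-order series for $g_{\alpha\beta}$ can be checked against the exact inversion of $\mathfrak{g}^{\alpha\beta}=\eta^{\alpha\beta}-h^{\alpha\beta}$; a minimal numerical sketch with a small, randomly chosen symmetric potential (the residual should be cubic in $h$):

```python
import numpy as np

# Consistency sketch of the second-order post-Minkowskian series: invert
# gothic^{ab} = eta^{ab} - h^{ab} exactly and compare g_{ab} with the series.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(1)
S = rng.standard_normal((4, 4))
h_up = 1e-4 * (S + S.T)                  # small symmetric potential h^{ab}

gothic = eta - h_up                      # gothic g^{ab} = sqrt(-g) g^{ab}
g_det = np.linalg.det(gothic)            # det(gothic^{ab}) equals the determinant g
g_up = gothic / np.sqrt(-g_det)          # exact inverse metric g^{ab}
g_exact = np.linalg.inv(g_up)            # exact g_{ab}

h_dn = eta @ h_up @ eta                  # h_{ab}, indices lowered with eta
h_tr = np.einsum('ab,ab->', eta, h_up)   # h = eta_{ab} h^{ab}
hh = np.einsum('ab,ab->', h_up, h_dn)    # h^{mn} h_{mn}
g_series = (eta + h_dn - 0.5 * h_tr * eta
            + h_dn @ eta @ h_dn - 0.5 * h_tr * h_dn
            + (h_tr**2 / 8.0 - hh / 4.0) * eta)

print(np.max(np.abs(g_exact - g_series)))  # residual is O(h^3), i.e. tiny
```

Here $\det(\mathfrak{g}^{\alpha\beta})=g$ follows from $\det(\sqrt{-g}\,g^{\alpha\beta})=(-g)^2/g$, which is what makes the exact reconstruction above possible.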
The post-Minkowskian expansion of the metric, adjusted to the context of our modified theory of gravity, will be frequently used in the next chapters. In what follows we will assume that the matter distribution of the source is deeply situated within the near zone, $r_c\ll \lambda_c$, where we recall that $r_c$ is the characteristic length scale of the source. It is straightforward to observe that this condition is tantamount to a slow-motion condition $v_c\ll c$ for the matter source term. The post-Newtonian theory (pN) is an approximation method to the theory of general relativity that incorporates both the weak-field and slow-motion assumptions. The dimensionless expansion parameter in this approximation procedure is $(Gm_c)/(c^2r_c)\sim v^2_c/c^2$, where $m_c$ is the characteristic mass of the system under consideration. In the context of this article, we are primarily interested in the near-zone piece of the gravitational potentials $h^{ab}_{\mathcal{N}}$. It can be shown [@Poisson; @Will2; @PatiWill1; @PatiWill2] that the formal near-zone solution to the wave equation, for a far-away wave-zone field point ($|\textbf{x}|\gg\lambda_c$), can be rephrased in the following way, $$h^{ab}_\mathcal{N}=\frac{4G}{c^4r}\sum_{l=0}^{+\infty}\frac{n_L}{l!}\Big(\frac{d}{du}\Big)^l\int_{\mathcal{M}} d\textbf{y} \ \tau^{ab}(u,\textbf{y}) \ y^L+\mathcal{O}(r^{-2}),$$ by expanding the ratio $\frac{\tau^{\alpha\beta}(t-|\textbf{x}-\textbf{y}|/c,\textbf{y})}{|\textbf{x}-\textbf{y}|}=\frac{1}{r} \ \sum_{l=0}^\infty \frac{y^L}{l!} \ n_L \ \Big(\frac{\partial}{\partial u}\Big)^l \ \tau^{\alpha\beta}(u,\textbf{y})+\mathcal{O}(1/r^2)$ in terms of the retarded time $u=c\tau$ and the unit radial vector $\textbf{n}=\frac{\textbf{x}}{r}$. The far away wave zone is characterized by the fact that only leading order terms $1/r$ need to be retained and $y^Ln_L=y^{j_1}\cdots y^{j_l}n_{j_1}\cdots n_{j_l}$.
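At leading order in $1/r$ the expansion above is an ordinary Taylor series in the retardation offset $\mathbf{n}\cdot\mathbf{y}$, since $|\mathbf{x}-\mathbf{y}|\simeq r-\mathbf{n}\cdot\mathbf{y}$. A minimal sketch with a toy source profile $\tau(u)=\sin u$ (our illustrative choice, in $c=1$ units):

```python
import numpy as np
from math import factorial

# Sketch of the far-away wave-zone expansion: the retardation across the source
# reduces to a Taylor series in n.y because |x - y| ~ r - n.y for |y| << r.
r = 1.0e3
n = np.array([0.0, 0.0, 1.0])           # unit radial vector
y = np.array([0.2, -0.1, 0.3])          # source point with |y| << r
x = r * n
t = 5.0                                  # field time (c = 1 units)

exact = np.sin(t - np.linalg.norm(x - y)) / np.linalg.norm(x - y)

u = t - r                                # retarded argument at the origin
ny = float(n @ y)
derivs = [np.sin, np.cos, lambda s: -np.sin(s), lambda s: -np.cos(s)]
series = sum(ny**l / factorial(l) * derivs[l](u) for l in range(4)) / r

print(abs(exact - series))  # small: truncation plus O(1/r^2) corrections
```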
We will return to this expansion in chapter three, where we outline a similar computation in the framework of the nonlocally modified theory of gravity presented in the introduction of the present article. We model the material source term by a collection of N fluid balls with negligible pressure, $T^{\alpha \beta}=\rho\ u^\alpha u^\beta$, where $\rho\big(m_A,\textbf{r}_A(t)\big)$ is the energy density and $u^\alpha=\gamma_A (c,\textbf{v}_A)$ is the relativistic four-velocity of the fluid ball with mass $m_A$ and individual trajectory $\textbf{r}_A(t)$. Further details on this important quantity can be found in the appendix-section related to this chapter. The slow-motion condition gives rise to a hierarchy between the components of the energy-momentum tensor, $T^{0b}/T^{00}\sim v_c/c$ and $T^{ab}/T^{00}\sim (v_c/c)^2$, where we used the approximate relations $T^{00}\approx \rho\ c^2$, $T^{0b}\approx\rho\ v^b c$, $T^{ab}\approx \rho\ v^a v^b$ and $\textbf{v}$ is the three-dimensional velocity vector of the fluid balls. A glance at the relaxed Einstein equations reveals that this hierarchy is inherited by the gravitational potentials, $h^{0b}/h^{00}\sim v_c/c$, $h^{ab}/h^{00}\sim (v_c/c)^2$. Taking into account the factor $c^{-4}$ in the field equations, we have for the potentials $h^{00}=\mathcal{O}(c^{-2})$, $h^{0b}=\mathcal{O}(c^{-3})$ and $h^{ab}=\mathcal{O}(c^{-4})$, where $c^{-2}$ is a post-Newtonian expansion parameter. We recall that this notation serves only as a powerful mnemonic to judge the importance of various terms inside a post-Newtonian expansion, while the real dimensionless expansion parameter is rather $(Gm_c)/(c^2r_c)\sim v^2_c/c^2$. The precise shape of the 1.5 post-Newtonian time-time matter component of the energy-momentum pseudotensor, $$c^{-2}(-g)T^{00}=\sum_A m_A\ \Big[1+\frac{1}{c^2}\Big(\frac{\textbf{v}^2}{2}+3U\Big)\Big]\ \delta(\textbf{x}-\textbf{r}_A)+\mathcal{O}(c^{-4}),$$ is worked out in the appendix-section related to this chapter.
$U$ is the Newtonian potential of an N-body system with point masses $m_A$, and $h^{00}=\frac{4}{c^2}U+\mathcal{O}(c^{-4})$ is the corresponding gravitational potential at the 1.5 post-Newtonian order of accuracy. Another important relation, which will be frequently used in chapter five, is the time-time component of the Landau-Lifshitz pseudotensor $\tau^{00}_{LL}=(-g)t^{00}_{LL}$, worked out to the required degree of accuracy [@Poisson; @Will2; @PatiWill1; @PatiWill2]. Here again we will see that in the context of our modified theory of gravity we need to adapt the result, $$c^{-2}(-g)t^{00}_{LL}\,=\,-\frac{7}{8\pi G c^2}\ \big[\partial_pU\partial^pU\big]+\mathcal{O}(c^{-4}).$$ Further computational details regarding the derivation of this quantity can be inferred from the appendix-section related to this chapter. Using the information gathered previously, we see that the harmonic gauge contribution is beyond the 1.5 post-Newtonian order of accuracy, $c^{-2}\tau_H^{00}=\mathcal{O}(c^{-4})$. To conclude this chapter we introduce the total mass $M_V=c^{-2}\int_Vd\textbf{x}\ (-g)(T^{00}+t_{LL}^{00})$ contained in a three-dimensional region $V$ bounded by the surface $S$. The latter is a direct consequence of energy-momentum conservation and we will return to this integral relation in chapter five. Finally, we would like to mention that the approach which we use to integrate the wave equation is usually referred to as the Direct Integration of the Relaxed Einstein equations, or DIRE approach for short. An alternative method, based on a formal multipolar expansion of the potential outside the source, was nicely outlined in [@Blanchet1; @Blanchet5; @BlanchetDamour1].
Additional information on these and related issues, together with applications to binary systems, can be found in a vast number of excellent articles [@BlanchetDamourIyerWillWiseman; @DamourJaranowskiSchaefer1; @BlanchetDamourIyer; @BlanchetDamourEsposito-FareseIyer1; @BlanchetDamourEsposito-FareseIyer2; @DamourJaranowskiSchaefer2]. The modified relaxed Einstein equations: ======================================== The main objective of this section is to work out and to solve the nonlocally modified wave equation. The latter arises from the quest to rewrite the relaxed Einstein equation, containing the effective energy-momentum tensor $\mathcal{T}^{\alpha\beta}= \frac{G_\Lambda(\Box)}{G} \ T^{\alpha\beta}$, in such a way that it can be solved most easily. This goal can be achieved by spreading out some of the differential complexity inside the effective energy-momentum tensor to both sides of the differential equation. We will see that the distribution of nonlocality between both sides of the wave equation will be done in such a way that the gravitational potentials $h^{\alpha\beta}$ can be evaluated similarly to the purely general relativistic case. However, before we come to the actual derivation of the modified wave equation, we first need to carefully prepare the ground by setting in place a couple of important preliminary results. The effective energy-momentum tensor: ------------------------------------- The major difference between our nonlocally modified theory and the standard theory of gravity lies in the way in which the energy (matter or field energy) couples to the gravitational field. In the purely Einsteinian theory the (time-dependent) distribution of energy is translated via the constant coupling $G$ into spacetime curvature. We saw in the introduction that in the case of the modified theory the coupling strength itself varies according to the characteristic wavelength $\lambda_c$ of the source term under consideration.
From a strictly formal point of view, however, the cosmologically modified field equations can be formulated in a very similar way to Einstein’s field equations, $$G^{\alpha\beta}\,=\, \frac{8\pi}{ c^4}\ G \ \mathcal{T}^{\alpha\beta}.$$ $G^{\alpha\beta}$ is the usual Einstein tensor and $\mathcal{T}^{\alpha\beta}$ is the modified energy-momentum tensor outlined in the introduction of this chapter. We see that this formulation is possible only because the nonlocal modification can be put entirely into the source term $\mathcal{T}^{\alpha\beta}$, leaving in this way the geometry ($G^{\alpha\beta}$) unaffected. In this regard we can easily see that, by virtue of the contracted Bianchi identities $\nabla_\beta G^{\alpha\beta}=0$, the modified energy-momentum tensor is conserved, $\nabla_\beta \mathcal{T}^{\alpha\beta}=0$. This allows us to use the Landau-Lifshitz formalism introduced previously by simply replacing the energy-momentum tensor $T^{\alpha\beta}$ inside the relaxed Einstein field equations by its nonlocal counterpart, $$\Box h^{\alpha\beta}\,=\,-\frac{16\pi G}{c^4} \ (-g) \ \Big[\mathcal{T}^{\alpha\beta}+t^{\alpha\beta}_{LL}+t^{\alpha\beta}_H\Big].$$ Instead of trying to integrate the nonlocally modified relaxed Einstein field equations by brute force, we rather intend to bring part of the differential complexity, stored inside the effective energy-momentum tensor, to the left-hand side of the field equation. These efforts will finally bring us to an equation that will be more convenient to solve. Loosely speaking, we aim to separate, inside the nonlocal covariant differential coupling, the flat spacetime contribution from the curved one. In this way we can rephrase the relaxed Einstein equations in a form that we will eventually call the nonlocally modified wave equation or effective relaxed Einstein equation.
This new equation will have the advantage that the nonlocal complexity will be distributed to both sides of the equation, and hence it will be easier to work out its solution to the desired post-Newtonian order of accuracy. In this context we aim to rewrite the covariant d’Alembert operator $\Box_g$ in terms of a flat spacetime contribution $\Box$ plus an additional piece $w$ depending on the gravitational potentials $h^{\alpha\beta}$. The starting point for the splitting of the differential operator $\Box_g=\nabla_\alpha\nabla^\alpha$ is the well-known relation [@Poisson; @Woodard1; @Weinberg1; @Maggiore2], $$\Box_g\,=\,\frac{1}{\sqrt{-g}}\partial_\mu\big(\sqrt{-g}g^{\mu\nu}\partial_\nu)\,=\,\Box+w(h,\partial),$$ where $\Box=\partial^\alpha\partial_\alpha$ is the flat spacetime d’Alembert operator and the differential operator function $w(h,\partial)\,=\,-h^{\mu\nu}\partial_\mu\partial_\nu+\tilde{w}(h)\Box-\tilde{w}(h)h^{\mu\nu}\partial_{\mu}\partial_\nu+\mathcal{O}(G^4)$ is composed of the four-dimensional spacetime derivatives $\partial_\beta$ and the potential function $\tilde{w}(h)= \frac{h}{2}-\frac{h^2}{8}+\frac{h^{\rho\sigma}h_{\rho\sigma}}{4}+\mathcal{O}(G^3)$. We recall that the actual expansion parameter in a typical situation involving a characteristic mass $m_c$ confined to a region of characteristic size $r_c$ is the dimensionless quantity $Gm_c/(c^2 r_c)$. The result above was derived by employing the post-Minkowskian expansion of the metric $g_{\alpha\beta}$ in terms of the gravitational potentials [@Poisson; @Will2; @PatiWill1; @PatiWill2; @Blanchet1] outlined in the previous chapter. Further computational details can be found in the appendix relative to this chapter.
With this result at hand we are ready to split the nonlocal gravitational coupling operator $G(\Box_g)$ into a flat spacetime contribution $G(\Box)$ multiplied by a piece $\mathcal{H}(\Box,w)$ that may contain correction terms originating from a possible curvature of spacetime, $$\mathcal{T}^{\alpha\beta}\,=\,G(\Box) \ \mathcal{H}(\Box,w) \ T^{\alpha\beta}.$$ For astrophysical processes confined to a rather small volume of space, $r_c\ll \sqrt{\Lambda}$, we can reduce the nonlocal coupling operator $G_{\Lambda}(\Box_g)/G$ to its ultraviolet component $G(\Box_g)=\big[1-\sigma e^{\kappa\Box_g}\big]^{-1}$ only. Using the relation for the general covariant d’Alembert operator, we can split the differential UV-coupling into two separate contributions, $$G(\Box)\,=\,\frac{1}{1-\sigma e^{\kappa\Box}},\quad \ \mathcal{H}(\Box,w)\,=\,1+\sigma \frac{e^{\kappa\Box}}{1-\sigma e^{\kappa\Box}} \sum_{n=1}^{+\infty} \frac{\kappa^n}{n!} w^n+...$$ The price to pay to obtain such a concise result is to assume that the modulus of the dimensionless parameter $\sigma$ is smaller than one ($|\sigma|<1$). Here again the reader interested in the computational details is referred to the appendix, where a detailed derivation of this result can be found. It will turn out that the splitting of the nonlocal coupling operator into two independent pieces will be of considerable use when it comes to the integration of the relaxed Einstein equations. For later purposes we need to introduce the effective curvature energy-momentum tensor, $$\mathcal{B}^{\alpha\beta}\,=\,\mathcal{H}(\Box,w) \ T^{\alpha\beta}.$$ It is understood that a nonlocal theory involves infinitely many terms. However, in the context of a post-Newtonian expansion, the newly introduced curvature energy-momentum tensor $\mathcal{B}^{\alpha\beta}$ can be truncated at a certain order of accuracy.
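On a single Fourier mode the operators reduce to numbers, $\Box_g\rightarrow b+w$, so the splitting can be sanity-checked with scalars. The sketch below uses arbitrary test values for $b$ and $w$ and keeps $\mathcal{H}$ to first order in the geometric resummation, as in the displayed formula.

```python
import numpy as np

# Scalar sketch of the splitting G(Box_g) = G(Box) * H(Box, w).  On a mode the
# full coupling is 1/(1 - sigma e^{kappa(b+w)}); the factorized form uses
# H = 1 + sigma e^{kappa b}/(1 - sigma e^{kappa b}) * sum_{n>=1} kappa^n w^n / n!,
# where the sum equals e^{kappa w} - 1.  b and w are arbitrary test numbers.
sigma, kappa = 2e-4, 5e-3
b, w = -2.0, 1e-4                       # w plays the role of the small O(G) piece

lhs = 1.0 / (1.0 - sigma * np.exp(kappa * (b + w)))   # full coupling
G_flat = 1.0 / (1.0 - sigma * np.exp(kappa * b))      # flat-space piece
H = 1.0 + sigma * np.exp(kappa * b) * G_flat * (np.exp(kappa * w) - 1.0)
print(abs(lhs - G_flat * H))  # residual is higher order, far below either factor
```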
In this sense the first four leading terms (appendix) of the effective curvature energy-momentum tensor are, $$\begin{split} \mathcal{B}^{\alpha\beta}_1\,&=\,\Big[\frac{\tau^{\alpha\beta}_m}{(-g)}\Big],\\ \mathcal{B}^{\alpha\beta}_2\,&=\,\epsilon e^{\kappa\Box}\Big[\frac{w}{1-\sigma e^{\kappa \Box}}\Big] \Big[\frac{\tau^{\alpha\beta}_m}{(-g)}\Big], \end{split}$$ $$\begin{split} \mathcal{B}^{\alpha\beta}_3\,&=\,\epsilon\frac{\kappa}{2} e^{\kappa\Box}\Big[\frac{w^2}{1-\sigma e^{\kappa \Box}}\Big] \Big[\frac{\tau^{\alpha\beta}_m}{(-g)}\Big],\\ \mathcal{B}^{\alpha\beta}_4\,&=\,\epsilon\frac{\kappa^2}{3!} e^{\kappa\Box}\Big[\frac{w^3}{1-\sigma e^{\kappa \Box}}\Big] \Big[\frac{\tau^{\alpha\beta}_m}{(-g)}\Big]. \end{split}$$ For clarity we introduced the parameter $\epsilon=\kappa \sigma$, of dimension length squared. Moreover, we will see in the next chapter that the infinitely many remaining terms are, in the sense of a post-Newtonian expansion, beyond the degree of accuracy at which we aim to work in this article. To conclude this subsection we would like to point out that the leading term in the curvature energy-momentum tensor reduces to the matter source term, $\mathcal{B}^{\alpha\beta}_1=T^{\alpha\beta}$. The nonlocally modified wave equation: -------------------------------------- We are now ready to come to the main part of this chapter, in which we intend to work out the nonlocally modified wave equation. As already hinted in the introduction of this chapter, the modified wave equation naturally originates from the quest of sharing out some of the complexity of the nonlocal coupling operator $G(\Box_g)$ to both sides of the relaxed Einstein equations.
We have shown in the previous subsection (and in the corresponding appendix-section) that it is possible to split the nonlocal coupling operator, acting on the matter source term $T^{\alpha\beta}$, into a flat space contribution $G(\Box)$ multiplied by a highly nonlinear differential piece $\mathcal{H}(\Box,w)$. We aim to summarize first what this means for the effective energy-momentum tensor, $\mathcal{T}^{\alpha\beta}=G(\Box) \ \mathcal{H}(\Box,w) \ T^{\alpha\beta}$. In the pursuit of removing some of the differential complexity from the effective energy-momentum tensor $\mathcal{T}^{\alpha\beta}$ we will apply the inverse flat spacetime operator $G^{-1}(\Box)$ to both sides of the relaxed Einstein field equation, $G^{-1}(\Box) \ \Box h^{\alpha\beta}\,=\,-\frac{16 \pi G}{c^4} \ G^{-1}(\Box) \big[(-g)\mathcal{T}^{\alpha\beta}+\tau_{LL}^{\alpha\beta}+\tau_H^{\alpha\beta}\big]$. We will see that it is precisely this mathematical operation which will finally lead us to the modified wave equation, $$\begin{aligned} \Box_{c} \ h^{\alpha\beta}(x)\,=\, -\frac{16 \pi G}{c^4}N^{\alpha\beta}(x),\end{aligned}$$ where $\Box_{c}$ is the effective d’Alembert operator $\Box_{c}=\big[1-\sigma e^{\kappa\Delta}\big] \ \Box$.
$N^{\alpha\beta}$ is a pseudotensorial quantity which we will call, in the remaining part of this article, the effective energy-momentum pseudotensor, $N^{\alpha\beta}=G^{-1}(\Box) \big[(-g)\mathcal{T}^{\alpha\beta}+\tau_{LL}^{\alpha\beta}+\tilde{\tau}_H^{\alpha\beta}\big]$, where $\tilde{\tau}^{\alpha\beta}_m=(-g)\mathcal{T}^{\alpha\beta}$ is the effective matter pseudotensor, $\tau_{LL}^{\alpha\beta}=(-g)t_{LL}^{\alpha\beta}$ is the Landau-Lifshitz pseudotensor and $\tilde{\tau}_H^{\alpha\beta}=(-g)t^{\alpha\beta}_H+G(\Box)\mathcal{O}^{\alpha\beta}(h)$ is the effective harmonic gauge pseudotensor, where $\mathcal{O}^{\alpha\beta}(h)=-\sigma\sum_{n=1}^{+\infty}\frac{\kappa^n}{n!}\partial^{2n}_0 e^{\kappa\Delta} \Box h^{\alpha\beta}$ is the iterative post-Newtonian potential correction contribution. This term is added to the right-hand side of the wave equation very much like the harmonic gauge contribution is added to the right-hand side of the standard relaxed Einstein equation [@Poisson; @Will2; @PatiWill1; @PatiWill2; @Blanchet1]. It should be noticed that the modified d’Alembert operator $\Box_{c}$ is of the same post-Newtonian order as the standard d’Alembert operator, $\Box_c=\mathcal{O}(c^{-2})$, and reduces to the usual one in the limit of vanishing UV modification parameters, $\lim_{\sigma,\kappa \rightarrow 0}\ \Box_{c}\,=\, \Box$. In the same limit the effective pseudotensor $N^{\alpha\beta}$ reduces to the general relativistic one, $\lim_{\sigma,\kappa \rightarrow 0} \ N^{\alpha\beta}=\tau^{\alpha\beta}$. The second limit is less straightforward, but from the precise form of $\mathcal{T}^{\alpha\beta}$ as well as from the inverse differential operator $G^{-1}(\Box)$ we can see that we recover the usual effective energy-momentum pseudotensor $\tau^{\alpha\beta}=\tau^{\alpha\beta}_m+\tau^{\alpha\beta}_{LL}+\tau^{\alpha\beta}_H$. Further conceptual and computational details on these very important quantities will be provided in the next chapter.
At the level of the wave equations, these two properties can be summarized by the following relation, $$\Box_c \ h^{\alpha\beta}(x)\,=\, -\frac{16 \pi G}{c^4}N^{\alpha\beta}(x) \ \ \underset{\sigma,\kappa\rightarrow 0}{\Longrightarrow} \ \ \Box \ h^{\alpha\beta}(x)\,=\, -\frac{16 \pi G}{c^4}\tau^{\alpha\beta}(x).$$ In order to solve this equation we will use, in analogy to the standard wave equation, the following ansatz, $h^{\alpha\beta}(x)\,=\,-\frac{16 \pi G}{c^4} \int d^4y \ G(x-y) \ N^{\alpha\beta}(y)$, together with the identity for the effective Green function, $\Box_{c} G(x-y)\,=\,\delta(x-y)$, to solve for the potentials $h^{\alpha\beta}$ of the modified wave equation. Following the usual procedure [@Poisson; @Maggiore1; @Buonanno1] we obtain the Green function in momentum space, $$G(k)\,=\, \frac{1}{(k^0)^2-|\textbf{k}|^2}+\sigma \ \frac{ \ e^{-\kappa|\textbf{k}|^2}}{(k^0)^2-|\textbf{k}|^2}+\cdots.$$ In the remaining part of this article we will retain only the first two leading terms. It should be noticed that the first of these two contributions will eventually give rise to the usual Green function. Additional terms could have been added, but as the dimensionless parameter $\sigma$ is by assumption strictly smaller than one in modulus ($|\sigma|<1$), the remaining terms, each by itself, contribute less than those that we have retained. Further computational details can be found in the appendix related to this chapter. These considerations finally permit us to work out an expression for the retarded Green function, $$G_r(x-y)\,=\,G_r^{GR}+G_r^{NL},$$ where $G_r^{GR}=\frac{-1}{4\pi}\frac{\delta(x^0-|\textbf{x}-\textbf{y}|-y^0)}{|\textbf{x}-\textbf{y}|}$ is the well-known retarded Green function and $G_r^{NL}=\frac{-1}{4\pi}\frac{1}{|\textbf{x}-\textbf{y}|}\frac{\sigma}{2\sqrt{\kappa\pi}}e^{-\frac{(x^0-|\textbf{x}-\textbf{y}|-y^0)^2}{4\kappa}}$ is the nonlocal correction term.
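The truncation after the first two terms can be quantified on a single mode: the effective operator $\Box_c$ multiplies a Fourier mode by $(1-\sigma e^{-\kappa|\mathbf{k}|^2})\big[(k^0)^2-|\mathbf{k}|^2\big]$, so the full momentum-space Green function carries the geometric series $1+\sigma e^{-\kappa|\mathbf{k}|^2}+\sigma^2 e^{-2\kappa|\mathbf{k}|^2}+\cdots$, and for $|\sigma|<1$ the error of keeping two terms is $\mathcal{O}(\sigma^2)$. A minimal sketch with an arbitrary test value of $|\mathbf{k}|^2$:

```python
import numpy as np

# Sketch: truncation error of the geometric series kept in the Green function.
sigma, kappa = 2e-4, 5e-3
k2 = 3.7  # arbitrary test value of |k|^2

full = 1.0 / (1.0 - sigma * np.exp(-kappa * k2))   # resummed factor
truncated = 1.0 + sigma * np.exp(-kappa * k2)      # first two terms, as in the text
print(abs(full - truncated))  # of order sigma^2
```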
In this way we are able to recover, in the limit of vanishing modification parameters, the usual retarded Green function, $\lim_{\sigma,\kappa \rightarrow 0} \ G_r(x-y)=G_r^{GR}$. In addition it should be pointed out that, by virtue of the Gaussian representation of the Dirac distribution, $\lim_{\kappa\rightarrow 0} \frac{1}{2\sqrt{\kappa\pi}}e^{-\frac{(x^0-|\textbf{x}-\textbf{y}|-y^0)^2}{4\kappa}}\,=\, \delta(x^0-|\textbf{x}-\textbf{y}|-y^0)$. In analogy to the purely general relativistic case, we can write down the formal solution to the modified wave equation, $$h^{\alpha\beta}(x)\,=\, \frac{4 \ G}{c^4} \int d\textbf{y} \ \frac{N^{\alpha\beta}(x^0-|\textbf{x}-\textbf{y}|,\textbf{y})}{|\textbf{x}-\textbf{y}|}.$$ The retarded effective pseudotensor can be decomposed into two independent pieces according to the two contributions coming from the retarded Green function, $N^{\alpha\beta}(x^0-|\textbf{x}-\textbf{y}|,\textbf{y})=\mathcal{D} N^{\alpha\beta}(y^0,\textbf{y})+\sigma \mathcal{E} N^{\alpha\beta}(y^0,\textbf{y})$, where for later convenience we introduced the following two integral operators, $\mathcal{D}= \int dy^0 \ \delta(x^0-|\textbf{x}-\textbf{y}|-y^0)$ and $\mathcal{E}=\int dy^0 \ \frac{1}{2\sqrt{\pi \kappa}} e^{-\frac{(x^0-|\textbf{x}-\textbf{y}|-y^0)^2}{4\kappa}}$. We would like to conclude this subsection by taking a look at the modified Newtonian potential, which is frequently used in post-Newtonian developments. The corresponding gravitational potential, $h^{00}(x)=\frac{4}{c^2} V(x)$, is obtained from the integral outlined above, where the leading-order contribution of the effective energy-momentum pseudotensor, $N^{00}=\sum_A m_A c^2 \ \delta(\textbf{x}-\textbf{r}_A)+\mathcal{O}(c^{-1})$, was used.
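The delta limit of the Gaussian kernel can be illustrated numerically: the kernel of $G_r^{NL}$ is a normalized Gaussian of width $\sim\sqrt{\kappa}$ in the retardation variable, and smearing a smooth probe (here $f=\cos$, our choice) against it converges to a point evaluation as $\kappa\rightarrow 0$.

```python
import numpy as np

# Sketch: the normalized Gaussian kernel acts like a Dirac delta as kappa -> 0,
# so the standard retarded Green function is recovered in that limit.
def smeared(f, t0, kappa):
    t = np.linspace(t0 - 50.0 * np.sqrt(kappa), t0 + 50.0 * np.sqrt(kappa), 20001)
    dt = t[1] - t[0]
    kern = np.exp(-(t - t0) ** 2 / (4.0 * kappa)) / (2.0 * np.sqrt(np.pi * kappa))
    return float(np.sum(kern * f(t)) * dt)   # simple Riemann-sum quadrature

for kap in (1e-2, 1e-4, 1e-6):
    print(abs(smeared(np.cos, 0.7, kap) - np.cos(0.7)))  # shrinks with kappa
```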
From this we obtain the modified Newtonian potential for an N-body system, $V(x)=\sum_A \ \frac{G\tilde{m}_A}{|\textbf{x}-\textbf{r}_A|}=(1+\sigma) \ U(\textbf{x})$, where $U(\textbf{x})$ is the standard Newtonian potential and $\tilde{m}_A=(1+\sigma) \ m_A$ is the effective mass of the body $A$. Further computational details are provided in the appendix related to this chapter. It should be noticed that the usual Newtonian potential is recovered in the limit of vanishing $\sigma$. Experimental results [@Chiaverini1; @Kapner1] from deviation measurements of the Newtonian law at small length scales ($\sim 25 \mu m$) suggest that the dimensionless correction constant needs to be of the order $\sigma \lesssim 10^{-4}$. We see that this experimental bound confirms our theoretical assumption of a small dimensionless parameter $\sigma$. Solution for a far away wave-zone field point: ---------------------------------------------- In the context of astrophysical systems [@Wex1; @LIGO1] we can restrict the general solution for the gravitational potentials to a situation in which the potentials are evaluated at a far away wave-zone field point ($|\mathbf{x}|\gg \lambda_c$). Furthermore, in this article we will only focus on the near zone energy-momentum contribution to the gravitational potentials $h^{ab}_\mathcal{N}$. In order to determine the precise form of the spatial components of the near-zone gravitational potentials we need to expand the ratio inside the formal solution [@Poisson; @Will2; @PatiWill1] in terms of a power series, $$\begin{split} \frac{N^{ab}(x^0-|\textbf{x}-\textbf{y}|,\textbf{y})}{|\textbf{x}-\textbf{y}|}\, =\,\frac{1}{r} \ \sum_{l=0}^\infty \frac{y^L}{l!} \ n_L \ \Big(\frac{\partial}{\partial u}\Big)^l \ N^{ab}(u,\textbf{y})+\mathcal{O}(1/r^2), \end{split}$$ where $u=c\tau$ and $\tau=t-r/c$ is the retarded time.
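As a quick sanity check of this expansion, one can compare the exact retarded ratio with its truncated series for a smooth test profile, here $N(u)=\sin u$ with $r=|\mathbf{x}|$ and $n=\mathbf{x}/r$ (a numerical sketch; the test function and all numbers are purely illustrative):

```python
import numpy as np
from math import factorial

def exact_ratio(x, y, x0, f):
    """Exact retarded ratio f(x^0 - |x-y|) / |x-y|."""
    R = np.linalg.norm(x - y)
    return f(x0 - R) / R

def series_ratio(x, y, x0, derivs, lmax=3):
    """Truncated far-zone expansion (1/r) sum_l (n.y)^l/l! d^l f/du^l."""
    r = np.linalg.norm(x)
    n = x / r
    u = x0 - r                      # retarded argument at the field point
    s = sum(np.dot(n, y)**l / factorial(l) * derivs[l](u) for l in range(lmax + 1))
    return s / r

f = np.sin
derivs = [np.sin, np.cos, lambda u: -np.sin(u), lambda u: -np.cos(u)]
x = np.array([500.0, 0.0, 0.0])     # far away field point, |x| >> |y|
y = np.array([0.3, 0.2, -0.1])      # source point inside the near zone
x0 = 700.0
assert abs(exact_ratio(x, y, x0, f) - series_ratio(x, y, x0, derivs)) < 1e-5
```

The residual is of the advertised order $\mathcal{O}(1/r^2)$ together with the truncation error of the $u$-derivative series.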
The distance from the matter source's center of mass to the far away field point is given by $r=|\textbf{x}|$ and its derivative with respect to the spatial coordinates is $\frac{\partial r}{\partial x^a}=n^a$, where $n^a=\frac{x^a}{r}$ is the $a$-th component of the unit radial vector. The far away wave zone is characterized by the fact that only leading order terms in $1/r$ need to be retained, and $y^Ln_L=y^{j_1}\cdots y^{j_l}n_{j_1}\cdots n_{j_l}$. More technical details can be found in the appendix relative to this subsection. By introducing the far away wave zone expansion of the effective energy-momentum-distance ratio into the formal solution of the potentials we finally obtain the near zone contribution to the gravitational potentials for a far away wave zone field point in terms of the retarded derivatives, $$\begin{split} h^{ab}_{\mathcal{N}}(x)\,&=\, \frac{4 G}{c^4 r} \sum_{l=0}^\infty \frac{n_L}{l!} \Big(\frac{\partial}{\partial u}\Big)^l \Big[ \int_{\mathcal{M}} d\textbf{y} \ N^{ab}(u,\textbf{y}) \ y^L\Big]+\mathcal{O}(r^{-2}), \end{split}$$ where $\mathcal{M}$ is the three-dimensional near zone integration domain (a sphere) defined by $|\textbf{x}| <\mathcal{R}\leq \lambda_c$. Further computational details can be inferred from the related appendix subsection. In order to unfold the near zone potentials in terms of the radiative multipole moments we need to introduce the modified conservation relations.
They originate, as in the purely general relativistic case [@Poisson; @Will2], from the conservation of the effective energy-momentum pseudotensor, $\partial_\beta N^{\alpha\beta}=0$. This quantity is indeed conserved because we can store the complete differential operator complexity inside the effective energy-momentum tensor $\mathcal{T}^{\alpha\beta}=G(\Box_g)T^{\alpha\beta}$. We saw in the previous chapter that as long as the geometry is not affected by the modification ($\nabla_\beta G^{\alpha\beta}=0$) we have, no matter what the precise form of the energy-momentum tensor is, the following conservation relation, $\partial_\beta N^{\alpha\beta}=G^{-1}(\Box) \ \partial_\beta \big[(-g)\mathcal{T}^{\alpha\beta}+\tau^{\alpha\beta}_{LL}+\tilde{\tau}^{\alpha\beta}_H\big]=0$. It should be noticed that, similarly to the harmonic gauge contribution $\partial_\beta t_H^{\alpha\beta}=0$, the iterative potential contribution is separately conserved, $\partial_\beta \mathcal{O}^{\alpha\beta}(h)=0$, because of the harmonic gauge condition. As the linear differential operator with constant coefficients $G^{-1}(\Box)$ commutes with the partial derivative ($[G^{-1}(\Box),\partial_\beta]=0$), we can immediately conclude that the effective energy-momentum pseudotensor is conserved and deduce the modified conservation relations, $$\begin{split} N^{ab}\,&=\, \frac{1}{2} \frac{\partial^2}{\partial u^2} (N^{00} x^a x^b)+\frac{1}{2} \partial_c (N^{ac} x^b+N^{bc} x^a-\partial_d N^{cd} x^a x^b),\\ N^{ab} x^c\,&=\, \frac{1}{2} \frac{\partial}{\partial u} (N^{0a} x^b x^c+N^{0b} x^a x^c -N^{0c} x^a x^b)+\frac{1}{2} \partial_d(N^{ad} x^b x^c +N^{bd} x^a x^c-N^{cd} x^a x^b). \end{split}$$ A more detailed derivation of the latter is provided in the appendix related to this chapter.
Finally we can rephrase the spatial components of the near-zone gravitational potentials for a far away wave zone field point in terms of the radiative multipole moments, $$h^{ab}_{\mathcal{N}}\,=\, \frac{2G}{c^4 r} \frac{\partial^2}{\partial \tau^2} \Big[Q^{ab}+Q^{abc} \ n_c+Q^{abcd} \ n_c n_d+\frac{1}{3}Q^{abcde} \ n_c n_d n_e+[l\geq 4]\Big]+\frac{2 G}{c^4 r}\left[P^{ab}+P^{abc} n_c\right]+\mathcal{O}(r^{-2}).$$ We see that, in analogy to the purely general relativistic case [@Poisson; @Maggiore1; @Will2; @PatiWill1; @PatiWill2; @Buonanno1; @Blanchet1], the leading order term is proportional to the second derivative in $\tau$ of the radiative quadrupole moment. The first four modified radiative multipole moments are, $$\begin{split} Q^{ab}\,=&\, \frac{1}{c^2} \int_{\mathcal{M}} N^{00} y^a y^b d\textbf{y},\\ Q^{abc}\,=&\, \frac{1}{c^2} \int_{\mathcal{M}} (N^{0a} y^b y^c+N^{0b} y^a y^c- N^{0c} y^a y^b ) \ d\textbf{y}, \end{split}$$ $$\begin{split} Q^{abcd}\,=&\, \frac{1}{c^2} \int_{\mathcal{M}} N^{ab} y^c y^d d\textbf{y},\\ Q^{abcde}\,=&\, \frac{1}{c^2} \int_{\mathcal{M}} N^{ab} y^c y^d y^e d\textbf{y}. \end{split}$$ It can be shown that the surface terms $P^{ab}$ and $P^{abc}$, outlined in the appendix, give rise to $\mathcal{R}$-dependent contributions only. These terms will eventually cancel out with contributions coming from the wave zone, as was shown in [@Will2]. The effective energy-momentum pseudotensor: =========================================== In the previous chapter we transformed the original wave equation, in which all the nonlocal complexity was stored inside the effective energy-momentum tensor $\mathcal{T}^{\alpha\beta}$, into a modified wave equation which is much easier to solve. This effort gave rise to a new pseudotensorial quantity, the effective energy-momentum pseudotensor $N^{\alpha\beta}$. This chapter is devoted to the analysis of this important quantity by reviewing the matter, field and harmonic gauge contributions separately.
We will study these three terms $N^{\alpha\beta}_m$, $N^{\alpha\beta}_{LL}$, $N^{\alpha\beta}_H$ one after the other and extract all the relevant contributions that are within the 1.5 post-Newtonian order of accuracy. The effective matter pseudotensor: ---------------------------------- We recall from the previous chapter the precise expression for the matter contribution of the effective pseudotensor, $$N_{m}^{\alpha\beta}\,=\,G^{-1}(\Box) \big[(-g) \ \mathcal{T}^{\alpha\beta}\big]\,=\,G^{-1}(\Box) \big[(-g) \ G(\Box) \ \mathcal{B}^{\alpha\beta}\big].$$ In order to extract from this expression all the relevant pieces that lie within the order of accuracy at which we aim to work in this article, we essentially need to address two different tasks. In a first step we have to review the leading terms of $\mathcal{B}^{\alpha\beta}$ (previous chapter) and see to what extent they may contribute to the 1.5 post-Newtonian order of accuracy. In a second step we have to analyze how the differential operator $G^{-1}(\Box)$ acts on the product of the metric determinant $(-g)$ multiplied by the effective energy-momentum tensor $\mathcal{T}^{\alpha\beta}=G(\Box) \ \mathcal{B}^{\alpha\beta}$. Although this formal operation will lead to additional terms, the annihilation of the operator $G(\Box)$ with its inverse counterpart will substantially simplify the differential structure of the original effective energy-momentum tensor $\mathcal{T}^{\alpha\beta}$. Before we can come to the two tasks mentioned above we first need to set in place a couple of preliminary results. From a technical point of view we need to introduce the operators of instantaneous potentials [@Blanchet1; @Blanchet3; @Blanchet4], $ \Box^{-1}[ \bar{\tau}]=\sum_{k=0}^{+\infty} \Big(\frac{\partial}{c\partial t}\Big)^{2k} \ \Delta^{-k-1}[\bar{\tau}]$. This operator is instantaneous in the sense that it does not involve any integration over time.
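One way to see why this series inverts the d'Alembertian is to compare symbols on plane waves, where $(\partial/c\partial t)^2 \rightarrow -w^2$ and $\Delta \rightarrow -K^2$ with $w=\omega/c$ and $K=|\mathbf{k}|$. A short sympy sketch (illustrative; the truncation order is arbitrary) checks that the truncated series reproduces the exact inverse symbol up to the first neglected order:

```python
import sympy as sp

w, K = sp.symbols('w K', positive=True)   # w = omega/c, K = |k|

# symbol of the inverse d'Alembertian, Box = Delta - c^{-2} d_t^2
exact = 1 / (w**2 - K**2)

# symbol of the instantaneous-potential series sum_k (d/c dt)^{2k} Delta^{-k-1}
N = 6
series = sum((-w**2)**k * (-K**2)**(-k - 1) for k in range(N))

# the two agree term by term up to the first neglected order O((w/K)^{2N})
diff = sp.series(exact - series, w, 0, 2 * N).removeO()
assert sp.simplify(diff) == 0
```

This also makes the remark below transparent: the expansion is in powers of $w^2/K^2$, so it only converges when applied to a slowly varying (post-Newtonian) source.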
However one should be aware that, unlike the inverse retarded d’Alembert operator, this instantaneous operator is defined only when acting on a post-Newtonian series $\bar{\tau}$. Another important computational tool which we borrow from [@Blanchet1; @Blanchet3; @Blanchet4] are the generalized iterated Poisson integrals, $\Delta^{-k-1}[\bar{\tau}_m](\textbf{x},t)=-\frac{1}{4\pi} \int d\textbf{y} \ \frac{|\textbf{x}-\textbf{y}|^{2k-1}}{(2k)!} \ \bar{\tau}_m(\textbf{y},t)$, where $\bar{\tau}_m$ is the $m$-th post-Newtonian coefficient of the energy-momentum source term $\bar{\tau}=\sum_{m=-2}^{+\infty} \bar{\tau}_m/c^{m}$. An additional important result that needs to be mentioned is the following generalized regularization prescription[^1], $\big[\nabla^m \frac{1}{|\textbf{x}-\textbf{r}_A|}\big] \ \big[\nabla^n \delta(\textbf{x}-\textbf{r}_A)\big]\equiv 0, \quad \forall n,m\in \mathbb{N}$. The need for this kind of regularization prescription merely comes from the fact that, inside a post-Newtonian expansion, the nonlocality of the modified Einstein equations will lead to additional derivatives acting on the Newtonian potentials. It is easy to see that in the limit $m=0$ and $n=0$ we recover the well known regularization prescription [@Poisson; @Blanchet1; @Blanchet2]. We are now ready to come to the first of the two tasks mentioned in the beginning of this subsection. In order to extract the pertinent pieces from $\mathcal{B}^{\alpha\beta}= \mathcal{H}(w,\Box)\ \big[\tau_m^{\alpha\beta}/(-g)\big]$ to the required order of precision, we first need to have a closer look at the differential curvature operator $\mathcal{H}(w,\Box)$.
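The kernels $|\mathbf{x}-\mathbf{y}|^{2k-1}/(2k)!$ appearing in the generalized iterated Poisson integrals obey a simple recursion away from the source point: applying the Laplacian to the $k$-th kernel returns the $(k-1)$-th one, which is what makes them iterated inverses of $\Delta$. A short sympy verification (illustrative sketch):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

def kernel(k):
    """Kernel of the generalized Poisson integral: |x-y|^(2k-1) / (2k)!."""
    return r**(2 * k - 1) / sp.factorial(2 * k)

def lap(f):
    """Flat-space Laplacian in Cartesian coordinates."""
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

# away from the origin, Delta applied to the k-th kernel gives the (k-1)-th,
# down to kernel(0) = 1/r, the ordinary Poisson kernel
for k in (1, 2, 3):
    assert sp.simplify(lap(kernel(k)) - kernel(k - 1)) == 0
```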
From the previous chapter we know that it is essentially composed of the potential operator function $w(h,\partial)$ and the flat spacetime d’Alembert operator, $$w(h,\partial)\,=\,-h^{\mu\nu} \partial_{\mu\nu}+\tilde{w}(h)\Box-\tilde{w}(h) h^{\mu\nu}\partial_{\mu\nu}\,=\,-\frac{h^{00}}{2}\Delta+\mathcal{O}(c^{-4}).$$ We see that at the 1.5 post-Newtonian order of accuracy, the potential operator function $w(h,\partial)$ reduces to one single contribution, composed of the potential $h^{00}=\mathcal{O}(c^{-2})$ [@Poisson; @Will2; @PatiWill1] and the flat spacetime Laplace operator $\Delta$. Further computational details can be found in the appendix section related to this chapter. With this in mind we can finally take up the four leading contributions of the curvature energy-momentum tensor $\mathcal{B}^{\alpha\beta}$, $$\begin{split} \mathcal{B}^{\alpha\beta}_{1}\,&=\,\tau^{\alpha\beta}_{m}(c^{-3})-\tau^{\alpha\beta}_m(c^0) \ h^{00}+\mathcal{O}(c^{-4}),\\ \mathcal{B}^{\alpha\beta}_{2}\,&=\,-\frac{\epsilon}{2}\sum_A m_A v^\alpha_A v^\beta_A \ \Big[\sum_{n=0}^\infty \sigma^n e^{(n+1)\kappa \Delta} \Big] \ \Big[h^{00}\Delta \delta(\textbf{y}-\textbf{r}_A)\Big]+\mathcal{O}(c^{-4}),\\ \mathcal{B}^{\alpha\beta}_3\,&=\,\frac{\epsilon\kappa}{2} e^{\kappa\Box}\Big[\frac{w^2}{1-\sigma e^{\kappa \Box}}\Big] \Big[\frac{\tau^{\alpha\beta}_m}{(-g)}\Big]\,\propto\, w^2\,=\mathcal{O}(c^{-4}),\\ \mathcal{B}^{\alpha\beta}_4\,&=\,\frac{\epsilon\kappa^2}{3!} e^{\kappa\Box}\Big[\frac{w^3}{1-\sigma e^{\kappa \Box}}\Big] \Big[\frac{\tau^{\alpha\beta}_m}{(-g)}\Big]\,\propto\, w^3\,=\mathcal{O}(c^{-6}). \end{split}$$ The terms $\mathcal{B}^{\alpha\beta}_3$ and $\mathcal{B}^{\alpha\beta}_4$ are beyond the order of accuracy at which we aim to work in this article because $w^2=\mathcal{O}(c^{-4})$ and $w^3=\mathcal{O}(c^{-6})$; here $\tau_m(c^0)$ denotes the matter pseudotensor at the Newtonian order of accuracy.
We will see later in this chapter that $\mathcal{B}^{\alpha\beta}_1$ will generate the usual 1.5 post-Newtonian matter source term, as the second piece of the latter will precisely cancel out with another contribution. This allows us to come to the second task, namely to look at the differential operation mentioned in the introduction of this chapter, $$G^{-1}(\Box) \big[(-g) \mathcal{T}^{\alpha\beta}\big]\,=\,\big[1-\sigma e^{\kappa\Box}\big] \ \big[(-g)\mathcal{T}^{\alpha\beta}\big].$$ We will perform this computation using a weak-field expansion, $(-g)=1+h^{00}-h^{aa}\eta_{aa}+\frac{h^2}{2}-\frac{h^{\mu\nu}h_{\mu\nu}}{4}+...$, and see how many additional terms we produce at the 1.5 post-Newtonian order until the differential operator $G(\Box)$ and its inverse finally annihilate each other. The first term is rather simple and together with the post-Newtonian expansion for the metric determinant we obtain, $$1 \ \big[(-g) \ \mathcal{T}^{\alpha\beta}\big]\,=\,\big[1+h^{00}\big]\mathcal{T}^{\alpha\beta}+\mathcal{O}(c^{-4}).$$ It should be noticed that in this relation the post-Newtonian order of $\mathcal{T}^{\alpha\beta}$ varies according to the pN order of the quantity with which it is multiplied. The remaining contribution is far less straightforward and needs a more careful investigation. After a rather long computation (appendix) we obtain the following result, $$-\sigma e^{\kappa \Box} \ \big[(-g) \ \mathcal{T}^{\alpha\beta}\big]\,=\,-\big[1+h^{00}\big] \big[\sigma e^{\kappa \Box} \mathcal{T}^{\alpha\beta}\big]-\sigma D^{\alpha\beta}(c^{-3})+\mathcal{O}(c^{-4}),$$ where we have to take into account the additional tensor contribution, $D^{\alpha\beta}= \sum_{n=1}^{+\infty} \sum_{m=1}^{2n} \dbinom{2n}{m} \ \big[\nabla^{2n-m}\mathcal{T}^{\alpha\beta}\big] \big[\nabla^m h^{00}\big]$.
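The origin of $D^{\alpha\beta}$ is simply the Leibniz rule: when a power $\Delta^n$ hidden in $e^{\kappa\Box}$ hits the product of the metric factor and $\mathcal{T}^{\alpha\beta}$, the $m=0$ term reproduces the undifferentiated product while the $m\geq 1$ cross terms pile up in $D^{\alpha\beta}$. A one-dimensional sympy sketch of this binomial structure (the function names are placeholders):

```python
import sympy as sp

xv = sp.Symbol('x')
f = sp.Function('f')(xv)   # stands for the effective tensor T^{ab}
g = sp.Function('g')(xv)   # stands for the metric factor, e.g. h^{00}

for n in (1, 2):
    lhs = sp.diff(f * g, xv, 2 * n)
    rhs = sum(sp.binomial(2 * n, m) * sp.diff(f, xv, 2 * n - m) * sp.diff(g, xv, m)
              for m in range(2 * n + 1))
    # m = 0 reproduces g * d^{2n}f; the m >= 1 cross terms are exactly the
    # structure collected in D^{alpha beta}
    assert sp.simplify(lhs - rhs) == 0
```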
Coming back to the initial equation for the effective matter pseudotensor $N^{\alpha\beta}_m$, we obtain, by virtue of the two previous results, the following elegant expression for the modified effective matter pseudotensor, $$N_{m}^{\alpha\beta}\,=\,G^{-1}(\Box)\big[(-g) \ \mathcal{T}^{\alpha\beta}\big]\,=\,\big[1+h^{00}\big] \ \mathcal{B}^{\alpha\beta}-\sigma D^{\alpha\beta}+\mathcal{O}(c^{-4})\,=\,\mathcal{B}^{\alpha\beta}+\mathcal{B}^{\alpha\beta}h^{00}-\sigma D^{\alpha\beta}+\mathcal{O}(c^{-4}),$$ where we remind that $\mathcal{T}^{\alpha\beta}=G(\Box) \mathcal{B}^{\alpha\beta}$ and $\mathcal{B}^{\alpha\beta}=\mathcal{B}^{\alpha\beta}_{1}+\mathcal{B}^{\alpha\beta}_{2}+\mathcal{O}(c^{-4})$. Further computational steps are provided in the appendix related to this chapter. It is understood that there are numerous additional terms which we do not list here because they are beyond the degree of precision of this article. The two leading contributions of $N^{\alpha\beta}_m$ give rise to the usual 1.5 post-Newtonian contribution [@Poisson; @Will2; @PatiWill1; @PatiWill2], $$\mathcal{B}^{\alpha\beta}_{1}(c^{-3})+\mathcal{B}_{1}^{\alpha\beta}(c^{-1})h^{00}\,=\,\sum_A m_A v_A^\alpha v^\beta_A \Big[1+\frac{\textbf{v}_A^2}{2c^2}+\frac{3V}{c^2}\Big] \ \delta(\textbf{x}-\textbf{r}_A)+\mathcal{O}(c^{-4}),$$ where $V=(1+\sigma) U$ is the modified Newtonian potential. The remaining task is to extract the 1.5 pN contribution out of the tensor $D^{\alpha\beta}$. We point out that this contribution is proportional to $G(\Box) \mathcal{B}^{\alpha\beta}$ as well as to the potential $h^{00}$, which is of the order $\mathcal{O}(c^{-2})$ [@Poisson; @Will2; @PatiWill1; @PatiWill2].
In order to work out the contribution to the required degree of precision we first need to come back to the effective energy-momentum tensor, $$\mathcal{T}^{\alpha\beta}\,=\,G(\Box) \ \mathcal{H}(w,\Box) \ T^{\alpha\beta}\,=\,\sum_{s=0}^{+\infty} \sigma^s \sum_{p=0}^{+\infty} \frac{(s\kappa)^p}{p!} \Delta^p \bigg[\sum_A m_A v_A^\alpha v^\beta_A \ \delta(\textbf{x}-\textbf{r}_A) \bigg]+\mathcal{O}(c^{-2}),$$ where $G(\Box)=G(\Delta)+\mathcal{O}(c^{-2})$, $|\sigma|<1$, $\mathcal{H}(w,\Box)=1+\mathcal{O}(c^{-2})$ and $T^{\alpha\beta}=\sum_A m_A v^\alpha_A v^\beta_A \ \delta(\textbf{x}-\textbf{r}_A)+\mathcal{O}(c^{-2})$. Further computational details can be found in the appendix section related to the present chapter. With this result at hand we can finally write down $D^{\alpha\beta}$ to the required order of accuracy, $$\begin{aligned} D^{\alpha\beta}\,=&\,\sum_A m_A v_A^\alpha v^\beta_A \ \mathcal{S}(\sigma,\kappa) \ \Big[ \nabla^{2p+2n-m} \delta(\textbf{x}-\textbf{r}_A)\Big]\Big[ \nabla^m h^{00}\Big] +\mathcal{O}(c^{-4}).\end{aligned}$$ For simplicity we introduced $\mathcal{S}(\sigma,\kappa)\,=\,\sum_{n=1}^{\infty} \frac{\kappa^n}{n!} \sum_{m=1}^{2n} \binom{2n}{m} \ \sum_{s=0}^{+\infty} \sigma^s \ \sum_{p=0}^{+\infty} \frac{(s\kappa)^p}{p!}$ to summarize the four sums inside $D^{\alpha\beta}$ (appendix). We remind that the first two sums come from the inverse differential operator $G^{-1}(\Box)$, while the last two sums originate from the extraction of the 1.5 pN contribution of the effective energy-momentum tensor $\mathcal{T}^{\alpha\beta}=G(\Box) \mathcal{B}^{\alpha\beta}$, and $\binom{2n}{m}=\frac{(2n)!}{(2n-m)!m!}$ is the binomial coefficient.
To conclude this section we would like to point out that, despite the fact that many of the contributions encountered so far contain infinitely many derivatives, we will see in the upcoming chapter that a natural post-Newtonian truncation sets in when it comes to the precise computation of physical observables. The modified Landau-Lifshitz pseudotensor: ------------------------------------------ In this section we will restrict our attention to the time-time component of the modified Landau-Lifshitz pseudotensor $N^{00}_{LL}=G^{-1}(\Box) \ \tau^{00}_{LL}$, where $\tau^{00}_{LL}=\frac{-7}{8\pi G} \partial_jV\partial^jV+\mathcal{O}(c^{-2})$ [@Poisson; @Will2; @PatiWill1]. We will see in the next chapter that this term will suffice to work out the physical quantity that we are interested in, $$\begin{aligned} c^{-2}N^{00}_{LL}\,=\, c^{-2}\Big[ \big(1-\sigma\big)\tau^{00}_{LL} -\epsilon \Delta \tau^{00}_{LL}-\sigma\sum_{m=2}^{+\infty}\frac{\kappa^m}{m!}\Delta^m \tau^{00}_{LL}\Big]+\mathcal{O}(c^{-4}).\end{aligned}$$ This result was derived by using a series expansion of the exponential differential operator and by taking into account that $\partial_0=\mathcal{O}(c^{-1})$. Further computational details are provided in the appendix section related to this chapter. The modified Landau-Lifshitz tensor contribution was scaled by the factor $c^{-2}$ for later convenience. From the leading term we will eventually be able to recover the standard post-Newtonian contribution.
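The coefficients in this expansion follow from writing out $G^{-1}(\Box)=1-\sigma e^{\kappa\Box}$ with $\Box=\Delta+\mathcal{O}(c^{-2})$. Assuming $\epsilon=\sigma\kappa$ (our reading of the notation used in the text), the displayed decomposition becomes a term-by-term identity that can be checked symbolically (a sketch; the truncation order is arbitrary):

```python
import sympy as sp

s, k, D = sp.symbols('sigma kappa Delta')   # Delta stands for the Laplacian symbol
M = 8

# G^{-1}(Box) at leading pN order, Box -> Delta
lhs = 1 - s * sp.exp(k * D)

# (1 - sigma) - epsilon*Delta - sigma*sum_{m>=2} kappa^m Delta^m / m!,
# with epsilon = sigma*kappa (assumed)
rhs = (1 - s) - s * k * D - s * sum((k * D)**m / sp.factorial(m) for m in range(2, M))

# the two expressions agree term by term up to the truncation order
diff = sp.series(lhs - rhs, D, 0, M).removeO()
assert sp.simplify(diff) == 0
```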
The modified harmonic gauge pseudotensor: ----------------------------------------- The modified harmonic gauge pseudotensor contribution has the following appearance, $$N_H^{\alpha\beta}\,=\,G^{-1}(\Box) \ \tilde{\tau}^{\alpha\beta}_H\,=\,G^{-1}(\Box) \ \tau^{\alpha\beta}_H+\mathcal{O}^{\alpha\beta},$$ where we remind that $\tau_H^{\alpha\beta}=(-g)t_H^{\alpha\beta}$ is the standard harmonic gauge pseudotensor contribution and $\mathcal{O}^{\alpha\beta}(h)=-\sigma\sum_{n=1}^{+\infty}\frac{\kappa^n}{n!}\partial^{2n}_0 e^{\kappa\Delta} \Box h^{\alpha\beta}$ is the iterative potential contribution. Taking into account that $h^{00}=\mathcal{O}(c^{-2})$, $h^{0a}=\mathcal{O}(c^{-3})$ and $h^{ab}=\mathcal{O}(c^{-4})$ [@Poisson; @Will2; @PatiWill1; @PatiWill2] we deduce that the leading term of $\mathcal{O}^{\alpha\beta}(h)$ is of the order $\mathcal{O}(c^{-4})$ or beyond, $\frac{-\epsilon}{c^{2}} e^{\kappa\Delta} \Delta \partial_t^2h^{\alpha\beta}=\mathcal{O}(c^{-4})$. As we limit ourselves in this article to the 1.5 post-Newtonian order, we do not need to consider additional correction terms coming from this contribution. It should be noticed that the higher the post-Newtonian precision, the more correction terms have to be taken into account. On the other hand we have $\lim_{\sigma,\kappa\rightarrow 0}\mathcal{O}^{\alpha\beta}=0$, and for the same reasons mentioned in the previous subsection we will be interested in the time-time component only. The easiest piece of the calculation is by far the computation of $\tau^{\alpha\beta}_H=(-g)t^{\alpha\beta}_H$ to the required degree of accuracy. Using the results from [@Poisson; @Will2; @PatiWill1] for the purely general relativistic harmonic gauge contribution we can easily deduce that $\frac{16\pi G}{c^4} (-g)t^{00}_H=\mathcal{O}(c^{-6})$ is beyond the order of accuracy that interests us in this article.
It is straightforward to see that the same is true for the modified pseudotensor $\frac{16\pi G}{c^4} N^{00}_H\,=\,\mathcal{O}(c^{-6})$. The effective total mass: ========================= The total near zone mass of an N-body system [@Poisson; @Will2; @PatiWill1; @PatiWill2] is composed of the matter and the field energy confined in the region of space defined by $\mathcal{M}:|\textbf{x}|<\mathcal{R}$, $M=c^{-2} \int_\mathcal{M} d\textbf{x} \ \big(N^{00}_m+N^{00}_{LL}\big)\,=\,M_m+M_{LL}+\mathcal{O}(c^{-4})$. The modified harmonic gauge contribution, $N_H=\mathcal{O}(c^{-4})$, is beyond the 1.5 post-Newtonian order of accuracy. The matter and field contributions will be worked out separately before they are combined to form the effective total near zone mass. We saw in the previous chapter that the matter contribution can be rephrased in a more detailed way by splitting up the modified matter pseudotensor $N^{00}_m$ into its different components. This partition will eventually allow us to review the different contributions one after the other and to retain all the terms that are within the desired degree of accuracy, $$M_m\,=\,c^{-2}\int_{\mathcal{M}}d\textbf{x}\ N^{00}_m\,=\,c^{-2}\int_{\mathcal{M}}d\textbf{x} \ \big[\mathcal{B}^{00}+\mathcal{B}^{00}h^{00}-\sigma D^{00}\big]+\mathcal{O}(c^{-4}).$$ We will start our investigation by analysing a piece that will essentially lead to the general relativistic 1.5 pN term $M_m^{GR}$ [@Poisson], $$\begin{split} M_{\mathcal{B}_1+\mathcal{B}_1h^{00}}=&\,c^{-2}\int_{\mathcal{M}} d \textbf{x} \ \big[\mathcal{B}_1^{00}+\mathcal{B}_1^{00}h^{00}\big]\,=\,M_m^{GR}+3\sigma\frac{G}{c^2} \sum_A\sum_{B \neq A} \frac{m_A m_B}{ r_{AB}}+\mathcal{O}(c^{-4}), \end{split}$$ where $r_{AB}=|\textbf{r}_A-\textbf{r}_B|$ is the distance between body $A$ and body $B$.
It should be noticed that the second term in this expression, which could have been presented in a more succinct way by simply writing $M^{NL}_m$, merely originates from the modified Newtonian potential introduced in the third chapter of this article. The next contribution of the effective curvature energy-momentum tensor $\mathcal{B}^{\alpha\beta}=\mathcal{H}(w,\Box) T^{\alpha\beta}$ which could potentially contribute to the effective total near zone mass at the 1.5 pN order of accuracy is the $\mathcal{B}^{\alpha\beta}_2$ piece. A careful analysis (appendix) however reveals that, at this order of accuracy, it cannot contribute to the total mass because of its high order nonlocal structure, $M_{\mathcal{B}_2}=c^{-2}\int_{\mathcal{M}} d \textbf{x} \ \mathcal{B}_2^{00}=0$. Indeed, after multiple partial integrations the differential operator acting on the potential $h^{00}$ is of order two or higher, so that we will encounter only surface terms and terms proportional to $\sum_A\sum_{B\neq A} m_Am_B \ \nabla^m \delta(\textbf{r}_A-\textbf{r}_B)\,=\,0, \quad \forall m\in \mathbb{N}$. It can easily be seen, by having a look at its differential operator structure, that the same reasoning is true for the derivative term $D^{\alpha\beta}$ encountered for the first time in the previous chapter, $M_{D}=c^{-2}\int_{\mathcal{M}} d \textbf{x} \ D^{00}=0$. In both cases surface terms can be freely discarded as we limit ourselves in this article to the near zone domain only, $\int_{\partial \mathcal{M}} dS^p \ \partial_p h^{00}\frac{\delta(\mathbf{y}-\mathbf{r}_A)}{|\mathbf{x}-\mathbf{y}|} \propto \delta(\mathcal{R}-|\mathbf{r}_A|)=0$. Additional insight into the derivation of this and the previous result can be found in the appendix related to this chapter. The remaining series of terms belonging to the effective matter pseudotensor $N^{00}_m$ are beyond the 1.5 post-Newtonian order of accuracy.
Summing up all the non-vanishing terms we finally obtain the total effective matter contribution, $$\begin{split} M_m\,=M_m^{GR}+M_m^{NL}+\mathcal{O}(c^{-4}), \end{split}$$ where we introduced for clarity the following two independent mass terms, $M_m^{GR}=\sum_Am_A+\frac{1}{2c^2}\sum_A m_A v^2_A+3\frac{G}{c^2}\sum_A\sum_{B\neq A} \frac{m_Am_B}{r_{AB}}$ and $M_m^{NL}=3\sigma\frac{G}{c^2}\sum_A\sum_{B\neq A} \frac{m_Am_B}{r_{AB}}$. Here $M_m^{GR}$ is the standard general relativistic term at the 1.5 post-Newtonian order of accuracy [@Poisson] and $M_m^{NL}$ is the additional contribution originating from the nonlocal coupling operator $G(\Box_g)$, worked out to the same order of accuracy. It is straightforward to observe that in the limit of vanishing modification parameters ($\sigma,\kappa\rightarrow 0$) this result reduces to the general relativistic one. It was seen in the previous chapters that the field contribution of the total effective mass is obtained by evaluating the following integral, $$\begin{split} M_{LL}\,=\,c^{-2}\int_{\mathcal{M}} d\textbf{x} \ N^{00}_{LL}\,= \,c^{-2}\int_{\mathcal{M}} d\textbf{x} \ \big[ (1-\sigma)\tau^{00}_{LL}-\epsilon \Delta \tau^{00}_{LL} -\sigma \sum_{m=2}^{+\infty} \frac{\kappa^m}{m!} \Delta^m \tau^{00}_{LL}\big]+\mathcal{O}(c^{-4}), \end{split}$$ where we recall the important result $\tau^{00}_{LL}=\frac{-7}{8\pi G} \partial_pV\partial^pV$ and $V=(1+\sigma) \ U$ is the effective Newtonian potential introduced in chapter three. We will review the three different contributions one after the other and study to what extent they will eventually contribute to the total effective gravitational near zone mass.
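Before turning to the field integrals, the matter result just obtained can be evaluated numerically; a small sketch for a two-body configuration (all input values below are illustrative, not data):

```python
import itertools

G, c = 6.674e-11, 2.998e8   # SI values

def matter_mass(masses, positions, velocities, sigma):
    """Total effective matter mass M_m = M_m^GR + M_m^NL for point masses
    at 1.5 pN, following the expressions in the text."""
    M_rest = sum(masses)
    M_kin = sum(m * v**2 for m, v in zip(masses, velocities)) / (2 * c**2)
    # ordered double sum over A != B, as in the text
    pair = sum(masses[A] * masses[B] / abs(positions[A] - positions[B])
               for A, B in itertools.permutations(range(len(masses)), 2))
    return M_rest + M_kin + 3 * (1 + sigma) * G / c**2 * pair

m = [2e30, 2e30]            # two solar-mass-like bodies (illustrative)
xpos = [0.0, 1e9]           # collinear positions (m)
vel = [3e4, 3e4]            # speeds (m/s)
M_nl = matter_mass(m, xpos, vel, sigma=1e-4)
M_gr = matter_mass(m, xpos, vel, sigma=0.0)
assert M_nl > M_gr          # the nonlocal term adds mass for sigma > 0
```

Setting `sigma=0.0` recovers the standard general relativistic matter mass, in line with the limit discussed in the text.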
The first integral essentially gives rise to the usual 1.5 pN general relativistic Landau-Lifshitz field term [@Poisson], $$\begin{split} c^{-2}\int_{\mathcal{M}} d\textbf{x} \ \tau^{00}_{LL}\,=\,-\frac{7G}{2c^2} \sum_A\sum_{B\neq A} \frac{ \tilde{m}_A \tilde{m}_B}{|\textbf{r}_A-\textbf{r}_B|}\,=\,(1+\sigma)^2 M_{LL}^{GR}, \end{split}$$ where we recall that $\tilde{m}_A=(1+\sigma) \ m_A$ is the effective mass of body $A$. We refer the reader interested in the precise derivation of this result to the appendix related to this chapter. The remaining two terms do not contribute, for the same reasons that were outlined before when we investigated a possible 1.5 pN contribution from the $\mathcal{B}^{\alpha\beta}_2$ and $D^{\alpha\beta}$ terms, $$\begin{split} \frac{\epsilon}{c^2}\int_{\mathcal{M}} d\textbf{x} \ \Delta \tau_{LL}^{00}\,=\,0,\quad \ \frac{\sigma}{c^2}\int_{\mathcal{M}} d\textbf{x} \ \sum_{m=2}^{+\infty} \frac{\kappa^m}{m!} \Delta^m \tau^{00}_{LL}\,=\,0. \end{split}$$ We provide additional computational details about the precise derivation of these two results in the appendix relative to this chapter. Strictly speaking these two terms give rise to $\mathcal{R}$-dependent terms. However, they can be discarded as they will cancel out with their wave zone counterparts, as was shown in [@PatiWill1]. In analogy to the previous subsection, we conclude the present one by providing the total near zone field (Landau-Lifshitz) mass at the 1.5 pN order of precision, $$M_{LL}\,=\,M_{LL}^{GR}+M_{LL}^{NL}+\mathcal{O}(c^{-4}).$$ We distinguish between the standard general relativistic piece $M_{LL}^{GR}$ [@Poisson] and the additional contribution originating from the nonlocal coupling operator, $$\begin{split} M_{LL}^{GR}\,=\,-\frac{7G}{2c^2} \sum_A\sum_{B\neq A} \frac{m_A m_B}{|\textbf{r}_A-\textbf{r}_B|},\quad M_{LL}^{NL}\,=\,-v(\sigma)\frac{7G}{2c^2} \sum_A\sum_{B\neq A} \frac{m_A m_B}{|\textbf{r}_A-\textbf{r}_B|}.
\end{split}$$ We see that the nonlocal contribution is of 1.0 pN order, $M_{LL}^{NL}=\mathcal{O}(c^{-2})$. In the limit of vanishing $\sigma$ the nonlocal field term disappears, since the polynomial $v(\sigma)=\sigma-\sigma^2-\sigma^3$, which depends only on the dimensionless parameter $\sigma$, vanishes at $\sigma=0$. We obtain, after joining the matter and field contributions, the total gravitational near zone mass, $$M=M^{GR}+M^{NL}+\mathcal{O}(c^{-4}),$$ where we have introduced the following two quantities, $M^{GR}=M_m^{GR}+M_{LL}^{GR}$ and $M^{NL}=M_m^{NL}+M_{LL}^{NL}$, in order to distinguish between the standard general relativistic terms and the nonlocal contributions, $$\begin{split} M^{GR}\,=\,\sum_Am_A+\frac{1}{c^2}\sum_A \frac{m_A v^2_A}{2}-\frac{1}{2}\frac{G}{c^2}\sum_A\sum_{B\neq A} \frac{m_Am_B}{r_{AB}},\quad M^{NL}\,=\,z(\sigma)\frac{G}{c^2}\sum_A\sum_{B\neq A} \frac{m_Am_B}{r_{AB}}. \end{split}$$ The newly introduced function $z(\sigma)$ is another polynomial of the modification parameter $\sigma$: $z(\sigma)=3\sigma-\frac{7}{2}v(\sigma)=-\frac{\sigma}{2} +\frac{7}{2} \ (\sigma^2+\sigma^3)$. It is obvious from what has been said previously that we recover the usual 1.5 pN general relativistic near-zone mass in the limit of vanishing $\sigma$. Conclusion: =========== In this article we outlined a precise model of a nonlocally modified theory of gravity in which Newton’s constant $G$ is promoted to a differential operator $G_\Lambda(\Box_g)$. Although the nonlocal equations of motion are themselves generally covariant, they cannot (for nontrivial $G_\Lambda(\Box_g)$) be obtained as a metric variational derivative of a diffeomorphism invariant action unless one assumes that they are only a first, linear in the curvature, approximation to the complete equations of motion [@Barvinsky1; @Barvinsky2].
The general idea of a differential coupling was apparently formulated for the first time in [@Dvali1; @Barvinsky1; @Dvali2; @Barvinsky2] in order to address the cosmological constant problem [@Weinberg1]. However the idea of a varying coupling constant of gravitation dates back to early works of Dirac [@Dirac1] and Jordan [@Jordan1; @Jordan2]. Inspired by these considerations Brans and Dicke published in the early sixties a theory in which the gravitational constant is replaced by the reciprocal of a scalar field [@Brans1]. We presented the general idea of infrared degravitation in which $G_\Lambda(\Box_g)$ acts like a high-pass filter with a macroscopic distance filter scale $\sqrt{\Lambda}$. In this way sources characterized by characteristic wavelengths much smaller than the filter scale ($\lambda_c\ll\sqrt{\Lambda}$) pass (almost) undisturbed through the filter and gravitate normally, whereas sources characterized by wavelengths larger than the filter scale are effectively filtered out [@Dvali1; @Dvali2]. We concluded chapter one by reviewing the cosmological constant problem and outlined a precise differential coupling model by which we can observe an effective degravitation of the vacuum energy on cosmological scales. In the second chapter we worked out the relaxed Einstein equations in the context of ordinary gravity and we briefly introduced the post-Newtonian theory as well as related concepts that were used in the subsequent chapters. In chapter three we derived the effective relaxed Einstein equations and showed that in the limit of vanishing UV parameters and infinitely large IR parameter we recover the standard wave equation. In analogy to the purely general relativistic case we worked out a formal near-zone solution for a far away wave-zone field point in terms of the effective energy-momentum pseudotensor $N^{\alpha\beta}$. 
The latter forms the main body of chapter four in which we worked out separately its matter, field and harmonic gauge contributions ($N^{\alpha\beta}_m$, $N^{\alpha\beta}_{LL}$, $N^{\alpha\beta}_H$) up to the 1.5 post-Newtonian order of accuracy. In the penultimate chapter the previous results were gathered in order to work out the effective total 1.5 post-Newtonian near-zone mass. We observe that in the limit of vanishing UV parameters we recover the standard 1.5 post-Newtonian total near-zone mass. The author would like to thank Professor Eric Poisson (University of Guelph) for useful comments regarding the generalized regularization prescription. A. D. gratefully acknowledges support by the Ministry for Higher Education and Research of the G.-D. of Luxembourg (MESR-Cedies). The relaxed Einstein equations: =============================== We consider a material source consisting of a collection of fluid balls [@Poisson; @Will2; @PatiWill1; @PatiWill2] whose size is typically small compared to their separations, $T^{\alpha \beta}=\rho\ u^\alpha u^\beta$, where $\rho=\frac{\rho^*}{\sqrt{-g}\gamma_A}$ is the energy density and $u^\alpha=\gamma_A (c,\textbf{v}_A)$ is the four-velocity of the fluid ball with point mass $m_A$ and individual trajectory $\textbf{r}_A(t)$. Taking into account that for point masses we have $\rho^*=\sum_{A=1}^Nm_A\ \delta\big(\textbf{x}-\textbf{r}_A(t)\big)$, $\frac{1}{\sqrt{-g}}=1-\frac{1}{2}h^{00}+\mathcal{O}(c^{-4})$ and $\gamma_A^{-1}=\sqrt{-g_{\mu\nu}\frac{v^\mu_Av^\nu_A}{c^2}}=1-\frac{1}{2}\frac{\textbf{v}_A^2}{c^2}-\frac{1}{4}h^{00}+\mathcal{O}(c^{-4})$, we obtain the 1.5 post-Newtonian matter energy-momentum pseudotensor outlined in chapter two, $c^{-2}(-g)T^{00}=(1+\frac{4U}{c^2})\sum_Am_A\delta(\textbf{x}-\textbf{r}_A)(1+\frac{v_A^2}{2c^2}-\frac{U}{c^2})+\mathcal{O}(c^{-4})$.
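The grouping of the potential terms can be checked with a short symbolic computation. The sketch below (with $U$ and $v_A^2$ as plain `sympy` symbols; a consistency check, not part of the derivation) expands the product and isolates the 1 pN combination $1+\frac{v^2}{2c^2}+\frac{3U}{c^2}$ that reappears in the mass integrals of the last appendix:

```python
from sympy import symbols, expand, simplify

U, v2, c = symbols('U v2 c', positive=True)

# Product appearing in c^{-2}(-g)T^{00}, per fluid ball (delta function dropped)
expr = (1 + 4*U/c**2) * (1 + v2/(2*c**2) - U/c**2)

# To 1 pN accuracy the potential terms combine to +3U/c^2
remainder = expand(expr - (1 + v2/(2*c**2) + 3*U/c**2))

# The leftover is O(c^{-4}), i.e. beyond the stated accuracy
assert simplify(remainder - (2*U*v2 - 4*U**2)/c**4) == 0
```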
Bearing in mind that $h^{00}=\mathcal{O}(c^{-2})$, $h^{0a}=\mathcal{O}(c^{-3})$, $h^{ab}=\mathcal{O}(c^{-4})$ and that $\partial_0h^{00}$ is of order $c^{-1}$ relative to $\partial_ah^{00}$ we see that the dominant piece of $\tau^{\alpha\beta}_{LL}$ will come from $\partial_ah^{00}=\frac{4}{c^2}\partial_a U$, where we remind that $U$ is the Newtonian potential of the N-body system. Moreover each occurrence of $g_{\alpha\beta}$ can be replaced by $\eta_{\alpha\beta}$ because each factor of $h^{\alpha\beta}$ contributes a power of $G$ and we aim to compute $\tau^{\alpha\beta}_{LL}$ to order $G^2$ in the second post-Minkowskian approximation. Using $\mathfrak{g}^{\alpha\beta}=\eta^{\alpha\beta}-h^{\alpha\beta}$ and the harmonic gauge $\partial_\beta h^{\alpha\beta}=0$, we can review the remaining six contributions of the time-time component of the Landau-Lifshitz pseudotensor $\tau_{LL}^{00}$ presented in chapter two (the two mixed contributions coincide for $\alpha=\beta=0$, since the general expression is symmetrized in $\alpha$ and $\beta$), $$\begin{split} \frac{1}{2}g^{00}g_{\lambda\mu}\partial_\rho \mathfrak{g}^{\lambda\nu}\partial_\nu\mathfrak{g}^{\mu\rho}\,=&\,-\frac{1}{2}\eta_{\lambda\mu}\partial_\rho h^{\lambda\nu}\partial_\nu h^{\mu\rho}=\mathcal{O}(c^{-6}),\\ -g^{0\lambda}g_{\mu\nu}\partial_\rho\mathfrak{g}^{0\nu}\partial_\lambda\mathfrak{g}^{\mu\rho}\,=&\,-\eta^{0\lambda}\eta_{\mu\nu}\partial_\rho h^{0\nu}\partial_\lambda h^{\mu\rho}\,=\,\mathcal{O}(c^{-6}),\\ -g^{0\lambda}g_{\mu\nu}\partial_\rho\mathfrak{g}^{0\nu}\partial_\lambda\mathfrak{g}^{\mu\rho}\,=&\,-\eta^{0\lambda}\eta_{\mu\nu}\partial_\rho h^{0\nu}\partial_\lambda h^{\mu\rho}\,=\,\mathcal{O}(c^{-6}),\\ g_{\lambda\mu}g^{\nu\rho}\partial_\nu\mathfrak{g}^{0\lambda}\partial_\rho\mathfrak{g}^{0\mu}\,=&\,-\partial^bh^{00}\partial_b h^{00}+\mathcal{O}(c^{-6}),\\ \frac{1}{4}(2g^{0\lambda}g^{0\mu}-g^{00}g^{\lambda\mu})g_{\nu\rho}g_{\sigma\tau}\partial_\lambda\mathfrak{g}^{\nu\tau}\partial_\mu\mathfrak{g}^{\rho\sigma}\,=&\,\frac{1}{4}\partial^bh^{00}\partial_bh^{00}+\mathcal{O}(c^{-6}),\\
-\frac{1}{8}(2g^{0\lambda}g^{0\mu}-g^{00}g^{\lambda\mu})g_{\rho\sigma}g_{\nu\tau}\partial_\lambda\mathfrak{g}^{\nu\tau}\partial_\mu\mathfrak{g}^{\rho\sigma}\,=&\,-\frac{1}{8}\partial^bh^{00}\partial_bh^{00}+\mathcal{O}(c^{-6}). \end{split}$$ Summing up all terms that make up $\tau^{00}_{LL}$ we finally obtain, $\frac{16 \pi G}{c^4}(-g)t^{00}_{LL}=-\frac{7}{8}\partial_bh^{00}\partial^bh^{00}+\mathcal{O}(c^{-6})$. The modified relaxed Einstein equations: ======================================== The effective energy-momentum tensor: ------------------------------------- We have for an arbitrary contravariant rank two tensor $f^{\alpha\beta}(x)$ [@Poisson; @Weinberg2; @Woodard1; @Maggiore2], $$\begin{split} \Box_g f^{\alpha\beta}(x)\,=&\,\frac{1}{\sqrt{-g}}\partial_\mu\Big[(\eta^{\mu\nu}-h^{\mu\nu})\partial_\nu f^{\alpha\beta}(x)\Big]\,=\,\frac{1}{\sqrt{-g}}\Big[\Box f^{\alpha\beta}(x)-h^{\mu\nu}\partial_\mu\partial_\nu f^{\alpha\beta}(x)\Big]\\ &=\,\Big[1-\frac{h}{2}+\frac{h^2}{8}-\frac{h^{\rho\sigma}h_{\rho\sigma}}{4}+\mathcal{O}(G^3)\Big]^{-1}\Big[\Box f^{\alpha\beta}(x)-h^{\mu\nu}\partial_\mu\partial_\nu f^{\alpha\beta}(x)\Big]\\
&=\, \big[\Box-h^{\mu\nu}\partial_\mu\partial_\nu+\tilde{w}(h)\Box-\tilde{w}(h)h^{\mu\nu}\partial_{\mu}\partial_\nu\big] f^{\alpha\beta}(x)\,=\,\big[\Box+w(h,\partial)\big]f^{\alpha\beta}(x), \end{split}$$ where the harmonic gauge conditions $\partial_\mu h^{\mu\nu}\,=\,0$ were used together with $\sqrt{-g}g^{\mu\nu}=\eta^{\mu\nu}-h^{\mu\nu}$ [@Poisson; @Will2; @PatiWill1; @PatiWill2; @Blanchet1], and the definition $\tilde{w}(h)= \frac{h}{2}-\frac{h^2}{8}+\frac{h^{\rho\sigma}h_{\rho\sigma}}{4}+\mathcal{O}(G^3)$ was introduced for the potential function. In order to decompose the effective energy-momentum tensor $\mathcal{T}^{\alpha\beta}=G(\Box) \ \mathcal{H}(w,\Box) \ T^{\alpha\beta}$ we recall an important result for commuting linear differential operators: $[A,B]=0\Rightarrow [\frac{1}{A},\frac{1}{B}]=0$. Indeed, let $f\in \mathcal{C}^{\infty}(\mathbb{R})$; then $ABf=BAf\Leftrightarrow BA^{-1}g=A^{-1}Bg\Leftrightarrow A^{-1}B^{-1}h=B^{-1}A^{-1}h$, where $f=A^{-1}g$ and $g=B^{-1}h$, so that $g$, $h\in \mathcal{C}^{\infty}(\mathbb{R})$.
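The lemma can be illustrated in finite dimensions, with matrices standing in for the commuting linear operators (a sketch only; the matrices are arbitrary polynomials in a common random matrix, which guarantees $[A,B]=0$):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

# Two commuting, well-conditioned operators: polynomials in the same matrix
A = np.eye(4) + 0.1 * M
B = np.eye(4) + 0.05 * M @ M

assert np.allclose(A @ B, B @ A)              # [A, B] = 0
Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)
assert np.allclose(Ainv @ Binv, Binv @ Ainv)  # hence [A^{-1}, B^{-1}] = 0
```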
This result will be used in the splitting of the effective energy-momentum tensor, $$\begin{split} \mathcal{T}^{\alpha\beta}\,=\, G\big[\Box_g\big] \ T^{\alpha\beta}\,=&\, \frac{1}{1-\sigma e^{\kappa \Box}} \ \Big[1-\sigma \frac{e^{\kappa \Box}}{1-\sigma e^{\kappa \Box}}\sum_{n=1}^\infty \frac{\kappa^n}{n!} w^n\Big]^{-1} \ T^{\alpha\beta}\\ &=\,\Big[\frac{1}{1-\sigma e^{\kappa \Box}} \ \Big(1+\sigma \frac{e^{\kappa \Box}}{1-\sigma e^{\kappa \Box}} \sum_{n=1}^\infty \frac{\kappa^n}{n!} w^n+\mathcal{O}(\sigma^2)\Big)\Big] \ T^{\alpha\beta}\\ &=\,G\big[\Box\big] \ \Big[\sum_{n=0}^{+\infty}\mathcal{B}^{\alpha\beta}_n+\mathcal{O}(\sigma^2)\Big], \end{split}$$ where we used $1-\sigma e^{\kappa\Box_g}=1-\sigma e^{\kappa[\Box+w(h,\partial)]}=1-\sigma e^{\kappa\Box}\ \sum_{n=0}^{+\infty} \frac{\kappa^n}{n!}w^n$. Moreover we need to constrain the modulus of the dimensionless parameter $\sigma$, which from now on has to be smaller than one, $|\sigma|<1$, in order for the perturbative expansion to converge. We adopt the convention that differential operators appearing in the numerator act first ($[w,\Box]\neq 0$). For commuting linear differential operators this prescription is not needed. The leading four contributions of the curvature tensor are displayed in the main part of the article. The modified relaxed Einstein equations: ---------------------------------------- In the present appendix subsection we present some additional computational details regarding the modified Green function outlined in the main part.
By substituting the Fourier representation of the modified Green function $G(x-y)=(2\pi)^{-4}\int d^4k\ G(k) e^{ik(x-y)}$, where $x=(ct,\textbf{x})$ and $k=(k^0,\textbf{k})$, inside the Green function condition $(1-\sigma e^{\kappa \Delta})\Box G(x-y)=\delta(x-y)$ we obtain the latter in momentum space, $G(k)= \frac{1}{(k^0)^2-|\textbf{k}|^2} \ \frac{1}{1-\sigma e^{-\kappa \textbf{k}^2}} = \frac{1}{(k^0)^2-|\textbf{k}|^2}+\sigma \ \frac{e^{-\kappa|\textbf{k}|^2}}{(k^0)^2-|\textbf{k}|^2}+\cdots$ The first term in this infinite expansion is the usual Green function, followed by correction terms. We also remind that the modulus of the dimensionless parameter is assumed to be strictly smaller than one, $|\sigma|<1$. By making use of the residue theorem we can derive the modified Green function, $G=G^{GR}+G^{NL}$, in terms of its retarded and advanced contributions, $G^{GR}=\frac{-1}{4\pi} \frac{1}{|\textbf{x}-\textbf{y}|} \Big[\delta(x^0-|\textbf{x}-\textbf{y}|-y^0)-\delta(x^0+|\textbf{x}-\textbf{y}|-y^0)\Big]$ and $G^{NL}=\frac{-1}{4\pi} \frac{1}{|\textbf{x}-\textbf{y}|} \frac{\sigma}{2\sqrt{\kappa \pi}}\Big[ e^{-\frac{(x^0-|\textbf{x}-\textbf{y}|-y^0)^2}{4\kappa}}-e^{-\frac{(x^0+|\textbf{x}-\textbf{y}|-y^0)^2}{4\kappa}}\Big]$. The modified Newtonian potential is obtained from the formal solution of the modified relaxed Einstein equation, $h^{00}=\frac{4G}{c^4}\int dy \ G_r(x-y) N^{00}(y)+\mathcal{O}(c^{-1})=\frac{4G}{c^4}\int d\textbf{y}\int dy^0\ \Big[\delta(x^0-|\textbf{x}-\textbf{y}|-y^0)+\frac{\sigma}{2\sqrt{\kappa \pi}}e^{-\frac{(x^0-|\textbf{x}-\textbf{y}|-y^0)^2}{4\kappa}}\Big]\frac{\sum_A m_A c^{2} \delta(\textbf{y}-\textbf{r}_A)}{|\textbf{x}-\textbf{y}|}+\mathcal{O}(c^{-1})$. After performing the four-dimensional integration we recover the result for the time-time component of the gravitational potential outlined in the main text of this article.
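The nonlocal piece of the Green function is thus a Gaussian smearing of the sharp retarded delta. A quick numerical sketch (illustrative values of $\sigma$ and $\kappa$) confirms that the smearing kernel $\frac{\sigma}{2\sqrt{\kappa\pi}}\,e^{-u^2/4\kappa}$ carries total weight $\sigma$ independently of $\kappa$, so that in the limit $\kappa\to 0$ the nonlocal term reduces to a $\sigma$-weighted copy of the retarded delta:

```python
import numpy as np

def nl_kernel(u, sigma, kappa):
    """Gaussian smearing kernel appearing in the nonlocal Green function."""
    return sigma / (2.0 * np.sqrt(kappa * np.pi)) * np.exp(-u**2 / (4.0 * kappa))

sigma = 0.1
u = np.linspace(-50.0, 50.0, 200001)
du = u[1] - u[0]

# Total weight is sigma for every kappa; the kernel merely sharpens as kappa -> 0
for kappa in (1.0, 0.1, 0.01):
    weight = float(np.sum(nl_kernel(u, sigma, kappa)) * du)
    assert abs(weight - sigma) < 1e-6
```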
Solution for a far away wave-zone field point: --------------------------------------------- In analogy to [@Poisson; @Will2; @PatiWill1; @PatiWill2], we aim to expand the retarded effective pseudotensor in terms of a power series, $$\begin{split} \frac{N^{\alpha\beta}(x^0-|\textbf{x}-\textbf{y}|,\textbf{y})}{|\textbf{x}-\textbf{y}|}\,&=\, \sum_{l=0}^{\infty} \frac{(-1)^l}{l!} \textbf{y}^L\partial_L \Big[\frac{N^{\alpha\beta}(x^0-r,\textbf{y})}{r}\Big]\\ &=\, \frac{N^{\alpha\beta}}{r}-y^a \frac{\partial}{\partial x^a} \big[\frac{N^{\alpha\beta}}{r}\big]+\frac{y^ay^b}{2} \frac{\partial^2}{\partial x^a\partial x^b} \big[\frac{N^{\alpha\beta}}{r}\big]-\cdots\\ &=\,\frac{1}{r} \ \sum_{l=0}^\infty \frac{y^L}{l!} \ n_L \ \Big(\frac{\partial}{\partial u}\Big)^l \ N^{\alpha\beta}(u,\textbf{y})+\mathcal{O}(1/r^2), \end{split}$$ where we used the following result, $$\begin{split} \partial_L N^{\alpha\beta}\,=\, \frac{\partial}{\partial x^{a_1}} \ \cdots \ \frac{\partial}{\partial x^{a_l}} N^{\alpha\beta} \,=\, \Big(\frac{\partial}{\partial u}\Big)^l \ N^{\alpha\beta} \ \frac{\partial u }{\partial x^{a_1}} \ \cdots \ \frac{\partial u}{\partial x^{a_l}} \,=\, (-1)^l \ \Big(\frac{\partial}{\partial u}\Big)^l \ N^{\alpha\beta} \ n_{L}, \end{split}$$ and where $\frac{\partial r}{\partial x^a}=\frac{x^a}{r}=n_a$
is the $a$-th component of the radial unit vector and $u=c\tau=x^0-r$. The far away wave zone is characterized by the fact that we only need to consider the contributions to the potentials proportional to $1/r$. This allows us to derive the near-zone contribution of the gravitational potentials for a wave-zone field point in terms of the retarded derivatives, $$\begin{split} h^{ab}_{\mathcal{N}}(x)\,=\, \frac{4 G}{c^4} \sum_{l=0}^\infty \frac{(-1)^l}{l!} \partial_L \Big[\frac{1}{r} \int_{\mathcal{M}} d\textbf{y} \ N^{ab}(u,\textbf{y}) \ y^L \Big]\,&=\, \frac{4 G}{c^4 r} \sum_{l=0}^\infty \frac{n_L}{c^ll!} \Big(\frac{\partial}{\partial \tau}\Big)^l \Big[ \int_{\mathcal{M}} d\textbf{y} \ N^{ab}(u,\textbf{y}) \ y^L\Big]+\mathcal{O}(r^{-2})\\ &=\,\frac{4 G}{c^4 r}\Bigg[\int_{\mathcal{M}} d\textbf{y} \ N^{ab}(u,\textbf{y}) + \frac{n_c }{c} \frac{\partial}{\partial \tau} \int_{\mathcal{M}} d\textbf{y} \ N^{ab}(u,\textbf{y}) \ y^c\\ &\quad \ +\frac{n_c n_d}{2c^2} \frac{\partial^2}{\partial \tau^2} \int_{\mathcal{M}} d\textbf{y} \ N^{ab}(u,\textbf{y}) \ y^c \ y^d\\ &\quad \ +\frac{n_c n_d n_e}{6c^3} \frac{\partial^3}{\partial \tau^3} \int_{\mathcal{M}} d\textbf{y} \ N^{ab}(u,\textbf{y}) \ y^c \ y^d \ y^e+[l\ge4]\Bigg]+\mathcal{O}(r^{-2}). \end{split}$$ We provide some additional details on how the modified conservation relations were derived for an arbitrary domain of integration $\mathcal{M}$ with boundary $\partial \mathcal{M}$, $$\begin{split} \partial^2_0 \int_{\mathcal{M}} d\textbf{x} \ N^{00} \ x^ax^b\,&=\,\partial_0\Big[\int_{\mathcal{M}} d\textbf{x} \ \Big(N^{0a}x^b+N^{0b}x^a-\partial_c(N^{0c}x^ax^b)\Big)\Big]\\ &=\,\int_{\mathcal{M}} d\textbf{x} \ \Big(2N^{ab}+\partial_c(\partial_dN^{dc}x^ax^b)\Big)-\int_{\partial\mathcal{M}} dS_c\ \big(N^{ca}x^b+N^{cb}x^a\big)\\ &=\,\int_\mathcal{M} d\textbf{x} \ \Big(2N^{ab}-\partial_c\big(N^{ca}x^b+N^{cb}x^a-\partial_dN^{dc}x^ax^b\big)\Big).
\end{split}$$ Additional details on the derivation of the second conservation identity, $$\begin{split} \partial_0\int_{\mathcal{M}} d\textbf{x} \ \Big(N^{0a}x^bx^c+N^{0b} x^ax^c -N^{0c} x^ax^b\Big)\,=&\,\int_{\mathcal{M}} d\textbf{x} \Big(-\partial_dN^{da} x^bx^c-\partial_dN^{db}x^ax^c+\partial_dN^{dc} x^ax^b\Big)\\ =&\,\int_{\mathcal{M}} d\textbf{x} \ \Big(2N^{ab} x^c-\partial_d(N^{da}x^bx^c+N^{db}x^ax^c-N^{dc}x^ax^b)\Big). \end{split}$$ This and the previous result have been worked out by making use of the conservation relation ($\partial_\beta N^{\alpha\beta}=0$), (multiple) partial integration and the Gauss-Ostrogradsky theorem. Furthermore it should be pointed out that we can replace the derivative $\partial_0$ by $\partial_u$, as the two variables only differ by a constant shift in time. The precise forms of the surface terms mentioned in the main part of the article are, $$P^{ab}\,=\, \int_{\partial \mathcal{M}} dS_c \ (N^{ac} y^b+N^{bc} y^a-\partial_d N^{cd} y^a y^b), \ P^{abc}\,=\, \frac{1}{c}\frac{\partial}{\partial \tau} \int_{\partial \mathcal{M}} dS_d \ (N^{ad} y^b y^c+N^{bd} y^a y^c -N^{cd} y^a y^b).$$ The effective energy-momentum pseudotensor: =========================================== The effective matter pseudotensor: ---------------------------------- Taking into account that $h^{00}=\mathcal{O}(c^{-2})$, $h^{0a}=\mathcal{O}(c^{-3})$ and $h^{ab}=\mathcal{O}(c^{-4})$ [@Poisson; @Will2; @PatiWill1], we see that the three contributions of the potential operator function $w(h,\partial)$ are of the following post-Newtonian orders ($h=\eta_{\alpha\beta}h^{\alpha\beta}$), $h^{\mu\nu}\partial_{\mu\nu}=\mathcal{O}(c^{-4})$, $\tilde{w}(h)=\frac{h}{2}-\frac{h^2}{8}+\frac{h^{\rho\sigma}h_{\rho\sigma}}{4}+\mathcal{O}(G^3)=-\frac{h^{00}}{2}+\mathcal{O}(c^{-4})$, $\tilde{w}(h)h^{\mu\nu}\partial_{\mu\nu}=\mathcal{O}(c^{-6})$.
The leading contribution of $\mathcal{B}^{\alpha\beta}$ gives rise to the usual 1.5 post-Newtonian matter contribution, $$\begin{split} \mathcal{B}^{\alpha\beta}_1\,=\,\frac{\tau^{\alpha\beta}_m}{(-g)}\,&=\,\Big[\tau^{\alpha\beta}_m(c^{-3})+\mathcal{O}(c^{-4})\Big]\Big[1-h^{00}+h^{aa}-\frac{h^2}{2}+\cdots\Big]\,=\,\tau^{\alpha\beta}_{m}(c^{-3})-\tau^{\alpha\beta}_m(c^0) \ h^{00}+\mathcal{O}(c^{-4}), \end{split}$$ where $\tau^{\alpha\beta}_m$ is the effective matter pseudotensor introduced in chapter two. The second contribution of $\mathcal{B}^{\alpha\beta}$ is more involved and can be decomposed at the 1.5 pN order of accuracy into three different contributions, $$\begin{split} \mathcal{B}^{\alpha\beta}_2\,&=\,\epsilon e^{\kappa \Box}\Big[\frac{w}{1-\sigma e^{\kappa\Box}}\Big] \ \Big[\frac{\tau_m^{\alpha\beta}}{(-g)}\Big]\,=\,-\frac{\epsilon}{2}\sum_A m_A v^\alpha_A v^\beta_A \ \Big[\sum_{n=0}^\infty \sigma^n e^{(n+1)\kappa \Delta} \Big] \ \Big[h^{00}\Delta \delta(\textbf{y}-\textbf{r}_A)\Big]+\mathcal{O}(c^{-4}), \end{split}$$ where we used $\tau_m^{\alpha\beta}=\sum_A m_A v_A^\alpha v_A^\beta \delta(\textbf{y}-\textbf{r}_A)+\mathcal{O}(c^{-2})$ with $(-g)=1+h^{00}+\mathcal{O}(c^{-4})$ and $\tilde{w}(h)=-\frac{h^{00}}{2}+\mathcal{O}(c^{-4})$.
For later purposes we will decompose this quantity into three different pieces, $\mathcal{B}^{\alpha\beta}_2= \mathcal{B}^{\alpha\beta}_{2a}+\mathcal{B}^{\alpha\beta}_{2b}+\mathcal{B}^{\alpha\beta}_{2c}+\mathcal{O}(c^{-4})$, where, $$\begin{split} \mathcal{B}_{2a}^{\alpha\beta}\,=&\,-\frac{\epsilon}{2} \frac{1}{1-\sigma} \sum_A m_A v_A^\alpha v^\beta_A \ \Big[h^{00} \big(\Delta \delta(\textbf{y}-\textbf{r}_A)\big)\Big], \ \quad \mathcal{B}_{2b}^{\alpha\beta}\,=\, -\frac{\epsilon}{2} \frac{\kappa}{(1-\sigma)^2} \sum_A m_A v_A^\alpha v^\beta_A \Delta \Big[h^{00} \big(\Delta \delta(\textbf{y}-\textbf{r}_A)\big)\Big],\\ \mathcal{B}_{2c}^{\alpha\beta}\,= &\,-\frac{\epsilon}{2} \sum_A m_A v_A^\alpha v^\beta_A \ \sum_{n=0}^{+\infty} \sigma^n \ \sum_{m=2}^{+\infty} \frac{[(n+1)\kappa]^m}{m!} \Delta^m \Big[h^{00} \big(\Delta \delta(\textbf{y}-\textbf{r}_A)\big)\Big]. \end{split}$$ This splitting was obtained using the definition [@Spallucci1] of the exponential differential operator, $e^{(n+1)\kappa\Delta}=1+[n+1]\kappa \Delta+\sum_{m=2}^{+\infty} \frac{[(n+1)\kappa]^m}{m!} \Delta^m$, and we assumed that the modulus of the dimensionless parameter is smaller than one, $|\sigma|<1$, so that we have, $\sum_{n=0}^{+\infty} \sigma^n=\frac{1}{1-\sigma}$ and $\sum_{n=0}^{+\infty} [n+1]\sigma^n=\frac{1}{(1-\sigma)^2}$.
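The two resummations used in this splitting are precisely the reason for the constraint $|\sigma|<1$; a plain numerical sketch (an arbitrary admissible value of $\sigma$) confirms them:

```python
sigma = 0.3   # any value with |sigma| < 1
N = 200       # partial-sum cutoff; the neglected tail is O(sigma^N)

s0 = sum(sigma**n for n in range(N))
s1 = sum((n + 1) * sigma**n for n in range(N))

assert abs(s0 - 1/(1 - sigma)) < 1e-12      # sum sigma^n       = 1/(1-sigma)
assert abs(s1 - 1/(1 - sigma)**2) < 1e-12   # sum (n+1) sigma^n = 1/(1-sigma)^2
```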
To work out the 1.5 pN terms originating from the exponential differential operator acting on the product of the effective energy-momentum tensor and the metric determinant, we have to rely on the generalized Leibniz product rule $\forall\ q(x), \ v(x) \in \mathcal{C}^\infty(\mathbb{R}),\quad \big(q(x)v(x)\big)^{(n)}=\sum_{k=0}^n\binom{n}{k} q^{(k)} v^{(n-k)}$, where $\binom{n}{k}=\frac{n!}{k!(n-k)!}$ are the binomial coefficients, $$\begin{split} \sigma e^{\kappa \Box} \Big[\mathcal{T}^{\alpha\beta} (-g)\Big] &\,=\, \sigma e^{-\kappa\partial^2_{0}} e^{\kappa \Delta} \Big[\mathcal{T}^{\alpha\beta} (-g)\Big]\\ &=\, \sigma \Big[\sum_{s=0}^{\infty} \frac{(-\kappa)^s}{s!} \Big(\partial^2_0\Big)^s\Big] \Big[\sum_{n=0}^{\infty} \frac{(\kappa)^n}{n!} \Big(\nabla^2\Big)^n\Big] \Big[\mathcal{T}^{\alpha\beta} (-g)\Big]\\ &=\, \sigma \sum_{s=0}^{\infty} \frac{(-\kappa)^s}{s!}\sum_{n=0}^{\infty} \frac{\kappa^n}{n!} \sum_{m=0}^{2n} \binom{2n}{m} \sum_{p=0}^{2s} \binom{2s}{p}\Big[\partial^{2s-p}_0\Big(\nabla^{2n-m} \mathcal{T}^{\alpha\beta}\Big)\Big] \Big[\partial_0^p\Big(\nabla^m (-g)\Big)\Big]\\ &=\,\sigma \sum_{s=0}^{\infty} \frac{(-\kappa)^s}{s!}\sum_{n=0}^{\infty} \frac{\kappa^n}{n!} \binom{2n}{0} \binom{2s}{0} \Big[\partial_0^{2s}\Big(\nabla^{2n}\mathcal{T}^{\alpha\beta}\Big)\Big](-g)+\\ &\quad \ \,\sigma \sum_{s=1}^{\infty} \frac{(-\kappa)^s}{s!}\sum_{n=0}^{\infty} \frac{\kappa^n}{n!} \sum_{p=1}^{2s} \binom{2n}{0} \binom{2s}{p} \Big[\partial_0^{2s-p}\Big(\nabla^{2n} \mathcal{T}^{\alpha\beta}\Big)\Big] \Big[\partial_0^p (-g)\Big]+\\ &\quad \ \,\sigma \sum_{s=0}^{\infty} \frac{(-\kappa)^s}{s!}\sum_{n=1}^{\infty} \frac{\kappa^n}{n!} \sum_{m=1}^{2n} \binom{2n}{m} \binom{2s}{0}\Big[\partial_0^{2s} \Big(\nabla^{2n-m} \mathcal{T}^{\alpha\beta}\Big) \Big] \Big[\nabla^m (-g)\Big]+\\ &\quad \ \,\sigma \sum_{s=1}^{\infty} \frac{(-\kappa)^s}{s!}\sum_{n=1}^{\infty} \frac{\kappa^n}{n!} \sum_{m=1}^{2n} \binom{2n}{m}\sum^{2s}_{p=1} \binom{2s}{p} \Big[\partial_0^{2s-p} \nabla^{2n-m} 
\mathcal{T}^{\alpha\beta}\Big] \Big[\partial_0^p\Big(\nabla^m(-g)\Big)\Big]\\ &=\,\sigma [1+h^{00}] \ e^{\kappa\Box}\mathcal{T}^{\alpha\beta}+ \sigma \sum_{n=1}^{\infty} \frac{\kappa^n}{n!} \sum_{m=1}^{2n} \binom{2n}{m} \Big[ \Big(\nabla^{2n-m} \mathcal{T}^{\alpha\beta}\Big) \Big] \Big[ \nabla^m h^{00}\Big]+\mathcal{O}(c^{-4}). \end{split}$$ We remind that $(-g)=1+h^{00}+\mathcal{O}(c^{-4})$, $\partial_0=\mathcal{O}(c^{-1})$ and $\binom{2n}{0}=\binom{2s}{0}=1$.
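The generalized Leibniz rule that drives this bookkeeping is easily verified symbolically; the sketch below (two arbitrary smooth test functions and $n=4$, our choices) checks the one-dimensional identity used above:

```python
from sympy import binomial, diff, exp, simplify, sin, symbols

x = symbols('x')
q, v = sin(x), exp(x**2)   # arbitrary smooth test functions
n = 4

# (q v)^{(n)} = sum_k C(n, k) q^{(k)} v^{(n-k)}
lhs = diff(q * v, x, n)
rhs = sum(binomial(n, k) * diff(q, x, k) * diff(v, x, n - k) for k in range(n + 1))
assert simplify(lhs - rhs) == 0
```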
In order to compute $D^{\alpha\beta}= \sum_{n=1}^{\infty} \frac{\kappa^n}{n!} \sum_{m=1}^{2n} \binom{2n}{m} \big[ \big(\nabla^{2n-m} \mathcal{T}^{\alpha\beta}\big) \big]\big[\nabla^m h^{00}\big]$ to the required order of accuracy we need to work out the effective energy-momentum tensor to lowest post-Newtonian order $\mathcal{T}^{\alpha\beta}= G(\Box) \ \mathcal{H}(w,\Box) \ T^{\alpha\beta}$, where $G(\Box)=\sum_{s=0}^{+\infty} \sigma^s e^{s\kappa\Box}\,=\,\sum_{s=0}^{+\infty}\sigma^s\sum_{p=0}^{+\infty}\frac{(s\kappa\Delta)^p}{p!}+\mathcal{O}(c^{-2})$ and $\mathcal{H}(\omega,\Box)=1+\sum_{s=0}^{+\infty}\sigma^{s+1}\sum_{p=0}^{+\infty}\frac{\big((s+1)\kappa\Box\big)^p}{p!}\sum_{n=1}^{+\infty}\frac{\kappa^n}{n!} \omega^n+\cdots=1+\mathcal{O}(c^{-2})$. We remind that we assumed $|\sigma|<1$ and we have $\Box=\Delta+\mathcal{O}(c^{-2})$ and the potential operator function $\omega(h,\partial)=\mathcal{O}(c^{-2})$. We would like to conclude this appendix section by recalling the energy-momentum tensor of a system composed of $N$ particles with negligible pressure, $T^{\alpha\beta}=\rho \ u^\alpha u^\beta$, where $\rho=\frac{\rho^*}{\sqrt{-g} \gamma_A}$, $\rho^*=\sum_{A=1}^Nm_A\delta\big(\textbf{x}-\textbf{r}_A(t)\big)$, $\gamma_A^{-1}=\sqrt{-g_{\mu\nu}\frac{v^\mu_Av^\nu_A}{c^2}}=1-\frac{1}{2}\frac{\textbf{v}_A^2}{c^2}-\frac{1}{4}h^{00}+\mathcal{O}(c^{-4})$, $u^\alpha=\gamma_A (c,\textbf{v}_A)$ is the four-velocity [@Poisson; @Will2; @PatiWill1; @PatiWill2] and $\textbf{r}_A(t)$ is the individual trajectory of the particle with mass $m_A$. The effective Landau-Lifshitz pseudotensor: ------------------------------------------- From the series expansion of the exponential differential operator [@Spallucci1] we obtain $G^{-1}(\Box)\tau^{00}_{LL}=\big[1-\sigma \sum_{n=0}^{+\infty}\frac{(\kappa\Box)^n}{n!}\big]\tau^{00}_{LL}$.
The effective total mass: ========================= Matter contribution $M_m$: -------------------------- We provide additional computational steps in order to show how the results presented in the main text have been derived. In this appendix we will focus mainly on technical issues. We therefore refer the reader to the explanations given in the main text for the notations and conceptual points. The leading-order matter contribution is, $$\begin{split} M_{\mathcal{B}_1+\mathcal{B}_1h^{00}}\,=&\,c^{-2}\int_{\mathcal{M}} d \textbf{x} \ \Big[\mathcal{B}_1^{00}+\mathcal{B}_1^{00}h^{00}\Big]\\ =&\,c^{-2}\int_{\mathcal{M}} d \textbf{x} \ \sum_A m_A c^2 \Big[1+\frac{v_A^2}{2c^2}+3(1+\sigma)\frac{G}{c^2}\sum_{B\neq A}\frac{m_B}{|\textbf{x}-\textbf{r}_B|}\Big]\delta(\textbf{x}-\textbf{r}_A)+\mathcal{O}(c^{-4})\\ =&\,M_m^{GR}+3\sigma\frac{G}{c^2} \sum_A\sum_{B \neq A} \frac{m_A m_B}{ r_{AB}}+\mathcal{O}(c^{-4}), \end{split}$$ where we used the standard regularization prescription $\frac{\delta(\textbf{x}-\textbf{r}_A)}{|\textbf{x}-\textbf{r}_A|}\equiv 0$ for point masses [@Poisson; @Blanchet1; @Blanchet2]. We would like to illustrate why $M_{\mathcal{B}_{2}}=0$. In order to do so we isolate the lowest derivative term in $\mathcal{B}_{2}$ presented in chapter four and its related appendix-section, $M_{\mathcal{B}_{2}}=-\frac{\epsilon}{2} \frac{1}{1-\sigma} \sum_A m_A \int_{\mathcal{M}}d\textbf{x} \ h^{00} \Big[\Delta \delta(\textbf{x}-\textbf{r}_A)\Big]+\cdots$.
In what follows we will show that this term cannot contribute to the total mass, $$\begin{split} \sum_A m_A \int_{\mathcal{M}}d\textbf{x} \ h^{00} \Big[\Delta \delta(\textbf{x}-\textbf{r}_A)\Big]\,=&\,\sum_A m_AS_1-\sum_A m_AS_2-4\pi \frac{4G}{c^2} \sum_A \sum_{B\neq A} m_A\tilde{m}_B \int_{\mathcal{M}}d\textbf{x} \ \delta(\textbf{x}-\textbf{r}_B) \ \delta(\textbf{x}-\textbf{r}_A)\\ =&\,-4\pi\frac{4G}{c^2} \sum_A \sum_{B\neq A} m_A\tilde{m}_B \ \delta(\textbf{r}_B-\textbf{r}_A)\,=\,0, \end{split}$$ where $S_1=\oint_{\partial\mathcal{M}}dS^ph^{00}[\partial_p\delta(\textbf{x}-\textbf{r}_A)]$ and $S_2=\oint_{\partial\mathcal{M}}dS^p[\partial_ph^{00}]\delta(\textbf{x}-\textbf{r}_A)$ are the surface integrals originating from partial integration. We remind that the time-time component of the gravitational potential for an N-body system is $h^{00}=\frac{4}{c^2} V=(1+\sigma)\frac{4G}{c^2}\sum_{B} \frac{m_B}{|\textbf{x}-\textbf{r}_B|}=\frac{4G}{c^2}\sum_{B} \frac{\tilde{m}_B}{|\textbf{x}-\textbf{r}_B|}$ and we used the well-known identity, $\Delta \frac{1}{|\textbf{x}-\textbf{r}_B|}=-4\pi \delta(\textbf{x}-\textbf{r}_B)$. Surface terms of the form, $\oint_{\partial \mathcal{M}} dS^p \ \partial_p h^{00}\delta(\mathbf{x}-\mathbf{r}_A) \propto \delta(\mathcal{R}-|\mathbf{r}_A|)=0$, coming from partial integration, vanish in the near zone defined by $\mathcal{M}: \ |\textbf{x}|<\mathcal{R}$ [@Poisson]. For the higher-order derivative terms in $\mathcal{B}_2$ the situation is very similar in the sense that we will always encounter, after (multiple) partial integration, terms of the form, $\sum_A\sum_{B\neq A} m_Am_B \ \int_{\mathcal{M}} d\textbf{x} \ \delta(\textbf{x}-\textbf{r}_A) \ \nabla^m \delta(\textbf{x}-\textbf{r}_B)=0, \quad \forall m\in \mathbb{N}$.
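The identity $\Delta\frac{1}{|\textbf{x}-\textbf{r}_B|}=-4\pi\delta(\textbf{x}-\textbf{r}_B)$ used here can be cross-checked numerically in its integrated (Gauss) form: the flux of $\nabla\frac{1}{|\textbf{x}-\textbf{r}_B|}$ through a sphere is $-4\pi$ if the source lies inside and $0$ if it lies outside. The sketch below (source positions and grid resolution are arbitrary choices of ours) performs the surface quadrature:

```python
import numpy as np

def flux_through_sphere(rB, R, nth=600, nph=600):
    """Midpoint quadrature of grad(1/|x - rB|) . n over the sphere |x| = R."""
    th = (np.arange(nth) + 0.5) * np.pi / nth
    ph = (np.arange(nph) + 0.5) * 2.0*np.pi / nph
    T, P = np.meshgrid(th, ph, indexing='ij')
    n = np.stack([np.sin(T)*np.cos(P), np.sin(T)*np.sin(P), np.cos(T)], axis=-1)
    x = R * n
    d = x - rB
    r = np.linalg.norm(d, axis=-1)
    grad = -d / r[..., None]**3                      # grad of 1/|x - rB|
    integrand = np.einsum('ijk,ijk->ij', grad, n)
    dA = R**2 * np.sin(T) * (np.pi/nth) * (2.0*np.pi/nph)
    return float(np.sum(integrand * dA))

# Source enclosed by the sphere: the delta contributes, flux = -4*pi
assert abs(flux_through_sphere(np.array([0.3, 0.1, -0.2]), R=1.0) + 4*np.pi) < 1e-3
# Source outside: 1/|x - rB| is harmonic there, flux = 0
assert abs(flux_through_sphere(np.array([3.0, 0.0, 0.0]), R=1.0)) < 1e-3
```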
Very similar arguments show that we have, at this order of accuracy, for the contribution $M_D=c^{-2}\int_\mathcal{M}d\textbf{x}\ D^{00}=\sum_Am_A\mathcal{S}(\sigma,\kappa)\int_{\mathcal{M}}d\textbf{x}\ [\nabla^{2p+2n-m}\delta(\textbf{x}-\textbf{r}_A)][\nabla^mh^{00}]=0$, where we remind that $\mathcal{S}(\sigma,\kappa)=\sum_{n=1}^\infty\frac{\kappa^n}{n!}\sum_{m=1}^{2n}\dbinom{2n}{m}\sum_{s=0}^{+\infty}\sigma^s\sum_{p=0}^{+\infty}\frac{(s\kappa)^p}{p!}$ is the four-sum introduced in the previous chapter. Field contribution $M_{LL}$: ---------------------------- Here we will have a closer look at the important term, $$\begin{split} c^{-2}\int_{\mathcal{M}} d\textbf{x} \ \tau^{00}_{LL}\,=\,-\frac{7}{8c^{2}\pi G}\int_{\mathcal{M}} d\textbf{x} \ \partial_pV\partial^pV\,=&\,-\frac{7}{8c^2\pi G}\int_{\mathcal{M}} d\textbf{x} \ \Big[\partial_p(V\partial^pV)-V\nabla^2V\Big]\\ =&\,-\frac{7}{8c^2\pi G}\int_{\mathcal{M}} d\textbf{x} \ \Big[\partial_p(V\partial^pV)+4\pi G \sum_A\tilde{m}_A\delta(\textbf{x}-\textbf{r}_A)V\Big]\\ =&\,f_a(\mathcal{R})-\frac{7G}{2c^2} \sum_A\sum_{B\neq A} \frac{ \tilde{m}_A \tilde{m}_B}{|\textbf{r}_A-\textbf{r}_B|}, \end{split}$$ where we remind that $V=(1+\sigma)\ U$ is the effective Newtonian potential and $\tilde{m}_A=(1+\sigma)\ m_A$ is the effective mass of body $A$.
By virtue of the Gauss-Ostrogradsky theorem the first term gives rise to an $\mathcal{R}$-dependent contribution which will eventually cancel out against the corresponding wave zone term [@Poisson; @Will2; @PatiWill1], $$\begin{split} -\frac{8c^{2}\pi G}{7}f_a(\mathcal{R})\,=\,\int_\mathcal{M}d\textbf{x}\ \partial_p(V\partial^pV)\,=\,\oint_{\partial\mathcal{M}}dS_p(V\partial^pV)\,=&\,\sum_{A,B}\tilde{m}_A\tilde{m}_B\oint_{\partial\mathcal{M}}dS^p\frac{G^2}{|\textbf{x}-\textbf{r}_A|}\frac{(\textbf{r}_B-\textbf{x})_p}{|\textbf{x}-\textbf{r}_B|^3}\\ =&\,-\sum_{A,B}\tilde{m}_A\tilde{m}_B\frac{G^2}{\mathcal{R}}\int N^pN_p\ d\Omega+\mathcal{O}(r_{AB}/\mathcal{R})\\ =&\,-4\pi\sum_{A,B}\tilde{m}_A\tilde{m}_B\frac{G^2}{\mathcal{R}}+\mathcal{O}(r_{AB}/\mathcal{R}). \end{split}$$ According to [@Poisson] this integral can be evaluated by using the substitution $\textbf{y}=\textbf{x}-\textbf{r}_B$, so that $\textbf{x}-\textbf{r}_A=\textbf{y}-\textbf{r}_{AB}$, where $\textbf{r}_{AB}=\textbf{r}_A-\textbf{r}_B$ is the relative separation between bodies $A$ and $B$, $\textbf{N}=\textbf{y}/y$ is the outward unit normal on the boundary defined by $y=\mathcal{R}$ and $dS^p=\mathcal{R}^2 N^p d\Omega$; on the boundary one has $(\textbf{r}_B-\textbf{x})_p=-\mathcal{R}N_p+\mathcal{O}(r_{AB})$, which produces the overall minus sign. We used the fact that the relative distance between the bodies is much smaller than the scale of the near zone domain, $\frac{1}{|\textbf{y}-\textbf{r}_{AB}|}\big|_{|\textbf{y}|=\mathcal{R}}=\frac{1}{\mathcal{R}}+\mathcal{O}(r_{AB}/\mathcal{R})$, where $r_{AB}=|\textbf{r}_{AB}|$, as well as $\int N^pN_p\ d\Omega=4\pi$, where $d\Omega=\sin\theta\, d\theta\, d\phi$ is an element of solid angle in the direction specified by $\theta$ and $\phi$ [@Poisson].
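The large-$\mathcal{R}$ behaviour of this boundary term can be cross-checked numerically. The sketch below (toy masses, positions and units with $G=1$, all our choices) integrates $\oint dS_p\,(V\partial^pV)$ over spheres of increasing radius and compares its magnitude with $4\pi G^2\big(\sum_{A,B}\tilde m_A\tilde m_B\big)/\mathcal{R}$:

```python
import numpy as np

G = 1.0
m = np.array([1.0, 2.0])                               # effective masses
pos = np.array([[0.3, 0.0, 0.0], [-0.2, 0.1, 0.0]])    # deep inside the near zone

def V(x):
    return sum(G*mi / np.linalg.norm(x - ri, axis=-1) for mi, ri in zip(m, pos))

def gradV(x):
    return sum(-G*mi * (x - ri) / np.linalg.norm(x - ri, axis=-1)[..., None]**3
               for mi, ri in zip(m, pos))

def boundary_term(R, nth=200, nph=200):
    """Midpoint quadrature of V dV/dn over the sphere |x| = R."""
    th = (np.arange(nth) + 0.5) * np.pi / nth
    ph = (np.arange(nph) + 0.5) * 2.0*np.pi / nph
    T, P = np.meshgrid(th, ph, indexing='ij')
    n = np.stack([np.sin(T)*np.cos(P), np.sin(T)*np.sin(P), np.cos(T)], axis=-1)
    x = R * n
    integrand = V(x) * np.einsum('ijk,ijk->ij', gradV(x), n)
    dA = R**2 * np.sin(T) * (np.pi/nth) * (2.0*np.pi/nph)
    return float(np.sum(integrand * dA))

for R in (100.0, 200.0):
    expected = 4*np.pi * G**2 * m.sum()**2 / R   # 4 pi G^2 (sum_AB m_A m_B) / R
    assert abs(abs(boundary_term(R)) - expected) / expected < 1e-2
```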
The higher-order derivative contributions are, $$\begin{split} \frac{\epsilon}{c^2}\int_{\mathcal{M}} d\textbf{x} \ \Delta \tau_{LL}^{00}[c^{-3}]\,=&\,f_b(\mathcal{R})-\epsilon\frac{7}{2}\frac{G}{c^2} \sum_A\sum_{B\neq A} \tilde{m}_A \tilde{m}_B \ \int_{\mathcal{M}} d\textbf{x} \ \Delta \bigg[ \frac{\delta(\textbf{x}-\textbf{r}_A)}{|\textbf{x}-\textbf{r}_B|}\bigg]\,=\,f_b(\mathcal{R})\\ \frac{\sigma}{c^2}\int_{\mathcal{M}} d\textbf{x} \ \sum_{m=2}^{+\infty} \frac{\kappa^m}{m!} \Delta^m \tau^{00}_{LL}[c^{-3}]\,=&\, f_c(\mathcal{R})-\sigma\frac{7}{2} \frac{G}{c^2} \sum_A\sum_{B\neq A} \tilde{m}_A \tilde{m}_B \ \sum_{m=2}^{+\infty} \frac{\kappa^m}{m!}\int_{\mathcal{M}} d\textbf{x} \ \Delta^m \bigg[\frac{\delta(\textbf{x}-\textbf{r}_A)}{|\textbf{x}-\textbf{r}_B|}\bigg]\,=\,f_c(\mathcal{R}). \end{split}$$ The last two results have been derived by using the relation, $\partial_pV\partial^pV=\partial_p(V\partial^pV)+4\pi G\sum_A\tilde{m}_A\delta(\textbf{x}-\textbf{r}_A)V$ ($V=(1+\sigma) \ U$). The second term in the first of the two integrals above, $$\begin{split} \sum_A \tilde{m}_A \int_\mathcal{M}d\textbf{x}\ \Delta [\delta(\textbf{x}-\textbf{r}_A)h^{00}]\,=\,\sum_A \tilde{m}_AS_1+\sum_A \tilde{m}_AS_2+(2-2)4\pi\frac{4G}{c^2} \sum_A \sum_{B\neq A} \tilde{m}_A\tilde{m}_B \ \delta(\textbf{r}_B-\textbf{r}_A)\,=\,0, \end{split}$$ vanishes after double partial integration. The surface integrals $S_1$ and $S_2$ were defined in the context of the computation of $M_{\mathcal{B}_2}$. Multiple partial integration was used and surface terms, being proportional to $\delta(\mathcal{R}-|\textbf{r}_A|)$, were discarded as they do not contribute in the near zone $\mathcal{M}: \ |\textbf{x}|<\mathcal{R}$. Discarding all the remaining $\mathcal{R}$-dependent terms, $f_i(\mathcal{R})$ with $i\in \{a,b,c\}$, for the same reasons that have been mentioned above, we finally obtain the results given in the main chapter [@Poisson; @Will2; @PatiWill1]. [1]{} A. Einstein, Sitzungsber. K. Preuss.
Akad. Wiss. (Berlin), 844 (1915). A. Einstein, Sitzungsber. K. Preuss. Akad. Wiss. 1, 688 (1916). A. Einstein, Sitzungsber. K. Preuss. Akad. Wiss. 1, 154 (1918). R. A. Hulse, J. H. Taylor, [*Discovery of a pulsar in a binary-system*]{}, [Astrophys. J. 195:L51-L53 (1975)](http://adsabs.harvard.edu/abs/1975ApJ...195L..51H). M. Burgay et al., [*An increased estimate of the merger rate of double neutron stars from observations of a highly relativistic system*]{}, [Nature 426 (2003) 531-533](http://www.nature.com/nature/journal/v426/n6966/full/nature02124.html), [arXiv:astro-ph/0312071](http://arxiv.org/abs/astro-ph/0312071). I. H. Stairs, [*Testing General Relativity with Pulsar Timing*]{}, [LivingRev.Rel.6:5, 2003](http://relativity.livingreviews.org/Articles/lrr-2003-5/), [arXiv:astro-ph/0307536](http://arxiv.org/abs/astro-ph/0307536). I. H. Stairs, S. E. Thorsett, J. H. Taylor, A. Wolszczan, [*Studies of the Relativistic Binary Pulsar PSR B1534+12: I. Timing Analysis*]{}, [Astrophys.J. 581 (2002) 501-508](http://iopscience.iop.org/article/10.1086/344157/meta), [arXiv:astro-ph/0208357](http://arxiv.org/abs/astro-ph/0208357). J. H. Taylor, J. M. Weisberg, [*A new test of general relativity - Gravitational radiation and the binary pulsar PSR 1913+16*]{}, [Astrophys. J. 253, 908 (1982)](http://adsabs.harvard.edu/abs/1982ApJ...253..908T). B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), [*Observation of Gravitational Waves from a Binary Black Hole Merger*]{}, [Phys. Rev. Lett. 116, 061102 (2016)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.061102), [arXiv:1602.03837 \[gr-qc\]](http://arxiv.org/abs/1602.03837). B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), [*GW150914: The Advanced LIGO Detectors in the Era of First Discoveries*]{}, [Phys. Rev. Lett. 
116, 131103 (2016)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.131103), [ arXiv:1602.03838 \[gr-qc\]](http://arxiv.org/abs/1602.03838). B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), [*GW150914: First results from the search for binary black hole coalescence with Advanced LIGO*]{}, [arXiv:1602.03839 \[gr-qc\]](http://arxiv.org/abs/1602.03839). N. Arkani-Hamed, S. Dimopoulos, Gia Dvali, G. Gabadadze, [*Non-Local Modification of Gravity and the Cosmological Constant Problem*]{}, [arXiv:hep-th/0209227v1](http://arxiv.org/abs/hep-th/0209227v1). G. Dvali, S. Hofmann, J. Khoury, [*Degravitation of the cosmological constant and graviton width*]{}, Phys. Rev. D 76, 084006 (2007), [arXiv:hep-th/0703027](http://arxiv.org/abs/hep-th/0703027). A. O. Barvinsky, [*Nonlocal action for long-distance modifications of gravity theory*]{}, Phys. Lett. B 572 (2003) 109-116, [arXiv:hep-th/0304229](http://arxiv.org/abs/hep-th/0304229). A. O. Barvinsky, [*Covariant long-distance modifications of Einstein theory and strong coupling problem*]{}, Phys. Rev. D 71, 084007 (2005), [arXiv:hep-th/0501093](http://arxiv.org/abs/hep-th/0501093). C. M. Will, Alan G. Wiseman, [*Gravitational radiation from compact binary systems: Gravitational waveforms and energy loss to second post-Newtonian order*]{}, [Phys. Rev. D 54, 4813 (1996)](http://dx.doi.org/10.1103/PhysRevD.54.4813), [arXiv:gr-qc/9608012 ](http://arxiv.org/abs/gr-qc/9608012). M. E. Pati, C. M. Will, [*PostNewtonian gravitational radiation and equations of motion via direct integration of the relaxed Einstein equations. 1. Foundations*]{}, [Phys. Rev. D 62, 124015 (2000)](http://dx.doi.org/10.1103/PhysRevD.62.124015), [ arXiv:gr-qc/0007087 ](http://arxiv.org/abs/gr-qc/0007087). M. E. Pati, C. M. Will, [*PostNewtonian gravitational radiation and equations of motion via direct integration of the relaxed Einstein equations. 2. 
Two-body equations of motion to second postNewtonian order, and radiation reaction to 3.5 postNewtonian order*]{}, [Phys. Rev. D 65, 104008 (2002)](http://dx.doi.org/10.1103/PhysRevD.65.104008), [arXiv:gr-qc/0201001](http://arxiv.org/abs/gr-qc/0201001). L. D. Landau, E. M. Lifshitz, The Classical Theory of Fields (Volume 2 of A Course of Theoretical Physics), Pergamon Press (1971). C. W. Misner, K. S. Thorne, J. A. Wheeler, Gravitation, W. H. Freeman (1973). E. Poisson & C. M. Will, [*Gravity (Newtonian, Post-Newtonian, Relativistic)*]{}, Cambridge University Press (2014). L. Blanchet, [*Gravitational Radiation from Post Newtonian Sources and Inspiralling Compact Binaries*]{}, Living Reviews in Relativity (2014). M. Maggiore, [*Gravitational Waves: Theory and Experiments*]{}, Oxford University Press (2014). A. Buonanno, [*Gravitational Waves*]{}, [arXiv:0709.4682 \[gr-qc\]](http://arxiv.org/abs/0709.4682). M. Jaccard, M. Maggiore, E. Mitsou, [*Nonlocal theory of massive gravity*]{}, [Phys. Rev. D 88, 044033 (2013)](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.88.044033), [arXiv:1305.3034 \[hep-th\]](https://arxiv.org/abs/1305.3034). A. O. Barvinsky, G. A. Vilkovisky, [*Beyond the Schwinger-DeWitt Technique: Converting Loops Into Trees and In-In Currents*]{}, [Nucl.Phys. B282 (1987) 163-188](http://www.sciencedirect.com/science/article/pii/055032138790681X). A. O. Barvinsky, G. A. Vilkovisky, [*Covariant perturbation theory. 2: Second order in the curvature. General algorithms*]{}, [Nucl.Phys. B333 (1990) 471-511](http://www.sciencedirect.com/science/article/pii/055032139090047H). A. O. Barvinsky, Yu. V. Gusev, G. A. Vilkovisky, V. V. Zhytnikov, [*The basis of nonlocal curvature invariants in quantum gravity theory*]{}, [J.Math.Phys.35:3525-3542 (1994)](http://scitation.aip.org/content/aip/journal/jmp/35/7/10.1063/1.530427), [arXiv:gr-qc/9404061](http://arxiv.org/abs/gr-qc/9404061). J. V.
Narlikar, [*Cosmologies with Variable Gravitational Constant*]{}, [Found. of Phys. Vol. 13, No. 3 (1983)](http://link.springer.com/article/10.1007/BF01906180). M. K. Parikh, S. N. Solodukhin, [*de Sitter Brane Gravity: from Close-Up to Panorama*]{}, [Phys.Lett. B503 (2001) 384-393](http://www.sciencedirect.com/science/article/pii/S0370269301002337), [arXiv:hep-th/0012231](http://arxiv.org/abs/hep-th/0012231). G. Dvali, G. Gabadadze, M. Shifman, [*Diluting Cosmological Constant via Large Distance Modification of Gravity*]{}, [arXiv:hep-th/0208096](http://arxiv.org/abs/hep-th/0208096). C. Deffayet, G. Dvali, G. Gabadadze, A. Lue, [*Braneworld Flattening by a Cosmological Constant*]{}, [Phys.Rev.D64:104002,2001](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.64.104002), [arXiv:hep-th/0104201](http://arxiv.org/abs/hep-th/0104201). G. Dvali, G. Gabadadze, M. Porrati, [*4D Gravity on a Brane in 5D Minkowski Space*]{}, [Phys.Lett.B485:208-214,2000](http://www.sciencedirect.com/science/article/pii/S0370269300006699), [arXiv:hep-th/0005016](http://arxiv.org/abs/hep-th/0005016). T. Damour, [*String theory, cosmology and varying constants*]{}, [Astrophys.Space Sci.283:445-456,2003](http://link.springer.com/article/10.1023%2FA%3A1022596316014), [arXiv:gr-qc/0210059](http://arxiv.org/abs/gr-qc/0210059). E. Garcia-Berro, Yu. A. Kubyshin, P. Loren-Aguilar, J. Isern, [*Variation of the gravitational constant inferred from the SNe data*]{}, [Int.J.Mod.Phys.D15:1163-1174, 2006](http://www.worldscientific.com/doi/abs/10.1142/S0218271806008772), [arXiv:gr-qc/0512164](http://arxiv.org/abs/gr-qc/0512164). B. Ratra and P. J. E. Peebles, [*Cosmological consequences of a rolling homogeneous scalar field*]{}, [Phys. Rev. D 37, 3406 (1988)](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.37.3406). R. R. Caldwell, R. Dave, P. J. Steinhardt, [*Cosmological Imprint of an Energy Component with General Equation of State*]{}, [Phys. Rev. Lett.
80, 1582 (1998)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.80.1582). J.-P. Uzan, [*Varying constants, Gravitation and Cosmology*]{}, [Living Rev. Relativity 14 (2011), 2](http://relativity.livingreviews.org/Articles/lrr-2011-2/), [arXiv:1009.5514 \[astro-ph.CO\]](http://arxiv.org/abs/1009.5514). J.-P. Uzan, [*Fundamental constants and tests of general relativity - Theoretical and cosmological considerations*]{}, [Sp. Sc. Rev. Vol. 148, Is. 1 (2009)](http://link.springer.com/article/10.1007%2Fs11214-009-9503-z), [arXiv:0907.3081 \[gr-qc\]](http://arxiv.org/abs/0907.3081). A. Lykkas, L. Perivolaropoulos, [*Scalar-Tensor Quintessence with a linear potential: Avoiding the Big Crunch cosmic doomsday*]{}, [Phys. Rev. D 93, 043513 (2016)](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.93.043513), [arXiv:1511.08732 \[gr-qc\]](http://arxiv.org/abs/1511.08732). C. M. Will, [*The confrontation between General Relativity and Experiment*]{}, [Living Rev. Relativity, 17, (2014), 4](http://relativity.livingreviews.org/Articles/lrr-2014-4/), [arXiv:1403.7377 \[gr-qc\]](http://arxiv.org/abs/1403.7377). C. Deffayet, G. Esposito-Farèse, R. P. Woodard, [*Field equations and cosmology for a class of nonlocal metric models of MOND*]{}, Phys. Rev. D 90, 089901 (2014), [arXiv:1405.0393v1](http://arxiv.org/abs/1405.0393v1). T. Clifton, P. G. Ferreira, A. Padilla, C. Skordis, [*Modified gravity and cosmology*]{}, Physics Reports 513 (2012) 1-189, [arXiv:1106.2476](http://arxiv.org/abs/1106.2476). A. De Felice, S. Tsujikawa, [*f($R$) Theories*]{}, [Living Rev. Relativity, 13, (2010), 3](http://relativity.livingreviews.org/Articles/lrr-2010-3/), [arXiv:1002.4928 \[gr-qc\]](http://arxiv.org/abs/1002.4928). R. P. Woodard, [*Nonlocal Models of Cosmic Acceleration*]{}, Found Phys (2014) 44:213-233, [arXiv:1401.0254](http://arxiv.org/abs/1401.0254). E. Berti, A. Buonanno, C. M.
Will, [*Testing general relativity and probing the merger history of massive black holes with LISA*]{}, [Class.Quant.Grav. 22 (2005) S943-S954](http://iopscience.iop.org/article/10.1088/0264-9381/22/18/S08/meta), [arXiv:gr-qc/0504017](http://arxiv.org/abs/gr-qc/0504017). L. Modesto, [*Super-renormalizable quantum gravity*]{}, [Phys. Rev. D 86, 044005 (2012)](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.86.044005), [arXiv:1107.2403 \[hep-th\]](https://arxiv.org/abs/1107.2403). L. Modesto, J. W. Moffat, P. Nicolini, [*Black holes in an ultraviolet complete quantum gravity*]{}, [Phys.Lett.B695:397-400 (2011)](http://www.sciencedirect.com/science/article/pii/S0370269310013213), [arXiv:1010.0680 \[gr-qc\]](https://arxiv.org/abs/1010.0680). M. Sakellariadou, [*Gravitational Waves in the spectral action of noncommutative geometry*]{}, [Phys.Rev.D82:085021 (2010)](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.82.085021), [arXiv:1005.4276 \[hep-th\]](https://arxiv.org/abs/1005.4276). P. A. M. Dirac, [*The Cosmological Constants*]{}, [Nature 139, 323 (20 February 1937)](http://www.nature.com/nature/journal/v139/n3512/abs/139323a0.html). E. A. Milne, [*Kinematics, Dynamics, and the Scale of Time*]{}, [Proc. Roy. Soc., A, 158, 324 (1937)](http://www.jstor.org/stable/96821?seq=1#page_scan_tab_contents). P. A. M. Dirac, [*A new Basis for Cosmology*]{}, [Proc. R. Soc. Lond. A 1938 165 199-208](http://rspa.royalsocietypublishing.org/search/pubyear%3A1938%20volume%3A165%20firstpage%3A199%20jcode%3Aroyprsa%20numresults%3A10%20sort%3Arelevance-rank%20format_result%3Astandard). P. Jordan, [*Formation of the Stars and Development of the Universe*]{}, [Nature, 164, 637 (1949)](http://www.nature.com/nature/journal/v164/n4172/pdf/164637a0.pdf). P. Jordan, [*Die physikalischen Weltkonstanten*]{}, [Die Naturwissenschaften, 25, 513–517 (1937)](http://link.springer.com/article/10.1007%2FBF01498368). C. Brans, R. H.
Dicke, [*Mach’s Principle and a Relativistic Theory of Gravitation*]{}, [Phys. Rev. 124, 925 (1961)](http://journals.aps.org/pr/abstract/10.1103/PhysRev.124.925). S. Weinberg, [*The cosmological constant problem*]{}, Reviews of Modern Physics, Vol. 61, 1, January 1989. S. Weinberg, [*Gravitation and Cosmology*]{}, John Wiley & Sons, Inc. (1972). A. Einstein, [*Zum kosmologischen Problem der allgemeinen Relativitätstheorie*]{}, Sitzungsber. Koeniglicher Preuss. Akad. Wiss., phys.-math. Klasse XII, 3 (1931). S. M. Carroll, [*The Cosmological Constant*]{}, [LivingRev.Rel.4:1,2001](http://relativity.livingreviews.org/Articles/lrr-2001-1/), [arXiv:astro-ph/0004075](http://arxiv.org/abs/astro-ph/0004075). Planck Collaboration, [*Planck 2015 results. XI. CMB power spectra, likelihoods, and robustness of parameters*]{}, [arXiv:1507.02704](http://arxiv.org/abs/1507.02704). A. G. Riess et al., [*Observational evidence from supernovae for an accelerating universe and a cosmological constant*]{}, Astron. J., 116, 1009-1038 (1998), [arXiv:astro-ph/9805201](http://arxiv.org/abs/astro-ph/9805201). S. Perlmutter et al., [*Measurement of $\Omega$ and $\Lambda$ from 42 high-redshift supernovae*]{}, Astrophys. J., 517, 565-586 (1999), [arXiv:astro-ph/9812133](http://arxiv.org/abs/astro-ph/9812133). J. Chiaverini, S. J. Smullin, A. A. Geraci, D. M. Weld, and A. Kapitulnik, [*New Experimental Constraints on Non-Newtonian Forces below 100$\mu$m*]{}, [Phys.Rev.Lett.90.151101](http://dx.doi.org/10.1103/PhysRevLett.90.151101), [arXiv:hep-ph/0209325](http://arxiv.org/pdf/hep-ph/0209325.pdf). D. J. Kapner, T. S. Cook, E. G. Adelberger, J. H. Gundlach, B. R. Heckel, C. D. Hoyle, and H. E. Swanson, [*Tests of the Gravitational Inverse-Square Law below the Dark-Energy Length Scale*]{}, [Phys. Rev. Lett. 98, 021101](http://dx.doi.org/10.1103/PhysRevLett.98.021101), [arXiv:hep-ph/0611184](http://arxiv.org/abs/hep-ph/0611184). K. Liu, R. P. Eatough, N. Wex, M.
Kramer, [*Pulsar-black hole binaries: prospects for new gravity tests with future radio telescopes*]{}, [Mon.Not.Roy.Astron.Soc. 445 (2014) 3, 3115-3132](http://mnras.oxfordjournals.org/content/445/3/3115), [arXiv:1409.3882 \[astro-ph.GA\]](http://arxiv.org/abs/1409.3882). L. Blanchet, G. Faye, [*Hadamard regularization*]{}, [J. of Math. Phys. 41, 7675 (2000)](http://dx.doi.org/10.1063/1.1308506), [arXiv:gr-qc/0004008](http://arxiv.org/abs/gr-qc/0004008). O. Poujade, L. Blanchet, [*Post-Newtonian approximation for isolated systems calculated by matched asymptotic expansion*]{}, [Phys.Rev. D65 (2002) 124020](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.65.124020), [arXiv:gr-qc/0112057](http://arxiv.org/abs/gr-qc/0112057). L. Blanchet, [*Post-Newtonian theory and the two body problem*]{}, [Fundam.Theor.Phys. 162 (2011) 125-166](http://link.springer.com/chapter/10.1007%2F978-90-481-3015-3_5), [arXiv:0907.3596 \[gr-qc\]](http://arxiv.org/abs/0907.3596). E. Spallucci, A. Smailagic, P. Nicolini, [*Trace anomaly on a quantum spacetime manifold*]{}, [Phys.Rev.D73:084004 (2006)](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.73.084004), [arXiv:hep-th/0604094](http://arxiv.org/abs/hep-th/0604094). L. Blanchet, T. Damour, [*Multipolar radiation reaction in general relativity*]{}, [Phys.Lett. A104 (1984) 82-86](http://www.sciencedirect.com/science/article/pii/0375960184909678). L. Blanchet, T. Damour, B. R. Iyer, [*Gravitational radiation damping of compact binary systems to second postNewtonian order*]{}, [Phys.Rev.Lett. 74 (1995) 3515-3518](http://dx.doi.org/10.1103/PhysRevLett.74.3515), [arXiv:gr-qc/9501027](http://arxiv.org/abs/gr-qc/9501027). L. Blanchet, [*Time-asymmetric structure of gravitational radiation*]{}, [Phys.Rev. D47 (1993) 4392-4420](http://dx.doi.org/10.1103/PhysRevD.47.4392). T. Damour, P. Jaranowski, G. Schäfer, [*Non-local-in-time action for the fourth post-Newtonian conservative dynamics of two-body systems*]{}, [Phys.Rev.
D91 (2015) 8, 084024](http://dx.doi.org/10.1103/PhysRevD.91.084024), [arXiv:1502.07245 \[gr-qc\]](http://arxiv.org/abs/1502.07245). L. Blanchet, T. Damour, B. Iyer, [*Gravitational waves from inspiralling compact binaries: Energy loss and waveform to second-post-Newtonian order*]{}, [Phys.Rev. D51 (1995) 5360](http://dx.doi.org/10.1103/PhysRevD.51.5360), [arXiv:gr-qc/9501029](http://arxiv.org/abs/gr-qc/9501029). L. Blanchet, T. Damour, G. Esposito-Farèse, B. R. Iyer, [*Gravitational radiation from inspiralling compact binaries completed at the third post-Newtonian order*]{}, [Phys.Rev.Lett. 93 (2004) 091101](http://dx.doi.org/10.1103/PhysRevLett.93.091101), [arXiv:gr-qc/0406012](http://arxiv.org/abs/gr-qc/0406012). L. Blanchet, T. Damour, G. Esposito-Farèse, B. R. Iyer, [*Dimensional regularization of the third post-Newtonian gravitational wave generation from two point masses*]{}, [Phys.Rev. D71 (2005) 124004 ](http://dx.doi.org/10.1103/PhysRevD.71.124004), [arXiv:gr-qc/0503044](http://arxiv.org/abs/gr-qc/0503044). T. Damour, P. Jaranowski, G. Schäfer, [*Equivalence between the ADM-Hamiltonian and the harmonic-coordinates approaches to the third post-Newtonian dynamics of compact binaries*]{}, [Phys.Rev. D63 (2001) 044021](http://dx.doi.org/10.1103/PhysRevD.63.044021),[arXiv:gr-qc/0010040 ](http://arxiv.org/abs/gr-qc/0010040), (Erratum [Phys.Rev. D66 (2002) 029901](http://dx.doi.org/10.1103/PhysRevD.66.029901)). R. Kragler, [*Method of Inverse Differential Operators Applied to certain classes of nonhomogeneous PDEs and ODEs*]{}, [DOI: 10.13140/2.1.2716.0966](https://www.researchgate.net/publication/265598610_Method_of_Inverse_Differential_Operators_applied_to_certain_classes_of_nonhomogeneous_PDEs_and_ODEs). R. Kragler, [*The Method of Inverse Differential Operators Applied for the Solution of PDEs*]{} [DOI: 10.13140/2.1.3764.6722](https://www.researchgate.net/publication/265598760_The_Method_of_Inverse_Differential_Operators_Applied_for_the_Solution_of_PDEs). 
[^1]: The author would like to thank Professor E. Poisson for useful comments regarding this particular issue.
--- author: - Maria Salatino - Jacob Lashner - Martina Gerbino - 'Sara M. Simon' - Joy Didier - Aamir Ali - 'Peter C. Ashton' - Sean Bryan - Yuji Chinone - Kevin Coughlin - 'Kevin T. Crowley' - Giulio Fabbian - Nicholas Galitzki - 'Neil Goeckner-Wald' - 'Joseph E. Golec' - 'Jon E. Gudmundsson' - 'Charles A. Hill' - Brian Keating - Akito Kusaka - 'Adrian T. Lee' - Jeffrey McMahon - 'Amber D. Miller' - Giuseppe Puglisi - 'Christian L. Reichardt' - Grant Teply - Zhilei Xu - Ningfeng Zhu bibliography: - 'report.bib' nocite: '[@*]' title: 'Studies of Systematic Uncertainties for Simons Observatory: Polarization Modulator Related Effects' --- Introduction {#sec:intro} ============ Mueller Matrix Model of a HWP {#sec:Mueller} ============================= Sapphire HWPs {#sec:sapphire_mueller} ------------- Variation with incident angle and frequency {#sec:hwpnutheta} ------------------------------------------- Differential Absorption and Emission {#sec:absorption} ------------------------------------ Estimating the HWP Synchronous Signal {#sec:HWPSS_model} ===================================== The SAT Cryogenic HWP {#sec:sac_hwp} --------------------- Light Propagation and Estimation of the HWPSS --------------------------------------------- HWPSS contributions from the HWP {#sec:hwp_hwpss} -------------------------------- HWPSS contributions from optics upstream and downstream of the HWP {#sec:hwp_updown} ------------------------------------------------------------------ Polarization Leakage from Nonlinearity {#sec:NL} ====================================== Simulating Nonlinearity $I \rightarrow P$ ----------------------------------------- Conclusion and Future Work {#sec:conclusions} ========================== Meta-material HWPs {#sec:metamaterial} ==================
--- abstract: 'In an equilibrium axisymmetric galactic disc, the mean galactocentric radial and vertical velocities are expected to be zero everywhere. In recent years, various large spectroscopic surveys have however shown that stars of the Milky Way disc exhibit non-zero mean velocities outside of the Galactic plane in both the Galactocentric radial and vertical velocity components. While radial velocity structures are commonly assumed to be associated with non-axisymmetric components of the potential such as spiral arms or bars, non-zero vertical velocity structures are usually attributed to excitations by external sources such as a passing satellite galaxy or a small dark matter substructure crossing the Galactic disc. Here, we use a three-dimensional test-particle simulation to show that the global stellar response to a spiral perturbation induces both a radial velocity flow and non-zero vertical motions. The resulting structure of the mean velocity field is qualitatively similar to what is observed across the Milky Way disc. We show that such a pattern also naturally emerges from an analytic toy model based on linearized Euler equations. We conclude that an external perturbation of the disc might not be a requirement to explain all of the observed structures in the vertical velocity of stars across the Galactic disc. Non-axisymmetric internal perturbations can also be the source of the observed mean velocity patterns.' author: - | Carole Faure$^1$[^1], Arnaud Siebert$^1$, Benoit Famaey$^1$\ $^1$Observatoire Astronomique, Université de Strasbourg, CNRS UMR 7550, France title: 'Radial and vertical flows induced by galactic spiral arms: likely contributors to our “wobbly Galaxy”' --- Introduction ============ The Milky Way has long been known to possess spiral structure, but studying the nature and the dynamical effects of this structure has proven to be elusive for decades. 
Even though its fundamental nature is still under debate today, it has nevertheless started to be recently considered as a key player in galactic dynamics and evolution (e.g., Antoja et al. 2009; Quillen et al. 2011; Lépine et al. 2011; Minchev et al. 2012; Roskar et al. 2012 for recent works, or Sellwood 2013 for a review). However, zeroth order dynamical models of the Galaxy still mostly rely on the assumptions of a smooth time-independent and axisymmetric gravitational potential. For instance, recent determinations of the circular velocity at the Sun’s position and of the peculiar motion of the Sun itself all rely on the assumption of axisymmetry and on minimizing the non-axisymmetric residuals in the velocity field (Reid et al. 2009; McMillan & Binney 2010; Bovy et al. 2012; Schönrich 2012). Such zeroth order assumptions are handy since they allow us to develop dynamical models based on a phase-space distribution function depending only on three isolating integrals of motion, such as the action integrals (e.g., Binney 2013; Bovy & Rix 2013). Actually, an action-based approach does not necessarily have to rely on the axisymmetric assumption, as it is also possible to take into account the main non-axisymmetric component (e.g., the bar, see Kaasalainen & Binney 1994) by modelling the system in its rotating frame (e.g., Kaasalainen 1995). However the other non-axisymmetric components such as spiral arms rotating with a different pattern speed should then nevertheless be treated through perturbations (e.g., Kaasalainen 1994; McMillan 2013). The main problem with such current determinations of Galactic parameters, through zeroth order axisymmetric models, is that it is not clear that assuming axisymmetry and dynamical equilibrium to fit a benchmark model does not bias the results, by e.g. forcing this benchmark model to fit non-axisymmetric features in the observations that are not present in the axisymmetric model itself. 
This means that the residuals from the fitted model are not necessarily representative of the true amplitude of non-axisymmetric motions. In this respect, it is thus extremely useful to explore the full range of possible effects of non-axisymmetric features such as spiral arms in both fully controlled test-particle simulations and self-consistent simulations, and to compare these with observations. With the advent of spectroscopic and astrometric surveys, observational phase-space information for stars in an increasingly large volume around the Sun has allowed us to see more and more of these dynamical effects of non-axisymmetric components emerge in the data. Until recently, the most striking features were found in the solar neighbourhood in the form of moving groups, i.e. local velocity-space substructures shown to be made of stars of very different ages and chemical compositions (e.g., Chereul et al. 1998, 1999; Dehnen 1998; Famaey et al. 2005, 2007, 2008; Pompéia et al. 2011). Various non-axisymmetric models have been argued to be able to represent these velocity structures equally well, using transient (e.g., De Simone et al. 2004) or quasi-static spirals (e.g., Quillen & Minchev 2005; Antoja et al. 2011), with or without the help of the outer Lindblad resonance from the central bar (e.g., Dehnen 2000; Antoja et al. 2009; Minchev et al. 2010; McMillan 2013; Monari et al. 2013). The effects of non-axisymmetric components have also been analyzed a bit less locally by Taylor expanding to first order the planar velocity field in the Cartesian frame of the Local Standard of Rest, i.e. measuring the Oort constants $A$, $B$, $C$ and $K$ (Kuijken & Tremaine 1994; Olling & Dehnen 2003), a procedure valid up to distances of less than 2 kpc.
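The content of this first-order expansion can be made concrete with a small numerical experiment. The sketch below is our own toy construction (the values $R_0 = 8$ kpc, $V_c = 220$ km s$^{-1}$ and the imposed gradient are assumed for illustration, not taken from the surveys discussed here): it builds an exact mock mean-velocity field consisting of a flat rotation curve plus a Galactocentric radial-velocity gradient $\partial V_R/\partial R = -4$ km s$^{-1}$ kpc$^{-1}$, projects it onto the line of sight for a ring of nearby mock stars, and fits $v_{\rm los}/d = K + C\cos 2\ell + A\sin 2\ell$; the recovered combination $C+K$ returns the imposed radial gradient.

```python
import numpy as np

# Assumed toy values (illustrative, not fitted quantities):
R0, Vc, grad = 8.0, 220.0, -4.0   # kpc, km/s, km/s/kpc
d = 0.1                           # heliocentric distance of the mock ring, kpc
l = np.linspace(0.0, 2.0 * np.pi, 721)[:-1]   # Galactic longitudes

# Star positions in Galactocentric Cartesian coordinates, Sun at (R0, 0),
# Galactic centre towards l = 0 and rotation towards l = 90 deg (+y).
x, y = R0 - d * np.cos(l), d * np.sin(l)
R = np.hypot(x, y)

# Mean velocity field: flat rotation plus a radial-velocity gradient
vR = grad * (R - R0)
vx = vR * x / R + Vc * (-y / R)
vy = vR * y / R + Vc * (x / R)

# Exact heliocentric line-of-sight velocities (the Sun moves with (0, Vc))
vlos = vx * (-np.cos(l)) + (vy - Vc) * np.sin(l)

# First-order fit: vlos/d = K + C cos(2l) + A sin(2l)
X = np.stack([np.ones_like(l), np.cos(2 * l), np.sin(2 * l)], axis=1)
K, C, A = np.linalg.lstsq(X, vlos / d, rcond=None)[0]
print(C + K, A)   # C+K ~ -4 (the imposed dVR/dR); A ~ Vc/(2 R0) = 13.75
```

With the gradient switched off, the same fit returns $C = K = 0$ up to $\mathcal{O}(d/R_0)$ terms, which is the axisymmetric expectation.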
While old data were compatible with the axisymmetric values $C=K=0$ (Kuijken & Tremaine 1994), a more recent analysis of ACT/Tycho2 proper motions of red giants yielded $C = -10 \, {\rm km}\,{\rm s}^{-1}\,{\rm kpc}^{-1}$ (Olling & Dehnen 2003). Using line-of-sight velocities of 213713 stars from the RAVE survey (Steinmetz et al. 2006; Zwitter et al. 2008; Siebert et al. 2011a; Kordopatis et al. 2013), with distances $d<2 \,$kpc in the longitude interval $-140^\circ < l < 10^\circ$, Siebert et al. (2011b) confirmed this value of $C$, and estimated a value of $K= +6\,{\rm km}\,{\rm s}^{-1}\,{\rm kpc}^{-1}$, implying a Galactocentric radial velocity[^2] gradient of $C+K = \partial V_R / \partial R \simeq - 4\,{\rm km}\,{\rm s}^{-1}\,{\rm kpc}^{-1}$ in the solar suburb (extended solar neighbourhood, see also Williams et al. 2013). The projection onto the plane of the mean line-of-sight velocity as a function of distance towards the Galactic centre ($|l|<5^\circ$) was also examined by Siebert et al. (2011b) both for the full RAVE sample and for red clump candidates (with an independent method of distance estimation), and clearly confirmed that the RAVE data are not compatible with a purely axisymmetric rotating disc. This result is not owing to systematic distance errors as considered in Binney et al. (2013), because the [*geometry*]{} of the radial velocity flow cannot be reproduced by systematic distance errors alone (Siebert et al. 2011b; Binney et al. 2013). Assuming, to first order, that the observed radial velocity map in the solar suburb is representative of what would happen in a razor-thin disc, and that the spiral arms are long-lived, Siebert et al. (2012) applied the classical density wave description of spiral arms (Lin & Shu 1964; Binney & Tremaine 2008) to constrain their parameters in the Milky Way. 
They found that the best fit was obtained for a two-armed perturbation with an amplitude corresponding to $\sim 15$% of the background density and a pattern speed $\Omega_P \simeq 19 \,{\rm Gyr}^{-1}$, with the Sun close to the 4:1 inner ultra-harmonic resonance (IUHR). This result is in agreement with studies based on the location of moving groups in local velocity space (Quillen & Minchev 2005; Antoja et al. 2011; Pompéia et al. 2011). This model was advocated as a useful first-order benchmark from which to study the effect of spirals in three dimensions. In three dimensions, observations of the solar suburb from recent spectroscopic surveys actually look even more complicated. Using the same red clump giants from RAVE, it was shown that the mean [*vertical*]{} velocity was also non-zero and showed clear structure suggestive of a wave-like behaviour (Williams et al. 2013). Measurements of line-of-sight velocities for 11000 stars with SEGUE also revealed that the mean vertical motion of stars reaches up to 10 km/s at heights of 1.5 kpc (Widrow et al. 2012), echoing previous similar results by Smith et al. (2012). This is accompanied by a significant wave-like North-South asymmetry in SDSS (Widrow et al. 2012; Yanny & Gardner 2013). Observations from LAMOST in the outer Galactic disc (within 2 kpc outside the Solar radius and 2 kpc above and below the Galactic plane) also recently revealed (Carlin et al. 2013) that stars above the plane exhibit a net outward motion with downward mean vertical velocities, whilst stars below the plane exhibit the opposite behaviour in terms of vertical velocities (moving upwards, i.e. towards the plane too), but not so much in terms of radial velocities, although slight differences are also noted. There is thus a growing body of evidence that Milky Way disc stars exhibit velocity structures across the Galactic plane in [*both*]{} the Galactocentric radial and vertical components.
While a global radial velocity gradient such as that found in Siebert et al. (2011b) can naturally be explained with non-axisymmetric components of the potential such as spiral arms, such an explanation is [*a priori*]{} less self-evident for vertical velocity structures. For instance, it was recently shown that the central bar cannot produce such vertical features in the solar suburb (Monari et al. 2014). For this reason, such non-zero vertical motions are generally attributed to vertical excitations of the disc by external means such as a passing satellite galaxy (Widrow et al. 2012). The Sagittarius dwarf has been pinpointed as a likely culprit for creating these vertical density waves as it plunged through the Galactic disc (Gomez et al. 2013), while other authors have argued that these could be due to interaction of the disc with small starless dark matter subhalos (Feldmann & Spolyar 2013). Here, we rather investigate whether such vertical velocity structures can be expected as the response to disc non-axisymmetries, especially spiral arms, in the absence of external perturbations. As a first step in this direction, we propose to qualitatively investigate the response of a typical old thin disc stellar population to a spiral perturbation in controlled test-particle orbit integrations. Such test-particle simulations have proven useful in 2D to understand the effects of non-axisymmetries and their resonances on the disc stellar velocity field, including moving groups (e.g., Antoja et al. 2009, 2011; Pompéia et al. 2011), Oort constants (e.g., Minchev et al. 2007), radial migrations (e.g., Minchev & Famaey 2010), or the dip of stellar density around corotation (e.g., Barros et al. 2013). Recent test-particle simulations in 3D have rather concentrated on the effects of the central bar (Monari et al. 2013, 2014), while we concentrate here on the effect of spiral arms, with special attention to mean vertical motions. In Sect.
2, we give details on the model potential, the initial conditions and the simulation technique, while results are presented in Sect. 3, and discussed in comparison with solutions of linearized Euler equations. Conclusions are drawn in Sect. 4. Model ===== To pursue our goal, we use a standard test-particle method where orbits of massless particles are integrated in a time-varying potential. We start with an axisymmetric background potential representative of the Milky Way (Sect. 2.1), and we adiabatically grow a spiral perturbation on it within $\sim 3.5$ Gyr. Once settled, the spiral perturbation is kept at its full amplitude. This is not supposed to be representative of the actual complexity of spiral structure in real galaxies, where self-consistent simulations indicate that it is often coupled to a central bar and/or of a transient nature with a lifetime of the order of only a few rotations. Nevertheless, it allows us to investigate the stable response to an old enough spiral perturbation ($\sim 600 \,$Myr to $1 \,$Gyr in the self-consistent simulations of Minchev et al. 2012). The adiabatic growth of this spiral structure is not meant to be realistic, as we are only interested in the orbital structure of the old thin disc test population once the perturbation is stable. We generate initial conditions for our test stellar population from a discrete realization of a realistic phase-space distribution function for the thin disc defined in integral-space (Sect. 2.2), and integrate these initial conditions forward in time within a given time-evolving background+spiral potential (Sect. 2.3). We then analyze the mean velocity patterns seen in configuration space, both radially and vertically, and check whether such patterns are stable within the rotating frame of the spiral. Axisymmetric background potential --------------------------------- The axisymmetric part of the Galactic potential is taken to be Model I of Binney & Tremaine (2008).
Its main parameters are summarized in Table 1 for convenience. The central bulge has a truncated power-law density of the form $$\rho_b(R,z) = \rho_{b0} \times \left( \frac{\sqrt{R^2 + (z/q_b)^2}}{a_b} \right)^{-\alpha_b} {\rm exp}\left( -\frac{R^2+(z/q_b)^2}{r_b^2} \right)$$ where $R$ is the Galactocentric radius within the midplane, $z$ the height above the plane, $\rho_{b0}$ the central density, $a_b$ the scale radius, $r_b$ the truncation radius, and $q_b$ the flattening. The total mass of the bulge is $M_b = 5.18 \times 10^9 {\ensuremath{{\rm M}_\odot}}$. The stellar disc is a sum of two exponential profiles (for the thin and thick discs): $$\rho_d(R,z) = \Sigma_{d0} \times \left( \sum_{i=1}^{i=2} \frac{\alpha_{d,i}}{2z_{d,i}} {\rm exp}\left(-\frac{|z|}{z_{d,i}}\right) \right) {\rm exp}\left(-\frac{R}{R_d}\right)$$ where $\Sigma_{d0}$ is the central surface density, $\alpha_{d,1}$ and $\alpha_{d,2}$ the relative contributions of the thin and thick discs, $z_{d,1}$ and $z_{d,2}$ their respective scale-heights, and $R_d$ the scale-length. The total mass of the disc is $M_d = 5.13 \times 10^{10} {\ensuremath{{\rm M}_\odot}}$. The disc potential also includes a contribution from the interstellar medium of the form $$\rho_g(R,z) = \frac{\Sigma_g}{2z_g} \times {\rm exp}\left(-\frac{R}{R_g} -\frac{R_m}{R} - \frac{|z|}{z_g}\right)$$ where $R_m$ is the radius within which there is a hole close to the bulge region, $R_g$ is the scale-length, $z_g$ the scale-height, and $\Sigma_g$ is such that the gas contributes 25% of the disc surface density at the Galactocentric radius of the Sun.
Finally, the dark halo is represented by an oblate two-power-law model with flattening $q_h$, of the form $$\begin{aligned} \rho_{h}(R,z)&=&\rho_{h0} \times \left( \frac{\sqrt{R^2 + (z/q_h)^2}}{a_h} \right)^{-\alpha_h}\times\nonumber\\&&\left( 1 + \frac{\sqrt{R^2 + (z/q_h)^2}}{a_h} \right)^{\alpha_h - \beta_h}.\end{aligned}$$

  Parameter                                                                 Axisymmetric potential
  ------------------------------------------------------------------------- ------------------------
  $M_b({\ensuremath{{\rm M}_\odot}})$                                       $5.18 \times 10^9$
  $M_d({\ensuremath{{\rm M}_\odot}})$                                       $5.13 \times 10^{10}$
  $M_{h, <100 {\ \mathrm{kpc}}}({\ensuremath{{\rm M}_\odot}})$              $6. \times 10^{11}$
  $\rho_{b0}({\ensuremath{{\rm M}_\odot}}{\, {\rm pc} }^{-3})$              $0.427$
  ${a_{\mathrm{b}}}({\ \mathrm{kpc}})$                                      $1.$
  $r_b({\ \mathrm{kpc}})$                                                   $1.9$
  $\alpha_b$                                                                $1.8$
  $q_b$                                                                     $0.6$
  $\Sigma_{d0}+\Sigma_g({\ensuremath{{\rm M}_\odot}}{\, {\rm pc} }^{-2})$   1905.
  $R_d({\ \mathrm{kpc}})$                                                   2.
  $R_g({\ \mathrm{kpc}})$                                                   4.
  $R_m({\ \mathrm{kpc}})$                                                   4.
  $\alpha_{d,1}$                                                            $14/15$
  $\alpha_{d,2}$                                                            $1/15$
  $z_{d,1}({\ \mathrm{kpc}})$                                               0.3
  $z_{d,2}({\ \mathrm{kpc}})$                                               1.
  $z_g({\ \mathrm{kpc}})$                                                   0.08
  $\rho_{h0}({\ensuremath{{\rm M}_\odot}}{\, {\rm pc} }^{-3})$              0.711
  $a_h({\ \mathrm{kpc}})$                                                   $3.83$
  $\alpha_h$                                                                $-2.$
  $\beta_h$                                                                 $2.96$
  $q_h$                                                                     $0.8$

  : Parameters of the axisymmetric background model potential (Binney & Tremaine 2008)[]{data-label="tab:potaxi"}

The potential is calculated using the GalPot routine (Dehnen & Binney 1998). The rotation curve corresponding to this background axisymmetric potential is displayed in Fig. \[f:rc\]. For radii smaller than $11$ kpc, the total rotation curve (black line) is mostly influenced by the disc (blue dashed line), and beyond that radius by the halo (red dotted line).
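For concreteness, the bulge and stellar-disc density laws above are easy to transcribe; the following is a minimal sketch (not code from the paper) using the Table 1 values, converted by us to ${\rm M}_\odot$ and kpc. For simplicity it uses the combined $\Sigma_{d0}+\Sigma_g$ value as a stand-in for $\Sigma_{d0}$, which slightly overestimates the stellar disc.

```python
import numpy as np

# Sketch of two of the axisymmetric density components above
# (Binney & Tremaine 2008, Model I), with the Table 1 parameter values.
RHO_B0 = 0.427e9          # bulge central density, Msun/kpc^3
A_B, R_B = 1.0, 1.9       # bulge scale radius and truncation radius, kpc
ALPHA_B, Q_B = 1.8, 0.6   # bulge slope and flattening
SIGMA_D0 = 1905.0e6       # disc central surface density, Msun/kpc^2 (incl. gas)
R_D = 2.0                 # disc scale-length, kpc
ALPHA_D = (14.0 / 15.0, 1.0 / 15.0)   # thin/thick disc weights
Z_D = (0.3, 1.0)                      # thin/thick disc scale-heights, kpc

def rho_bulge(R, z):
    """Truncated power-law bulge density rho_b(R, z), Msun/kpc^3."""
    m = np.sqrt(R**2 + (z / Q_B)**2)
    return RHO_B0 * (m / A_B)**(-ALPHA_B) * np.exp(-(m / R_B)**2)

def rho_disc(R, z):
    """Sum of thin and thick exponential disc densities rho_d(R, z)."""
    vertical = sum(a / (2.0 * zd) * np.exp(-abs(z) / zd)
                   for a, zd in zip(ALPHA_D, Z_D))
    return vertical * SIGMA_D0 * np.exp(-R / R_D)
```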
![Rotation curve corresponding to the background axisymmetric potential[]{data-label="f:rc"}](fig1.ps){width="7cm"} Initial conditions ------------------ The initial conditions for the test stellar population are set from a discrete realization of a phase-space distribution function (Shu 1969, Bienaymé & Séchaud 1997) which can be written in integral space as: $$f(E_R,L_z,E_z)=\frac{\Omega \, \rho_d}{\sqrt{2} \kappa \pi^{\frac{3}{2}} \sigma^2_R \sigma_z} \exp \left( \frac{-(E_R-E_c)}{\sigma^2_R}-\frac{E_z}{\sigma^2_z} \right)$$ in which the angular velocity $\Omega$, the radial epicyclic frequency $\kappa$ and the disc density in the plane $\rho_d$ are all functions of $L_z$, being taken at the radius $R_c(L_z)$ of a circular orbit of angular momentum $L_z$. The scale-length of the disc is taken to be 2 kpc as for the background potential. The energy $E_c(L_z)$ is the energy of the circular orbit of angular momentum $L_z$ at the radius $R_c$. Finally, the radial and vertical dispersions $\sigma^2_R$ and $\sigma^2_z$ are also functions of $L_z$ and are expressed as: $$\sigma^2_R=\sigma^2_{R_\odot}\exp\left( \frac{2R_{\odot}-2R_c}{R_{\sigma_R}}\right),$$ $$\sigma^2_z=\sigma^2_{z_\odot}\exp\left( \frac{2R_{\odot}-2R_c}{R_{\sigma_z}}\right)$$ where $R_{\sigma_R}/R_d = R_{\sigma_z}/R_d = 5$. The initial velocity dispersions thus decline exponentially with radius but, at each radius, the population is isothermal as a function of height. These values are set so as to be representative of the old thin disc of the Milky Way after its response to the spiral perturbation, since the old thin disc is the test population whose response we want to investigate. From this distribution function, $4\times 10^7$ test-particle initial conditions are generated in a 3D polar grid between $R=4$ kpc and $R=15$ kpc (see Fig. \[f:rhoini\]). This allows a good resolution in the solar suburb.
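The radial run of the dispersions entering this distribution function can be sketched as follows. Note that the text only fixes the scale lengths $R_{\sigma_R} = R_{\sigma_z} = 5 R_d$; the solar-radius normalizations and $R_\odot = 8$ kpc below are illustrative values of our own choosing, not quoted from the paper.

```python
import numpy as np

# Exponential decline of the squared dispersions entering the Shu-type DF.
R_SUN = 8.0            # kpc (assumed solar radius)
R_SIGMA = 5.0 * 2.0    # kpc, = 5 R_d for both sigma_R and sigma_z
SIGMA_R_SUN = 35.0     # km/s, assumed old-thin-disc sigma_R at the Sun
SIGMA_Z_SUN = 18.0     # km/s, assumed old-thin-disc sigma_z at the Sun

def sigma_R2(R_c):
    """sigma_R^2(R_c) = sigma_{R,sun}^2 exp(2 (R_sun - R_c) / R_sigma)."""
    return SIGMA_R_SUN**2 * np.exp(2.0 * (R_SUN - R_c) / R_SIGMA)

def sigma_z2(R_c):
    """Same exponential law for the squared vertical dispersion."""
    return SIGMA_Z_SUN**2 * np.exp(2.0 * (R_SUN - R_c) / R_SIGMA)
```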
Before adding the spiral perturbation, the simulation is run in the axisymmetric potential for two rotations ($\sim 500$ Myr), and is indeed stable. ![Initial conditions. Left panel: Number of stars per ${\rm kpc}^2$ (surface density) within the Galactic plane as a function of $R$. Right panel: Stellar density as a function of $z$ at $R=8$ kpc.[]{data-label="f:rhoini"}](fig2a.ps "fig:"){width="4cm"} ![Initial conditions. Left panel: Number of stars per ${\rm kpc}^2$ (surface density) within the Galactic plane as a function of $R$. Right panel: Stellar density as a function of $z$ at $R=8$ kpc.[]{data-label="f:rhoini"}](fig2b.ps "fig:"){width="4cm"} ![Positions of the main radial resonances of the spiral potential. $\Omega(R) = v_c(R)/R$ is the local circular frequency, and $v_c(R)$ is the circular velocity. The $2:1$ ILR occurs along the curve $\Omega(R) - \kappa/2$, where $\kappa$ is the local radial epicyclic frequency. The inner $4:1$ IUHR occurs along the curve $\Omega(R) - \kappa/4$.[]{data-label="f:Omega"}](fig3.ps){width="7cm"} ![Positions of the 4:1, 6:1 and 8:1 vertical resonances. When $\Omega - \Omega_P = \nu/n$, where $\nu$ is the vertical epicyclic frequency, the star makes precisely $n$ vertical oscillations along one rotation within the rotating frame of the spiral.[]{data-label="f:Omeganu"}](fig4.ps){width="7cm"} Spiral perturbation and orbit integration ----------------------------------------- In 3D, we consider a spiral arm perturbation of the Lin-Shu type (Lin & Shu 1964; see also Siebert et al. 2012) with a sech$^2$ vertical profile (a pattern that can be supported by three-dimensional periodic orbits, see e.g.
Patsis & Grosb[ø]{}l 1996) and a small ($\sim 100$ pc) scale-height: $$\Phi_{s}(R,\theta,z)=-A \cos\left[m\left( \Omega_P t - \theta+\frac{\ln(R)}{\tan p}\right) \right] {\rm sech}^2 \left(\frac{z}{z_0} \right) \label{spipot}$$ in which $A$ is the amplitude of the perturbation, $m$ is the spiral pattern mode ($m=2$ for a 2-armed spiral), $\Omega_P$ is the pattern speed, $p$ the pitch angle, and $z_0$ is the spiral scale-height. The edge-on shapes of orbits of these thick spirals are determined by the vertical resonances existing in the potential. The parameters of the spiral potential used in our simulation are inspired by the analytic solution found in Siebert et al. (2012) using the classical 2D Lin-Shu formalism to fit the radial velocity gradient observed with RAVE (Siebert et al. 2011b). The parameters used here are summarized in Table 2. The amplitude $A$ which we use corresponds to 1% of the background axisymmetric potential at the Solar radius (3% of the disc potential). The positions of the main radial resonances, i.e. the 2:1 inner Lindblad resonance (ILR) and the 4:1 inner ultraharmonic resonance (IUHR), are illustrated in Fig. \[f:Omega\]. The presence of the 4:1 IUHR close to the Sun is responsible for the presence of the Hyades and Sirius moving groups in the local velocity space at the Solar radius (see Pompéia et al. 2011), associated with square-shaped resonant orbital families in the rotating spiral frame. Vertical resonances are also displayed in Fig. \[f:Omeganu\]. Such a spiral perturbation can grow naturally in self-consistent simulations of isolated discs without the help of any external perturber (e.g. Minchev et al. 2012).
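The perturbing potential of Eq. \[spipot\] is simple enough to transcribe directly. A minimal sketch with the Table 2 parameter values (angles in radians; $t$ in units such that $\Omega_P t$ is an angle):

```python
import numpy as np

# Lin-Shu-type spiral perturbation of Eq. (spipot), with Table 2 values.
A = 1000.0               # amplitude, km^2 s^-2
M = 2                    # two-armed pattern
P = np.deg2rad(-9.9)     # pitch angle, rad
Z0 = 0.1                 # spiral scale-height, kpc
OMEGA_P = 18.6           # pattern speed, km/s/kpc

def phi_spiral(R, theta, z, t):
    """Spiral potential Phi_s; R, z in kpc, theta in rad, t in kpc s/km."""
    phase = M * (OMEGA_P * t - theta + np.log(R) / np.tan(P))
    return -A * np.cos(phase) / np.cosh(z / Z0)**2   # sech^2 vertical profile
```

The $1/\cosh^2$ factor reproduces the sech$^2$ vertical profile, and the $m=2$ mode makes the potential invariant under $\theta \to \theta + \pi$.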
As we are interested hereafter in the global response of the thin disc stellar population to a quasi-static spiral perturbation, we make sure to grow the perturbation adiabatically by multiplying the above potential perturbation by a growth factor starting at $t\approx 0.5$ Gyr and finishing at $t\approx 3.5$ Gyr: $\epsilon (t)=\frac{1}{2}(\tanh(1.7\times t-3.4)+1)$. The integration of orbits is performed using a fourth-order Runge-Kutta algorithm run on Graphics Processing Units (GPUs).

  Parameter                                                      Spiral potential
  -------------------------------------------------------------- ------------------
  $m$                                                            2
  $A$ (km$^2$ s$^{-2}$)                                          1000
  $p$ (deg)                                                      -9.9
  $z_0({\ \mathrm{kpc}})$                                        0.1
  $\Omega_P({\, {\rm km} }{\rm s}^{-1} {\ \mathrm{kpc}}^{-1})$   18.6
  $R_{\rm ILR}({\ \mathrm{kpc}})$                                1.94
  $R_{\rm IUHR}({\ \mathrm{kpc}})$                               7.92
  ${R_{\mathrm{CR}}}({\ \mathrm{kpc}})$                          11.97

  : Parameters of the spiral potential and location of the main resonances[]{data-label="tab:spiral"}

![image](fig5a.ps){width="6cm"} ![image](fig5b.ps){width="6cm"} ![image](fig6a.ps){width="6cm"} ![image](fig6b.ps){width="6cm"} ![image](fig6c.ps){width="6cm"} ![image](fig6d.ps){width="6cm"} ![Galactocentric radial velocities in the solar suburb, centered at $(R,\theta)=(8\, {\rm kpc}, 26^\circ)$ at $t=4 \,$Gyr. On this plot, the Sun is centered on $(x,y)=(0,0)$, positive $x$ indicates the direction of the Galactic centre, and positive $y$ the direction of galactic rotation (as well as the sense of rotation of the spiral pattern). The spiral potential contours overplotted (same as on Fig. \[f:cartevr\], delimiting the region where the spiral potential is between 80% and 100% of its absolute maximum) would correspond to the location of the Perseus spiral arm in the outer Galaxy. This Figure can be qualitatively compared to Fig. 4 of Siebert et al.
(2011b) and Fig. 3 of Siebert et al. (2012).[]{data-label="f:RAVE"}](fig7.ps){width="8cm"} Results ======= Radial velocity flow -------------------- The histogram of individual galactocentric radial velocities, $v_R$, as well as the time-evolution of the radial velocity dispersion profile starting from $t=3.5 \,$Gyr (once the steady spiral pattern is settled) are plotted on Fig. \[f:sigmar\]. It can be seen that these are reasonably stable, and that the mean radial motion of stars is very close to zero (albeit slightly positive). Our test population is thus almost in perfect equilibrium. However, due to the presence of spiral arms, the mean galactocentric radial velocity $\langle v_R \rangle$ of our test population is non-zero at given positions within the frame of the spiral arms. The map of $\langle v_R \rangle$ as a function of position in the plane is plotted on Fig. \[f:cartevr\], for different time-steps (4 Gyr, 5 Gyr, 6 Gyr and 6.5 Gyr). Within the rotating frame of the spiral pattern, the locations of these non-zero mean radial velocities are stable over time: this means that the response to the spiral perturbation is stable, even though the amplitude of the non-zero velocities might slightly decrease with time. Within corotation, the mean $\langle v_R \rangle$ is negative within the arms (mean radial motion towards the Galactic centre) and positive (radial motion towards the anticentre) between the arms. Outside corotation, the pattern is reversed. This is exactly what is expected from the Lin-Shu density wave theory (see, e.g., Eq. 3 in Siebert et al. 2012). If we place the Sun at $(R,\theta)=(8\, {\rm kpc}, 26^\circ)$ in the frame of the spiral, we can plot the expected radial velocity field in the Solar suburb (Fig. \[f:RAVE\]). We see that the galactocentric radial velocity is positive in the inner Galaxy, as observed by Siebert et al. 
(2011b), because the inner Galaxy in the local suburb corresponds to an inter-arm region located within the corotation of the spiral. Observations towards the outer arm (which should correspond to the Perseus arm in the Milky Way) should reveal negative galactocentric radial velocities. An important aspect of the present study is the behaviour of the response to a spiral perturbation away from the Galactic plane. The spiral perturbation of the potential is very thin in our model ($z_0 = 100 \,$pc) but as we can see on Figs. \[f:cartevrRZ\] and \[f:cartevrRZ\_azimuth\], the radial velocity flow does not vary much as a function of $z$ up to five times the scale-height of the spiral perturber. This justifies the assumption made in Siebert et al. (2012) that the flow observed at $\sim 500 \,$pc above the plane was representative of what was happening in the plane. Nevertheless, above these heights, the trend seems to be reversed, probably due to the higher eccentricities of stars, corresponding to different guiding radii. This could potentially provide a useful observational constraint on the scale height of the spiral potential, a test that could be conducted with the forthcoming surveys.
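The switch-on of the perturbation and the integration scheme described in Sect. 2.3 can be sketched as follows; this is a generic, single-threaded illustration, not the GPU code used in the paper.

```python
import numpy as np

# Adiabatic growth factor of Sect. 2.3: rises smoothly from ~0 near
# t = 0.5 Gyr to ~1 near t = 3.5 Gyr (t in Gyr).
def epsilon(t):
    return 0.5 * (np.tanh(1.7 * t - 3.4) + 1.0)

# One step of the classical fourth-order Runge-Kutta scheme used for the
# orbit integrations, written for a generic system dy/dt = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

For orbit integration, `y` would hold the phase-space coordinates of a particle and `f` the corresponding accelerations, including `epsilon(t)` multiplying the spiral force term.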
![image](fig8a.ps){width="6cm"} ![image](fig8b.ps){width="6cm"} ![image](fig8c.ps){width="6cm"} ![image](fig8d.ps){width="6cm"} ![image](fig9a.ps){width="6cm"} ![image](fig9b.ps){width="6cm"} ![image](fig9c.ps){width="6cm"} ![image](fig9d.ps){width="6cm"} ![image](fig9e.ps){width="6cm"} ![image](fig9f.ps){width="6cm"} Non-zero mean vertical motions ------------------------------ ![image](fig10a.ps){width="6cm"} ![image](fig10b.ps){width="6cm"} ![image](fig11a.ps){width="6cm"} ![image](fig11b.ps){width="6cm"} ![image](fig11c.ps){width="6cm"} ![image](fig11d.ps){width="6cm"} ![image](fig12a.ps){width="6cm"} ![image](fig12b.ps){width="6cm"} ![image](fig12c.ps){width="6cm"} ![image](fig12d.ps){width="6cm"} ![image](fig12e.ps){width="6cm"} ![image](fig12f.ps){width="6cm"} ![image](fig13a.ps){width="6cm"} ![image](fig13b.ps){width="6cm"} ![image](fig13c.ps){width="6cm"} ![image](fig13d.ps){width="6cm"} ![image](fig14a.ps){width="6cm"} ![image](fig14b.ps){width="6cm"} ![image](fig14c.ps){width="6cm"} ![image](fig14d.ps){width="6cm"} ![image](fig14e.ps){width="6cm"} ![image](fig14f.ps){width="6cm"} ![image](fig15a.ps){width="8cm"} ![image](fig15b.ps){width="8cm"} If we now turn our attention to the vertical motion of stars, we see on Fig. \[f:sigmaz\] that the total mean vertical motion of stars remains zero at all times, but that there is still a slight, but reasonable, vertical heating going on in the inner Galaxy. What is most interesting is to concentrate on the mean vertical motion $\langle v_z \rangle$ as a function of position above or below the Galactic disc. As can be seen on Fig. \[f:cartevzRZ\] and Fig. \[f:cartevzRZ\_azimuth\], while the vertical velocities are generally close to zero right within the plane, they are non-zero outside of it. At a given azimuth within the frame of the spiral, these non-zero vertical velocity patterns are extremely stable over time (Fig. \[f:cartevzRZ\]). 
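The $\langle v_R \rangle$ and $\langle v_z \rangle$ maps discussed throughout this section are, in essence, binned averages over particles. A minimal sketch of such a binning (with synthetic particle arrays standing in for the simulation snapshot):

```python
import numpy as np

# Average a velocity component over particles falling in each (R, z) cell.
# The particle arrays below are synthetic stand-ins for a snapshot.
rng = np.random.default_rng(0)
R = rng.uniform(4.0, 15.0, 100_000)     # kpc
z = rng.normal(0.0, 0.3, 100_000)       # kpc
v = rng.normal(0.0, 5.0, 100_000)       # km/s, e.g. v_R or v_z

def mean_velocity_map(R, z, v, r_edges, z_edges):
    """Return the per-cell mean of v on the (R, z) grid (NaN where empty)."""
    vsum, _, _ = np.histogram2d(R, z, bins=[r_edges, z_edges], weights=v)
    count, _, _ = np.histogram2d(R, z, bins=[r_edges, z_edges])
    return np.where(count > 0, vsum / np.maximum(count, 1), np.nan)

vmap = mean_velocity_map(R, z, v,
                         np.linspace(4.0, 15.0, 23),
                         np.linspace(-1.0, 1.0, 11))
```

In the paper the same kind of average is taken within the rotating frame of the spiral, at fixed azimuth or in $(x,y)$ cells.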
Within corotation the mean vertical motion is directed away from the plane at the outer edge of the arm and towards the plane at the inner edge of the arm. The patterns of $\langle v_z \rangle$ above and below the plane are thus mirror images, and the direction of the mean motion changes roughly in the middle of the interarm region. This produces diagonal features in terms of isocontours of a given $\langle v_z \rangle$, matching the RAVE observations of Williams et al. (2013, see especially their Fig. 13), where the change of sign of $\langle v_z \rangle$ occurs precisely between the Perseus and Scutum main arms. Our simulation predicts that the $\langle v_z \rangle$ pattern is reversed outside of corotation (beyond 12 kpc), where stars move towards the plane on the outer edge of the arm (rather than moving away from the plane): this can indeed be seen, e.g., on the right panel of the second row of our Fig. \[f:cartevzRZ\_azimuth\]. If we now combine the information on $\langle v_R \rangle$ and $\langle v_z \rangle$, we can plot the global meridional velocity flow $\vec{\langle v \rangle} = \langle v_R \rangle \vec{1}_R + \langle v_z \rangle \vec{1}_z$ on Fig. \[f:cartevtotRZ\] and Fig. \[f:cartevtotRZ\_azimuth\]. The picture that emerges is the following: in the interarm regions located within corotation, stars move on average from the inner arm to the outer arm by going outside of the plane, and then coming back towards the plane at mid-distance between the two arms, to finally arrive back on the inner edge of the outer arm. For each azimuth, there are thus “source” points, preferentially on the outer edge of the arms (inside corotation, whilst on the inner edge outside corotation), out of which the mean velocity vector flows, while there are “sink” points, preferentially on the inner edge of the arms (inside corotation), towards which the mean velocity flows.
This supports the interpretation of the observed RAVE velocity field of Williams et al. (2013) as “compression/rarefaction” waves. Interpretation from linearized Euler equations ---------------------------------------------- In order to understand these features found in the meridional velocity flow of our test-particle simulation, we now turn to the fluid approximation based on linearized Euler equations, developed, e.g., in Binney & Tremaine (2008, Sect. 6.2). A rigorous analytical treatment of a quasi-static spiral perturbation in a three-dimensional stellar disc should rely on the linearized Boltzmann equations, which we plan to present in full in a forthcoming paper, but the fluid approximation can already give important insights on the shape of the velocity flow expected in the meridional plane. In the full Boltzmann-based treatment, the velocity flow will be tempered by reduction factors both in the radial (see, e.g., Binney & Tremaine 2008, Appendix K) and vertical directions. Let us rewrite our perturber potential of Eq. \[spipot\] as $$\Phi_s= \mathbf{Re} \lbrace \Phi_a(R,z) \, e^{im(\Omega_P t - \theta)} \rbrace$$ with $$\Phi_a = - A \, \mathrm{sech}^2 \left(\frac{z}{z_0} \right) \exp \Big(i \frac{m \ln(R)}{\tan p} \Big).$$ Then if we write solutions to the linearized Euler equations for the response of a cold fluid as $$\left\{ \begin{array}{l} v_{Rs} = \mathbf{Re} \lbrace v_{Ra}(R,z) \, e^{im(\Omega_P t - \theta)} \rbrace\\ \\ v_{zs} = \mathbf{Re} \lbrace v_{za}(R,z) \, e^{im(\Omega_P t - \theta)} \rbrace\\ \end{array} \right. \label{meanv}$$ we find, following the same steps as in Binney & Tremaine (2008, Sect. 6.2) $$\left\{ \begin{array}{l l} v_{Ra}=& - \frac{ m (\Omega - \Omega_P)}{\Delta} k \Phi_a \\ &+ i \frac{2 \Phi_a}{\Delta} \left( \frac{2 \Omega {\rm tanh}(z/z_0)}{m (\Omega - \Omega_P) z_0} + \frac {m \Omega}{R} \right)\\ \\ v_{za} =& - \frac{ 2 i}{m (\Omega -\Omega_P) z_0} {\rm tanh}\Big( \frac{z}{z_0}\Big) \Phi_a \\ \end{array} \right.
\label{va}$$ where $k=m/(R \, {\rm tan \,}p)$ is the radial wavenumber and $\Delta = \kappa^2 - m^2(\Omega-\Omega_P)^2$. If we plot these solutions for $v_{Rs}$ and $v_{zs}$ at a given angle (for instance $\theta=30^\circ$) we get the same pattern as in the simulation (Fig. \[f:euler\]). Of course, the velocity flow plotted on Fig. \[f:euler\] would in fact be damped by a reduction factor depending on both radial and vertical velocity dispersions when treating the full linearized Boltzmann equation, which will be the topic of a forthcoming paper. Nevertheless, this qualitative consistency between analytical results and our simulations is an indication that the velocity pattern observed by Williams et al. (2013) is likely linked to the potential perturbation by spiral arms. Interestingly, this analytical model also predicts that the radial velocity gradient should become noticeably North/South asymmetric close to corotation. Discussion and conclusions ========================== In recent years, various large spectroscopic surveys have shown that stars of the Milky Way disc exhibit non-zero mean velocities outside of the Galactic plane in both the Galactocentric radial component and vertical component of the mean velocity field (e.g., Siebert et al. 2011b; Williams et al. 2013; Carlin et al. 2013). While it is clear that such a behaviour could be due to a large combination of factors, we investigated here whether spiral arms are able to play a role in these observed patterns. For this purpose, we investigated the orbital response of a test population of stars representative of the old thin disc to a stable spiral perturbation. This is done using a test-particle simulation with a background potential representative of the Milky Way. We found non-zero velocities both in the Galactocentric radial and vertical velocity components. 
Within the rotating frame of the spiral pattern, the locations of these non-zero mean velocities in both components are stable over time, meaning that the response to the spiral perturbation is stable. Within corotation, the mean $\langle v_R \rangle$ is negative within the arms (mean radial motion towards the Galactic centre) and positive (radial motion towards the anticentre) between the arms. Outside corotation, the pattern is reversed, as expected from the Lin-Shu density wave theory (Lin & Shu 1964). On the other hand, even though the spiral perturbation of the potential is very thin, the radial velocity flow is still strongly affected above the Galactic plane. Up to five times the scale-height of the spiral potential, there are no strong asymmetries in terms of radial velocity, but above these heights, the trend in the radial velocity flow is reversed. This means that asymmetries could be observed in surveys covering different volumes above and below the Galactic plane. Also, forthcoming surveys like Gaia, 4MOST, and WEAVE will be able to map this region of the disc of the Milky Way and measure the height at which the reversal occurs. Provided this measurement is successful, it would yield the scale height of the spiral potential. In terms of vertical velocities, within corotation, the mean vertical motion is directed away from the plane at the outer edge of the arms and towards the plane at the inner edge of the arms. The patterns of $\langle v_z \rangle$ above and below the plane are thus mirror images (see e.g. Carlin et al. 2013). The direction of the mean vertical motion changes roughly in the middle of the interarm region. This produces diagonal features in terms of isocontours of a given $\langle v_z \rangle$, as observed by Williams et al. (2013).
The picture that emerges from our simulation is one of “source” points of the velocity flow in the meridional plane, preferentially on the outer edge of the arms (inside corotation, whilst on the inner edge outside corotation), and of “sink” points, preferentially on the inner edge of the arms (inside corotation), towards which the mean velocity flows. We have then shown that this qualitative structure of the mean velocity field is also the behaviour of the analytic solution to linearized Euler equations for a toy model of a cold fluid in response to a spiral perturbation. In a more realistic analytic model, this fluid velocity would in fact be damped by a reduction factor depending on both radial and vertical velocity dispersions when treating the full linearized Boltzmann equation. In a next step, the features found in the present test-particle simulations will also be checked for in fully self-consistent simulations with transient spiral arms, to determine whether non-zero mean vertical motions as found here are indeed generic. The response of the gravitational potential itself to these non-zero motions should also have an influence on the long-term evolution of the velocity patterns found here, in the form of e.g. bending and corrugation waves. The effects of multiple spiral patterns (e.g., Quillen et al. 2011) and of the bar (e.g., Monari et al. 2013, 2014) should also have an influence on the global velocity field and on its amplitude. Once all these different dynamical effects and their combination are fully understood, a full quantitative comparison with present and future datasets in 3D will be the next step. The present work on the orbital response of the thin disc to a small spiral perturbation by no means implies that no external perturbation of the Milky Way disc happened in the recent past, by e.g. the Sagittarius dwarf (e.g., Gomez et al. 2013).
Such a perturbation could of course be responsible for parts of the velocity structures observed in various recent large spectroscopic surveys. For instance, concerning the important north-south asymmetry spotted in stellar densities at relatively large heights above the disc, spiral arms are less likely to play an important role. Nevertheless, any external perturbation will also excite a spiral wave, so that understanding the dynamics of spirals is also fundamental to understanding the effects of an external perturber. The qualitative similarity between our simulation (e.g., Fig. \[f:cartevzRZ\]), as well as our analytical estimates for the fluid approximation (Fig. \[f:euler\]), and the velocity pattern observed by Williams et al. (2013, their Fig. 13) indicates that spiral arms are likely to play a non-negligible role in the observed velocity pattern of our “wobbly Galaxy”. Antoja T., Valenzuela O., Pichardo B., et al., 2009, ApJ, 700, L78 Antoja T., Figueras F., Romero-Gómez M., et al., 2011, MNRAS, 418, 1423 Barros D., Lépine J., Junqueira T., 2013, MNRAS, 435, 2299 Bienaymé O., Séchaud N., 1997, A&A, 323, 781 Binney J., Tremaine S., 2008, Galactic Dynamics, Princeton University Press Binney J., 2013, New Astronomy Reviews, 57, 29 Binney J., Burnett B., Kordopatis G., et al., 2013, arXiv:1309.4285 Bovy J., Allende Prieto C., Beers T., et al., 2012, ApJ, 759, 131 Bovy J., Rix H.-W., 2013, arXiv:1309.0809 Carlin J.L., DeLaunay J., Newberg H.
J., et al., 2013, ApJ, 777, L5 Chereul E., Crézé M., Bienaymé O., 1998, A&A, 340,384 Chereul E., Crézé M., Bienaymé O., 1999, A&AS, 135, 5 Dehnen W., 1998, AJ, 115, 2384 Dehnen W., Binney J., 1998, MNRAS, 294, 429 Dehnen W., 2000, AJ, 119, 800 De Simone R., Wu X., Tremaine S., 2004, MNRAS, 350, 627 Famaey B., Jorissen A., Luri X., et al., 2005, A&A, 430, 165 Famaey B., Pont F., Luri X., et al., 2007, A&A 461, 957 Famaey B., Siebert A., Jorissen A., 2008, A&A, 483, 453 Feldmann R., Spolyar D., 2013, arXiv:1310.2243 Gomez F., Minchev I., O’Shea B., et al., 2013, MNRAS, 429, 159 Kaasalainen M., Binney J., 1994, MNRAS, 268, 1033 Kaasalainen M., 1994, MNRAS, 268, 1041 Kaasalainen M., 1995, Phys. Rev. E, 52, 1193 Kordopatis G., Gilmore G., Steinmetz M., et al., 2013, 146, 134 Kuijken K., Tremaine S., 1994, ApJ, 421, 178 Lépine J., Cruz P., Scarano S., et al., 2011, MNRAS, 417, 698 Lin C.C., Shu F.H., 1964, ApJ, 140, 646 McMillan P.J., Binney J., 2010, MNRAS, 402, 934 McMillan P.J., 2013, MNRAS, 430, 3276 Minchev I., Nordhaus J., Quillen A., 2007, ApJ, 664, L31 Minchev I., Famaey B., 2010, ApJ, 722, 112 Minchev I., Boily C., Siebert A., Bienaymé O., 2010, MNRAS, 407, 2122 Minchev I., Famaey B., Quillen A., et al., 2012, A&A, 548, A126 Monari G., Antoja T., Helmi A., 2013, arXiv:1306.2632 Monari G., Helmi A., Antoja T., Steinmetz M., 2014, arXiv:1402.4479 Olling R., Dehnen W., 2003, ApJ, 599, 275 Patsis P.A., Grosb[ø]{}l P., 1996, A&A, 315, 371 Pompéia L., Masseron T., Famaey B., et al., 2011, MNRAS, 415, 1138 Quillen A., Minchev I., 2005, AJ, 130, 576 Quillen A., Dougherty J., Bagley M., et al., 2011, MNRAS, 417, 762 Reid M., Menten K., Zheng X., et al. 2009, ApJ, 700, 137 Roskar R., Debattista V., Quinn T., Wadsley J., 2012, MNRAS, 426, 2089 Schönrich R., 2012, MNRAS, 427, 274 Sellwood J., 2013, Rev. Mod. 
Phys., arXiv:1310.0403 Shu F.H., 1969, ApJ, 158, 505 Siebert A., Williams M.E.K., Siviero A., et al., 2011a, AJ, 141, 187 Siebert A., Famaey B., Minchev I., et al., 2011b, MNRAS, 412, 2026 Siebert A., Famaey B., Binney J., et al., 2012, MNRAS, 425, 2335 Smith M., Whiteoak S. H., Evans N. W., 2012, ApJ, 746, 181 Steinmetz M., Zwitter T., Siebert A., et al., 2006, AJ, 132, 1645 Widrow L., Gardner S., Yanny B., et al., 2012, ApJ, 750, L41 Williams M., Steinmetz M., Binney J., et al., 2013, MNRAS, 436, 101 Yanny B., Gardner S., 2013, ApJ, 777, 91 Zwitter T., Siebert A., Munari U., et al., 2008, AJ, 136, 421 \[lastpage\] [^1]: carole.faure@astro.unistra.fr [^2]: In this paper, [*’radial velocity’*]{} refers to the Galactocentric radial velocity, not to be confused with the line-of-sight (l.o.s.) velocity.
--- abstract: 'We present the complete analytical solution of the geodesic equations in the supersymmetric BMPV spacetime [@Breckenridge:1996is]. We study systematically the properties of massive and massless test particle motion. We analyze the trajectories with analytical methods based on the theory of elliptic functions. Since the nature of the effective potential depends strongly on the rotation parameter $\omega$, one has to distinguish between the underrotating case, the critical case and the overrotating case, as discussed by Gibbons and Herdeiro in their pioneering study [@Gibbons:1999uv]. We discuss various properties which distinguish this spacetime from the classical relativistic spacetimes like Schwarzschild, Reissner-Nordström, Kerr or Myers-Perry. The overrotating BMPV spacetime allows, for instance, for planetary bound orbits for massive and massless particles. We also address causality violation as analyzed in [@Gibbons:1999uv].' address: ' Institut für Physik, Universität Oldenburg, D–26111 Oldenburg, Germany ' author: - 'Valeria Diemer (née Kagramanova), Jutta Kunz' title: Supersymmetric rotating black hole spacetime tested by geodesics --- ![image](orbit10.eps){width="6cm"} Introduction ============ The Breckenridge-Myers-Peet-Vafa (BMPV) spacetime [@Breckenridge:1996is] represents a fascinating solution of the bosonic sector of minimal supergravity in five dimensions. It describes a family of charged rotating extremal black holes with equal-magnitude angular momenta, that are associated with independent rotations in two orthogonal planes. The BMPV solution has been analyzed in various respects. At first, interest focussed on the entropy of the extremal black hole solutions. Here a microscopic derivation of the entropy led to perfect agreement with the classical value obtained from the horizon area of the black holes $A=2 \pi^2 \sqrt{\mu^3-\omega^2}$, where $\mu$ is a charge parameter and $\omega$ is the rotation parameter [@Breckenridge:1996is].
Clearly, the entropy is largest in the static case, and vanishes in the critical case $\mu^3=\omega^2$. For still faster rotation the radicand would become negative. Gauntlett, Myers and Townsend [@Gauntlett:1998fz] analyzed the BMPV spacetime further, pointing out that it describes supersymmetric black hole solutions with a non-rotating horizon, but finite angular momentum. They argued that angular momentum can be stored in the gauge field, and a negative fraction of the total angular momentum resides behind the horizon, while the effect of the rotation on the horizon is to make it squashed [@Gauntlett:1998fz]. While Gauntlett, Myers and Townsend [@Gauntlett:1998fz] already addressed the presence of closed timelike curves (CTCs) in the BMPV spacetime, a thorough qualitative study of the geodesics and the possibility of time travel was given by Gibbons and Herdeiro [@Gibbons:1999uv]. Considering three cases for the BMPV spacetime, Gibbons and Herdeiro pointed out that, while in the underrotating case the CTCs are hidden behind the degenerate horizon, the CTCs occur in the exterior region in the overrotating case. Thus the BMPV spacetime contains naked time machines in this case. Moreover, in the overrotating case the horizon becomes ill-defined, since it becomes a timelike hypersurface, with the entropy becoming naively imaginary [@Gibbons:1999uv; @Herdeiro:2000ap; @Herdeiro:2002ft; @Cvetic:2005zi]. This hypersurface is then referred to as [*pseudo-horizon*]{}. In fact, as shown by Gibbons and Herdeiro, this pseudo-horizon cannot be traversed by particles or light following geodesics. Thus the interior region of an overrotating BMPV spacetime cannot be entered. Therefore the outer spacetime represents a [*repulson*]{}. The geodesics in the exterior region between the pseudo-horizon and infinity are complete. The repulson behavior of the overrotating BMPV spacetime has been analyzed further by Herdeiro [@Herdeiro:2000ap].
By studying the motion of charged test particles in the spacetime he realized that the repulson effect is still present. Considering accelerated observers, however, he noted that it could be possible to travel into the interior region [@Herdeiro:2000ap]. Herdeiro further showed that, when oxidising the overrotating $D=5$ BMPV solution to $D=10$, the causal anomalies are resolved. Interestingly, here a relation between microscopic unitarity and macroscopic causality emerged. The breakdown of causality in the BMPV spacetime is associated with a breakdown of unitarity in the superconformal field theory [@Herdeiro:2000ap; @Herdeiro:2002ft] (see also [@Dyson:2006ia]). The BMPV solution may be considered as a subset of a more general family of solutions found by Chong, Cvetic, Lü and Pope [@Chong:2005hr]. This more general family of solutions exhibits close similarities to the BMPV solution and, in particular, a generic presence of CTCs. However, it also contains further supersymmetric black holes, where naked CTCs can be avoided, and in addition new topological solitons. Along these lines Cvetic, Gibbons, Lü and Pope [@Cvetic:2005zi] gave a thorough analysis of further exact solutions of gauged supergravities in four, five and seven dimensions. Again, these solutions in general may possess CTCs, but interesting regular black hole and soliton solutions are present as well. Here we revisit the properties of the BMPV spacetime by constructing the complete analytical solution of the geodesic equations in this spacetime. We systematically study the motion of massive and massless test particles and analyze the trajectories with analytical methods based on the theory of elliptic functions. We classify the possible orbits and present examples of these orbits to illustrate the various types of motion. Our paper is structured as follows. In section \[section:metric\] we introduce and discuss the metric of the BMPV spacetime.
We present the Kretschmann scalar, which reveals the physical singularity at $r=0$ (in our coordinates). Subsequently, we derive the equations of motion. In section \[sec:beta\] we discuss the properties of the $\vartheta$-motion and derive the solution of the $\vartheta$-equation. In section \[sec:radial\] we solve the radial equation in terms of the Weierstrass $\wp$-function. We then discuss in detail the properties of the motion in terms of the effective potential in subsection \[sec:pot\]. We distinguish between the underrotating case, the overrotating case and the critical case, which are defined in terms of the values of the rotation parameter $\omega$. We then discuss the corresponding dynamics of massive and massless test particles for these cases. In subsection \[sec:diag\] we exemplify the properties of motion with parametric diagrams for the radial polynomial. In section \[sec:varphi\] and section \[sec:psi\] we solve the differential equations for $\varphi$ and $\psi$ in terms of Weierstrass functions. We solve the $t$ equation in section \[sec:time\]. In section \[sec:ctc\] we address causality. To illustrate our analytical solutions we present in section \[section:orbits\] various types of trajectories. We show two-dimensional orbits in the $\theta=\pi/2$ plane in section \[section:2dorbits\], and three-dimensional projections of four-dimensional orbits in section \[section:3dorbits\]. In the last section we conclude.
The metric and the equations of motion ======================================  \[section:metric\] The metric ---------- The five-dimensional metric describing the BMPV spacetime can be expressed as follows [@Breckenridge:1996is] $$ds^2 = - \left( 1 - \frac{ \mu }{r^2} \right)^2 \left( dt - \frac{\mu\omega}{(r^2-\mu)} (\sin^2\vartheta d\varphi - \cos^2\vartheta d\psi ) \right)^2 + \left( 1 - \frac{ \mu }{r^2} \right)^{-2} {dr^2} + {r^2} \left( d\vartheta^2 + \sin^2\vartheta d\varphi^2 + \cos^2\vartheta d\psi^2 \right) \label{metric} \ .$$ The parameter $\mu$ is related to the charge and to the mass of these solutions, while the parameter $\omega$ is related to their two equal-magnitude angular momenta. The coordinates $r$, $\vartheta$, $\varphi$, $\psi$ represent a spherical coordinate system, where the angular coordinates have the ranges $\vartheta\in [0, \frac{\pi}{2} ]$, $\varphi\in [0, 2\pi )$ and $\psi\in [0, 2\pi )$. It is convenient to work with a normalized metric of the form $$ds^2 = - \left( 1 - \frac{ 1 }{r^2} \right)^2 \left( dt - \frac{ \omega}{(r^2-1)} (\sin^2\vartheta d\varphi - \cos^2\vartheta d\psi ) \right)^2 + \left( 1 - \frac{ 1 }{r^2} \right)^{-2} {dr^2} + {r^2} \left( d\vartheta^2 + \sin^2\vartheta d\varphi^2 + \cos^2\vartheta d\psi^2 \right) \label{metric2} \ .$$ with dimensionless coordinates and parameter $$\frac{r}{\sqrt{\mu}} \rightarrow r \ , \, \frac{t}{\sqrt{\mu}} \rightarrow t \ , \, \frac{\omega}{\sqrt{\mu}} \rightarrow \omega \ , \, \frac{ds}{\sqrt{\mu}} \rightarrow ds \ .$$ Note that, in order to avoid complicated notation, we retain the same notation for the normalized coordinates and quantities. The BMPV spacetime is a stationary asymptotically flat spacetime. It has two hypersurfaces relevant for its physical interpretation in the following discussion. The hypersurface where $g_{tt}$ vanishes looks like a non-rotating degenerate horizon located at $r=1$.
The second hypersurface is associated with the causal properties of the spacetime. Representing the outer boundary of the region where CTCs arise, it is located at $r_L=\omega^{1/3}$ and referred to as the velocity of light surface (VLS) [@Gibbons:1999uv; @Herdeiro:2000ap; @Herdeiro:2002ft; @Cvetic:2005zi]. For the proper description of the BMPV spacetime we need to distinguish the following three cases: 1. $\omega<1$: underrotating case 2. $\omega=1$: critical case 3. $\omega>1$: overrotating case In the underrotating case the BMPV spacetime describes extremal supersymmetric black holes. Here indeed a degenerate horizon is located at $r=1$, and the VLS is hidden behind the horizon. Since CTCs arise only inside the VLS, the black hole spacetime outside the horizon is free of CTCs. In the overrotating case the surface $r=1$ becomes a timelike hypersurface. Therefore this surface does not describe a horizon. It is referred to as a [*pseudo-horizon*]{}, whose area would be imaginary. In the overrotating case the VLS resides outside the surface $r=1$. Thus the outer spacetime contains a naked time machine. Since no geodesics can cross the pseudo-horizon, the spacetime represents a [*repulson*]{} [@Gibbons:1999uv; @Herdeiro:2000ap; @Herdeiro:2002ft; @Cvetic:2005zi]. In the critical case the surface $r=1$ has vanishing area. Here the VLS coincides with this surface $r=1$. Thus there is no causality violation in the outer region $r>1$. The Kretschmann scalar $\mathcal{K}=R^{\alpha \beta \gamma \sigma } R_{\alpha \beta \gamma \sigma}$, where $R^{\alpha \beta \gamma \sigma }$ are the contravariant components of the Riemann tensor, in the BMPV spacetime has the form $$\mathcal{K} = \frac{(288 r^8+(-384 \omega^2-720) r^6+(1152 \omega^2+508) r^4-904 \omega^2 r^2+136\omega^4)}{r^{16}} \ ,$$ which indicates that the spacetime has a physical point-like singularity at $r=0$. At $r=1$ the Kretschmann scalar is finite. 
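As a quick numerical sketch (plain Python; not part of the analytical treatment), the Kretschmann scalar above can be evaluated directly, confirming the finite value at $r=1$ and the divergence at the point-like singularity $r=0$:

```python
# Numerical check of the Kretschmann scalar of the normalized BMPV metric:
# K(r) is finite at the (pseudo-)horizon r = 1 and diverges as r -> 0.

def kretschmann(r, omega):
    """Kretschmann scalar K = R^{abcd} R_{abcd} of the normalized BMPV metric."""
    num = (288 * r**8
           + (-384 * omega**2 - 720) * r**6
           + (1152 * omega**2 + 508) * r**4
           - 904 * omega**2 * r**2
           + 136 * omega**4)
    return num / r**16

# At r = 1 the scalar is finite: 76 - 136 omega^2 + 136 omega^4.
print(kretschmann(1.0, 1.0))   # -> 76.0
# As r -> 0 it grows without bound:
print(kretschmann(0.1, 1.0) > 1e10)   # -> True
```

Note in particular that the $\omega$-dependence drops out of the leading $288/r^8$ behaviour at large $r$, consistent with asymptotic flatness.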
Our detailed analytical study of the geodesics of neutral particles and light in the BMPV spacetime fully supports the previous analysis of the properties of this spacetime. The Hamilton-Jacobi equation ---------------------------- The Hamilton-Jacobi equation for neutral test particles is of the form (see e.g. [@Misner]) $$-\frac{\partial S}{\partial \lambda} = \frac{1}{2} g^{\alpha \beta} \left( \frac{\partial S}{\partial x^\alpha} \right) \left( \frac{\partial S}{\partial x^\beta} \right) \label{eq:HJ} \ .$$ We therefore need the non-vanishing inverse metric components $g^{\alpha \beta}$ given by $$\begin{aligned} && g^{tt} = \frac{\omega^2-r^6}{r^2(r^2-1)^2} \ , \,\, g^{t\varphi} = \frac{\omega}{r^2(r^2-1)} \ , \,\, g^{t\psi} = - \frac{\omega}{r^2(r^2-1)} \nonumber \ , \\ && g^{\varphi\varphi} = \frac{1}{r^2 \sin^2 \vartheta} \ , \,\, g^{\psi\psi} = \frac{1}{r^2 \cos^2 \vartheta} \nonumber \ , \\ && g^{rr} = \frac{ (r^2-1)^2 }{r^4} \ , \,\, g^{\vartheta \vartheta} = \frac{ 1 }{r^2} \ . \end{aligned}$$ We search for the solution $S$ of the equations  in the form: $$S = \frac{1}{2}\delta \lambda - E t + \Phi \varphi + \Psi \psi + S_r(r) + S_\vartheta(\vartheta) \label{eq:S} \ ,$$ where $E$ is the conserved energy of a test particle with mass parameter $\delta$, and $\Phi$ and $\Psi$ are its conserved angular momenta. Here $\delta=0$ for massless and $\delta=1$ for massive test particles, and $\lambda$ is an affine parameter. Since the metric components are functions of the coordinates $r$ and $\vartheta$, we have to separate the equations of motion with respect to these coordinates.
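The inverse components cancel delicately against the metric, so a numerical cross-check is useful. The following pure-Python sketch (the coordinate ordering and the sample point are our own illustrative choices) builds $g_{\mu\nu}$ from eq. (\[metric2\]), contracts it with the claimed $g^{\alpha\beta}$, and verifies that the product is the identity:

```python
# Cross-check g_{mu nu} g^{nu sigma} = delta at a sample point.
import math

def metric_and_inverse(r, th, omega):
    D = 1 - 1/r**2                  # Delta = 1 - 1/r^2
    c = omega / (r**2 - 1)
    s2, c2 = math.sin(th)**2, math.cos(th)**2
    # coordinate order: (t, r, theta, phi, psi)
    g = [[0.0]*5 for _ in range(5)]
    g[0][0] = -D**2
    g[1][1] = 1/D**2
    g[2][2] = r**2
    g[0][3] = g[3][0] =  D**2 * c * s2
    g[0][4] = g[4][0] = -D**2 * c * c2
    g[3][3] = r**2 * s2 - D**2 * c**2 * s2**2
    g[4][4] = r**2 * c2 - D**2 * c**2 * c2**2
    g[3][4] = g[4][3] = D**2 * c**2 * s2 * c2
    ginv = [[0.0]*5 for _ in range(5)]
    ginv[0][0] = (omega**2 - r**6) / (r**2 * (r**2 - 1)**2)
    ginv[1][1] = (r**2 - 1)**2 / r**4
    ginv[2][2] = 1/r**2
    ginv[0][3] = ginv[3][0] =  omega / (r**2 * (r**2 - 1))
    ginv[0][4] = ginv[4][0] = -omega / (r**2 * (r**2 - 1))
    ginv[3][3] = 1/(r**2 * s2)
    ginv[4][4] = 1/(r**2 * c2)
    return g, ginv

g, ginv = metric_and_inverse(1.7, 0.6, 0.8)
max_dev = max(abs(sum(g[i][k]*ginv[k][j] for k in range(5)) - (i == j))
              for i in range(5) for j in range(5))
assert max_dev < 1e-12
```

The check passes only with a single power of $(r^2-1)$ in the denominators of $g^{t\varphi}$ and $g^{t\psi}$, in agreement with the separated equation below.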
We insert  into  and get $$\begin{aligned} && -r^2 \delta -\frac{(\omega^2 -r^6)E^2}{(r^2-1)^2} + 2 \frac{\omega E}{r^2-1} \left( \Phi - \Psi \right) - \frac{(r^2-1)^2}{r^2} S^2_r(r) \nonumber \\ && = S^2_\vartheta(\vartheta) + \frac{\Phi^2}{\sin^2 \vartheta} + \frac{\Psi^2}{\cos^2 \vartheta} \label{eq:HJ2} \ .\end{aligned}$$ Since the left and right hand sides of the equation  depend only on $r$ and $\vartheta$, respectively, we can equate both sides to a constant ${K}$, the separation constant. We obtain $$\begin{aligned} S^2_\vartheta(\vartheta) = {K} - \frac{\Phi^2}{\sin^2 \vartheta} - \frac{\Psi^2}{\cos^2 \vartheta} & \equiv & \Theta \label{eq:THETA} \ , \\ \frac{(r^2-1)^4}{r^2} S^2_r(r) = 2 \omega E ( \Phi - \Psi ) (r^2-1) - (\omega^2 -r^6)E^2 - ({K}+r^2\delta) (r^2-1)^2 & \equiv & R \label{eq:R} \ , \end{aligned}$$ where we have introduced new functions $\Theta$ and $R$. We can write the action $S$ (equation ) in the form $$S = \frac{1}{2}\delta \lambda - E t + \Phi \varphi + \Psi \psi + \int_r{ \frac{r\sqrt{R}}{(r^2-1)^2} dr } + \int_\vartheta{ \sqrt{\Theta} d\vartheta } \label{eq:S2} \ .$$ Following the standard procedure we differentiate equation  with respect to the constants ${K}$, $\delta$, $\Phi$, $\Psi$ and $E$. The result is a constant which can be set to zero.
Combining the derived differential equations we get the Hamilton-Jacobi equations in the form $$\begin{aligned} &&\frac{rdr}{d\tau} = \sqrt{R} \ , \label{reqn1} \\ && \frac{d\vartheta}{d\tau} = \sqrt{\Theta} \ , \label{varthetaeqn1} \\ && \frac{d\varphi}{d\tau} = \frac{\omega E}{r^2-1} - \frac{\Phi}{\sin^2\vartheta} \ , \label{varphieqn1} \\ && \frac{d\psi}{d\tau} = - \frac{\omega E}{r^2-1} - \frac{\Psi}{\cos^2\vartheta} \ , \label{psieqn1} \\ && \frac{dt}{d\tau} = \frac{ \omega(\Phi-\Psi)(r^2-1) - E(\omega^2 - r^6) }{(r^2-1)^2} \label{teqn1} \ , \end{aligned}$$ where $\Theta$ and $R$ are given by  and , and $\tau$ is a new affine parameter defined by [@Mino:2003yg] $$d\tau = \frac{d\lambda}{r^2} \ . \label{eq:tau}$$ Properties of the motion ========================  \[section:motion\] The $\vartheta$-equation ------------------------  \[sec:beta\] ### The restrictions from the $\vartheta$-equation Consider now the $\vartheta$-equation  with $\Theta$ defined in  $$d\tau = \frac{d\vartheta}{ \sqrt{\Theta} } \ , \quad \Theta={K} - \frac{\Phi^2}{\sin^2 \vartheta} - \frac{\Psi^2}{\cos^2 \vartheta} \ . \label{varthetaeqn2}$$ We introduce a new variable $\xi=\cos^2\vartheta$. The equation  reduces to $$d\tau = - \frac{d\xi}{ 2 \sqrt{\Theta_\xi} } \ , \quad \Theta_\xi= -{K}\xi^2 + (K+\Psi^2-\Phi^2)\xi -\Psi^2 = \sum^2_{i=0}{b_i\xi^i} \ , \label{xieqn1}$$ with $$b_2=-K \ , \quad b_1=K+\Psi^2-\Phi^2 \equiv {K}+{A}{B} \quad \mbox{and} \quad b_0=-\Psi^2\equiv -\frac{({A}+{B})^2}{4} \ , \label{b_coeffs}$$ where we introduced $$A= \Psi-\Phi \ , \quad B = \Psi+\Phi \label{AB} \ .$$ The discriminant $D_\xi$ of $\Theta_\xi$  takes the form $$D_\xi = b_1^2 - 4 b_2 b_0 = ({K}-{A}^2)({K}-{B}^2) \label{Dxi} \ .$$ The roots of the polynomial $\Theta_\xi$  read $$\xi_{1,2} = \frac{1}{2{K}} \left( {K} +{A}{B} \mp \sqrt{ D_\xi } \right) \ . \label{zeros_xi}$$ The discriminant $D_\xi$ must be positive or zero for the solutions  to be real.
In  two cases are possible: $${K} \geq {A}^2 \,\,\, \cap \,\,\, {K} \geq {B}^2 \label{condj1}$$ or $${K} < {A}^2 \,\,\, \cap \,\,\, {K} < {B}^2 \label{condj2} \ .$$ For the upcoming analysis we keep in mind that since $\xi=\cos^2\vartheta$ the condition $$0 \leq \xi\leq 1 \, \label{xicond}$$ must be fulfilled. Under this condition we will see that only the case  is relevant. At first we consider ${K}>0$. Let $0< m \leq 1$ and $0 < n \leq 1$ and ${A} = m \sqrt{{K}}$ and ${B} = n \sqrt{{K}}$. Inserting this into  we get $$\xi_{1,2} = \frac{1}{2}(1+mn \mp \sqrt{(1-m^2)(1-n^2)}) \ . \label{zeros_xi2}$$ Then considering the limits for $m\rightarrow 0$ and $m\rightarrow 1$ we obtain: $$\begin{aligned} && \lim_{m\rightarrow 0} \xi_{1,2} = \frac{1}{2}\left(1\mp\sqrt{1-n^2}\right) \ , \\ && \lim_{m\rightarrow 1} \xi_{1,2} = \frac{1}{2}(1+n) \ .\end{aligned}$$ If we take the limits for $n$ instead of $m$, we can simply replace $n$ by $m$ in the result above. Taking into account the conditions on $m$ and $n$, we observe that $\xi_{1,2}$ lie in the allowed region . If $m=n$ then $$\xi_{1} = m^2 \quad \text{and} \quad \xi_{2}=1 \ .$$ Let now $0< m < 1$ and $0 < n < 1$ and ${A} = \frac{1}{m} \sqrt{K}$ and ${B} = \frac{1}{n} \sqrt{{K}}$ (${K}$ still positive). Inserting this into  we get $$\xi_{1,2} = \frac{1+mn \mp \sqrt{(1-m^2)(1-n^2)}}{2mn} \ . \label{zeros_xi3}$$ Taking the limit for $m\rightarrow 1$ yields: $$\lim_{m\rightarrow 1} \xi_{1,2} = \frac{1}{2}(1+\frac{1}{n}) \ .$$ Again, if we take the limit for $n$ instead of $m$, we can simply replace $n$ by $m$ in the result above. Contrary to the case above, the condition  is not fulfilled for $0<n<1$, since the values of $\xi$ become larger than one. If $m=n$ then $$\xi_{1} = 1 \quad \text{and} \quad \xi_{2}=\frac{1}{m^2} \ .$$ In this case only $\xi_{1} = 1$ is an eligible root. Since both roots of the function $\Theta_\xi$ define the boundaries of the $\vartheta$–motion, they must satisfy .
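The two parametrizations can be checked numerically. The following minimal sketch uses the sample values $K=4$ with $m=n=1/2$ (first case, roots $m^2$ and $1$ inside $[0,1]$) and $m=n=1/4$ (second case, roots $1$ and $1/m^2$, the latter outside $[0,1]$):

```python
import math

def xi_roots(K, A, B):
    """Roots xi_{1,2} = (K + A*B -/+ sqrt(D_xi)) / (2K) of Theta_xi."""
    D = (K - A**2) * (K - B**2)
    return ((K + A*B - math.sqrt(D)) / (2*K),
            (K + A*B + math.sqrt(D)) / (2*K))

K = 4.0
# A = m sqrt(K), B = n sqrt(K) with m = n = 1/2:
x1, x2 = xi_roots(K, 1.0, 1.0)
print(x1, x2)       # -> 0.25 1.0   (both inside [0,1])

# A = sqrt(K)/m, B = sqrt(K)/n with m = n = 1/4:
y1, y2 = xi_roots(K, 8.0, 8.0)
print(y1, y2)       # -> 1.0 16.0   (second root outside [0,1])
```

This illustrates why only the case  with $K \geq A^2$ and $K \geq B^2$ admits physical $\vartheta$-motion.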
If $K<0$ and fulfills the conditions , then with the substitution ${A} = \frac{1}{m} \sqrt{-K}$ and ${B} = \frac{1}{n} \sqrt{{-K}}$ where $0< m < 1$ and $0 < n < 1$ one can show that one zero is negative and the other is larger than one. Both cases do not satisfy the condition . These observations show that ${K}$, ${A}$ and ${B}$ must satisfy the conditions . ### $\vartheta(\tau)$-solution Thus, the highest coefficient $b_2 = -K$ in $\Theta_\xi$  is negative. In this case the differential equation  can be integrated as $$\tau - \tau_0 = \left( \frac{1}{2\sqrt{-b_2} } \arcsin{ \frac{2b_2\xi + b_1}{\sqrt{D_\xi}} } \right) \Biggl|^{\xi(\tau)}_{\xi_{0} } \ . \label{betaeqn4}$$ Here and later on the index $0$ means an initial value. To find $\vartheta=\arccos(\pm\sqrt{\xi})$ as a function of $\tau$ we invert the solution . Thus, $$\vartheta (\tau) = \arccos\left( \pm\sqrt{ \frac{1}{2b_2} \left( \sqrt{D_\xi} \sin{( 2\sqrt{-b_2}(\tau-\tau^\prime) )} - b_1 \right) } \right) \ , \label{betaeqn5}$$ where $$\tau^\prime = \tau_0 - \frac{1}{2\sqrt{-b_2} } \arcsin{ \frac{2b_2\xi_0 + b_1}{\sqrt{D_\xi}} } \ . \label{betaeqn6}$$ The radial equation -------------------  \[sec:radial\] The differential equation  contains a polynomial of order 6 in $r$ on the right hand side: $$\left(\frac{dr}{d\tau}\right)^2 = \frac{1}{r^2} \sum^{3}_{i=0}{ a_i r^{2i} } \ , \label{reqn1_1}$$ where $$\begin{aligned} && a_3 = E^2-\delta \ , \quad a_2 = 2\delta - K \ , \nonumber \\ && a_1 = 2 K - 2\omega E {A} - \delta \ , \quad a_0 = - K+{A}^2 - (\omega E - {A} )^2 \label{req_coeff} \ .\end{aligned}$$ Next, we introduce the new variable $x=r^2$. Then the polynomial on the right hand side in equation  becomes of order $3$: $$\left(\frac{dx}{d\tau}\right)^2 = \sum^{3}_{i=0}{ 4 a_i x^{i} } \equiv P(x) \ , \label{reqn2}$$ with the coefficients $a_i$ defined by . To reduce the equation  to the Weierstrass form we use the transformation $x = \frac{1}{4a_3}\left(4y - \frac{4a_2}{3}\right)$. 
We get: $$d\tau = \frac{dy}{\sqrt{P_3(y)}} \ , \quad \mbox{with} \quad P_3(y) = 4y^3 - g_2 y -g_3 \ , \label{reqn3}$$ where $$g_2=\frac{4a_2^2}{3} - 4a_1 a_3 \, , \qquad g_3=\frac{4 a_1 a_2 a_3}{3} - 4 a_0 a_3^2 - \left(\frac{ 2 a_2 }{3} \right)^3 \ .$$ The differential equation is of elliptic type and is solved by the Weierstraß $\wp$–function [@Markush] $$y(\tau) = \wp\left(\tau - \tau^\prime; g_2, g_3\right) \ , \label{soly}$$ where $$\tau^\prime=\tau_{ 0 }+\int^\infty_{y_{ 0 }}{\frac{dy}{\sqrt{4y^3-g_2y-g_3}}} \, \label{tauprime}$$ with $y_{ 0 }= a_3 r^2_{ 0 } + \frac{a_2}{3}$. $\tau_0$ and $r_0$ denote the initial values. Then the solution of  acquires the form $$r (\tau) = \sqrt{ \frac{1}{a_3} \left( \wp\left(\tau - \tau^\prime; g_2, g_3\right) - \frac{a_2}{3} \right) } \ . \label{solr}$$ In  we choose the positive sign of the square root, since the singularity located at $r=0$ prevents particles from reaching negative radial values. ### Properties of the motion. Effective potential  \[sec:pot\] The singularity is located at $r=0$, and the degenerate horizon or pseudo-horizon at $r=1$. For physical motion the values of $r$ (and $x=r^2$) must therefore be real and positive. We define the effective potential $V^\pm_{\rm eff}$ from  via $$\left(\frac{dr}{d\tau}\right)^2 = r^4 \Delta_\omega (E-V^+_{\rm eff})(E-V^-_{\rm eff}) \ , \label{rpot}$$ where $\Delta_\omega=1-\frac{\omega^2}{r^6}$. $\Delta_\omega=0$ corresponds to the VLS. The effective potential then reads $$V^\pm_{\rm eff} = \frac{1}{r^4}\frac{\Delta}{\Delta_\omega} \left( \omega {A} \pm \sqrt{ \Delta_{\rm eff} } \right) \,\,\, \text{with} \,\,\, \Delta_{\rm eff}=\omega^2 {A}^2 + r^6\Delta_\omega (K +r^2 \delta) \ , \label{rpot1}$$ where $\Delta=1-\frac{1}{r^2}$ and $\Delta=0$ describes the horizon or pseudo-horizon, while ${A}=\Psi-\Phi$  is a combination of the angular momenta of the test particle.
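Before turning to the properties of the motion, the chain from $R$ in eq. (\[eq:R\]) to the polynomial $P(x)$ and on to the Weierstrass form can be verified numerically. The sketch below uses arbitrary sample parameters (illustrative choices, not tied to any figure); note that the substitution $x=\frac{1}{4a_3}(4y-\frac{4a_2}{3})$ produces the $a_1a_2a_3$ term in $g_3$ with prefactor $\tfrac{4}{3}$:

```python
# Consistency checks for the radial equation (sample parameters).
delta, K, A, omega, E = 1.0, 4.0, 1.0, 0.5, 1.2

a3 = E**2 - delta
a2 = 2*delta - K
a1 = 2*K - 2*omega*E*A - delta
a0 = -K + A**2 - (omega*E - A)**2

def P(x):
    """P(x) = sum_i 4 a_i x^i, the RHS of (dx/dtau)^2."""
    return 4*(a3*x**3 + a2*x**2 + a1*x + a0)

def R_direct(x):
    """R of eq. (R), written with Phi - Psi = -A and x = r^2."""
    return -2*omega*E*A*(x - 1) - (omega**2 - x**3)*E**2 - (K + delta*x)*(x - 1)**2

# (dx/dtau)^2 = (2 r dr/dtau)^2 = 4 R, so P(x) must equal 4 R(x):
for x in (0.5, 2.0, 3.7):
    assert abs(P(x) - 4*R_direct(x)) < 1e-9

# Weierstrass invariants of the reduced cubic:
g2 = 4*a2**2/3 - 4*a1*a3
g3 = 4*a1*a2*a3/3 - 4*a0*a3**2 - (2*a2/3)**3

# (dy/dtau)^2 = a3^2 P(x(y)) must equal 4 y^3 - g2 y - g3:
for y in (-1.0, 0.3, 2.0):
    x = (4*y - 4*a2/3) / (4*a3)
    assert abs(a3**2 * P(x) - (4*y**3 - g2*y - g3)) < 1e-8
print("radial-equation checks passed")
```

The same identity holds for any parameter choice, since it is polynomial in $y$.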
A principal condition for physical $r$ values (real and positive) to exist is the non-negativity of the RHS of . Its violation defines the forbidden regions for a test particle in the effective potential. The limit of the effective potential at infinity is defined by the test particle’s mass parameter $\delta$: $$\lim_{x\rightarrow\infty}{V^\pm_{\rm eff}} = \pm \sqrt{\delta} \ . \label{rpot_limit}$$ From the form of the potential  we recognize that the term $\Delta_\omega $ in the denominator together with the term $r^4$ lead to divergences. Since $r=0$ is a physical singularity, we concentrate on the divergence caused by $\Delta_\omega \rightarrow 0$, i.e. $x\rightarrow \sqrt[3]{\omega^2}$ with $x=r^2$. Consider the Laurent series expansion of the potential  in the vicinity of $x=\sqrt[3]{\omega^2}$: $$V^\pm_{\rm eff} = \frac{1}{3} \frac{ (\omega {A} \pm \sqrt{\omega^2 {A}^2}) (\sqrt[3]{\omega^2} -1 ) }{ \sqrt[3]{\omega^4} (x - \sqrt[3]{\omega^2}) } + \, \mathrm{holomorphic \,\, part} \ . \label{rpot2}$$ We are interested in the coefficient of $(x-\sqrt[3]{\omega^2})^{-1}$, since the other terms are holomorphic. Analysing  we see that there are two factors which define the character of the potentials $V^{\pm}_{\rm eff}$. The first one is the direction from which $x$ approaches the value $\sqrt[3]{\omega^2}$. In addition, the sign of the factor $\sqrt[3]{\omega^2} -1$ defines the final character of the potential and therewith the properties of the motion. In table \[tab1\] we show the asymptotic behaviour of the potential .
  ----------------------------- ------------------------------------------------- ----------------
  ${A}>0$                       $x\rightarrow \sqrt[3]{\omega^2}$ from the left    from the right
  $V^+_{\rm eff}$, $\omega<1$   $+\infty$                                          $-\infty$
  $V^-_{\rm eff}$, $\omega<1$   $w$                                                $w$
  $V^+_{\rm eff}$, $\omega>1$   $-\infty$                                          $+\infty$
  $V^-_{\rm eff}$, $\omega>1$   $w$                                                $w$
  ----------------------------- ------------------------------------------------- ----------------

  ----------------------------- ------------------------------------------------- ----------------
  ${A}<0$                       $x\rightarrow \sqrt[3]{\omega^2}$ from the left    from the right
  $V^+_{\rm eff}$, $\omega<1$   $w$                                                $w$
  $V^-_{\rm eff}$, $\omega<1$   $-\infty$                                          $+\infty$
  $V^+_{\rm eff}$, $\omega>1$   $w$                                                $w$
  $V^-_{\rm eff}$, $\omega>1$   $+\infty$                                          $-\infty$
  ----------------------------- ------------------------------------------------- ----------------

  : Behaviour of the effective potential  with respect to the divergence at $\Delta_\omega =0$ (VLS at $x=\sqrt[3]{\omega^2}$) (cp. eq. ). Due to this a potential barrier forms which may either lie behind the degenerate horizon or in front of the pseudo-horizon, allowing for planetary bound orbits (see discussion in the text). Inversion of the sign of ${A}$ mirrors the $V^+_{\rm eff}$ and $V^-_{\rm eff}$ parts w.r.t. the $x$-axis but does not change the general properties of the effective potential. Here $w=\frac{1}{2} \frac{1-\sqrt[3]{\omega^2}}{{A} \omega} \left( K + \delta \sqrt[3]{\omega^2} \right)$. When the minus (for ${A}>0$) or the plus part (for ${A}<0$) of the effective potential approaches $x=\sqrt[3]{\omega^2}$, which is a removable discontinuity in this case, it has the value $w$ there. \[tab1\]

Consider at first positive ${A}$. For $\omega<1$ the potential barrier extends all over the $V_{\rm eff}$-axis: when $x$ decreases (i.e. when $x\rightarrow \sqrt[3]{\omega^2}$ from the right) it stretches to $-\infty$ and for increasing $x$ (i.e. when $x\rightarrow \sqrt[3]{\omega^2}$ from the left) it stretches to $\infty$. The $V^+_{\rm eff}$ part of the effective potential is responsible for this effect, while $V^-_{\rm eff}$ has a finite limit. $V^\pm_{\rm eff}$ vanishes at $x=1$ ($\Delta=0$) which corresponds to the location of the horizon or pseudo-horizon. Both parts of the potential cross at this point. We show an example in the figure \[fig:pots\] and . The grey regions mark the forbidden domains where the RHS of  becomes negative. It is important to note here that the positive zeros of $\Delta_{\rm eff}$ and the vanishing of $\Delta_\omega$ define the discontinuities of the potential  relevant for the physical motion. In general, by Descartes’ rule of signs, provided the condition  is fulfilled, $\Delta_{\rm eff}$ has one positive zero (and at most $3$ negative zeros, or $1$ negative zero and $2$ complex conjugate zeros with negative real part). This means that the region of $V^\pm_{\rm eff}$ lying between the origin of the coordinates and this point contains complex values and is forbidden entirely. The region to the right of this point contains real values of the potentials $V^\pm_{\rm eff}$ and is generally allowed. The possible regions with physical motion will be finally determined by the positive or vanishing RHS of . For $\omega>1$ the potential barrier also exists all over the $V_{\rm eff}$-axis.
The asymptotic behaviour of the $V^\pm_{\rm eff}$ parts differs from the case with $\omega<1$: for decreasing $x$ (i.e. when $x\rightarrow \sqrt[3]{\omega^2}$ from the right) $V^+_{\rm eff}$ stretches to $+\infty$ and when $x$ increases (i.e. when $x\rightarrow \sqrt[3]{\omega^2}$ from the left) it goes to $-\infty$. The two parts of the potential do not always intersect at $x=1$ as in the previous case. It is possible that $x=1$, where the potential would become zero, lies in the forbidden (grey) region as in the figs. \[pot3\] and , or, as in the figs. \[pot3jR\] and , an additional allowed region forms in the forbidden grey area. For negative ${A}$ the $V^\pm_{\rm eff}$-parts of the effective potential mirror w.r.t. the $x$ axis. #### Investigation of the $x=1$ traverse of $V^\pm_{\rm eff}$. To understand this behaviour of the effective potential consider the equations  and . When the potentials intersect, i.e. $V^+_{\rm eff}=V^-_{\rm eff}$, one of the intersection points is $x=1$. But for test particle motion to be possible, this point must lie in an allowed region. This means that $\Delta_{\rm eff}$ must fulfill the condition $$\Delta_{\rm eff}\geq 0 \ . \label{cond_Delta}$$ Setting $x=r^2=1$ into $\Delta_{\rm eff}$ we get $$\Delta_{\rm eff} (x=1) = {A}^2 \omega^2 + (1-{\omega^2}) ( {K} + \delta) \ . \label{rpot_Delta}$$ With the condition  we get a restriction on the value of ${A}$ for which the potential  traverses $x=1$ (we will consider the equality sign in  below): $${A}^2 > \frac{\omega^2-1}{\omega^2}\left( {K} + \delta \right)={{A}^{c}}^2 \ , \label{cond_jR}$$ where ${A}^{c}$ implies a critical value. Choosing the values of ${K}$ and ${A}$ one has to take care of the condition ${K} \geq {A}^2$ from the inequality . It is now clear why for $\omega<1$ the potentials always intersect at $x=1$: ${A}^2$ as a positive number is always larger than the negative RHS of  and hence the condition  is always fulfilled. The situation is different for $\omega>1$.
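The sign of $\Delta_{\rm eff}$ at $x=1$ can be checked directly; a small sketch with sample overrotating values (our own illustrative choices, respecting ${K}\geq {A}^2$):

```python
import math

def Delta_eff(x, A, K, delta, omega):
    """Delta_eff of eq. (rpot1), written in x = r^2:
    omega^2 A^2 + (x^3 - omega^2)(K + delta x)."""
    return omega**2 * A**2 + (x**3 - omega**2) * (K + delta*x)

omega, K, delta = 1.5, 4.0, 1.0                          # overrotating sample
Ac = math.sqrt((omega**2 - 1) / omega**2 * (K + delta))  # critical |A| = A^c

assert abs(Delta_eff(1.0, Ac, K, delta, omega)) < 1e-12  # A = A^c: root at x = 1
assert Delta_eff(1.0, Ac + 0.3, K, delta, omega) > 0     # A^2 > (A^c)^2: x = 1 allowed
assert Delta_eff(1.0, 0.0, K, delta, omega) < 0          # A^2 < (A^c)^2: x = 1 forbidden
```

For $\omega<1$ the last assertion would fail, since $(1-\omega^2)(K+\delta)>0$ then keeps $x=1$ in the allowed region for any ${A}$.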
In this case for ${A}^2 < {{A}^{c}}^2$ the point $x=1$ lies in the forbidden region as in figs. \[pot3\] and , and for ${A}^2 > {{A}^{c}}^2$ in an allowed region having the form of a loop as shown in figs. \[pot3jR\] and . For the special case when ${A}^2=K$ we infer a restriction on ${K}$ from  of the form $${K} > (\omega^2-1) \delta = {{K}^{c}} \ , \label{cond_j}$$ which defines a critical value of ${K}$ for ${A}^2={K}$ that allows for an additional region with bound orbits to exist. Consider now the case when $${A}^2={{A}^{c}}^2=\frac{\omega^2-1}{\omega^2}\left( {K} + \delta \right)\ . \label{jR_special}$$ Then $\Delta_{\rm eff}$ reads $$\Delta_{\rm eff} = (x-1)( \delta x^3 + ( \delta + {K} )x^2+( {K} + \delta )x+ {K} + \delta (1-\omega^2) ) \ . \label{rpot_Deltax}$$ In this case $x=1$ is not only the intersection point of the plus and minus parts of the effective potentials but is also a discontinuity of $V^\pm_{\rm eff}$. With Descartes’ rule of signs we find that the second bracket in  has either 1. no positive zeros if ${K} >\delta (\omega^2-1)$, 2. or one positive zero if ${K} <\delta (\omega^2-1)$. Inserting  into condition  we get $${K}-{A}^2 \geq 0 \quad \Rightarrow \quad {K} \geq (\omega^2-1) \delta \ .$$ Thus, only the possibility $1$ above is relevant for physical motion. For ${K}=(\omega^2-1)\delta$ (in this case ${K}={K}^{c}$) the point $x=0$ is also a zero of $\Delta_{\rm eff}$ and coincides with the physical singularity. Recalling the discussion above we conclude that for ${A}^2={{A}^{c}}^2$ and ${K} \geq {A}^2$ the region to the left of the point $x=1$ is generally not allowed since $V^\pm_{\rm eff}$ has complex values there. An example of the corresponding effective potential is shown in fig. \[pot:jRcrit\]. ![image](pot1jRcrit.eps){width="7cm"} ### Dynamics of massive test particles With this knowledge of the properties of the effective potential, we now analyze the radial motion by studying the polynomial $P(x)$  with the coefficients .
The coefficient $a_0$ is shown to be negative, while the coefficient $a_3$ is positive or negative depending on whether $E^2 >\delta$ or $E^2<\delta$. Thus, only the sign of the coefficients $a_2$ and $a_1$ is not fixed. Using Descartes’ rule of signs we can determine the number of real positive solutions of the polynomial $P(x)$ in . For $E^2<\delta$, i.e. $a_3<0$, at most $2$ real positive zeros are possible if $a_2>0$ for positive or negative $a_1$. If $a_2<0$ then for $a_1>0$ the number of real positive roots is at most $2$ and there are no positive solutions for $a_1<0$. Since we know that there is a potential barrier extending all over the $V_{\rm eff}$-axis, there must then be at least one positive root for any type of motion to exist. The last case would correspond to an energy value which lies in the forbidden region. This is for example the case for quite large $\omega$ and comparatively small ${K}$. For example, for this set of parameters all $a_i$ coefficients in the equation  are negative: $\delta=1$, $\omega=20$, ${K}=4$, ${A}=0.1 \sqrt{K}$ and an energy value of e.g. $E=0.99$. For $|E|<{1}$ for massive particles two positive turning points exist. This corresponds to 1. for $\omega<1$ a many-world-bound orbit [**[MBO]{}**]{} as in the figures \[fig:pots\] and , 2. for $\omega>1$ a planetary bound orbit [**[BO]{}**]{} in the pictures \[pot3\] and , or a bound orbit hidden for a remote observer behind the pseudo-horizon in the figures \[pot3jR\] and  or figure \[pot:jRcrit\]. Here the direction from which the potential approaches $\pm 1$ for $x\rightarrow \infty$ is important. Thus, in the pictures \[pot1\] and  the effective potential approaches $1$ from below (or $-1$ from above) and in the pictures \[pot2\] and  it approaches $1$ from above (or $-1$ from below). For $E^2>{\delta}$, i.e. $a_3>0$, the number of positive roots is at most $3$ if $a_2<0$ and $a_1>0$.
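The sign analysis is easy to reproduce numerically. The sketch below checks the worked example from the text ($\delta=1$, $\omega=20$, ${K}=4$, ${A}=0.1\sqrt{K}$, $E=0.99$: all coefficients negative, hence no positive roots) together with one illustrative $E^2>\delta$ parameter set of our own:

```python
import math

def radial_coeffs(E, delta, K, A, omega):
    """Coefficients (a0, a1, a2, a3) of the radial polynomial, eq. (req_coeff)."""
    a3 = E**2 - delta
    a2 = 2*delta - K
    a1 = 2*K - 2*omega*E*A - delta
    a0 = -K + A**2 - (omega*E - A)**2
    return [a0, a1, a2, a3]

def descartes_bound(coeffs):
    """Sign changes in (a3, a2, a1, a0): an upper bound on the
    number of positive real roots (Descartes' rule of signs)."""
    signs = [c for c in reversed(coeffs) if c != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u*v < 0)

# the example from the text: all coefficients negative
coeffs = radial_coeffs(0.99, 1.0, 4.0, 0.1*math.sqrt(4.0), 20.0)
assert all(c < 0 for c in coeffs)
assert descartes_bound(coeffs) == 0     # no positive roots: forbidden energy

# an E^2 > delta sample with a2 < 0 and a1 > 0: up to three positive roots
coeffs2 = radial_coeffs(2.0, 1.0, 4.0, 1.0, 0.5)
assert descartes_bound(coeffs2) == 3
```

The same helper applies unchanged to the massless case, where $\delta=0$ is simply passed in.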
From the analysis of the effective potential we conclude that in this case both (planetary) bound and escape orbits are possible. For other combinations of signs of the coefficients $a_2$ and $a_1$ the number of positive zeros is $1$ for $E^2>{\delta}$. Thus, for $|E|>1$ for massive particles the orbit types are 1. for $\omega>1$ a planetary bound orbit [**[BO]{}**]{} and an escape orbit [**[EO]{}**]{} or only an escape orbit which can be found in the figs. \[fig:pots\] and , or a bound orbit behind the pseudo-horizon and an escape orbit in the plots \[pot3jR\] and  2. for $\omega<1$ a two-world escape orbit [**[TWE]{}**]{} shown in the plot \[pot1\] and a many-world-bound orbit and an escape orbit in the picture . Also bound orbits in the inner region exist as illustrated in the figure \[pot2jR\]. The condition $|E|>{1}$ for massive particles under which bound orbits in this BMPV spacetime exist differs from the usual condition $|E|<{1}$ known from a large number of classical relativistic spacetimes such as Schwarzschild, Reissner-Nordström or Kerr. Looking at the variation of the values of the parameters ${K}$ and ${A}$ over the plots \[fig:pots\], \[fig:potsjR\] and \[pot:jRcrit\] we see that not only the rotation parameter $\omega$ influences the effective potential and correspondingly the types of orbits; the parameters ${K}$ and ${A}$ also play a significant role. Let us address this issue in detail. These parameters are not independent but related by the inequality  which defines the maximal and the minimal values of ${A}$, namely $\sqrt{{K}}$ and $-\sqrt{{K}}$ respectively. In fig. \[fig:potsjR\] we have plotted the potential for the same $\omega$ ($\omega<1$, i.e., underrotating case) and ${K}$ as in fig. \[fig:pots\] but with ${A}=\sqrt{{K}}$. We see that the difference is small: the potential bends slightly into the direction of the singularity. Compare now pictures \[pot1jR\] and \[pot2jR\].
If we just let ${A}$ grow, for example set it to its maximum value, we get a similar bending as before. Increasing ${K}$ instead leads to dramatic changes and the appearance of new orbit types. In fig. \[pot2jR\] behind the degenerate horizon a hook is formed which allows for a bound orbit [**[BO]{}**]{} that is not visible to an observer at infinity, together with a two-world-escape orbit [**[TWE]{}**]{}. This feature is reminiscent of the bound orbits behind the event horizons in the Reissner-Nordström spacetime with highly charged test particles [@Grunau:2010gd], in the Kerr-Newman spacetimes [@Hackmann:2013pva], or Myers-Perry spacetimes [@Kagramanova:2012hw]. Also the Kerr spacetime possesses this interesting property [@Chandrasekhar83; @Oneil]. From the discussion in the previous paragraph we can expect that growing ${K}$ and ${A}$ would influence the form of the effective potential also for $\omega>1$ (overrotating case). Indeed, comparing figs. \[pot3jR\] and  with the figs. \[pot3\] and  we see that the potential forms a loop for large ${A}$. The loop is located in the positive part of the $V_{\rm eff}$-axis for ${A}>0$ and it is below the $x$-axis in the case of negative ${A}$. In this new region bound orbits are possible which are again hidden for a remote observer by the pseudo-horizon, analogously to the previous case. The value of ${A}$ for which the loop forms is defined by the condition . The critical value ${A}^{c}$ given by  marks the beginning of the loop formation (see figure \[pot:jRcrit\]). In table \[tab2\] we summarize the results on the orbit types from the previous paragraphs.
[lccl|c|c]{}type & region & + zeros & orbit & $|E|$ & $\omega$\
B & (B) & 2 & MBO & &\
B$\rm _{E=0}$ & (B) & 2 & MCO & &\
Ė & (Ė) & 1 & TWE & &\
BE & (BE) & 3 & MBO, EO & &\
BĖ & (BĖ) & 3 & BO, TWE & &\
B & (B) & 2 & BO & &\
B$\rm _{E=0}$ & (B) & 2 & MCO & &\
B & (B) & 2 & BO & &\
E & (E) & 1 & EO & &\
BE & (BE) & 3 & BO, EO & &\
BE & (BE) & 3 & BO, EO & &\
### Dynamics of massless test particles  \[sec:delta0\] Here we will see that planetary bound orbits are also possible when $\delta=0$, i.e., for massless particles. The variety of possibilities for the signs of the coefficients $a_i$ in equation  (or eq. ) given by  reduces to a few cases when $\delta=0$. The coefficient $a_0$ remains negative, the coefficient $a_3$ is now always positive, and the coefficient $a_2$ is always negative. Only the sign of the coefficient $a_1$ is not fixed. With Descartes' rule of signs we infer that at most either $3$ (if $a_1>0$) or $1$ (if $a_1<0$) positive roots are possible. 
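The sign-change counting underlying this conclusion is easy to make concrete. The following minimal Python sketch applies Descartes' rule of signs to the quartic coefficient sequence $(a_3,a_2,a_1,a_0)$ for $\delta=0$; the numerical coefficient values below are arbitrary illustrative samples, not values derived from the metric:

```python
def descartes_bound(coeffs):
    """Number of sign changes in the coefficient sequence; by Descartes'
    rule of signs this bounds the number of positive real roots."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# delta = 0: a3 > 0 and a2 < 0, a0 < 0 are fixed; only the sign of a1 is free
assert descartes_bound([1.0, -2.0,  0.5, -1.0]) == 3   # a1 > 0: up to 3 positive zeros
assert descartes_bound([1.0, -2.0, -0.5, -1.0]) == 1   # a1 < 0: at most 1 positive zero
```

The same counter reproduces the root bounds quoted for massive particles when the appropriate sign patterns of the $a_i$ are inserted.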
For $\omega<1$ (underrotating case), when the potential barrier for $x\rightarrow \sqrt[3]{\omega^2}$ (VLS) is located behind the horizon, so that a test particle will necessarily cross the degenerate horizon, the case of three positive zeros implies a bound orbit behind the horizon [**[BO]{}**]{} and a two-world escape orbit [**[TWE]{}**]{}, or a many-world-bound orbit [**[MBO]{}**]{} and an escape orbit [**[EO]{}**]{}. In the case of one positive root only a two-world-escape orbit [**[TWE]{}**]{} exists. For $\omega>1$ (overrotating case) the potential barrier will keep a test particle from crossing the pseudo-horizon, which allows for a planetary bound orbit [**[BO]{}**]{} and an escape orbit [**[EO]{}**]{} in the case of $3$ positive roots of $P(x)$ . If $P(x)$ has one positive zero, only an escape orbit [**[EO]{}**]{} exists. We summarize these results in the table \[tab3\]. The discussion of the effective potential properties from section \[sec:pot\] together with the results of the table \[tab1\] applies also here. In figures \[fig:potsl\] and \[fig:potsljR\] we show examples of the effective potential for massless test particles. At infinity it tends to zero. Planetary bound orbits for particles with $\delta=0$, present in the overrotating case, are shown in fig. \[fig:potsl\] and fig. \[pot3ljR\]. This is another characteristic of the overrotating BMPV spacetime distinguishing it from the classical relativistic spacetimes. It is interesting to note that, because of the asymptotics of the potential at infinity, the maximum for $\omega<1$ (underrotating case) in fig. \[fig:potsl\] always exists, while for very large $\omega$, satisfying $\omega>1$ (overrotating case), neither a minimum nor a maximum may survive. In this case the potential resembles $\pm1/x$ curves, where the asymptotes are the $x$-axis and a vertical line at $x=\sqrt[3]{\omega^2}$ (VLS). In fig. \[pot2l\], plotted for moderate values of $\omega$ and ${K}$, both a minimum and a maximum exist. 
For the same reason as before we notice that they always come in a pair for $\delta=0$. In fig. \[fig:potsljR\] we choose large values of ${A}$. In this case for $\omega<1$ (underrotating case) bound orbits appear behind the horizon (fig. ). For $\omega>1$ (overrotating case) in the fig.  the value of ${A}$ satisfies the inequality . Then the $V^\pm_{\rm eff}$ parts of the effective potential  cross at $x=1$ and form a loop behind the pseudo-horizon, where bound orbits become possible. In fig.  the value of ${A}$ coincides with the critical value , where $x=1$ is additionally a discontinuity of the effective potential. In the wedge planetary bound orbits exist even for tiny absolute values of the energy, so that the pseudo-horizon might be approached very closely.
[lccl|c]{}type & region & + zeros & orbit & $\omega$\
Ė & (Ė) & 1 & TWE &\
BE & (BE) & 3 & MBO, EO &\
B$\rm _{E=0}$ & (BE) & 2 & MCO &\
BĖ & (BĖ) & 3 & BO, TWE &\
B$\rm _0$Ė & (BĖ) & 3 & SCO, TWE &\
E & (E) & 1 & EO &\
BE & (BE) & 3 & BO, EO &\
B$\rm _{E=0}$ & (BE) & 2 & MCO &\
BĖ & (BE) & 3 & BO, EO &\
B$\rm _0$E & (BE) & 3 & SCO, EO &\
### Reaching the singularity. The RHS of equation  or  must be non-negative to allow for physical motion of any test particle. Setting $x=0$ in  for test particles which could reach the singularity we get $$P(x=0)=a_0=-({K}-{A}^2)-(\omega E - {A})^2 \geq 0 \ .$$ The condition above is fulfilled if (recall the condition  for ${K}$ and ${A}^2$) $${K}={A}^2 \quad \text{and} \quad E=\frac{{A}}{\omega} \label{cond_sing_omega} \ .$$ From the previous discussion we know that for negative $\Delta_{\rm eff}$ in the effective potential , when $V^\pm_{\rm eff}$ becomes complex, no physical motion is possible in general. This is the region to the left of the only positive zero of $\Delta_{\rm eff}$. The region to the right of this zero is generally allowed, and the specific types of motion are finally determined by the non-negative RHS of the equation  and corresponding conditions on the parameter $E$. Substituting  into the expression for $\Delta_{\rm eff}$ in  we get $$\Delta_{\rm eff} ({K}={A}^2) = x ( x^3 \delta + {A}^2 x^2 -\omega^2 \delta ) \label{Delta_sing_omega} \ .$$ Analyzing the expression above we observe that for $\delta=1$ one positive zero always exists. Thus, between $x=0$ and the positive zero of  the potential  has complex values and this region is forbidden in general. Hence a test particle with $\delta=1$ cannot reach the singularity in this case. Consider now $\delta=0$. In this case all the roots of  are equal to zero and a test particle can reach the singularity. Substituting  into $P(x)$ in  and solving $P(x)=0$ for $x$ we get the turning points: $x_1=x_2=0$ and $x_3=\omega^2$. Here $x_1=x_2$ indicates a singular solution at $x=0$ and $x_3$ is a turning point of a two-world escape orbit of a massless test particle with ${K}={A}^2$ and $E=\frac{{A}}{\omega}$. The singular solution is indicated in the figures \[pot1ljR\] and \[pot4ljR\] (SCO). 
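The statement that for $\delta=1$ the cubic factor $x^3 + {A}^2x^2 - \omega^2$ of $\Delta_{\rm eff}({K}={A}^2)$ always has exactly one positive zero (one sign change in its coefficients) can also be spot-checked numerically. A minimal sketch, where the values of ${A}$ and $\omega$ are arbitrary samples:

```python
import numpy as np

# cubic factor of Delta_eff(K = A^2) for delta = 1: x^3 + A^2 x^2 - omega^2
for A, omega in [(0.3, 0.7), (1.0, 1.1), (2.0, 2.1)]:   # sample parameter values
    roots = np.roots([1.0, A**2, 0.0, -omega**2])
    positive = [z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0]
    assert len(positive) == 1   # one sign change, hence exactly one positive zero
```

Between $x=0$ and this positive zero the potential is complex, in accordance with the forbidden region described above.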
This singular solution is not physical, but it completes the set of all mathematically possible cases. ### Features of motion for $\boldsymbol{\omega=1}$ {#sec:omega1} Let us now consider the critical case $\omega=1$. In this case the area of the surface $x=1$ vanishes, while the VLS also resides at $x=1$. We address the critical case separately, since, as we will see, the features of the potential  change dramatically in this case. As in the overrotating case test particles travelling on geodesics may not cross the surface $x=1$ to enter the region $x<1$. However, the surface $x=1$ itself is reached by most types of geodesics. For $\omega=1$ the polynomial $P(x)$ in the equation  takes the form $$P(x) = 4 (x-1)(a x^2 + b x + c) = 4 (x-1) P_1(x) \label{reqn2_omega1} \ ,$$ with the coefficients $$\begin{aligned} && a = E^2-\delta \ , \quad b = \delta - {K} + E^2 \ , \nonumber \\ && c = {K} - {A}^2 + (E - {A})^2 \label{req_coeff_omega1} \ .\end{aligned}$$ We observe that $x=1$ is now a zero of $P(x)$. With the condition  the coefficient $c$ is non-negative. With Descartes' rule of signs we conclude that $P_1(x)$ has at most $2$ positive zeros if $a>0$ and $b<0$, none if $a>0$ and $b>0$, and one positive zero if $a<0$, for either sign of $b$. Thus, the previously seen many world bound orbits will now convert into two types. One type is an exterior orbit ($x\ge 1$) with $x=1$ the inner boundary of the motion, while the other type is an interior orbit ($x\le 1$) with $x=1$ the outer boundary of the motion. This also means that no planetary bound orbits are possible in this case. Also the previously present two world escape orbit changes, since $x=1$ cannot be traversed. For this type of orbit $x=1$ is now also the inner boundary of the motion. Note that we have kept the notation MBO and TWE for these orbits to show their connection with the underrotating and overrotating cases. 
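The root counts for $P_1(x)=ax^2+bx+c$ with $c\ge 0$ quoted above can be illustrated numerically; in this sketch the coefficient values are arbitrary samples realizing the three sign patterns, not values derived from :

```python
import numpy as np

def positive_real_roots(coeffs):
    """Real roots with positive real part of the polynomial with the given coefficients."""
    return [z.real for z in np.roots(coeffs)
            if abs(z.imag) < 1e-9 and z.real > 1e-9]

assert len(positive_real_roots([ 1.0, -3.0, 1.0])) == 2  # a>0, b<0: two positive zeros possible
assert len(positive_real_roots([ 1.0,  1.0, 1.0])) == 0  # a>0, b>0: none
assert len(positive_real_roots([-1.0,  1.0, 1.0])) == 1  # a<0: one positive zero,
assert len(positive_real_roots([-1.0, -1.0, 1.0])) == 1  #      for either sign of b
```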
The effective potential  becomes $$V^\pm_{{\rm eff}_1} = \frac{ {A} \pm \sqrt{ \Delta_{\rm eff} } }{x^2+x+1} \,\,\, \text{with} \,\,\, \Delta_{\rm eff}={A}^2 + (x^3-1) ({K} + x \delta) \ , \label{rpot1_omega1}$$ where we have substituted $x=r^2$. In contrast to the effective potential in  for the values of $\omega$ in the underrotating and overrotating cases, this effective potential has no pole at $x=\sqrt[3]{\omega^2}\equiv 1$. Since $x=1$ is no longer a point where the plus and minus parts of the potential intersect and become zero (under certain conditions as discussed in the section \[sec:pot\]), the conditions  for $\Delta_{\rm eff}$ or  for ${A}$, as well as the further conditions and conclusions there, cannot be directly applied here. But since $x=1$ is always a root of $P(x)$, evaluation of $V^\pm_{{\rm eff}_1}$ at this point gives: $$\begin{aligned} && {A}>0 \quad \Rightarrow \quad V^+_{{\rm eff}_1}(x=1)=\frac{2}{3} {A} \quad \text{and} \quad V^-_{{\rm eff}_1}(x=1)=0 \ , \\ && {A}<0 \quad \Rightarrow \quad V^+_{{\rm eff}_1}(x=1)=0 \quad \text{and} \quad V^-_{{\rm eff}_1}(x=1)=\frac{2}{3} {A} \ .\end{aligned}$$ Consider again ${A}>0$, since its negative counterpart just mirrors the potential w.r.t. the $x$-axis. We observe that the plus $V^+_{{\rm eff}_1}$ and minus $V^-_{{\rm eff}_1}$ parts of the potential  are identically zero at $x=1$ only if ${A}=0$. If ${A} \neq 0$ then only the minus part of the potential is zero at that point, while the plus part takes the positive value $\frac{2}{3}{A}$. Thus, for ${A} \neq 0$ the two parts of the potential cross behind the surface $x=1$, forming an additional allowed region between the singularity and the surface $x=1$. We show a few examples of the effective potential in the fig. \[fig:pot\_omega1\] for massive test particles and in the fig. \[fig:potl\_omega1\] for massless test particles. Analyzing the potentials in the figs. 
\[fig:pot\_omega1\] and \[fig:potl\_omega1\] we observe that the bound orbits are either located behind the surface $x=1$ or have one turning point exactly at $x=1$. A two world escape orbit always reaches $x=1$, and only an escape orbit has a turning point at a finite distance from $x=1$. Consider in detail ${A}=0$, when the effective potential is symmetric w.r.t. the $x$-axis and at $x=1$ both parts of the effective potential vanish. For $E=0$, the polynomial $P(x)$ has three zeros of the form $x_{1,2}=1$ and $x_3=-\frac{{K}}{\delta}<0$. Thus, for $E=0$ and ${A}=0$ a circular orbit at $x=1$ for a massive test particle exists. For $\delta=0$ and $E=0$ the polynomial $P(x)$ has a double zero at $x=1$. Thus, also massless test particles with ${A}=0$, $E=0$ can be on a circular orbit at $x=1$. In the tables \[tab4\] and \[tab5\] we list the types of orbits for $\delta=1$ and $\delta=0$ schematically.
[lccl|c]{}type & region & + zeros & orbit & $|E|$\
B$\rm _1$ & (B$\rm _1$) & 2 & MBO &\
$\rm _1$B & ($\rm _1$B) & 2 & MBO &\
B$\rm _{E=0}$ & ($\rm _1$B) & 2 & MCO &\
B$\rm _1$E & (B$\rm _1$E) & 3 & MBO, EO &\
$\rm _1$BE & ($\rm _1$BE) & 3 & MBO, EO &\
B $\rm _1$Ė & (B $\rm _1$Ė) & 3 & BO, TWE &\
$\rm _1$Ė & ($\rm _1$Ė) & 1 & TWE &\
#### Reaching the singularity when $\omega=1$. The singularity is located at $x=0$. 
Setting $x=0$ in the effective potential  we obtain: $$V^\pm_{{\rm eff}_1} (x=0) = {A} \pm \sqrt{{A}^2-{K}} \ . \label{rpot1_s1}$$ Taking into account the condition  the expression above makes sense if ${K}={A}^2$. In this case $$V^\pm_{{\rm eff}_1} (x=0) = {A} \ . \label{rpot1_s2}$$ Set now ${K}={A}^2$ and $E={A}$ in $P_1(x)$ in  and calculate its roots. They read $$x_1=0 \quad \text{and} \quad x_2=\frac{\delta}{\delta-{K}} \label{x12_s1} \ .$$ Consider $\delta=1$. If ${K}<1$ then $x_2>1$. Then the three roots of the polynomial $P(x)$ are: $0$, $1$ and $x_2>1$. Keeping in mind the form and properties of the effective potential we conclude that the singularity is located in the forbidden grey region. If ${K}>1$ then $x_2<0$ and there are two non-negative roots of $P(x)$: $1$ and $x_1=0$. Again, the form and properties of the effective potential tell us that the singularity is located in the forbidden grey region and $x=1$ is the only physically relevant solution being a boundary point of a two world escape orbit. Thus, a massive test particle cannot reach the singularity. Consider $\delta=0$. In this case $x_2=0$. A massless test particle with ${K}={A}^2$ and $E={A}$ may mathematically be on a circular orbit at $x=0$. This corresponds to the peak of the allowed white part of the effective potential behind the horizon in fig. \[potl2\_o1\]. 
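The turning points $x_1=0$ and $x_2=\frac{\delta}{\delta-{K}}$ follow directly from the coefficients ; a minimal symbolic check of this computation (using sympy, with the coefficient expressions for $a$, $b$, $c$ from the $\omega=1$ case):

```python
import sympy as sp

x = sp.symbols('x')
A, delta = sp.symbols('A delta', positive=True)
K, E = A**2, A                      # the conditions K = A^2 and E = A
a = E**2 - delta
b = delta - K + E**2                # simplifies to delta
c = K - A**2 + (E - A)**2           # simplifies to 0
P1 = a*x**2 + b*x + c
sols = sp.solve(sp.Eq(P1, 0), x)
# expected turning points: x1 = 0 and x2 = delta/(delta - K)
assert 0 in sols
assert any(sp.simplify(s - delta/(delta - K)) == 0 for s in sols)
```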
[lccl]{}type & region & + zeros & orbit\
B$\rm _1$E & (B$\rm _1$E) & 3 & MBO, EO\
$\rm _1$BE & ($\rm _1$BE) & 3 & MBO, EO\
B$\rm _{E=0}$ & ($\rm _1$BE) & 2 & MCO\
B $\rm _1$Ė & (B $\rm _1$Ė) & 3 & BO, TWE\
B$\rm _0$ $\rm _1$Ė & (B $\rm _1$Ė) & 3 & SCO, TWE\
$\rm _1$Ė & ($\rm _1$Ė) & 1 & TWE\
### The ${K}$-$E$ diagrams  \[sec:diag\] To supplement the study of the influence of the parameters of the spacetime and of the test particle itself on the particle's dynamics, carried out in the previous subsections in terms of effective potentials, we use here the method of double zeros to refine our knowledge. For this we consider the resultant $\mathcal{R}$ of the two equations $$P(x)=0 \quad \text{and} \quad P^\prime(x) = 0 \ ,$$ where $P(x)$ is the polynomial in  and $P^\prime(x)$ is the derivative of $P(x)$ w.r.t. $x$. The resultant is an algebraic function of the form $\mathcal{R}=(E^2-\delta)P(\omega,\frac{{A}}{\sqrt{{K}}},{K},E,\delta)$ with a long polynomial $P(\omega,\frac{{A}}{\sqrt{{K}}},{K},E,\delta)$ in $\omega$, the ratio $\frac{{A}}{\sqrt{{K}}}$, ${K}$, $E$ and $\delta$, which we do not give here. Instead, we visualize it by fixing the value of $\omega$ and letting ${K}$ and $E$ vary. The ratio ${A}/\sqrt{{K}}$ is also fixed for a single plot. We use here the notation of regions introduced in table \[tab2\] for massive test particles and table \[tab3\] for massless test particles. #### Massive test particles. 
Consider first fig. \[fig:jEwlt1\] for $\omega<1$ (underrotating case) and massive test particles. In the pictures  and  we choose $\omega=0.7$. Having plotted the diagrams for other values of $\omega$ smaller than $1$ we do not see any qualitative difference between the plots. In this case it is the ratio ${A}/\sqrt{{K}}$ which essentially influences the form of the diagram. Looking at \[jE\_w\_lt1\_1\], where ${A}=0.1 \sqrt{{K}}$, we observe that the possible orbits in the region (B) are many world bound orbits for $|E|<1$. For $|E|>1$ the region (Ė) with two world escape orbits, and the region (BE) containing a many world bound orbit and an escape orbit are possible. This diagram generalizes the effective potential in fig. \[pot1\] to higher ${K}$ values. For ${A}=\sqrt{{K}}$ in fig. \[jE\_w\_lt1\_2\] for $\omega=0.7$ a new region (BĖ) with bound orbits hidden behind the horizon and a two world escape orbit appears. This happens for large ${K}$ values. We have seen such regions in the effective potential for $\omega=0.9$ in fig. \[pot2jR\]. We note that the regions (Ė), separated from each other by a blue line indicating the presence of double roots in $P(x)$, have one positive zero but either two negative or two complex conjugate zeros, which are not relevant for the physical motion. In fig. \[fig:jEwgt2\] we show the ${K}$-$E$ diagrams for $\omega>1$ (overrotating case), namely for $\omega=1.1$ in the first column and $\omega=2.1$ in the second column. Compare figures  and  for small ${A}$. In the region (B) planetary bound orbits BO for $|E|<1$ are possible. These orbits can be found in the potential plots \[pot3\] and . In the region (0) no orbits exist: it corresponds to the forbidden grey regions in the figures \[pot3\],  for $|E|<1$. Here the value of ${A}$ is smaller than the critical value ${A}^{c}$ defined in , for which a loop with bound orbits in the inner region forms (see also the discussion in the section \[sec:pot\]). Regions (E) contain escape orbits. 
They are separated in the pictures since the number of negative or complex zeros, irrelevant for physical motion, varies there. In the region (BE) for larger ${K}$ values and $|E|>1$ both planetary bound and escape orbits exist. From these diagrams we can infer that for growing $\omega$ the region (BE) becomes smaller. It disappears for large $\omega$. For very large $\omega$ also the region (B) does not exist and only escape orbits in the region (E) are left. In this case the potential is reminiscent of $\pm 1/x$ curves (we have already observed this for massless test particles in the section \[sec:delta0\]). Continuing with the analysis of fig. \[fig:jEwgt2\] we compare figures  and  for ${A}=0.59 \sqrt{{K}}$. In the plot \[jE\_w\_gt1\_2\] for smaller $\omega$ new regions form. For $0<E<1$ this is a region (B) with bound orbits behind the pseudo-horizon like the one in the potentials \[pot3jR\] or \[pot4jR\]. For $-1<E<0$ the region (B) grows and the region (0) with no motion becomes smaller compared to the plot . Also for $E>1$ a new region (BE) with bound orbits behind the pseudo-horizon and an escape orbit appears. The region (BE) with planetary bound and escape orbits is still there, which is best seen in the inlay of the picture \[jE\_w\_gt1\_2\]. By contrast, this region disappears for $\omega=2.1$ in the plot \[jE\_w\_gt1\_5\], and no new region appears here. For the maximal ${A}$ value in the plots \[jE\_w\_gt1\_3\] and  the regions with inner bound orbits (B) and (BE) dominate in both plots for $E>0$ and the region (BE) is present only for negative $E$. Planetary bound orbits for positive energies are possible in the plots \[jE\_w\_gt1\_3\] and  only for $0<E<1$ and small ${K}$ in the region (B). For large $\omega$ and ${A}=\sqrt{{K}}$ the region (B) for $0<E<1$ disappears. 
For negative ${A}$ the described behaviour is of course inverted: in this case the region (BE) is on the side with positive energies and the regions with inner bound orbits (B) and (BE) are on the negative energy side. An example of the effective potential for ${A}=-\sqrt{{K}}$ is shown in fig. \[pot4jR\]. Here planetary bound orbits and escape orbits located in the regions (B) and (BE) exist for positive $E$. #### Massless test particles. As we know from table \[tab3\], the variety of trajectory types for massless test particles is not as rich as for massive particles. But photons can still move on a bound trajectory. In the following diagrams we will see for which ${A}/\sqrt{{K}}$ and $E$ that is possible. In fig. \[fig:jElwlt1\] we present two ${K}$-$E$ diagrams for $\omega=0.7$ (underrotating case) and growing $\frac{{A}}{\sqrt{{K}}}$ ratio. In the figure  we have (i) the region (BE) with many world bound and escape orbits, and (ii) the region (Ė) with two world escape orbits. A typical effective potential for this diagram is shown in fig. \[pot1l\]. From the discussion in the section \[sec:pot\] we know that inner bound orbits hidden from external observers are possible. This happens for high $\frac{{A}}{\sqrt{{K}}}$ ratios and corresponds to a new region (BĖ) in the figure  with inner bound orbits and two world escape orbits. Such orbits can be found for example in the potential \[pot1ljR\] for $\omega=0.7$ and ${A}=\sqrt{{K}}$. Consider now $\omega>1$ (overrotating case) and fig. \[fig:jElwgt2\]. In the plot , where the ratio $\frac{{A}}{\sqrt{{K}}}$ is smaller (for positive ratio) than the critical value $\frac{{A}^{c}}{\sqrt{{K}}}=\pm\sqrt{\frac{\omega^2-1}{\omega^2}}$ from , the region (BE) with planetary bound and escape orbits exists both for positive and negative energies. This corresponds to the effective potential in the plot \[pot2l\]. The second possible region (E) contains escape orbits. 
When the ratio $\frac{{A}}{\sqrt{{K}}}$ is equal to the critical value $\pm\sqrt{\frac{\omega^2-1}{\omega^2}}$, the region (BE) for positive $\frac{{A}}{\sqrt{{K}}}$ disappears for positive energies. Thus, for $E>0$ only the region (E) exists, as shown in the plot . Regions (BE) and (E) are still there for $E<0$ and there is only one blue line for negative energies separating these regions. This is inverted for negative $\frac{{A}^{c}}{\sqrt{{K}}}$. For $E=0$ the polynomial $P(x)$ has $2$ positive zeros equal to $1$ all over the ${K}$-axis. This corresponds to the MCO orbit in table \[tab3\]. In fig. \[pot3ljR\] we show an effective potential for $\frac{{A}}{\sqrt{{K}}}=-\sqrt{\frac{\omega^2-1}{\omega^2}}$. Let the ratio $\frac{{A}}{\sqrt{{K}}}$ be larger than $\frac{{A}^{c}}{\sqrt{{K}}}=\sqrt{\frac{\omega^2-1}{\omega^2}}$, choosing the positive sign (for the negative sign of the ratio the picture is inverted). Then a new region (BE) for positive energies appears. This region grows for increasing $\frac{{A}}{\sqrt{{K}}}$ and reaches a maximal size for ${A}=\sqrt{{K}}$, as shown in the diagram . An example of the effective potential for ${A} =0.95 \sqrt{{K}} > {A}^{c}$ is shown in fig. \[pot2ljR\], and for ${A} = \sqrt{{K}} > {A}^{c}$ in fig. \[pot4ljR\]. In the region (BE) in the diagram  inner bound and escape orbits exist. For $E<0$ outer bound and escape orbits are still present. For a somewhat larger value of $\omega$, e.g. $\omega=2.1$, and ${A}<{A}^{c}$ from equation  (we choose positive ${A}$) only the region (E) survives and the region (BE) for positive energies does not exist, since the effective potential no longer has a minimum and a maximum for $E>0$. These exist only for negative energies. In this case the ${K}$-$E$ diagram has a form like in fig. \[jEl\_w\_gt1\_4\]. For the critical value ${A}={A}^{c}$ and for ${A}>{A}^{c}$ the diagrams look similar to the pictures \[jEl\_w\_gt1\_3\] and \[jEl\_w\_gt1\_2\], respectively. 
When further increasing $\omega$, for non-critical (and positive) ${A}$ the minimum and maximum in the effective potential like in fig. \[pot2l\] no longer exist even for negative energies, and the potential is reminiscent of $\pm \frac{1}{x}$ curves, as we already know. Both sides of the ${K}$-$E$ diagram for positive and negative energies then contain only the (E) regions. For a ratio $\frac{{A}}{\sqrt{{K}}} > \sqrt{\frac{\omega^2-1}{\omega^2}}$ (larger than the critical value), the regions with an inner bound orbit and an escape orbit (BE) for positive energies and with bound and escape orbits (BE) for negative energies form. Here again the diagrams are of type \[jEl\_w\_gt1\_2\] for $\frac{{A}}{\sqrt{{K}}}>\frac{{A}^{c}}{\sqrt{{K}}}$ and of type \[jEl\_w\_gt1\_3\] for $\frac{{A}}{\sqrt{{K}}}=\frac{{A}^{c}}{\sqrt{{K}}}$, where the (BE) region for negative energies exists. #### ${K}$-$E$ diagrams for $\omega=1$. Consider the critical case $\omega=1$. We have already studied the properties of the motion in the section \[sec:omega1\]. A feature of this critical $\omega$-value is that most orbit types reach the surface $x=1$. But no orbits can pass the surface $x=1$ from the outer region to reach smaller values of $x$, or pass the surface from the inner region to reach larger values of $x$. Thus $x=1$ presents a boundary for the geodesic motion. Figs. \[fig:pot\_omega1\] and \[fig:potl\_omega1\] present effective potentials for massive and massless test particles for large (up to maximal) values of ${A}$. Tables \[tab4\] and \[tab5\] show all possible orbit types for $\delta=1$ and $\delta=0$. Here we present the ${K}$-$E$ diagrams for massive (figure \[fig:jEw1\]) and massless test particles (figure \[fig:jElw1\]). Consider first massive test particles. The diagram \[jE\_w\_1\] is plotted for ${A}=0.3 \sqrt{{K}}$. Here all orbits have $x=1$ as a turning point. 
For increasing ${A}$, as for example in the plot \[jE\_w\_2\] for ${A}=\sqrt{{K}}$, a region B$_1$Ė with a bound orbit behind the horizon and a two world escape orbit with a turning point at $x=1$ appears. Since we choose ${A}>0$, the many world bound orbits of the region (B$_1$) and the many world bound and escape orbits of the region (B$_1$E) exist only for positive energies. This corresponds to the white allowed region in the potential \[fig:pot\_omega1\]. In fig. \[fig:jElw1\] we show the diagrams for massless test particles, again for ${A}=0.3 \sqrt{{K}}$ in the plot  and for ${A} = \sqrt{{K}}$ in the plot . All orbit types from the schematic representation in table \[tab5\] can be found there. The region (B$_1$Ė) with an inner bound orbit and a two world escape orbit exists here as well (diagram ). But contrary to the diagrams for massive test particles in the figure \[jE\_w\_2\], it also exists for very small ${K}$ and $E$. The $\varphi$-equation ----------------------  \[sec:varphi\] The $\varphi$–equation  consists of $r$– and $\vartheta$–dependent parts $$d\varphi = \frac{\omega E}{r^2-1} d\tau - \frac{\Phi}{\sin^2\vartheta} d\tau = \frac{\omega E}{r^2-1} \frac{rdr}{\sqrt{R}} - \frac{\Phi}{\sin^2\vartheta} \frac{d\vartheta}{\sqrt{\Theta}} \ . \label{varphieqn2}$$ With the notations $$\begin{aligned} && I_r= \frac{1}{r^2-1} \frac{rdr}{\sqrt{R}} \label{Ir} \ , \\ && I^\varphi_\vartheta = \frac{1}{\sin^2\vartheta} \frac{d\vartheta}{\sqrt{\Theta}} \label{Ivartheta} \,\end{aligned}$$ we integrate the equation , getting $$\varphi - \varphi_0 = \omega E \int^r_{r_0} I_r - \Phi \int^\vartheta_{\vartheta_0} I^\varphi_\vartheta \ . \label{varphieqn3}$$ Consider first the radial differential $I_r$. 
We make the substitution $r^2=x=\frac{1}{a_3}(y-\frac{a_2}{3})$ as in the section \[sec:radial\]: $$I_r= \frac{a_3}{y-p} \frac{dy}{\sqrt{P_3(y)}} \label{Ir2} \ .$$ Substituting $y=\wp(v)$ from  where $$v=v(\tau)=\tau - \tau^\prime \, \label{t_v}$$ and $\tau^\prime$ is given by , we get: $$I_r = \frac{a_3}{(\wp(v)-p)} dv \ , \label{Ir3}$$ where $p = a_3 +\frac{a_2}{3}$. The integration  reads $$\int^v_{v_0} I_r = a_3 I_1 \ ,$$ with $I_1$ given by [@Markush; @Kagramanova:2010bk; @Grunau:2010gd; @Hackmann:2010zz; @Kagramanova:2012hw] $$I_1 = \int^v_{v_0} \frac{1}{\wp(v)-p} = \frac{1}{\wp^\prime(v_{p})} \Biggl( 2\zeta(v_{p})(v-v_{ 0 }) + \ln\frac{\sigma(v-v_{p})}{\sigma(v_0 - v_{p})} - \ln\frac{\sigma(v + v_{p})}{\sigma(v_{ 0 } + v_{p})} \Biggr) \ , \label{I1}$$ with $\wp(v_{p})=p$, $v(\tau)$ given by  and $v_0 = v(0)$. Consider now the angular part $I^\varphi_\vartheta$. Like in the section \[sec:beta\] we make the substitution $\xi=\cos^2\vartheta$: $$I^\varphi_\vartheta = \frac{1}{2(\xi-1)} \frac{d\xi}{\sqrt{\Theta_\xi}} \label{Ivartheta2} \ ,$$ where $\Theta_\xi$ is defined in . With the substitution $u=\frac{2b_2\xi+b_1}{\sqrt{D_\xi}}$, where the coefficients $b_i$ and the discriminant $D_\xi$ are given by  and , the integration of $I^\varphi_\vartheta$ is given by an elementary function: $$\int^\xi_{\xi_0} I^\varphi_\vartheta = \frac{1}{|{A}-{B}|} \arctan{ \frac{1-u\beta}{\sqrt{1-u^2}\sqrt{\beta^2-1}} } \Bigl|^{\xi(\tau)}_{\xi_{0} } \ , \label{Ivartheta3}$$ where $$\beta = \frac{-{K}+{A}{B}}{\sqrt{D_\xi}} \ , \quad \beta^2-1 = \frac{{K}({A}-{B})^2}{D_\xi} \geq 0 \ , \label{Ivartheta4}$$ Finally, the integration of the $\varphi$–equation yields: $$\varphi(\tau) = \varphi_0 + \omega E a_3 I_1 - \frac{\Phi}{|{A}-{B}|} \arctan{ \frac{1-u\beta}{\sqrt{1-u^2}\sqrt{\beta^2-1}} } \Bigl|^{\xi(\tau)}_{\xi_{0} } \ , \label{varphieqn4}$$ with $I_1$ given by . $\varphi(\tau)$ is a function of $\tau$ since $\xi(\tau)=\cos^2\vartheta(\tau)$ (equation ) is a function of $\tau$. 
The $\psi$-equation -------------------  \[sec:psi\] The $\psi$–equation  consists of $r$– and $\vartheta$–dependent parts, similar to the $\varphi$–equation in the section \[sec:varphi\]: $$d\psi = - \frac{\omega E}{r^2-1} d\tau - \frac{\Psi}{\cos^2\vartheta} d\tau = - \frac{\omega E}{r^2-1} \frac{rdr}{\sqrt{R}} - \frac{\Psi}{\cos^2\vartheta} \frac{d\vartheta}{\sqrt{\Theta}} \ . \label{psieqn2}$$ With the same substitutions as in the section \[sec:varphi\] for the $\varphi$–equation we can write down the expression for the coordinate $\psi$: $$\psi (\tau)= \psi_0 - \omega E a_3 I_1 + \frac{\Psi}{|{A}+{B}|} \arctan{ \frac{1-u\beta_1}{\sqrt{1-u^2}\sqrt{\beta^2_1-1}} }\Bigl|^{\xi(\tau)}_{\xi_{0} } \ , \label{psieqn3}$$ with $I_1$ given by  and $$\beta_1 = \frac{{K}+{A}{B}}{\sqrt{D_\xi}} \ , \quad \beta_1^2-1 = \frac{{K}({A}+{B})^2}{D_\xi} \geq 0 \ . \label{psieq:beta1}$$ $\psi(\tau)$ is a function of $\tau$ since $\xi(\tau)=\cos^2\vartheta(\tau)$ (equation ) is a function of $\tau$. The $t$-equation ----------------  \[sec:time\] We replace $d\tau$ in  by  and make the substitution $r^2=\frac{1}{a_3}(y-\frac{a_2}{3})$ as in the section \[sec:radial\]. Next we apply the partial fraction decomposition and substitute $y=\wp(v)$ as in the section \[sec:varphi\]: $$dt = \left( \frac{E}{a_3} \left(2 a_3 - \frac{a_2}{3}\right) + \frac{E}{a_3} \wp(v) + \frac{ a_3 (3 E - \omega {A})}{\wp(v)-p} + \frac{a_3^2 E ( 1 - \omega^2 )}{(\wp(v)-p)^2} \right) dv \ , \label{teqn4}$$ where $p = a_3 +\frac{a_2}{3}$ and again $v=v(\tau)=\tau - \tau^\prime$ as defined by  with $\tau^\prime$ given by . 
The integration of  reads $$t (\tau) = t_0 + \frac{E}{a_3} \left(2 a_3 - \frac{a_2}{3}\right) (v-v_0) - \frac{E}{a_3} (\zeta(v) - \zeta(v_0)) + a_3 (3 E - \omega A) I_1 + a_3^2 E ( 1 - \omega^2 ) I_2 \ , \label{teqn5}$$ where $I_2$ is given by [@Markush; @Kagramanova:2010bk; @Grunau:2010gd; @Hackmann:2010zz; @Kagramanova:2012hw] $$\begin{aligned} && I_2 = \int^v_{v_0} \frac{1}{(\wp(v)-p)^2} = -\frac{\wp^{\prime\prime}(v_p)}{(\wp^{\prime}(v_p))^2} I_1 \nonumber \\ && \quad - \frac{1}{(\wp^{\prime}(v_p))^2} \left( 2\wp(v_p)(v-v_0) + 2(\zeta(v)-\zeta(v_0)) + \frac{\wp^{\prime}(v)}{\wp(v)-\wp(v_p)}-\frac{\wp^{\prime}(v_0)}{\wp(v_0)-\wp(v_p)} \right) \label{I2}\end{aligned}$$ and $I_1$ by , with $\wp(v_p)=p$, $v(\tau)$ given by  and $v_0 = v(0)$. Causality ---------  \[sec:ctc\] The equation  can be written in the form $$\frac{dt}{d\tau} = -\frac{(\omega^2 - x^3)}{(x-1)^2}\left( E - V^t \right) \label{teqn1_Vtime} \ ,$$ with $x=r^2$ and the potential $V^t$ $$V^t=-\frac{\omega A (x-1)}{\omega^2 - x^3} \label{Vtime} \ .$$ Note the factor $\Delta_\omega = 1 - \frac{\omega^2}{x^3}$ in these expressions and recall that $\Delta_\omega = 0$ represents the VLS. In the figs. \[fig:ctcpots\], \[fig:ctcpots2\] and \[fig:ctcpots3\] the black solid line denotes the time potential . It marks the boundary of the dashed region. In the dashed regions, for positive and negative values of the energy, the direction of the time flow in  changes, i.e. $\frac{dt}{d\tau}$ becomes negative there. The grey region, as before, denotes the regions forbidden for motion. Orbits ======  \[section:orbits\] To visualize the geodesics we use the Cartesian coordinates $(X,Y,Z,W)$ in the form: $$\begin{aligned} &&X=r\sin\vartheta\cos\varphi \ , \, Y=r\sin\vartheta\sin\varphi \ , \nonumber \\ &&Z=r\cos\vartheta\cos\psi \ , \, W=r\cos\vartheta\sin\psi \label{XYZW} \ ,\end{aligned}$$ where $r\in[0,\infty) \ , \vartheta \in [0, \frac{\pi}{2}] \ , \varphi \in [0, 2 \pi) \ , \psi \in [0, 2 \pi)$. 
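As a sanity check of this embedding, the four coordinates satisfy $X^2+Y^2+Z^2+W^2=r^2$, i.e. the two circles parametrized by $\varphi$ and $\psi$ combine to a three-sphere of radius $r$; a one-line symbolic verification:

```python
import sympy as sp

r, th, ph, ps = sp.symbols('r vartheta varphi psi', positive=True)
X = r*sp.sin(th)*sp.cos(ph)
Y = r*sp.sin(th)*sp.sin(ph)
Z = r*sp.cos(th)*sp.cos(ps)
W = r*sp.cos(th)*sp.sin(ps)
# the squares of the four embedding coordinates sum to r^2
assert sp.simplify(X**2 + Y**2 + Z**2 + W**2 - r**2) == 0
```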
$\vartheta=\frac{\pi}{2}$ -------------------------  \[section:2dorbits\] We first consider motion in the plane $\vartheta=\frac{\pi}{2}$. Then only motion w.r.t. the angle $\varphi$ is present. From the function $\Theta$ in equation  it follows that in this case $\Psi=0$ and ${K}=\Phi^2$. This implies ${A}=-\Phi$, ${B}=\Phi$, i.e. ${A}=\pm\sqrt{{K}}$. For $\vartheta=\frac{\pi}{2}$ the $\varphi$–equation  consists only of the $r$–dependent part and a constant: $$d\varphi = \left( \frac{\omega E}{r^2-1} - \Phi \right) d\tau = \left( \frac{\omega E}{r^2-1} - \Phi \right) \frac{rdr}{\sqrt{R}} \ . \label{varphieqn2_1}$$ Next we carry out the same substitutions as in Section \[sec:varphi\]. Integration of equation  then yields $$\varphi (\tau) = \varphi_0 + \int^y_{y_0} \left( \omega E \frac{a_3}{y-p} -\Phi \right) \frac{dy}{\sqrt{P_3(y)}} = \varphi_0 + \omega E a_3 I_1 - \Phi (v-v_{0}) \ , \label{varphieqn3_1}$$ where, as in Section \[sec:varphi\], $p = a_3 +\frac{a_2}{3}$, $I_1$ is given by  and $v_0 = v(0)$ for $v(\tau)=\tau-\tau^\prime$ from equation . From the $\varphi$-equation  we observe that $\frac{d\varphi}{d\tau}$ vanishes at $$x\equiv r^2= 1+\frac{\omega E}{\Phi} \ . \label{turn}$$ This means that the angular direction of the test particle motion is reversed when the particle arrives at this point. Such an effect is usually known to occur in the presence of an ergosphere, as for example in the Kerr or Kerr-Newman spacetimes, where a counterrotating orbit is forced to corotate with the black hole spacetime. But contrary to those spacetimes, the BMPV spacetime does not possess an ergoregion, since its horizon angular velocity vanishes. In the following we will call this surface the [*turnaround boundary*]{}. In figs. \[fig1:orb\], \[fig2:orb\] and \[fig3:orb\] we show two-dimensional $X$-$Y$ plots for the underrotating case, $\omega<1$.
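The sign change of $\frac{d\varphi}{d\tau}$ across the turnaround boundary  is easy to check numerically. A small sketch with illustrative parameter values (not taken from the figures):

```python
import math

def dphi_dtau(x, omega, E, Phi):
    """Angular velocity in the plane vartheta = pi/2, as a function of x = r^2."""
    return omega*E/(x - 1.0) - Phi

omega, E, Phi = 0.7, 2.0, 1.0
x_turn = 1.0 + omega*E/Phi      # eq. (turn): x = 1 + omega*E/Phi
assert abs(x_turn - 2.4) < 1e-12

# dvarphi/dtau vanishes exactly at the boundary and flips sign across it,
# so the angular direction of motion reverses there.
assert abs(dphi_dtau(x_turn, omega, E, Phi)) < 1e-12
assert dphi_dtau(x_turn - 0.4, omega, E, Phi) > 0   # inside the boundary
assert dphi_dtau(x_turn + 0.6, omega, E, Phi) < 0   # outside the boundary
```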
The first three orbits in fig. \[fig1:orb\] are many-world-bound orbits, and the orbit in the fourth figure is a two-world escape orbit. The orbits \[orb1\] and \[orb2\] correspond to the potential \[pot1jR\] with $A=\sqrt{K}$. The orbits \[orb11\] and \[orb22\] have the opposite sign of $A$. We see that in the plots \[orb2\], \[orb11\] and \[orb22\] the orbits cross the dashed circle corresponding to the ‘turnaround boundary’, where the test particle changes its angular direction of motion. In the plot \[orb1\] the ‘turnaround boundary’ has no influence on the orbit. In fig. \[fig2:orb\] we show orbits for the same value of $\omega$. In fig. \[orb111\] the many-world-bound orbit is located inside the ‘turnaround boundary’, while the escape orbit \[orb112\] lies outside. The two-world escape orbit \[orb120\] experiences the influence of the ‘turnaround boundary’, similar to the orbit in fig. \[orb22\]. In fig. \[fig3:orb\] we show $X$-$Y$ orbits for $\omega=0.9$. The many-world bound orbits \[orb7\] and \[orb8\], plotted for different values of the energy $E$, are angularly deflected at the ‘turnaround boundary’, while the escape orbits \[orb71\] and \[orb81\] remain far away from the ‘turnaround boundary’. We now turn to orbits in the overrotating case, $\omega>1$. In figs. \[fig4:orb\], \[fig5:orb\] and \[fig6:orb\] we show trajectories for $\omega=1.1$, and in figs. \[fig7:orb\] and \[fig8:orb\] trajectories for $\omega=2.1$ and varying values of the separation constant $K$ or angular momentum $\Phi$. For $\omega>1$ both bound and escape orbits are possible, as we know from the previous sections. In figs. \[orb3\] and \[orb4\] bound orbits deflected at the ‘turnaround boundary’ are shown. The escape orbits \[orb31\] and \[orb41\] are not influenced by the ‘turnaround boundary’. In the plot \[orb5\] a bound orbit is located inside the ‘turnaround boundary’, while the escape orbit \[orb51\] is angularly deflected there. In fig.
\[orb9\] the bound orbit is behind the pseudo-horizon and lies beyond the ‘turnaround boundary’. The escape orbit \[orb91\] for the same value of the energy is of general hyperbolic type. Figs. \[orb6\] and \[orb61\] for $\omega=2.1$ show a bound orbit influenced by the ‘turnaround boundary’ and an escape orbit. In fig. \[orb10\] a bound orbit in the form of a Christmas star is located inside the ‘turnaround boundary’, while the particle on the escape orbit \[orb101\] changes its angular direction of motion at the ‘turnaround boundary’. Three dimensional orbits ------------------------  \[section:3dorbits\] In this section we visualize three-dimensional geodesics. The orbits shown are three-dimensional projections of the general four-dimensional orbits. This projection explains the occasionally not very smooth appearance of parts of the orbits in the figures below. In fig. \[fig3d:orb1\] we show a many-world bound orbit  and an escape orbit  for the underrotating case with $\omega=0.7$. Because of the choice of the coordinates there is a divergence at the horizon. The motion is continued on the inner side of the horizon. In fig. \[fig3d:orb2\] we show trajectories for the overrotating case with $\omega=1.1$. In figs. \[3dorb2\] and \[3dorb3\] the orbits are bound and possess different values of the energy $E$. In fig. \[3dorb3\] the energy value is close to the minimum of the effective potential. Fig. \[3orb21\] shows the corresponding escape orbit for the bound orbit \[3dorb2\], while the orbit \[3dorb31\] has an almost critical value of the energy $E$, corresponding to the maximum of the effective potential. The trajectories in fig. \[fig3d:orb3\] are a bound orbit behind the pseudo-horizon \[3dorb4\] and an escape orbit \[3orb51\] for the same value of the energy. Conclusions =========== In this paper we have discussed the orbits of neutral test particles in the BMPV spacetime.
We have solved the full set of geodesic equations analytically in terms of Weierstrass functions. We have analyzed in detail the effective potential of the radial equation, and presented a complete classification of the possible orbits in this spacetime. We have also addressed the causal properties of the BMPV spacetime. Our results are in full accordance with previous more qualitative discussions [@Gibbons:1999uv; @Herdeiro:2000ap; @Herdeiro:2002ft; @Cvetic:2005zi]. - In the underrotating case, when the rotation parameter $\omega <1$, the BMPV spacetime describes supersymmetric black holes. Here the velocity of light surface, which forms the boundary inside which causality violation can occur, is hidden behind the horizon, located at $x=1$. The possible types of orbits in this black hole spacetime are classified in table \[tab2\] and table \[tab3\]. There exist no planetary type orbits in the outer spacetime. But there are many-world bound orbits. Moreover, bound orbits are found in the interior region $x<1$ for massive and massless particles. - In the overrotating case $\omega >1$ the BMPV spacetime represents in its exterior region $x \ge 1$ a repulson. No geodesics can cross the pseudo-horizon located at $x=1$. The exterior region is geodesically complete. The velocity of light surface is, however, outside the pseudo-horizon. Therefore this spacetime represents a naked time machine. The possible types of orbits in this overrotating spacetime are also classified in table \[tab2\] and table \[tab3\]. The outer BMPV spacetime now allows for planetary bound orbits for particles and light. But there are also bound orbits in the interior region for massive and massless particles. - In the critical case $\omega=1$ the surface $x=1$ has vanishing area and coincides with the velocity of light surface. As in the repulson case, no geodesics can cross this surface to reach the interior region $x<1$. However, most types of orbits reach the surface $x=1$.
The possible types of orbits in this critical spacetime are classified in table \[tab4\] and table \[tab5\]. Only escape orbits in the outer region and bound orbits in the interior region do not reach the surface $x=1$. This holds for massive particles and for light. To illustrate the analytical solutions we have presented various types of trajectories in the plane $\vartheta=\frac{\pi}{2}$, and subsequently performed a three-dimensional projection for some selected orbits. An interesting effect present in many of the orbits is the change of the angular direction at the ‘turnaround boundary’, where the derivative of the azimuthal angle w.r.t. the radial coordinate vanishes. This effect arises although there is no ergosphere in the spacetime. The resulting orbits have rather intriguing shapes, differing from the ones of the Kerr, Kerr-Newman or Myers-Perry spacetimes. It should be interesting to next address the motion of charged test particles in the BMPV spacetime. This should yield the full analytical solution of the set of equations presented and discussed by Herdeiro [@Herdeiro:2000ap; @Herdeiro:2002ft]. An analysis analogous to the one given here should yield the complete classification of the possible types of orbits. Since the BMPV solution may be considered a subset of the more general family of solutions found by Chong, Cvetic, Lü and Pope [@Chong:2005hr], it will be interesting to extend the present study to this set of solutions. They include two unequal rotation parameters and also describe non-extremal solutions [@Chong:2005hr]. Moreover, a cosmological constant is present. Particularly interesting should be the analysis of the included set of supersymmetric black holes and topological solitons. Moreover, an extension of the present work to the intriguing set of solutions of gauged supergravities in four, five and seven dimensions, discussed by Cvetic, Gibbons, Lü and Pope [@Cvetic:2005zi], appears interesting.
However, in analytical studies of the geodesics of such extended sets of solutions it may be necessary to employ more advanced mathematical tools based on hyperelliptic functions [@Hackmann:2008zza; @Hackmann:2008tu; @Enolski:2010if]. On the other hand, the BMPV spacetime can also be viewed as a special case of a more general family of solutions, where the Chern-Simons coupling constant, multiplying the $F^2 A$ term in the action, is a free parameter [@Gauntlett:1998fz]. When this free parameter assumes a particular value, the BMPV solutions of minimal supergravity are obtained. For other values of this coupling constant new surprising phenomena occur. For instance, the resulting black hole solutions need no longer be uniquely specified in terms of their global charges [@Kunz:2005ei], or an infinite sequence of extremal radially excited rotating black holes arises [@Blazquez-Salcedo:2013muz]. The study of the geodesics in these spacetimes may also reveal some surprises. [**Acknowledgments.**]{} We gratefully acknowledge discussions with Saskia Grunau, Burkhard Kleihaus and Eugen Radu, and support by the Deutsche Forschungsgemeinschaft (DFG), in particular, within the framework of the DFG Research Training group 1620 [*Models of gravity*]{}. [99]{} J. C. Breckenridge, R. C. Myers, A. W. Peet and C. Vafa, Phys. Lett. B [**391**]{}, 93 (1997) \[hep-th/9602065\]. G. W. Gibbons and C. A. R. Herdeiro, Class. Quant. Grav.  [**16**]{}, 3619 (1999) \[hep-th/9906098\]. J. P. Gauntlett, R. C. Myers and P. K. Townsend, Class. Quant. Grav.  [**16**]{}, 1 (1999) \[hep-th/9810204\]. C. A. R. Herdeiro, Nucl. Phys. B [**582**]{}, 363 (2000) \[hep-th/0003063\]. C. A. R. Herdeiro, Nucl. Phys. B [**665**]{}, 189 (2003) \[hep-th/0212002\]. M. Cvetic, G. W. Gibbons, H. Lu and C. N. Pope, hep-th/0504080. L. Dyson, JHEP [**0701**]{}, 008 (2007) \[hep-th/0608137\]. Z. -W. Chong, M. Cvetic, H. Lu and C. N. Pope, Phys. Rev. Lett.  [**95**]{}, 161301 (2005) \[hep-th/0506029\]. C. W. Misner, K.
S. Thorne, J. A. Wheeler, *Gravitation*, W.H. Freeman and Company, (San Francisco) (1973). Y. Mino, Phys. Rev. D [**67**]{}, 084027 (2003) \[gr-qc/0302075\]. A. I. Markushevich, [*Theory of functions of a complex variable*]{}, Vol. III, Prentice-Hall, Inc., Englewood Cliffs, N.J. (1967). S. Grunau and V. Kagramanova, Phys. Rev. D [**83**]{}, 044009 (2011) \[arXiv:1011.5399 \[gr-qc\]\]. E. Hackmann and H. Xu, Phys. Rev. D [**87**]{}, 124030 (2013) \[arXiv:1304.2142 \[gr-qc\]\]. V. Kagramanova and S. Reimers, Phys. Rev. D [**86**]{}, 084029 (2012) \[arXiv:1208.3686 \[gr-qc\]\]. B. O’Neill, *The Geometry of Kerr Black Holes* (A.K. Peters, Wellesley, MA, 1995). V. Kagramanova, J. Kunz, E. Hackmann and C. Lämmerzahl, Phys. Rev. D [**81**]{}, 124044 (2010) \[arXiv:1002.4342 \[gr-qc\]\]. E. Hackmann, C. Lämmerzahl, V. Kagramanova and J. Kunz, Phys. Rev. D [**81**]{}, 044020 (2010) \[arXiv:1009.6117 \[gr-qc\]\]. S. Chandrasekhar, *The Mathematical Theory of Black Holes* (Oxford University Press, Oxford, 1983). E. Hackmann and C. Lämmerzahl, Phys. Rev. Lett.  [**100**]{}, 171101 (2008). E. Hackmann, V. Kagramanova, J. Kunz and C. Lämmerzahl, Phys. Rev. D [**78**]{}, 124018 (2008) \[Erratum-ibid.  [**79**]{}, 029901 (2009)\] \[arXiv:0812.2428 \[gr-qc\]\]. V. Z. Enolski, E. Hackmann, V. Kagramanova, J. Kunz and C. Lämmerzahl, J. Geom. Phys.  [**61**]{}, 899 (2011) \[arXiv:1011.6459 \[gr-qc\]\]. J. Kunz and F. Navarro-Lerida, Phys. Rev. Lett.  [**96**]{}, 081101 (2006) \[hep-th/0510250\]. J. L. Blazquez-Salcedo, J. Kunz, F. Navarro-Lerida and E. Radu, arXiv:1308.0548 \[gr-qc\].
--- abstract: 'In this paper, we develop econometric tools to analyze the integrated volatility of the efficient price and the dynamic properties of microstructure noise in high-frequency data under general dependent noise. We first develop consistent estimators of the variance and autocovariances of noise using a variant of realized volatility. Next, we employ these estimators to adapt the pre-averaging method and derive a consistent estimator of the integrated volatility, which converges stably to a mixed Gaussian distribution at the optimal rate $n^{1/4}$. To refine the finite sample performance, we propose a two-step approach that corrects the finite sample bias, which turns out to be crucial in applications. Our extensive simulation studies demonstrate the excellent performance of our two-step estimators. In an empirical study, we characterize the dependence structures of microstructure noise in several popular sampling schemes and provide intuitive economic interpretations; we also illustrate the importance of accounting for both the serial dependence in noise and the finite sample bias when estimating integrated volatility.' author: - | Z. Merrick Li[^1]\ [Erasmus University Rotterdam]{}\ [University of Amsterdam]{}\ [and Tinbergen Institute]{} - | Roger J. A. Laeven[^2]\ [Amsterdam School of Economics]{}\ [University of Amsterdam, EURANDOM]{}\ [and CentER]{} - | Michel H. Vellekoop[^3]\ [Amsterdam School of Economics]{}\ [University of Amsterdam]{} bibliography: - 'reference.bib' title: 'Dependent Microstructure Noise and Integrated Volatility Estimation from High-Frequency Data ' --- *JEL classification*: C13, C14, C55, C58. Introduction ============ Over the past decade and a half, high-frequency financial data have become increasingly available. In tandem, the development of econometric tools to study the dynamic properties of high-frequency data has become an important subject area in economics and statistics.
A major challenge is provided by the accumulation of market microstructure noise at higher frequencies, which can be attributed to various market microstructure effects including, for example, information asymmetries (see [@glosten1985bid]), inventory controls (see [@ho1981optimal]), discreteness of the data (see [@harris1990estimation]), and transaction costs (see [@garman1976market]). It has been well-established (see, e.g., [@black1986noise]) that the observed transaction price[^4] $Y$ can be decomposed into the unobservable “efficient price” (or “frictionless equilibrium price”) $X$ plus a noise component $U$ that captures market microstructure effects. That is, it is natural to assume that $$Y_t = X_t+U_t, \label{eq:Y=X+U}$$ where further assumptions on $X$ and $U$ need to be stipulated. While estimating the integrated volatility of the efficient price is the emblematic problem in high-frequency financial econometrics (see, for example, ), the study of microstructure noise, e.g., its magnitude, dynamic properties, etc., is the main focus of the market microstructure literature (see, for example, [@hasbrouck2007empirical]). A common challenge, however, is that the two components of the observed price $Y$ in are latent. Therefore, distributional features of one component, say, of the microstructure noise, will affect the estimation of characteristics of the other, such as the integrated volatility of the efficient price.[^5] While the semimartingale framework provides the natural class to model the efficient price (see, e.g., [@duffie2010dynamic]), the statistical assumptions on noise induced by microeconomic financial models range from simple to very complex, depending on which phenomena the model aims to capture. For example, the classic Roll model (see [@Roll1984simple]) postulates an i.i.d. 
bid-ask bounce resulting from uncorrelated order flows; [@hasbrouck1987order], [@choi1988estimation], and [@stoll1989inferring] introduce autocorrelated order flows, yielding autoregressive microstructure noise; and [@gross2013predicting] model microstructure noise with long-memory properties. Therefore, being able to account for the potentially complex statistical behavior of microstructure noise that contaminates our observations of the semimartingale efficient price dynamics, would be an appealing property of any method that aims at disentangling the efficient price and microstructure noise. To estimate the integrated volatility of the efficient price, several de-noise methods have been developed, mostly assuming i.i.d. microstructure noise. Examples include the two-scale and multi-scale realized volatility estimators developed in [@zhang2005TSRV] and [@zhang2006MSRV], the realized kernel methods developed in [@barndorff2008RealizedKernels], the likelihood approach initiated by [@ait2005often] and [@xiu2010], and the pre-averaging method developed in a series of papers by [@podolskij2009pre-averaging-1] and [@jacod2009pre-averaging-2; @jacod2010pre-averaging-3], see also [@podolskij2009bipower]. The variance of noise is usually obtained as a by-product. In this paper, we allow the microstructure noise to be serially dependent in a general setting, nesting many special cases (including independence). We do not impose any parametric restrictions on the distribution of the noise, except for some rather general mixing conditions that guarantee the existence of limit distributions, hence our approach is essentially nonparametric. In this setting, we first derive the stochastic limit of the realized volatility of observed prices after $j$ lags. Using this limit result, we develop consistent estimators of the variance and covariances of noise. The aim of estimating the second moments of noise is twofold. 
On the one hand, we would like to explore the dynamic properties of microstructure noise. In particular, we would like to compare these properties to those induced by various parametric models of microstructure noise based on leading microstructure theory, and obtain corresponding economic interpretations to achieve a better understanding of the microstructure effects in high-frequency data. On the other hand, the second moments of noise become nuisance parameters in estimating the integrated volatility, which is a prime objective in the analysis of high-frequency financial data. To estimate the integrated volatility, we next adapt the pre-averaging estimator (PAV) to allow for serially dependent noise in our general setting. We find that the stochastic limit of the adapted PAV estimator is a function of the volatility and the variance and covariances of noise, and the latter, constituting an *asymptotic bias*, can be consistently estimated by our realized volatility estimator. Hence, we can correct the asymptotic bias, resulting in centered estimators of the integrated volatility. A key interest in this paper is to unravel the interplay between asymptotic and finite sample biases when estimating integrated volatility. In a finite sample analysis, we find that the realized volatility estimator has a finite sample bias that is proportional to the integrated volatility. The bias term becomes significant when the number of lags (in computing the variant of realized volatility) is large, or the noise-to-signal ratio[^6] is small. Therefore, we are in a situation in which the integrated volatility generates a *finite sample bias* to the estimators of the second moments of noise, while the latter become the *asymptotic bias* in estimating the former. This “feedback effect” in the bias corrections motivates us to develop *two-step estimators*. 
First, we simply ignore the dependence in noise and proceed with the pre-averaging method to obtain an estimator of the integrated volatility. Next, we use this estimator to obtain *finite sample bias* corrected estimators of the second moments of noise, which can then be used to correct the asymptotic bias yielding the second-step estimator of the integrated volatility. Repeating this process leads to three-step estimators (and beyond) which may further improve the two-step estimators on average, but at the cost of higher standard deviations. Figure \[fig:description\_two\_step\_estimator\] gives a simple graphical illustration of the implementation of the two-step estimators. We conduct extensive Monte Carlo experiments to examine the performance of our estimators, which proves to be excellent. We demonstrate in particular that they can accommodate both serially dependent and independent noise and perform well in finite samples with realistic data frequencies and sample sizes. The experiments reveal the importance of a unified treatment of asymptotic and finite sample biases when estimating integrated volatility. Empirically, we apply our new estimators to a sample of Citigroup transaction data. We find that the associated microstructure noise tends to be positively autocorrelated. This is in line with earlier findings in the microstructure literature, see [@hasbrouck1987order], [@choi1988estimation], and [@huang1997components]. Attributing this positive autocorrelation to order flow continuation, the estimated probability that a buy (or sell) order follows another buy (or sell) order is 0.87. Furthermore, microstructure noise turns out to be negatively autocorrelated under tick time sampling. This is consistent with inventory models, in which dealers alternate quotes to maintain their inventory position. We obtain an estimate of the probability of reversed orders equal to 0.84. 
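The first step of the two-step procedure described above can be sketched in a toy setting. The snippet below uses the standard pre-averaging weight $g(x)=\min(x,1-x)$ (for which $\psi_1=1$, $\psi_2=1/12$) and i.i.d. noise, the simplest case nested by our assumptions; the estimators, constants, boundary corrections, and the dependent-noise version of the bias term used in the paper differ in detail, and with dependent noise the variance of $U$ in the correction is replaced by a combination of the noise variance and autocovariances:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 100_000, 1.0
kn = int(theta*np.sqrt(n))                 # pre-averaging window k_n ~ theta*sqrt(n)

# Simulated data: Brownian efficient price (sigma = 1, so IV = 1) plus i.i.d. noise.
X = np.cumsum(rng.normal(0.0, np.sqrt(1.0/n), n + 1))
Y = X + rng.normal(0.0, 0.05, n + 1)
dY = np.diff(Y)

# Weight g(x) = min(x, 1-x), for which psi_1 = 1 and psi_2 = 1/12.
g = np.minimum(np.arange(1, kn)/kn, 1.0 - np.arange(1, kn)/kn)
psi2 = 1.0/12.0

# Pre-averaged returns Ybar_i = sum_j g(j/kn) * dY_{i+j}.
Ybar = np.convolve(dY, g[::-1], mode='valid')

# Step 1: raw pre-averaging statistic; its limit is
# IV + (psi_1/(theta^2 psi_2)) * Var(U), i.e. IV plus an asymptotic noise bias.
raw = (Ybar @ Ybar)/(kn*psi2)

# Step 2: estimate Var(U) (lag-1 realized volatility, valid for i.i.d. noise)
# and remove the asymptotic noise bias.
varU_hat = np.sum(dY**2)/(2*n)
IV_hat = raw - (n/(kn**2*psi2))*varU_hat

assert 0.6 < IV_hat < 1.4                  # true integrated volatility is 1
```

Iterating between the noise-moment estimates and the volatility estimate, as in the two-step procedure, refines both corrections.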
Turning to the estimators of integrated volatility, we find that with positively autocorrelated noise the commonly adopted methods that hinge on the i.i.d. assumption of noise tend to overestimate the integrated volatility. Under two alternative (sub)sampling schemes — regular time sampling and tick time sampling — our estimators also appear to work well. This testifies to the critical relevance of the bias corrections embedded in our two-step estimators. In earlier literature, [@Ait-Sahalia2011DependentNoise] show that the two-scale and multi-scale realized volatility estimators are robust to exponentially decaying dependent noise. In this paper, we provide explicit estimators of the second moments of noise and analyze their asymptotic behavior, develop bias-corrected estimators of the integrated volatility based on these moments of noise, and empirically assess the noise characteristics under different sampling schemes. Furthermore, [@hautsch2013preaveraging] study $q$-dependent microstructure noise, develop consistent estimators of the first $q$ autocovariances of microstructure noise and define the associated pre-averaging estimators. An appealing feature of their approach is that their autocovariance-type estimators of $q$-dependent noise consider non-overlapping increments which avoids finite sample bias. We allow for more general assumptions on the dependence structure of microstructure noise. Owing to its generality our setting incorporates many microstructure models as special cases. We therefore do not need to advocate any particular model of microstructure noise and this enables us to obtain economic interpretations of our empirical results under multiple sampling schemes. In two contemporaneous and independent works,  [@jacod2015IVDependentNoise; @jacod2013StatisticalPropertyMMN] also study dependent noise in high-frequency data. 
In [@jacod2013StatisticalPropertyMMN], they develop a novel local averaging method to “recover” the noise and can, in principle, estimate any finite (joint) moments of noise with diurnal features. Moreover, they also allow observation times to be random. Empirically, they find some interesting statistical properties of noise. In particular, they find that noise is strongly serially dependent with polynomially decaying autocorrelations. Employing this local averaging method, [@jacod2015IVDependentNoise] develop an estimator of integrated volatility that allows for dependent noise. To distinguish our work from these two papers, we first note that our assumptions on noise are slightly different: we assume that the noise process constitutes a strongly mixing sequence while they require a $\rho$-mixing sequence (see [@bradley2005StrongMixing] for a discussion of mixing sequences). Furthermore, the local averaging method differs from the simpler realized volatility method developed here, and allows one to analyze more general noise characteristics. The key difference is our explicit treatment of the feedback effect between the asymptotic and finite sample biases: we show that in a finite sample, the integrated volatility and second moments of microstructure noise should be estimated in a unified way, since they induce biases in each other. We design novel and easily implementable two-step estimators to correct for the intricate biases. Our two-step estimators of the integrated volatility, which are designed to allow for dependent noise, also perform well in the special case of independent noise, and in a sample of reasonable size as encountered in practice. This robustness to (mis)specification of noise and to sampling frequencies is an important advantage of our two-step estimators.
Our unified treatment of the asymptotic and finite sample biases may help explain why the empirical studies in [@jacod2013StatisticalPropertyMMN] render the strong dependence in noise they find (and question themselves); see our empirical analysis in Section \[sec:EmpiricalStudy\]. In another independent paper, [@da2017moving] introduce a novel quasi maximum likelihood approach to estimate both the volatility and the autocovariances of moving-average microstructure noise. They also extend their estimators to general settings that allow for irregular observation times, intraday patterns of noise and jumps in asset prices. Their approach treats “large” and “small” microstructure noise in a uniform way which leads to a potential improvement in the convergence rate. Our approach is essentially of a nonparametric nature and provides unified estimators of a class of volatility functionals (see Theorem \[thm:consistency\]) including the asymptotic variance, which account for the feedback between finite sample and asymptotic biases. Our empirical study also has a different focus. Our investigation is not as extensive as in [@da2017moving],[^7] but we explicitly consider different sampling schemes,[^8] analyzing the autocovariance patterns of noise in connection to microstructure noise models and their impact on integrated volatility estimation. The remainder of this paper is organized as follows. In Section \[sec:framework\], we introduce the basic setting and notation. In Section \[sec:VarianceCovarianceEst\], we analyze realized volatility with dependent noise and develop consistent estimators of the second moments of noise. The pre-averaging method with dependent noise is studied in Section \[sec:pre-averaging\]. Section \[sec:two-step estimators\] introduces our two-step estimators. Section \[sec:simulation\] reports extensive simulation studies. Our empirical study is presented in Section \[sec:EmpiricalStudy\]. Section \[sec:conclusion\] concludes the paper. 
All proofs and some additional Monte Carlo simulation and empirical results are collected in an online appendix, see [@LLVsupp2018]. Framework and Assumptions {#sec:framework} ========================= We assume that the efficient log-price process $X$ is represented by a continuous Itô semimartingale defined on a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0},\mathbb{P})$: $$X_t = X_0 + \int_{0}^{t}a_s\diff s + \int_{0}^{t}\sigma_s\diff W_s, \label{eq:Eff_Price_Ito_Diffu}$$ where $W$ is a standard Brownian motion, the drift process $a_s$ is optional and locally bounded, and the volatility process $\sigma_s$ is adapted with càdlàg paths. The probability space also supports the noise process $U$. We assume that all observations are collected in the fixed time interval $[0,T]$, where without loss of generality we let $T=1$. At stage $n$, the observation times are given by $0=t^n_0<t^n_1<\dots<t^n_n=1$. \[assumption:dependent\_noise\] The noise process $(U_i)_{i\in\mathbb{N}}$ satisfies the following assumptions: 1. $U$ is symmetrically distributed around 0; 2. The noise process $U$ is independent of the efficient log-price process $X$; 3. \[assumption:noiseAssump3\]$U$ is stationary and strongly mixing and the mixing coefficients[^9] $\{\alpha_h\}_{h=1}^\infty$ decay at a polynomial rate, i.e., there exist constants $C>0,v>0$ such that $$\label{eq:alpha_mixing_coeff_v} \alpha_h \leq\frac{C}{h^v}.$$Moreover, we assume $U$ has bounded moments of all orders. The mixing conditions in Assumption \[assumption:dependent\_noise\] item (3.) ensure that the noise process evaluated at different time instances, say, $i,i+h$, is increasingly limited in dependence as the lag $h$ increases. In particular, there exists some $C'>0$ such that $$\abs{\gamma(h)}\leq \frac{C'}{h^{v/2}}, \label{eq:rho_strong_mixing}$$ where $\gamma(h) = \cov{U_{i}, U_{i+h}}$ is the autocovariance function of $U$.
Assuming $U$ to have bounded moments of all orders is not strictly necessary. Depending on the targeted moments, this assumption can be relaxed via the choice of $v$ in , see Lemma VIII 3.102 in [@jacod1987limit]. Throughout the paper we maintain the assumption of bounded moments of all orders and only specify the restrictions on $v$. At stage $n$, we will denote $U_i$ by $U^n_i$, $\forall i\leq n$. The $i$-th observed price is thus given by $$\label{eq:Y^n=X^n+U^n} Y^n_i = X^n_i + U^n_i,$$where $X^n_i = X_{t^n_i}$. In the remainder of the main text, we assume $t^n_i = i/n, i=0,\dots, n$; see Appendix \[sec:IrregularSampling\] for an analysis of irregular sampling schemes. \[rmk:Sampling\_Schemes\] We allow the noise process $U$ to generate dependencies in *sampling time*, including *transaction time*,[^10] *calendar time*,[^11] and *tick time*.[^12] Hence, our noise process essentially constitutes a *discrete-time model* — it does not depend explicitly on the time between successive observations.  [@ait2005often], [@hansen2006realized], and  [@hansen2008moving] study various *continuous-time* models of dependent microstructure noise. In these continuous-time models, the noise component of a log-return over a time interval $\Delta$ is of order $O_p(\sqrt{\Delta})$, the same order as the logarithmic return of the efficient price. \[rmk:q-dependence\] Our assumptions on the dependence of noise are quite general, nesting many models as special cases including, for example, i.i.d. noise, $q$-dependent noise (under which $\gamma(h) = 0,\:\forall h>q$), ARMA($p,q$) noise (see [@mixingARMA]) and some long-memory processes (see [@tsay2005analysis]). We note that AR(1) and AR(2) noise are studied in [@barndorff2008RealizedKernels] and [@hendershott2013implementation] respectively, $q$-dependent noise is considered by [@hansen2008moving] and [@hautsch2013preaveraging], while [@gross2013predicting] study long-memory bid-ask spreads. 
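As a concrete instance of Remark \[rmk:q-dependence\], the autocovariances of AR(1) noise $U_i = \phi U_{i-1} + \epsilon_i$ decay geometrically, $\gamma(h)=\phi^h\gamma(0)$ with $\gamma(0)=\sigma_\epsilon^2/(1-\phi^2)$, and hence satisfy the polynomial bound  for every $v$. A short simulation check (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma_eps, n = 0.5, 0.1, 200_000

# Simulate AR(1) noise U_i = phi*U_{i-1} + eps_i.
eps = rng.normal(0.0, sigma_eps, n)
U = np.empty(n)
U[0] = eps[0]
for i in range(1, n):
    U[i] = phi*U[i-1] + eps[i]

gamma0 = sigma_eps**2/(1 - phi**2)       # stationary Var(U)
for h in (1, 2, 3):
    gamma_h = np.mean(U[:-h]*U[h:])      # sample autocovariance at lag h
    # Geometric decay gamma(h) = phi^h * gamma(0) trivially satisfies
    # the polynomial bound |gamma(h)| <= C'/h^(v/2) for any v.
    assert abs(gamma_h - phi**h*gamma0) < 5e-4
```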
Estimation of the Variance and Covariances of Noise {#sec:VarianceCovarianceEst} =================================================== In this section, we develop consistent estimators of the second moments of noise under Assumption \[assumption:dependent\_noise\]. These estimators will later serve as important inputs to adapt the pre-averaging method. We also analyze our estimators’ finite sample properties. Realized volatility with dependent noise ---------------------------------------- We start with the following preliminary result: \[prop:RV\_Estimate\_var+cov(1)\] Assume that the efficient log-price follows , the observations follow , and the noise process satisfies Assumption \[assumption:dependent\_noise\]. Furthermore, let $j$ be a fixed integer and assume the sequence $j_n$ and the exponent $v$ satisfy the following conditions: $$\label{eq:Asy_condi_RV_consistency} v>2, \quad j_n\rightarrow\infty,\quad j_n/n\rightarrow 0.$$ Then we have the following convergences in probability as $n\rightarrow\infty$: $$\label{eq:Pconverge_jth_RV} \widehat{\RV{Y,Y}}_{n}(j) :=\frac{\sum_{i=0}^{n-j} (Y^n_{i+j}-Y^n_{i})^2}{2(n-j+1)} \Pconverge \var{U} - \gamma(j),$$ $$\label{eq:consistent_estimate_var_U_dependent} \widehat{\var{U}}_n:=\frac{\sum_{i=0}^{n-j_n} (Y^n_{i+j_n}-Y^n_{i})^2}{2(n-j_n+1)}\Pconverge \var{U},$$ $$\widehat{\gamma(j)}_n := \widehat{\var{U}}_n - \widehat{\RV{Y,Y}}_{n}(j)\Pconverge \gamma(j). \label{eq:gamma(j)_hat}$$ See Appendix \[appendix:prop\_RV\_Estimate\_var+cov(1)\]. The special case of  with $j=1$ appears in [@Ait-Sahalia2011DependentNoise] under the assumption of exponentially decaying autocovariances. We also note that in the most recent version of [@jacod2013StatisticalPropertyMMN] estimators similar to $\widehat{\RV{Y,Y}}_{n}(j)$ are mentioned, but without a formal analysis of their limiting behavior. To the best of our knowledge, our paper is the first to estimate the variance and covariances of noise using realized volatility under a general dependent noise setting. 
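As an illustration, the estimators in Proposition \[prop:RV\_Estimate\_var+cov(1)\] can be sketched in a few lines. This is a simplified illustrative sketch (the function names are ours, not the implementation used for the results reported below); the quick check feeds in pure i.i.d. noise, for which $\var{U}-\gamma(j)=\var{U}$ at every lag $j\geq 1$.

```python
import numpy as np

def rv_hat(Y, j):
    """RV-hat_n(j): half the average squared j-lag difference of Y."""
    d = Y[j:] - Y[:-j]
    return np.sum(d ** 2) / (2.0 * len(d))

def noise_moments(Y, j_n, lags):
    """Estimate Var(U) with a large lag j_n, then gamma(j)-hat = Var(U)-hat - RV-hat(j)."""
    var_U = rv_hat(Y, j_n)
    gamma = {j: var_U - rv_hat(Y, j) for j in lags}
    return var_U, gamma

# Quick check with pure i.i.d. noise (X = 0): every RV-hat(j) estimates Var(U) = 1
rng = np.random.default_rng(0)
Y = rng.standard_normal(200_001)
var_U_hat, gamma_hat = noise_moments(Y, j_n=50, lags=[1, 2, 3])
```

With dependent noise, the same functions recover $\gamma(j)$ through the difference of the two realized volatilities, exactly as in the proposition.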
Finite sample bias correction {#subsec:Finite_Sample_Bias_Correction} ----------------------------- The theoretical validity of our realized volatility estimators in – hinges on the increasing availability of observations in a fixed time interval, the so-called *infill asymptotics*. In general, an estimator derived from asymptotic results can, however, behave very differently in finite samples. Our realized volatility estimators of the second moments of noise are an example for which the asymptotic theory provides a poor representation of the estimators’ finite sample behavior.[^13] Intuitively, the finite sample bias stems from the diffusion component, when computing the realized volatility $\widehat{\RV{Y,Y}}_n(j)$ over large lags $j$ in a finite sample, and we will explain later (e.g., in Remark \[rmk:why\_correct\_bias\]) why it is critically relevant to account for it in real applications. In the sequel, we assume the drift $a_t$ in  to be zero. According to, for example, [@bandi2008microstructure] and [@lee2012jumps] this is not restrictive in high-frequency analysis. This will be confirmed in our Monte Carlo simulation studies in Section \[sec:simulation\] and Appendix \[sec:SVsimu\]. \[prop:Finite\_Sample\_Bias\_Correction\] Assume that the efficient log-price follows  with $a_{s} = 0\:\forall s$, and assume there is some $\delta>0$ so that $\sigma_t$ is bounded for all $t\in[0,\delta]\cup [1-\delta,1]$. Furthermore, assume the observations follow , and the noise process satisfies Assumption \[assumption:dependent\_noise\]. Then, conditional on the volatility path, $$\begin{aligned} \expectsigma{\widehat{\RV{Y,Y}}_n(j)} = \frac{j\int_{0}^{1}\sigma^2_t\diff t}{2(n-j+1)} +\var{U} - \gamma(j) + O_p\myp{j^2/n^2}. \label{eq:Finite_Sample_Bias_Correction} \end{aligned}$$ Here, $\expectsigma{\cdot}$ is the expectation conditional on the entire path of volatility. See Appendix \[sec:prop:Finite\_Sample\_Bias\_Correction\]. 
The regularity conditions with respect to $\sigma_{t}$ in Proposition \[prop:Finite\_Sample\_Bias\_Correction\] trivially hold if the volatility is assumed to be continuous. (Volatility is usually assumed to be continuous when making finite sample bias corrections.) Let $j=1$ and let us restrict attention to sampling in calendar time. In that special case, the result in Proposition \[prop:Finite\_Sample\_Bias\_Correction\] bears similarities to Theorem 1 in [@hansen2006realized]. In contrast to [@hansen2006realized], we assume that the efficient log-price $X$ is independent of the noise $U$. Therefore, any correlations between the two drop out. Proposition \[prop:Finite\_Sample\_Bias\_Correction\] reveals that $\widehat{\RV{Y,Y}}_n(j) - \frac{j\int_{0}^{1}\sigma^2_t\diff t}{2(n-j+1)}$ will be a better estimator of $\var{U} - \gamma(j)$ in finite samples, and it motivates the following finite sample bias corrected estimators: $$\begin{aligned} \label{eq:RV_SSBC} \widehat{\RV{Y,Y}}^{\rm (adj)}_{n}(j) & := \widehat{\RV{Y,Y}}_n(j) - \frac{\hat{\sigma}^2 j}{2(n-j+1)};\\ \label{eq:var_noise_dependent_SSBC} \widehat{\var{U}}^{\rm (adj)}_n & := \widehat{\var{U}}_n - \frac{\hat{\sigma}^2 j_n}{2(n-j_n+1)};\\ \label{eq:covs_dependent_SSBC} \widehat{\gamma(j)}_n^{\rm (adj)} & := \widehat{\var{U}}_n^{\rm (adj)} - \widehat{\RV{Y,Y}}^{\rm (adj)}_{n}(j); \end{aligned}$$ where $\hat{\sigma}^2$ is an estimator of $\int_{0}^{1}\sigma^2_s\diff s$. We note that the bias corrected estimators are still consistent, as the fraction $\frac{j}{n-j+1}$ is negligible when $j$ is much smaller than $n$. \[rmk:why\_correct\_bias\] We now explain why the finite sample bias correction is crucial in applications. 
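The bias corrected estimators can be sketched as follows (again a simplified sketch of ours; the random-walk path at the end is only used for the consistency checks and is not a calibrated model):

```python
import numpy as np

def rv_hat(Y, j):
    d = Y[j:] - Y[:-j]
    return np.sum(d ** 2) / (2.0 * len(d))

def rv_hat_adj(Y, j, sigma2_hat):
    """Subtract the finite sample diffusion term j * sigma2_hat / (2(n - j + 1))."""
    n = len(Y) - 1
    return rv_hat(Y, j) - sigma2_hat * j / (2.0 * (n - j + 1))

def noise_moments_adj(Y, j_n, lags, sigma2_hat):
    """Bias corrected Var(U)-hat and gamma(j)-hat, given an IV estimate sigma2_hat."""
    var_U = rv_hat_adj(Y, j_n, sigma2_hat)
    gamma = {j: var_U - rv_hat_adj(Y, j, sigma2_hat) for j in lags}
    return var_U, gamma

rng = np.random.default_rng(1)
Y = np.cumsum(rng.standard_normal(1001))  # a pure random-walk path for the checks
```

Setting `sigma2_hat = 0` recovers the uncorrected estimators, and the size of the correction is exactly the diffusion term of the proposition.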
We first rewrite : $$\begin{split} \expectsigma{\widehat{\RV{Y,Y}}_n(j)}& = \frac{j\int_{0}^{1}\sigma^2_t\diff t}{2(n-j+1)} +\var{U} - \gamma(j) + O_p\myp{j^2/n^2} \\ &= \myp{\var{U}-\gamma(j)} \myp{1+\frac{\frac{j}{2(n-j+1)}}{\frac{\var{U} - \gamma(j)}{\int_{0}^{1}\sigma^2_t\diff t}} }+ O_p\myp{j^2/n^2}. \label{eq:why_correct_bias} \end{split}$$ Observe that the finite sample bias is determined by the ratio of the two terms $\frac{j}{2(n-j+1)}$ and $\frac{\var{U} - \gamma(j)}{\int_{0}^{1}\sigma^2_t\diff t}$. The first term, $\frac{j}{2(n-j+1)}$, depends on the data frequency $(n)$ and “target parameters” $(j)$; the second term, $\frac{\var{U} - \gamma(j)}{\int_{0}^{1}\sigma^2_t\diff t}$, is the (latent) noise-to-signal ratio. If the second term is “relatively larger (smaller)” than the first one, then the finite sample bias will be small (large). In other words, the finite sample bias is not only determined by the data frequency and target parameters, but also by other properties of the underlying efficient price and noise processes. In high-frequency financial data, the noise-to-signal ratio $\frac{\var{U}}{\int_{0}^{1}\sigma^2_t\diff t}$ is typically small, but it can vary from $O(10^{-2})$ (see [@bandi2006separating]) to $O(10^{-6})$ (see [@christensen2014jump-high-fre]) in empirical studies. The ratio $\frac{ j}{2(n-j+1)}$, while typically small as well, can still be *relatively* large, depending on the specific situation. Consider the following two scenarios: 1) We have ultra high-frequency data with $n=O(10^5)$ (recall that the number of seconds in a business day is 23,400), and we select $j_{n}=20$. Then, the ratio $\frac{j_n}{2(n-j_n+1)} = O(10^{-4})$. 2) We have i.i.d. noise and we would like to estimate the variance of noise by $\widehat{\RV{Y,Y}}_{n}(1)$ using high-frequency data with average duration of 20 seconds (thus $n\approx 10^3$); see, e.g., [@bandi2006separating]. Hence, $\frac{j}{2(n-j+1)} = O(10^{-3})$. 
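The arithmetic behind the two scenarios is easy to reproduce (a back-of-the-envelope check of ours; the two noise-to-signal ratios used at the end are the empirical extremes quoted above):

```python
# Scenario 1: ultra high-frequency data, n of order 1e5, lag j_n = 20
n1, j1 = 10 ** 5, 20
ratio1 = j1 / (2 * (n1 - j1 + 1))     # of order 1e-4

# Scenario 2: ~20-second durations, n of order 1e3, lag j = 1
n2, j2 = 10 ** 3, 1
ratio2 = j2 / (2 * (n2 - j2 + 1))     # of order 1e-3

# Relative finite sample bias = ratio / noise-to-signal ratio
rel_bias_tiny_noise = ratio1 / 1e-6   # the bias dwarfs Var(U) - gamma(j)
rel_bias_large_noise = ratio1 / 1e-2  # the bias is negligible
```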
In both scenarios, the ratio of $\frac{j}{2(n-j+1)}$ and $\frac{\var{U} - \gamma(j)}{\int_{0}^{1}\sigma^2_t\diff t}$ can vary widely, depending on the magnitude of the latent noise-to-signal ratio. It is then clear from the first line of  that the finite sample bias term, which is proportional to the integrated volatility, may well wipe out the variance of noise, depending on the specific situation. Note that increasing the sample size by extending the time horizon to $[0,T]$ with large $T$ will not remove the finite sample bias. Hence, the finite sample bias may be viewed as a *low frequency bias*. The Pre-Averaging Method with Dependent Noise {#sec:pre-averaging} ============================================= In this section, we adapt a popular “de-noise” method — the pre-averaging method — to allow for serially dependent noise in our general setting. The pre-averaging method was originally introduced by [@podolskij2009pre-averaging-1] (see also [@jacod2009pre-averaging-2], [@jacod2010pre-averaging-3], [@podolskij2009bipower], and [@hautsch2013preaveraging]). Setup and notation ------------------ For a generic process $V$, we denote its pre-averaged version by $$\preavg{V} := \frac{1}{k_n+1}\sum_{i=(2m-2)k_n}^{(2m-1)k_n}\myp{V^n_{i+k_n} - V^n_{i}}, \label{eq:V_m^k_n}$$ for $1\leq m\leq M_n$ with $M_n=\lfloor \frac{\sqrt{n}}{2c}\rfloor$, where $k_n\in\mathbb{N}$ satisfies $$k_n = c\sqrt{n} + o(n^{1/4}), \label{eq:k_nM_condition}$$ for some positive constant $c$ and where $\lfloor\cdot\rfloor$ is the floor function. For any real $r\geq 2$, the pre-averaged statistics of the log-price process $Y$ are defined as follows: $$\Pbv(Y,r)_n: = n^{\frac{r-2}{4}}\sum_{m=1}^{M_n}\abs{\preavg{Y}}^r,\quad r\geq 2. \label{eq:MBV_Y}$$ Equation invokes a simple version of the pre-averaging method. In particular, we take a simple weighting function to compute the pre-averages in the $m$-th *non-overlapping* interval. 
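A minimal sketch of the non-overlapping pre-averaging statistic with simple weights (names are ours; we truncate the number of blocks to $\lfloor n/(2k_n)\rfloor$ so that every index stays inside the sample, which agrees with $\lfloor\sqrt{n}/(2c)\rfloor$ up to rounding of $k_n$):

```python
import numpy as np

def pre_averages(Y, c=0.2):
    """Pre-averaged increments over non-overlapping blocks with simple weights."""
    n = len(Y) - 1
    kn = max(1, int(round(c * np.sqrt(n))))   # k_n = c sqrt(n) + o(n^(1/4))
    M = n // (2 * kn)                         # number of non-overlapping blocks
    bars = np.empty(M)
    for m in range(1, M + 1):
        i = np.arange((2 * m - 2) * kn, (2 * m - 1) * kn + 1)
        bars[m - 1] = np.mean(Y[i + kn] - Y[i])
    return bars

def pbv(Y, r, c=0.2):
    """Pbv(Y, r)_n = n^((r-2)/4) * sum over blocks of |pre-average|^r."""
    n = len(Y) - 1
    return n ** ((r - 2) / 4.0) * np.sum(np.abs(pre_averages(Y, c)) ** r)
```

For a linear ramp $Y_i = i$ every pre-average equals $k_n$ exactly, which provides a quick sanity check of the block indexing.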
We refer to [@jacod2009pre-averaging-2; @jacod2010pre-averaging-3] and [@podolskij2009bipower] for the pre-averaging method with general weighting functions and pre-averaged values based on overlapping intervals. We first present the following proposition, which provides the asymptotic distribution of the pre-averaged noise: \[prop:pre-averaged noise\] Assume that the noise satisfies Assumption \[assumption:dependent\_noise\] with $v>2$ and that $\sigma^2_U$ defined below is strictly positive. Then, the following central limit theorem holds for $\preavg{U}$: $$n^{1/4}\preavg{U}\convergeL \normdist{0}{\frac{2\sigma^2_U}{c}}, \label{eq:asy_distri_pre-averaged_noise}$$ where $$\begin{aligned} \sigma^2_U = \var{U} + 2\sum_{j=1}^{\infty}\gamma(j), \label{eq:sigma2_U}\end{aligned}$$ and $c$ is defined in . See Appendix \[appendix:prop:pre-averaged noise\]. For i.i.d. noise, $\sigma^2_U$ reduces to $\var{U}$, and it is known (see [@zhang2005TSRV] and [@bandi2008microstructure]) that the variance of noise can be consistently estimated by the standardized realized volatility of observed returns. However, when the noise is dependent, we face a more complex situation: the variance and all autocovariances of the noise enter $\sigma^2_U$. Nevertheless, we can provide a consistent estimator of $\sigma^2_U$, as follows: \[prop:consistent\_nonpar\_sigma2U\] Let $v>2$ and $j_n^3/n\rightarrow 0$. Define $$\widehat{\sigma^2_U}: = \widehat{\var{U}}_n + 2\sum_{j=1}^{i_n}\widehat{\gamma{(j)}}_n, \label{eq:consistent_nonpar_sigma2U}$$ where $i_n$ satisfies the conditions $i_n\rightarrow\infty, i_n\leq j_n$, and $\widehat{\var{U}}_n$ and $\widehat{\gamma(j)}_n$ are defined in  and . Then, $$\widehat{\sigma^2_U}\Pconverge \sigma^2_U.$$ See Appendix \[appendix:prop:consistent\_nonpar\_sigma2U\]. 
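The estimator $\widehat{\sigma^2_U}$ is one extra line on top of the realized volatility estimators of Proposition \[prop:RV\_Estimate\_var+cov(1)\]. A minimal sketch (ours), checked on i.i.d. noise, for which $\sigma^2_U = \var{U}$:

```python
import numpy as np

def rv_hat(Y, j):
    d = Y[j:] - Y[:-j]
    return np.sum(d ** 2) / (2.0 * len(d))

def sigma2_U_hat(Y, j_n, i_n):
    """sigma^2_U-hat = Var(U)-hat + 2 * sum of gamma(j)-hat for j = 1, ..., i_n."""
    var_U = rv_hat(Y, j_n)
    return var_U + 2.0 * sum(var_U - rv_hat(Y, j) for j in range(1, i_n + 1))

rng = np.random.default_rng(2)
U = rng.standard_normal(1_000_001)   # i.i.d. noise: sigma^2_U = Var(U) = 1
est = sigma2_U_hat(U, j_n=30, i_n=5)
```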
Asymptotic theory: Consistency ------------------------------ The following results establish consistency and a central limit theorem for the pre-averaged log-price process under dependent noise in our general setting. \[thm:consistency\] Assume that the efficient log-price follows , the observations follow , and the noise process satisfies Assumption \[assumption:dependent\_noise\]. Then, for any even integer $r\geq 2$, $$\Pbv(Y,r)_n\Pconverge \Pbv(Y,r) : =\frac{\mu_r}{2c}\int_{0}^{1}\myp{\frac{2c}{3}\sigma^2_s + \frac{2}{c}\sigma^2_U}^{\frac{r}{2}}\diff s, \label{eq:LLN}$$ where $\sigma^2_U$ is defined in  and $\mu_r = \expect{Z^r}$ for a standard normal random variable $Z$. See Appendix \[appendix:thm\_consistency\]. \[corollary:consistency\_IV\] Under the assumptions of Proposition \[prop:consistent\_nonpar\_sigma2U\] and Theorem \[thm:consistency\], we have the following consistency result for the integrated volatility: $${\rm \widehat{IV}}_n:=3\myp{\Pbv(Y,2)_n - \frac{\widehat{\sigma^2_U}}{c^2}}\Pconverge \int_{0}^{1}\sigma^2_s\diff s, \label{eq:consistency_SV_nonpar}$$ where $\widehat{\sigma^2_U}$ is defined in . Asymptotic theory: The central limit theorem -------------------------------------------- \[thm:CLT\] Assume that the efficient log-price follows , the observations follow , and the noise process satisfies Assumption \[assumption:dependent\_noise\]. Furthermore, assume that the process $\sigma$ is a continuous Itô semimartingale, and the assumptions of Proposition \[prop:consistent\_nonpar\_sigma2U\] hold with $v>4$. Then, $$n^{1/4}\myp{{\rm \widehat{IV}}_n-\int_{0}^{1}\sigma^2_s\diff s}\LsConverge \int_{0}^{1}\myp{2\sqrt{c}\sigma^2_s + \frac{6\sigma^2_U}{c^{3/2}}}\diff W_s', \label{eq:LS}$$ where $\LsConverge$ denotes stable convergence in law and where $W'$ is a standard Wiener process independent of $\mathcal{F}$. 
Moreover, letting $\tau_n^2 := 6\Pbv(Y,4)_n$, we have that $$\begin{aligned} \frac{n^{1/4}\myp{{\rm \widehat{IV}}_n-\int_{0}^{1}\sigma^2_s\diff s}}{\tau_n} \label{eq:stdGau}\end{aligned}$$ converges stably in law to a standard normal random variable, which is independent of $\mathcal{F}$. See Appendix \[appendix:thm\_CLT\]. \[rmk:how\_to\_choose\_c\] The limit result in provides a simple rule to select $c$ conditional on the volatility path: $c$ can be chosen to minimize the asymptotic variance. The optimal $c$ thus obtained is given by $$c^* = 3\sqrt{\frac{\sigma^2_U}{\int_{0}^{1}\sigma^2_s\diff s}}. \label{eq:how_to_choose_c}$$ This result is intuitive: if the noise-to-signal ratio is large, we should pick a large $c$, hence include more observations in a local pre-averaging window to reduce the noise effect. With typical noise-to-signal ratios that range from $10^{-2}$ to $10^{-4}$ as encountered in practice, the optimal $c^*\in [0.03,0.3]$. In our simulation and empirical studies, we throughout fix $c=0.2$. Two-Step Estimators and Beyond {#sec:two-step estimators} ============================== In this section, we present our two-step estimators of the integrated volatility and the second moments of noise based on both our asymptotic theory and finite sample analysis. We observe from Corollary \[corollary:consistency\_IV\] that the second moments of noise contribute to an *asymptotic bias* in the estimation of the integrated volatility. But our finite sample analysis indicates that we need an estimator of the integrated volatility to correct the *finite sample bias* when estimating the second moments of noise. Our two-step estimators are specifically designed for the purpose of correcting the “interlocked” bias. In the first step, we ignore the dependence in noise and estimate the variance of noise by realized volatility. 
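Putting the pieces together, the plug-in estimator of Corollary \[corollary:consistency\_IV\] and the rule for $c^*$ can be sketched as follows (our own simplified implementation with non-overlapping pre-averaging; note that without the finite sample bias correction of Section \[subsec:Finite\_Sample\_Bias\_Correction\] the noise-moment inputs can be heavily biased, which is exactly what motivates the two-step estimators of the next section):

```python
import numpy as np

def iv_hat(Y, j_n=20, i_n=10, c=0.2):
    """IV-hat_n = 3 * (Pbv(Y, 2)_n - sigma^2_U-hat / c^2)."""
    n = len(Y) - 1
    rv = lambda j: np.sum((Y[j:] - Y[:-j]) ** 2) / (2.0 * (n - j + 1))
    var_U = rv(j_n)
    s2U = var_U + 2.0 * sum(var_U - rv(j) for j in range(1, i_n + 1))
    kn = max(1, int(round(c * np.sqrt(n))))
    M = n // (2 * kn)
    bars = [np.mean(Y[(2 * m - 2) * kn + kn:(2 * m - 1) * kn + kn + 1]
                    - Y[(2 * m - 2) * kn:(2 * m - 1) * kn + 1]) for m in range(1, M + 1)]
    pbv2 = float(np.sum(np.square(bars)))   # r = 2, so the n^((r-2)/4) factor is 1
    return 3.0 * (pbv2 - s2U / c ** 2)

def c_star(sigma2_U, integrated_vol):
    """Optimal pre-averaging constant c* = 3 * sqrt(sigma^2_U / IV)."""
    return 3.0 * np.sqrt(sigma2_U / integrated_vol)

# Quick check on a noise-free Brownian path with unit integrated variance
rng = np.random.default_rng(4)
bm = np.concatenate(([0.0],
                     np.cumsum(np.sqrt(1.0 / 23_400) * rng.standard_normal(23_400))))
iv_bm = iv_hat(bm, j_n=5, i_n=2)
```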
Hence, our first-step estimators of the second moments of noise are given by $$\widehat{\var{U}}_{\rm step1} := \widehat{\RV{Y,Y}}_n(1);\quad\widehat{\gamma(j)}_{\rm step1} := 0;\quad\widehat{\sigma}^2_{U,{\rm step1}} := \widehat{\RV{Y,Y}}_n(1).$$ Next, we proceed with the pre-averaging method to obtain the first-step estimator of the integrated volatility: $$\widehat{\rm IV}_{\rm step1}:=3\myp{\Pbv(Y,2)_n -\frac{\widehat{\sigma}^2_{U,{\rm step1}}}{c^2}}. \label{eq:1stStepIV}$$ To initiate the second step, we first replace $\hat{\sigma}^2$ by $\widehat{\rm IV}_{\rm step1}$ in  and  and obtain the second-step estimators of the variance and covariances of noise as follows:$$\begin{aligned} \widehat{\RV{Y,Y}}_{\rm step2}(j) & := \widehat{\RV{Y,Y}}_n(j)- \frac{j\widehat{\rm IV}_{\rm step1}}{2(n-j+1)};\\ \widehat{\var{U}}_{\rm step2} & := \widehat{\var{U}}_n - \frac{j_n\widehat{\rm IV}_{\rm step1} }{2(n-j_n+1)};\label{eq:2ndStepVarU}\\ \widehat{\gamma(j)}_{\rm step2} & := \widehat{\var{U}}_{\rm step2} - \widehat{\RV{Y,Y}}_{\rm step2}(j);\label{eq:2ndStepGammaj}\\ \widehat{\sigma}^2_{U,\rm step2} & := \widehat{\var{U}}_{\rm step2} + 2\sum_{j=1}^{i_n} \widehat{\gamma(j)}_{\rm step2}\label{eq:2ndStepSigma2U}.\end{aligned}$$ Then, the second-step estimator of the integrated volatility is given by $$\widehat{\rm IV}_{\rm step2}:=3\myp{\Pbv(Y,2)_n -\frac{\widehat{\sigma}^2_{U,{\rm step2}}}{c^2}}. \label{eq:2ndStepIV}$$ The asymptotic properties of the two-step estimators are inherited from the asymptotic properties derived in the previous section. Of course, one can iterate beyond the two steps to obtain $k$-step estimators, for example, $\widehat{\rm IV}_{\rm step3}$. The next section will present simulation evidence to compare the performances of the proposed estimators. As the results in the following section reveal, the two-step estimators already perform very well. 
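The two-step procedure can be sketched end to end as follows (a simplified sketch of ours with non-overlapping pre-averaging, not the implementation behind the reported tables; the quick check uses a Brownian path with unit integrated variance plus i.i.d. noise, so both steps should land near 1):

```python
import numpy as np

def rv(Y, j):
    return np.sum((Y[j:] - Y[:-j]) ** 2) / (2.0 * (len(Y) - j))

def pbv2(Y, c=0.2):
    n = len(Y) - 1
    kn = max(1, int(round(c * np.sqrt(n))))
    bars = [np.mean(Y[i + kn:i + 2 * kn + 1] - Y[i:i + kn + 1])
            for i in range(0, 2 * kn * (n // (2 * kn)), 2 * kn)]
    return float(np.sum(np.square(bars)))

def two_step_iv(Y, j_n=10, i_n=3, c=0.2):
    n = len(Y) - 1
    # Step 1: pretend the noise is i.i.d.
    iv_1 = 3.0 * (pbv2(Y, c) - rv(Y, 1) / c ** 2)
    # Step 2: bias-correct the noise moments with the step-1 IV, then re-estimate
    rv_adj = lambda j: rv(Y, j) - iv_1 * j / (2.0 * (n - j + 1))
    var_U = rv_adj(j_n)
    s2U = var_U + 2.0 * sum(var_U - rv_adj(j) for j in range(1, i_n + 1))
    iv_2 = 3.0 * (pbv2(Y, c) - s2U / c ** 2)
    return iv_1, iv_2

rng = np.random.default_rng(3)
n = 23_400
X = np.concatenate(([0.0], np.cumsum(np.sqrt(1.0 / n) * rng.standard_normal(n))))
Y = X + rng.normal(0.0, 0.1, n + 1)   # Var(U) = 0.01, integrated variance = 1
iv_1, iv_2 = two_step_iv(Y)
```

Iterating the second step with `iv_2` in place of `iv_1` gives the $k$-step estimators mentioned above.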
Simulation Study {#sec:simulation} ================ Simulation design ----------------- We consider an autoregressive noise process $U$ given by the following dynamics: $$U_{t} = V_{t} + \epsilon_{t}, \label{eq:ARnoise}$$ where $V$ is centered i.i.d. Gaussian and $\epsilon$ is an AR(1) process with first-order coefficient $\rho$, $|\rho|<1$. The processes $V$ and $\epsilon$ are assumed to be statistically independent. As benchmark parameters, we use the GMM estimates of the noise parameters from [@Ait-Sahalia2011DependentNoise] given by $\expect{V^2} = 2.9\times 10^{-8}$, $\expect{\epsilon^2} = 4.3\times 10^{-8}$, and $\rho = -0.7$. We also allow for different dependence structures by varying our choice of $\rho$. Furthermore, the efficient log-price $X$ is assumed to follow an Ornstein-Uhlenbeck process: $$\diff X_t = -\delta(X_t-\mu) \diff t + \sigma\diff W_t,\qquad\:\delta>0,\ \sigma>0. \label{eq:OUprice}$$ We set $\sigma^2=6\times 10^{-5}$, $\delta = 0.5$, and $\mu = 1.6$, and assume the processes $X$ and $U$ to be mutually independent. The signal-to-noise ratio induced by this model for $Y_{t}=X_{t}+U_{t}$ is realistic, according to empirical studies; see, e.g., [@bandi2006separating; @bandi2008microstructure]. For all the experiments in this section, we conduct $1\mathord{,}000$ simulations. Each simulated sample consists of $23\mathord{,}400$ observations in our fixed time interval $[0,1]$ representing one trading day of data sampled at the 1-sec time scale with 6.5 trading hours per day. The ultra high-frequency case with sampling at the 0.05-sec time scale is also considered. We take $c=0.2$. Realized volatility estimators of the second moments of noise ------------------------------------------------------------- To get a first impression of the properties of our estimator $\widehat{\RV{Y,Y}}_n(j)$ defined in , we plot $\widehat{\RV{Y,Y}}_n(j)$ against the number of lags $j$ in Figure \[fig:RVpath\]. 
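The simulation design above can be sketched with an Euler-type discretization (a sketch of ours, using the benchmark parameter values just stated):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 23_400                       # one trading day at the 1-sec time scale
dt = 1.0 / n

# Noise U = V + eps: V i.i.d. Gaussian, eps a stationary AR(1) with coefficient rho
EV2, Eeps2, rho = 2.9e-8, 4.3e-8, -0.7
V = rng.normal(0.0, np.sqrt(EV2), n + 1)
eps = np.empty(n + 1)
eps[0] = rng.normal(0.0, np.sqrt(Eeps2))
innov_sd = np.sqrt(Eeps2 * (1.0 - rho ** 2))
for i in range(1, n + 1):
    eps[i] = rho * eps[i - 1] + rng.normal(0.0, innov_sd)
U = V + eps                      # Var(U) = 7.2e-8, corr(U_i, U_{i+1}) ~ -0.42

# Efficient log-price: OU process dX = -delta (X - mu) dt + sigma dW
sigma2, delta, mu = 6e-5, 0.5, 1.6
X = np.empty(n + 1)
X[0] = mu
for i in range(1, n + 1):
    X[i] = X[i - 1] - delta * (X[i - 1] - mu) * dt \
           + np.sqrt(sigma2 * dt) * rng.standard_normal()

Y = X + U                        # observed log-prices
```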
In addition to $\widehat{\RV{Y,Y}}_n(j)$, we also plot the bias adjusted version $\widehat{\RV{Y,Y}}_n^{(\rm adj)}(j)$ defined in , in which we employ three “approximations” to the integrated volatility that $\widehat{\RV{Y,Y}}_n^{(\rm adj)}(j)$ depends on: $\hat{\sigma}^2_H = 1.2\sigma^2$, $\hat{\sigma}^2_M = \sigma^2$, and $\hat{\sigma}^2_L = 0.8\sigma^2$. Figure \[fig:RVpath\] shows that a prominent feature of our realized volatility estimator $\widehat{\RV{Y,Y}}_n(j)$ is that it deviates from its stochastic limit $\var{U}-\gamma{(j)}$ almost linearly in the number of lags $j$, as predicted by Proposition \[prop:Finite\_Sample\_Bias\_Correction\]. The deviation, induced by the finite sample bias, can be corrected to a large extent even when only rough “estimates” of the integrated volatility are available. In the ideal but infeasible situation that we know the true volatility ($\hat{\sigma}^2_M = \sigma^2$), the bias corrected estimators almost perfectly match the underlying true values. Next, we estimate the second moments of noise by our realized volatility estimators (RV) and, for comparison purposes, by the local averaging estimators (LA) proposed by [@jacod2013StatisticalPropertyMMN]. We demonstrate that the finite sample bias correction is essential for obtaining accurate estimates from both estimators.[^14] In Figure \[fig:RVvsLA\], we plot the means of the autocorrelations of noise estimated by RV and LA based on $1\mathord{,}000$ simulations. The top panel shows the estimators without the finite sample bias correction; the bottom panel shows the bias corrected estimators, in which we use the true $\sigma^2$ to make the correction. We will analyze the case in which $\sigma^2$ is estimated in the next subsection. We observe that both estimators (RV and LA) perform poorly without finite sample bias correction. 
In particular, the noise autocorrelations estimated by the LA estimators decay slowly and hover above 0 up to 25 lags, from which we might conclude that the noise exhibits strong, long-memory dependence, while the underlying noise is, in fact, only weakly dependent. However, both estimators perform well after the finite sample bias correction. In Figure \[fig:RVvsLAWithCIkn6\], we also plot the 95% simulated confidence intervals of the two bias corrected estimators. In terms of mean squared errors, both estimators, after bias correction, yield accurate estimates. We note that the results for our RV estimator are robust to the choice of $j_{n}$. Figures \[fig:RVpath\]-\[fig:RVvsLAWithCIkn6\] reveal that the finite sample bias correction is crucial for obtaining reliable estimates of noise moments. The key ingredient of this correction, however, is (an estimate of) the integrated volatility. Yet, to obtain an estimate of the integrated volatility, we need to estimate the second moments of noise first — whence the feedback loop of bias corrections. This is where our two-step estimators come into play. Two-step estimators of integrated volatility and beyond {#subsec:simu_est_IV} -------------------------------------------------------- In this subsection, we examine the performance of our two-step estimators of integrated volatility. We will compare $\widehat{\rm IV}_{\rm step1}$ to $\widehat{\rm IV}_{\rm step2}$ (cf. and ) to assess the accuracy gained by dropping the possibly misspecified assumption of independent noise, and compare $\widehat{\rm IV}_{n}$ to $\widehat{\rm IV}_{\rm step2}$ (cf. and ) to assess the accuracy gains from the unified treatment of asymptotic and finite sample biases. We also illustrate the increased accuracy achieved by iterating one more step, yielding the estimator $\widehat{\rm IV}_{\rm step3}$. 
In Table \[tab:IV\_est\_delta=1s\], we report the means of our estimators, with standard deviations in parentheses, based on $1\mathord{,}000$ simulations.[^15] Throughout this subsection, $j_{n}$ is fixed at $20$. Upon comparing the first and the third rows, we observe the important advantage of our two-step estimators over the pre-averaging method that assumes independent noise, since our estimators yield substantially improved accuracy. Furthermore, a comparison between the results in the second and third rows leads to a striking conclusion: ignoring the finite sample bias is even more harmful than ignoring the dependence in noise! Thus, one should be cautious in applying estimators without appropriate bias corrections even with data on a 1-sec time scale. The “cost” of applying our two-step estimators is the slightly larger standard deviations they induce. The increased uncertainty is introduced by correcting the “interlocked” bias. However, the reduction in bias strictly dominates the slight increase in standard deviations when noise is dependent. Therefore, the two-step estimator has smaller mean-squared errors than the other two estimators. The last row of Table \[tab:IV\_est\_delta=1s\] shows that another iteration of bias corrections yields even more accurate estimates, although the respective standard deviations increase slightly. In Table \[tab:IV\_est\_delta=0.1s\], we replicate the results of Table \[tab:IV\_est\_delta=1s\] but now with higher data frequency (sampling at the 0.05-sec time scale). We clearly observe the inconsistency caused by the misspecification of the dependence structure in noise embedded in $\widehat{\rm IV}_{\rm step1}$ in the first row. The improved accuracy achieved by the estimator $\widehat{\rm IV}_{n}$ in the second row compared to the estimator $\widehat{\rm IV}_{\rm step1}$ in the first row confirms our asymptotic theory. 
Interestingly, however, we observe that, even with such ultra high-frequency data, the two-step estimator $\widehat{\rm IV}_{\rm step2}$ in the third row still performs better than the other two estimators — with smaller biases in most cases and only slightly larger standard deviations. In this scenario, one more iteration of bias corrections leads to little improvement. Our results remain qualitatively the same when we increase the variance of noise. The relative improvement due to the 2-step estimator is even more pronounced in this case, and a 3-step estimator may yield further improvements. As another robustness check, we also replaced the exponentiated Ornstein-Uhlenbeck process for the efficient price by a geometric Brownian motion. This only impacts the third digits of the estimates and the second digits of the standard deviations reported above. To numerically “verify” the central limit theorem, we plot the quantiles of the normalized estimators $\frac{n^{1/4}\myp{\widehat{\rm IV}_n-\int_{0}^{1}\sigma^2_s\diff s}}{\tau_n}$, see , and the bias corrected version $\frac{n^{1/4}\myp{\widehat{\rm IV}_{\rm step2}-\int_{0}^{1}\sigma^2_s\diff s}}{\tau_n}$ against standard normal quantiles in Figure \[fig:CLT\_QQplots\]. We observe that the quantiles closely follow the limit distribution established in Theorem \[thm:CLT\]. In Appendix \[sec:SVsimu\], we provide additional Monte Carlo simulation evidence based on *stochastic volatility* models, using realistic parameters motivated by our empirical studies, and we find that our two-step estimator retains its advantage over the other two estimators, $\widehat{\rm IV}_{\rm step1}$ and $\widehat{\rm IV}_{n}$. Empirical Study {#sec:EmpiricalStudy} =============== Data description ---------------- We analyze the NYSE TAQ transaction prices of Citigroup (trading symbol: C) over the month of January 2011. We discard all transactions before 9:30 and after 16:00. 
We retain a total of $4\mathord{,}933\mathord{,}059$ transactions over 20 trading days, thus on average 10.5 observations per second. The estimation is first performed on the full sample, and then on subsamples obtained by different sampling schemes. We demonstrate how the sampling methods affect the properties of the noise, and thus affect the estimation of the integrated volatility. Throughout this section, the tuning parameters are fixed at $j_n=30$ and $c=0.2$. Estimating the second moments of noise {#subsec:estimate_2nd_moments_noise_Citi} -------------------------------------- We estimate the $j$-th autocovariance and autocorrelation of microstructure noise with $j=0,1,\dots,30$ by three estimators: our realized volatility (RV) estimators in  and , the local averaging (LA) estimators proposed by [@jacod2013StatisticalPropertyMMN], and the bias corrected realized volatility (BCRV) estimators in  and . We perform the estimation over each trading day and end up with 20 estimates (of the 30 lags of autocovariances or autocorrelations) for each estimator. In Figure \[fig:RV\_LA\_2step\_fulldata\], we plot the average of the 20 estimates (over the month) as well as the approximated confidence intervals that are two sample standard deviations away from the mean. We observe that the three estimators yield quite close estimates by virtue of the high data frequency. Noise in this sample tends to be positively autocorrelated — with the BCRV estimators yielding the fastest decay. This is consistent with the finding that the arrivals of buy and sell orders are positively autocorrelated, see [@hasbrouck1987order]. This corresponds to the trading practice that informed traders split their orders over (a short period of) time and trade on one side of the market, inducing continuation in their order flow. 
We emphasize that the finite sample bias can be much more pronounced than what we observe in Figure \[fig:RV\_LA\_2step\_fulldata\], even if we perform estimation on a full transaction data sample. In Appendix \[sec:GE\_Empirical\], we analyze the transaction prices of General Electric (GE) and show that, when the data frequency is very high, the finite sample bias correction is particularly important when the noise-to-signal ratio is very small (recall Remark \[rmk:why\_correct\_bias\]). Estimating the integrated volatility {#subsec:IV_Citi} ------------------------------------ Turning to the estimation of the integrated volatility, we mimic our simulation experiments and study three estimators: $\widehat{\rm IV}_{\rm step1}$, $\widehat{\rm IV}_{n}$, and $\widehat{\rm IV}_{\rm step2}$. In the top panel of Figure \[fig:CitiIVsFullData\], we plot the three estimators of the integrated volatility for each trading day. We note that the estimator $\widehat{\rm IV}_{n}$ and the two-step estimator $\widehat{\rm IV}_{\rm step2}$ yield quite close results. However, the estimator $\widehat{\rm IV}_{\rm step1}$, which ignores the dependence in noise, yields very different estimates, and the differences are one-sided — $\widehat{\rm IV}_{\rm step1}$ yields higher estimates over each trading day. Moreover, the differences are statistically significant by virtue of Theorem \[thm:CLT\] — 19 out of the 20 estimates fall outside of the 95% confidence intervals, as the bottom panel of Figure \[fig:CitiIVsFullData\] reveals. Decaying rate of autocorrelation -------------------------------- Figure \[fig:RV\_LA\_2step\_fulldata\] shows that the positive autocorrelations of noise drop to zero rapidly. To assess the rate of decay, we perform a logarithmic transformation of the autocorrelations estimated by BCRV.[^16] In the top panel of Figure \[fig:CitiLogCors\], we plot the logarithmic autocorrelations for each trading day, revealing clear support for a linear trend. 
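The decay-rate check amounts to an ordinary least squares fit of $\log\hat\rho(j)$ on $j$. A sketch with hypothetical, exactly exponentially decaying autocorrelations (the numbers are illustrative, not estimates from the data):

```python
import numpy as np

# Hypothetical autocorrelations decaying exactly exponentially: rho(j) = a * b^j
lags = np.arange(1, 11)
rho_hat = 0.8 * 0.6 ** lags

# Fit log rho(j) = log a + j * log b by least squares: a straight line in j
slope, intercept = np.polyfit(lags, np.log(rho_hat), 1)
decay_factor = np.exp(slope)     # recovers the per-lag decay factor b
```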
To better visualize the linear relationship, we plot the means of the logarithmic autocorrelations over the 20 trading days and fit a regression line to them; see the bottom panel of Figure \[fig:CitiLogCors\]. The nearly perfect fit indicates that the logarithmic autocorrelation is approximately a linear function of the number of lags, i.e., the autocorrelation function is decaying at an exponential rate.[^17] Robustness check — estimation under other sampling schemes ---------------------------------------------------------- It is interesting to analyze how our estimators perform when the data are sampled at different time scales. In this section, we consider two alternative (sub)sampling schemes: regular time sampling and tick time sampling (recall Remark \[rmk:Sampling\_Schemes\] for details on the sampling schemes). ### Regular time sampling {#subsubsec:regular_sampling} The prices in this sample are recorded on a 1-second time scale. If there are multiple prices within a second, we select the first one; if there is no transaction in a second, no price is recorded. We end up with $21\mathord{,}691$ observations on average per trading day. Figure \[fig:RV\_LA\_2step\_seconddata\] is analogous to Figure \[fig:RV\_LA\_2step\_fulldata\]. The three estimators, RV, LA, and BCRV, now produce very different patterns. Both the RV and LA estimators indicate that noise is strongly autocorrelated in this subsample, even stronger than in the original full sample. This would be counterintuitive since we eliminate more than 90% of the full sample in a fairly random way — the elimination should, if anything, have weakened the serial dependence of noise in the remaining sample. However, the estimates by BCRV reveal that in fact the noise is approximately uncorrelated — it is the finite sample bias that makes the autocorrelations of noise seem strong and persistent if not taken into account. 
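The regular time sampling scheme just described (first price in each second, empty seconds skipped) can be sketched as follows (function name and toy data are ours):

```python
import numpy as np

def first_price_per_second(times, prices):
    """Keep the first observation in each second; seconds without trades are skipped."""
    sec = np.floor(times).astype(int)
    # np.unique on sorted timestamps returns the index of each second's first trade
    first_idx = np.unique(sec, return_index=True)[1]
    return times[first_idx], prices[first_idx]

t = np.array([0.2, 0.5, 1.1, 3.4, 3.9, 4.0])   # no trade in second 2
p = np.array([10.0, 11.0, 12.0, 13.0, 14.0, 15.0])
ts, ps = first_price_per_second(t, p)
```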
If the noise is close to being independent, $\widehat{\rm IV}_{\rm step1}$, which assumes i.i.d. noise, would be a valid estimator of the integrated volatility. An alternative estimator, e.g., $\widehat{\rm IV}_{\rm step2}$ or $\widehat{\rm IV}_{n}$, would be robust if it delivered similar estimates. In the top panel of Figure \[fig:CitiIVsSecondData\], we observe that $\widehat{\rm IV}_{\rm step1}$ and $\widehat{\rm IV}_{\rm step2}$ yield virtually identical estimates. The estimator $\widehat{\rm IV}_{n}$, however, yields lower estimates on each trading day. If we relied on the asymptotic theory alone, we would conclude that the estimates by $\widehat{\rm IV}_{\rm step1}$ (or $\widehat{\rm IV}_{\rm step2}$) are statistically significantly higher than those by $\widehat{\rm IV}_{n}$ — all 20 estimates by $\widehat{\rm IV}_{\rm step1}$ (or $\widehat{\rm IV}_{\rm step2}$) fall outside the 95% asymptotic confidence intervals of $\widehat{\rm IV}_{n}$, as we observe from the bottom panel of Figure \[fig:CitiIVsSecondData\]. We conclude that Figures \[fig:CitiIVsFullData\] and \[fig:CitiIVsSecondData\] jointly reveal the importance of our multi-step approach. Indeed, $\widehat{\rm IV}_{\rm step1}$ shows unreliable behaviour in Figure \[fig:CitiIVsFullData\], while $\widehat{\rm IV}_{n}$ shows unreliable behaviour in Figure \[fig:CitiIVsSecondData\]. ### Tick time sampling {#subsec:tick_sampling} In a tick time sample, prices are collected with each price change, i.e., all zero returns are suppressed; see, e.g., [@da2017moving], [@Ait-Sahalia2011DependentNoise], [@griffin2008sampling], [@kalnina2011subsampling] and [@zhou1996high]. For the Citigroup transaction data, 70% of the returns are zero. The corresponding average number of prices per second in our tick time sample is 3.2. Figure \[fig:RV\_LA\_2step\_tickdata\] shows that the microstructure noise has a different dependence pattern in the tick time sample — its autocorrelation function is alternating.
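The two (sub)sampling schemes can be sketched as follows. This is a toy implementation for illustration only (the function names and the toy data are ours): it keeps the first price of each second for regular time sampling and suppresses zero returns for tick time sampling:

```python
from itertools import groupby

def regular_time_sample(times, prices):
    """Keep the first price within each 1-second bin; seconds without a
    transaction are simply skipped. Assumes times are sorted."""
    pairs = zip(times, prices)
    return [next(group)[1] for _, group in groupby(pairs, key=lambda tp: int(tp[0]))]

def tick_time_sample(prices):
    """Keep a price only when it differs from the last kept price,
    i.e., suppress all zero returns."""
    kept = [prices[0]]
    for p in prices[1:]:
        if p != kept[-1]:
            kept.append(p)
    return kept

times = [0.1, 0.4, 0.9, 2.3, 2.7, 3.0]               # in seconds; second 1 is empty
prices = [10.00, 10.00, 10.01, 10.01, 10.02, 10.02]
print(regular_time_sample(times, prices))  # [10.0, 10.01, 10.02]
print(tick_time_sample(prices))            # [10.0, 10.01, 10.02]
```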
Masked by alternating noise, the observed returns at tick time have a similar pattern; see [@Ait-Sahalia2011DependentNoise] and [@griffin2008sampling]. This dependence structure of noise is perceived to be due to the discreteness of price changes, irrespective of the distributional features of noise in the original transactions or quotes data. Interestingly, Figure \[fig:CitiIVsTickData\] shows that the three estimators of the integrated volatility, $\widehat{\rm IV}_{\rm step1}$, $\widehat{\rm IV}_{\rm step2}$, and $\widehat{\rm IV}_{n}$, remain close. It is not surprising to see a close fit of $\widehat{\rm IV}_{\rm step2}$ and $\widehat{\rm IV}_n$ since the data frequency is still quite high. By contrast, it is not directly obvious why $\widehat{\rm IV}_{\rm step1}$ and $ \widehat{\rm IV}_{\rm step2}$ deliver almost identical estimates, given the fact that the dependence of noise in this tick time sample is drastically different from i.i.d. noise. However, a clue is provided by the observation that negatively autocorrelated noise has less impact on the estimation of the integrated volatility, as the high-order alternating autocovariances partially cancel out, thus contributing less to the asymptotic bias $\sigma^2_U$.[^18] Economic interpretation and empirical implication ------------------------------------------------- The dependence structure of microstructure noise is complex, and depends on the sampling scheme. In an original transaction data sample, noise is likely to be positively autocorrelated as a result of various trading practices that entail continuation in order flows. The dependence of noise can be reduced by sampling sparsely, say, every few (or more) seconds as we show in Section \[subsubsec:regular\_sampling\]; noise is close to independent in such sparse subsamples. If, however, we remove all zero returns, thus sample in tick time, noise typically exhibits an alternating autocorrelogram. 
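A stylized calculation illustrates why alternating autocovariances contribute less to $\sigma^2_U$. Suppose, purely for illustration, that the noise autocovariances are of AR(1) type, $\gamma(j) = \mathrm{Var}(U)\,\rho^{|j|}$; then $\sigma^2_U = \sum_{j=-\infty}^{\infty}\gamma(j) = \mathrm{Var}(U)(1+\rho)/(1-\rho)$, which is seven times $\mathrm{Var}(U)$ for $\rho = 0.75$ but only one seventh of it for $\rho = -0.75$:

```python
def long_run_variance(var_u, rho, trunc=200):
    """sigma_U^2 = sum over all lags j of gamma(j), with AR(1)-type
    autocovariances gamma(j) = var_u * rho**|j| (truncated at |j| = trunc)."""
    return var_u * (1 + 2 * sum(rho ** j for j in range(1, trunc + 1)))

print(long_run_variance(1.0, 0.75))   # approx. 7.0: positive lags pile up
print(long_run_variance(1.0, -0.75))  # approx. 0.143: alternating lags cancel
```

Since the asymptotic bias in the integrated volatility estimation scales with $\sigma^2_U$, negatively autocorrelated noise of the same variance is far less harmful.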
Microstructure theories can provide some intuitive economic interpretations of the dynamic properties of microstructure noise recovered in this paper. The positive autocorrelation function displayed in Figure \[fig:RV\_LA\_2step\_fulldata\] is consistent with the findings in [@hasbrouck1987order], [@choi1988estimation] and [@huang1997components], which explicitly model the probability of order reversal $\pi$ (or of order continuation, $1-\pi$),[^19] so that the deviation of transaction prices from fundamentals becomes an AR(1) process. Fitting the autocorrelation function recovered by BCRV in Figure \[fig:RV\_LA\_2step\_fulldata\] to that of an AR(1) model, we obtain an estimate of the AR(1) coefficient of $\hat{\rho} = 0.75$, so that the estimated probability of order continuation is $1-\hat{\pi} = (1+\hat{\rho})/2 \approx 0.87$. That is, the estimated probability that a buy (or sell) order follows another buy (or sell) order is 0.87. In view of the extensive empirical results in [@huang1997components] (see Table 5 therein), this is a reasonable estimate. One possible interpretation of the positively autocorrelated order flows is that a large *order* is often executed as a series of smaller *trades* to reduce the price impact, or conducted against multiple *trades* from stale limit orders. However, such positive autocorrelation contradicts the prediction of inventory models, in which market makers induce negatively autocorrelated order flows to stabilize inventories; see [@ho1981optimal]. Consequently, according to inventory models, the probability of order reversal would be $\pi>0.5$. One remedy, suggested by [@huang1997components], is to collapse multiple *trades* at the same price into one *order*, which is exactly the tick time sampling scheme considered in Section \[subsec:tick\_sampling\].
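Under this order-reversal model, the AR(1) coefficient and the reversal probability are tied together by $\rho = 1 - 2\pi$, equivalently $1 - \pi = (1+\rho)/2$. A minimal sketch of the arithmetic (our own illustration, not the paper's estimation code):

```python
def order_continuation_prob(rho):
    """Order-reversal model: AR(1) coefficient rho = 1 - 2*pi, so the
    continuation probability is 1 - pi = (1 + rho) / 2."""
    return (1 + rho) / 2

print(order_continuation_prob(0.75))  # 0.875, reported as 0.87 in the text
# Conversely, a tick time reversal probability pi = 0.84 maps back to an
# AR(1) coefficient of roughly -0.68, consistent with an alternating acf.
print(1 - 2 * 0.84)
```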
Exploiting the estimates by BCRV presented in Figure \[fig:RV\_LA\_2step\_tickdata\], we obtain an estimate of the probability of order reversal equal to $\hat{\pi} = 0.84$, which is very close to the average probability $0.87$ in [@huang1997components]. We emphasize that we recover these probabilities without any prior knowledge or estimates of the order flows. The dependence structure of microstructure noise, and hence the choice of sampling scheme, affect the estimation of integrated volatility. Popular de-noise methods that assume i.i.d. noise work reasonably well with relatively sparse regular time samples or tick time samples. However, this discards a substantial amount of the original transaction data.[^20] Instead, we can directly estimate the integrated volatility from the original transaction data using our estimators that explicitly take the potential dependence in noise into account. In our empirical study, we have also illustrated that bias corrections play an essential role in recovering the statistical properties of noise and in estimating the integrated volatility. Our two-step estimators are specifically designed to conduct such bias corrections, and have the advantage of being robust to different sampling schemes and frequencies. Conclusion {#sec:conclusion} ========== In high-frequency financial data the efficient price is contaminated by microstructure noise, which is usually assumed to be independently and identically distributed. This simple distributional assumption is challenged by both microeconomic financial models and various empirical facts. In this paper, we deviate from the i.i.d. assumption by allowing noise to be dependent in a general setting. We then develop econometric tools to recover the dynamic properties of microstructure noise and design improved approaches for the estimation of the integrated volatility. This paper makes four contributions. 
First, it develops nonparametric estimators of the second moments of microstructure noise in a general setting. Second, it provides a robust estimator of the integrated volatility, without assuming serially independent noise. Third, it reveals the importance of both asymptotic and finite sample bias analysis and develops simple and readily implementable two-step estimators that are robust to the sampling frequency. Fourth, it empirically characterizes the dependence structures of noise in several popular sampling schemes and provides intuitive economic interpretations; it also investigates the impact of the dynamic properties of microstructure noise on integrated volatility estimation. This paper thus introduces a robust and accurate method to effectively separate the two components of high-frequency financial data — the efficient price and microstructure noise. The robustness lies in its flexibility to accommodate rich dependence structures of microstructure noise motivated by various economic models and trading practices, whereas the accuracy is achieved by the finite sample refinement. As a result, we discover dynamic properties of microstructure noise consistent with microstructure theory and obtain accurate volatility estimators that are robust to sampling schemes. Acknowledgements {#acknowledgements .unnumbered} ================ We are very grateful to Yacine Aït-Sahalia, Federico Bandi, Peter Boswijk, Peter Reinhard Hansen, Siem Jan Koopman, Oliver Linton, and Xiye Yang for their comments and discussions on earlier versions of this paper. This research was funded in part by the Netherlands Organization for Scientific Research under grant NWO VIDI 2009 (Laeven).
Tables and Figures {#tables-and-figures .unnumbered} ================== $\rho$ -0.7 -0.3 0 0.3 0.7 -------------------------------- ------------- ------------- ------------- ------------- ------------- $\widehat{\rm IV}_{\rm step1}$ 5.53 (0.46) 5.74 (0.46) 5.98 (0.47) 6.39 (0.49) 7.57 (0.56) $\widehat{\rm IV}_{n}$ 3.04 (0.40) 3.02 (0.40) 3.02 (0.41) 3.04 (0.43) 2.91 (0.50) $\widehat{\rm IV}_{\rm step2}$ 5.79 (0.61) 5.87 (0.63) 5.99 (0.63) 6.23 (0.67) 6.67 (0.76) $\widehat{\rm IV}_{\rm step3}$ 5.92 (0.70) 5.93 (0.72) 6.00 (0.72) 6.13 (0.76) 6.22 (0.87) : Estimation of the integrated volatility. The numbers represent the means of the four estimators of integrated volatility, $\widehat{\rm IV}_{\rm step1}$, $\widehat{\rm IV}_{n}$, $\widehat{\rm IV}_{\rm step2}$ and $\widehat{\rm IV}_{\rm step3}$, based on $1\mathord{,}000$ simulations with standard deviations between parentheses. The true value of the integrated volatility is given by $\sigma^2 = 6\times 10^{-5}$. All numbers in the table are multiplied by $10^5$. We take $\Delta=1$ sec and the number of observations is $23\mathord{,}400$. The tuning parameter of the RV estimator is $j_n=20$ and $i_n=10$.[]{data-label="tab:IV_est_delta=1s"} $\rho$ -0.7 -0.3 0 0.3 0.7 -------------------------------- ------------- ------------- ------------- ------------- ------------- $\widehat{\rm IV}_{\rm step1}$ 5.52 (0.22) 5.76 (0.21) 6.00 (0.22) 6.37 (0.23) 7.71 (0.27) $\widehat{\rm IV}_{n}$ 5.86 (0.22) 5.85 (0.21) 5.85 (0.22) 5.84 (0.23) 5.88(0.27) $\widehat{\rm IV}_{\rm step2}$ 5.99 (0.23) 6.00 (0.22) 6.00 (0.23) 6.00 (0.24) 6.07 (0.27) $\widehat{\rm IV}_{\rm step3}$ 6.00 (0.23) 6.00 (0.22) 6.00 (0.23) 5.99 (0.24) 6.03 (0.27) : Estimation of the integrated volatility with ultra high-frequency data. 
The numbers represent the means of the four estimators of integrated volatility, $\widehat{\rm IV}_{\rm step1}$, $\widehat{\rm IV}_{n}$, $\widehat{\rm IV}_{\rm step2}$ and $\widehat{\rm IV}_{\rm step3}$, based on $1\mathord{,}000$ simulations with standard deviations between parentheses. The true value of the integrated volatility is given by $\sigma^2 = 6\times 10^{-5}$. All numbers in the table are multiplied by $10^5$. In contrast to Table \[tab:IV\_est\_delta=1s\], we now take $\Delta=0.05$ sec and the number of observations is $468\mathord{,}000$. The tuning parameters of the RV estimator are $j_n=20$ and $i_n=10$.[]{data-label="tab:IV_est_delta=0.1s"}

[Flow chart of the multi-step estimation procedure. Step 1: from the observed price $Y$, obtain $\widehat{{\rm Var}(U)}_{\rm step1}$ via RV and $\widehat{\sigma}_{\rm step1}$ via PAV with an asymptotic bias correction. Step 2: apply a finite sample bias correction to obtain $\widehat{\sigma}_{\rm step2}$ via PAV and $\widehat{{\rm Var}(U)}_{\rm step2}$, $\widehat{\gamma(j)}_{\rm step2}$ via RV.]

[Supplementary Material to]{}\ [“Dependent Microstructure Noise and Integrated Volatility Estimation from High-Frequency Data”]{} Appendix {#sec:Appendix .unnumbered} ======== Sections \[appendix:prop\_RV\_Estimate\_var+cov(1)\]–\[appendix:thm\_CLT\] in this appendix contain detailed technical proofs of our results. In Sections \[sec:SVsimu\] and \[sec:GE\_Empirical\], we provide additional Monte Carlo simulation and empirical results. In the proofs that follow, the constants $C$ and $\delta\in(0,1)$ may vary from line to line. We add a subscript $q$ if they depend on some parameter $q$.
Proof of Proposition \[prop:RV\_Estimate\_var+cov(1)\] {#appendix:prop_RV_Estimate_var+cov(1)} ====================================================== Adopting the standard localization procedure (see e.g., [@jacod2011discretization] for further details), we may assume that the processes $a$ and $\sigma$ are bounded by constants $C_{a},C_\sigma>0$. This yields for any such continuous Itô semimartingale $X$ and stopping times $S\leq T$ that $$\begin{aligned} \label{eq:Classic_Est_X_p} \conexp{\abs{X_T-X_S}^p}{\mathcal{F}_S}\leq C_p\conexp{T-S}{\mathcal{F}_S},\quad \forall p\geq 2.\end{aligned}$$ Let $\Delta_n = 1/n$. For any process $V$, we write $\Delta^n_{i,j}V := V^n_{i+j}-V^n_{i}$, $j=1,2,\ldots,n-i$. Then, for the log-price process $Y$, $$[Y,Y]^{j}_n := \sum_{i=0}^{n-j}(\Delta^n_{i,j} Y)^2 = \sum_{i=0}^{n-j}(\Delta^n_{i,j} X)^2 + 2\sum_{i=0}^{n-j}\Delta^n_{i,j} X\ \Delta^n_{i,j} U + \sum_{i=0}^{n-j}(\Delta^n_{i,j} U)^2. \label{eq:decomp_[Y,Y]^j_n}$$ We now analyze the asymptotic properties of the three components on the right-hand side of : (i) First note that $\sum_{i=0}^{n-j}(\Delta^n_{i,j} X)^2/j\Pconverge [X,X]$, where $[X,X]$ is the quadratic variation of $X$. (ii) By the independence of $X$ and $U$, we have $$\begin{aligned} \label{eq:dX2_dU2_bound} \sum_{i=0}^{n-j}\expect{\myp{\Delta^n_{i,j} X\ \Delta^n_{i,j} U }^2}= \sum_{i=0}^{n-j}\expect{\myp{\Delta^n_{i,j} X}^2}\expect{\myp{\Delta^n_{i,j} U }^2}\leq Cj.\end{aligned}$$ The last inequality follows from the fact that $U$ has bounded moments and from an application of . 
Next, $$\label{eq:dX2_dU2_cross_bound} \begin{split} &\sum_{i,i': i< i'}\expect{\Delta^n_{i,j} X\ \Delta^n_{i,j} U\ \Delta^n_{i',j} X\ \Delta^n_{i',j} U }\\ =&\sum_{i,i':i< i'}\expect{\Delta^n_{i,j} X\ \Delta^n_{i',j} X}\expect{ \Delta^n_{i,j} U\ \Delta^n_{i',j} U }\\ \leq& Cj\Delta_n\myp{\sum_{i,i': i+j< i'}\expect{ \Delta^n_{i,j} U\ \Delta^n_{i',j} U }+\sum_{i,i': i+j\geq i'>i }\expect{ \Delta^n_{i,j} U\ \Delta^n_{i',j} U }}\\ \leq& Cj^2. \end{split}$$ The first inequality follows from the Cauchy-Schwarz inequality and . To see the second inequality, we apply the Cauchy-Schwarz inequality, Lemma VIII 3.102 of [@jacod1987limit] (hereafter abbreviated as JS-Lemma), and the fact that $v>2$ to obtain $$\label{eq:Apply_JS_Lemma} \begin{split} \sum_{i,i':i+j<i'}\expect{ \Delta^n_{i,j} U\ \Delta^n_{i',j} U } & = \sum_{i,i':i+j<i'}\expect{\Delta^n_{i,j} U\ \conexp{ \Delta^n_{i',j} U }{\infor{(i+j)\Delta_n}}}\\ &\leq C\sum_i\sum_{i':i+j<i'}\sqrt{\expect{ \myp{\conexp{ \Delta^n_{i',j} U }{\infor{(i+j)\Delta_n}}}^2} }\\ & \leq C\sum_i\sum_{i':i+j<i'}(i'-(i+j))^{-v/2}\leq C\Delta_n^{-1}. \end{split}$$ Eqns. and  imply that $\expect{\myp{\sum_{i=0}^{n-j}\Delta^n_{i,j} X\ \Delta^n_{i,j} U }^2}\leq Cj^2$, thus $$\label{eq:dXdU_order} \sum_{i=0}^{n-j}\Delta^n_{i,j} X\ \Delta^n_{i,j} U = O_p(j).$$ (iii) Turning to the last sum of , let $\nu_j := \expect{(U^n_{i+j} - U^n_i)^2}= 2(\var{U} - \gamma(j))$. For $i>j$, we obtain the following in a similar way in which we derived : $$\begin{aligned} \abs{\cov{(U^n_{j} - U^n_0)^2,(U^n_{i+j} - U^n_i)^2}} \leq C(i-j)^{-v/2}, $$ which implies $$\expect{\myp{\sum_{i=0}^{n-j}\myp{(\Delta^n_{i,j}U)^2-\nu_j}}^2}\leq C\Delta_n^{-1}j. 
\label{eq:Asy_Orders_RV(U)_j}$$ For any fixed $j$, any $j_n$ satisfying $\Delta_nj_n\rightarrow 0, j_n\rightarrow \infty$, we have by ,  and  that $$\label{eq:RV{Y,Y}_Op} \begin{split} &\widehat{\RV{Y,Y}}_n(j)-\myp{ \var{U}-\gamma(j)} = O_p\myp{\sqrt{\Delta_nj}};\\ &\widehat{\RV{Y,Y}}_n(j_n)-\var{U} = O_p\myp{\max\left\{\sqrt{\Delta_nj_n},j_n^{-v/2}\right\}}. \end{split}$$Now the stated results follow from . Proof of Proposition \[prop:Finite\_Sample\_Bias\_Correction\] {#sec:prop:Finite_Sample_Bias_Correction} ============================================================== Let $k= \lfloor \frac{n}{j}\rfloor$. We will adopt the square bracket notation in for $X$ and $U$ as well. By Itô’s isometry, we have $$\begin{aligned} \expectsigma{[X,X]^j_{kj-1}} &= \sum_{i=0}^{j-1}\int_{i\Delta_n}^{\myp{(k-1)j+i}\Delta_n}\sigma^2_s\diff s =\sum_{i=0}^{j-1}\myp{\int_{0}^{kj\Delta_n}\sigma^2_s\diff s -\int_{0}^{i\Delta_n} \sigma^2_s\diff s - \int_{\myp{(k-1)j+i}\Delta_n}^{kj\Delta_n} \sigma^2_s\diff s}\\ & = j\int_{0}^{kj\Delta_n}\sigma^2_s\diff s +O_p(j^2\Delta_n).\end{aligned}$$ Hence, we have $$\begin{aligned} \expectsigma{[X,X]^j_n} = j\int_{0}^{1}\sigma^2_s\diff s + O_p(j^2\Delta_n),\end{aligned}$$ where the stochastic orders follow from the regularity conditions of the volatility path at 0 and 1. 
Furthermore, it is immediate that $\expectsigma{[U,U]^j_n} = 2(n-j+1)(\var{U}-\gamma(j)).$ Thus, we have, by the independence of $X$ and $U$, $$\begin{aligned} \expectsigma{\widehat{\RV{Y,Y}}_n(j)} = \frac{j\int_{0}^{1}\sigma^2_s\diff s}{2(n-j+1)} + \var{U} - \gamma(j) + O_p(j^2\Delta_n^2).\end{aligned}$$ Proof of Proposition \[prop:pre-averaged noise\] {#appendix:prop:pre-averaged noise} ================================================ Recall that $$\begin{aligned} \preavg{U} & = \frac{1}{k_n+1}\sum_{i=(2m-2)k_n}^{(2m-1)k_n}\myp{U^n_{i+k_n} - U^n_{i}} \\&= \frac{1}{k_n+1}\myp{\sum_{i=(2m-1)k_n}^{2mk_n}U^n_{i} - \sum_{i=(2m-2)k_n}^{(2m-1)k_n}U^n_{i}}.\end{aligned}$$ Also recall that $U$ is symmetrically distributed around 0, whence $\preavg{U}$ is equal to the following in distribution: $$\begin{aligned} \label{eq:pre_U_order} \preavg{U} \overset{d}{ = }\frac{1}{k_n+1}\myp{\sum_{i=(2m-2)k_n}^{2mk_n}U^n_{i}} + O_p(\sqrt{\Delta_n}).\end{aligned}$$ Since $v>2$, we have $\sigma_U^2<\infty$, and an application of Corollary VIII 3.106 of [@jacod1987limit] yields $$\frac{1}{\sqrt{2k_n + 1}}\sum_{i=(2m-2)k_n}^{2mk_n}U^n_{i}\convergeL \normdist{0}{\sigma^2_U},$$ whence $$n^{1/4}\preavg{U}\convergeL \normdist{0}{2\sigma^2_U/c}.$$ Proof of Proposition \[prop:consistent\_nonpar\_sigma2U\] {#appendix:prop:consistent_nonpar_sigma2U} ========================================================= For any fixed $j$,   implies $\widehat{\gamma(j)}_n - \gamma(j) =O_p\myp{\max\left\{\sqrt{\Delta_nj_n},j_n^{-v/2}\right\}}$. 
Therefore, $$\begin{aligned} \widehat{\sigma^2_U} - \sum_{j=-i_n}^{i_n}\gamma(j) = O_p\myp{\max\left\{\sqrt{\Delta_nj_ni_n^2},j_n^{-v/2}i_n\right\}}.\end{aligned}$$ Now the result follows given that $\Delta_nj_n^3\rightarrow 0, i_n\leq j_n,i_n\rightarrow \infty, v>2.$ Proof of Theorem \[thm:consistency\] {#appendix:thm_consistency} ==================================== The proof of this theorem basically follows [@podolskij2009pre-averaging-1], but we need to deal with generally dependent noise. First, we introduce some notation: $$\begin{aligned} &\beta^n_m := n^{1/4}\myp{\sigma_{\frac{m-1}{M_n}}\preavg{W} + \preavg{U}}\label{eq:beta^n_m};\\ &\xi^n_m :=n^{1/4}\preavg{Y} - \beta^n_m;\label{eq:xi^n_m}\\ &\eta^n_m :=\frac{n^{r/4}}{2c}\conexp{\abs{\preavg{Y}}^r}{\mathcal{F}_{\frac{m-1}{M_n}}};\\ &\widetilde{\eta^n_m} :=\frac{\mu_r}{2c}\myp{\frac{2c}{3}\sigma^2_{\frac{m-1}{M_n}} + \frac{2}{c}\sigma^2_U}^{\frac{r}{2}};\\ &\Pbv^n:=\sum_{m=1}^{M_n}\eta^n_m;\\ &\widetilde{\Pbv}^n :=\sum_{m=1}^{M_n}\widetilde{\eta^n_m}.\end{aligned}$$ Then, we state the following lemma: \[lemma:stochastic\_Order\_X\_Y\] For any $q>0$, there is some constant $C_q>0$ (depending on $q$), such that $\forall m$: $$\label{eq:StochasticOrderXbar} \expect{\abs{\xi^n_m}^q} + \expect{\abs{n^{1/4}\preavg{X}}^q}<C_q;$$ and the following holds for $q\in (0,2r+\varepsilon)$ with $\varepsilon$ as defined in Theorem \[thm:consistency\]: $$\label{eq:StochasticOrderYbar} \expect{\abs{\beta^n_m}^q} + \expect{\abs{n^{1/4}\preavg{Y}}^q}<C_q.$$ \[Proof of Lemma \[lemma:stochastic\_Order\_X\_Y\]\] The boundedness of moments of $\xi^n_m$ and $n^{1/4}\preavg{X}$ (which don’t depend on the noise) follows from Lemma 1 in [@podolskij2009pre-averaging-1]. Now we show the boundedness of $\expect{\abs{n^{1/4}\preavg{Y}}^q}$ for $0<q<2r+\varepsilon$. 
We note (see Proposition 3.8 in [@white2000asymptotic]) that there is some $C_q$ so that the following is true: $$\expect{\abs{n^{1/4}\preavg{Y}}^q}\leq C_q\myp{\expect{\abs{n^{1/4}\preavg{X}}^q} + \expect{\abs{n^{1/4}\preavg{U}}^q}}.$$ Boundedness of $\expect{\abs{n^{1/4}\preavg{X}}^q}$ has already been established, while $\expect{\abs{n^{1/4}\preavg{U}}^q}$ is bounded by Proposition \[prop:pre-averaged noise\] and a well known fact that convergence in distribution implies convergence in moments under uniformly bounded moments condition, see, e.g., Theorem 4.5.2 of [@chung2001course]. A similar proof holds for $\expect{\abs{\beta^n_m}^q}$. We present the proof in several steps. (i) We first prove that $$\label{eq:Mgl_diff_Pconverge_0} \Pbv(Y,r)_n\ - \frac{1}{M_n}\Pbv^n\Pconverge 0.$$ First, recall our choice of $M_n = \left\lfloor\frac{\sqrt{n}}{2c}\right\rfloor$. Next, observe that the difference on the left-hand side of is in fact a sum of *martingale differences*: $$\begin{aligned} &\Pbv(Y,r)_n\ - \frac{1}{M_n}\Pbv^n \\ =& \sum_{m=1}^{M_n}\frac{1}{\sqrt{n}}\myp{\abs{n^{\frac{1}{4}}\preavg{Y}}^r-\conexp{\abs{n^{\frac{1}{4}}\preavg{Y}}^r }{\mathcal{F}_{\frac{m-1}{M_n}}}}.\end{aligned}$$ In light of Lemma 2.2.11 in [@jacod2011discretization], it suffices to show that $$\begin{aligned} \label{eq:squared_Mgl_Pconverge_0} \frac{1}{n}\sum_{m=1}^{M_n}\conexp{\abs{n^{\frac{1}{4}}\preavg{Y}}^{2r}}{\mathcal{F}_{\frac{m-1}{M_n}}}\Pconverge 0.\end{aligned}$$ But this follows from the boundedness established in Lemma \[lemma:stochastic\_Order\_X\_Y\] and the choice of $M_n$. (ii) Next, we prove that $$\label{eq:MBV-MBVtilde=0} \frac{1}{M_n}\Pbv^n-\frac{1}{M_n}\widetilde{\Pbv}^n\Pconverge 0.$$ To prove this, we proceed in several steps: 1. 
We first note that the error of approximating $n^{1/4}\preavg{Y}$ by $\beta^n_m$, denoted by $\xi^n_m$ in , is small in the sense that $$\label{eq:approxi_err_small} \frac{1}{M_n}\sum_{m=1}^{M_n}\expect{\abs{\xi^n_m}^2}\rightarrow 0.$$ For a detailed proof, see [@podolskij2009pre-averaging-1]. (Note that our assumptions on the noise process are different from [@podolskij2009pre-averaging-1], but the noise terms don’t appear in $\xi^{n}_{m}$.) 2. Next, define the approximation error $$\begin{aligned} \zeta^n_m := \frac{\abs{n^{1/4}\preavg{Y}}^r - \abs{\beta^n_m}^r}{2c}.\end{aligned}$$ We note that this error is also small: $$\label{eq:approxi_err_zeta} \frac{1}{M_n}\sum_{m=1}^{M_n}\expect{\abs{\zeta^n_m}} \rightarrow 0,$$ which follows from $$\label{eq:approxi_err_zeta2} \frac{1}{M_n}\sum_{m=1}^{M_n}\expect{\abs{\zeta^n_m}^2} \rightarrow 0.$$ This, in turn, can be proved following [@podolskij2009pre-averaging-1]. then follows, and it implies $$\label{eq:zeta_P_converge_0} \frac{1}{M_n}\sum_{m=1}^{M_n}\conexp{\zeta^n_m}{\mathcal{F}_{\frac{m-1}{M_n}}} \Pconverge 0,$$ by the Markov inequality. 3. Now we show the following: $$\label{eq:conditional_PBV_op(1)} \conexp{\abs{\beta^n_m}^r}{\mathcal{F}_{\frac{m-1}{M_n}}} ={\mu_r}\myp{\frac{2c}{3}\sigma^2_{\frac{m-1}{M_n}} + \frac{2\sigma^2_U}{c}}^{\frac{r}{2}}+o_p(1),$$ which holds uniformly in $m$. Recall that $r\geq 2$ is an even integer. Let $r_n\rightarrow\infty$ but $r_n=o(n^{1/2})$. 
Denote $$\begin{aligned} \overline{\beta}^{n}_{m-1,r_n} & = \frac{n^{1/4}}{k_n+1}\myp{\sum_{i=(2m-2)k_n}^{(2m-2)k_n+r_n}\sigma_{\frac{m-1}{M_n}}\myp{W^n_{i+k_n} - W^n_{i}} +\myp{U^n_{i+k_n} -U^n_{i} }}\\ &=: n^{1/4}\myp{\sigma_{\frac{m-1}{M_n}} \overline{W}^{n}_{m-1,r_n} + \overline{U}^{n}_{m-1,r_n} };\\ \overline{\beta}^{n}_{r_n,m} & = \frac{n^{1/4}}{k_n+1}\myp{\sum_{i=(2m-2)k_n+r_n+1}^{(2m-1)k_n}\sigma_{\frac{m-1}{M_n}}\myp{W^n_{i+k_n} - W^n_{i}} +\myp{U^n_{i+k_n} -U^n_{i} }}\\ &=: n^{1/4}\myp{\sigma_{\frac{m-1}{M_n}} \overline{W}^{n}_{r_n,m} + \overline{U}^{n}_{r_n,m} }. $$ Then, we have $\beta^n_m =\overline{\beta}^{n}_{m-1,r_n} + \overline{\beta}^{n}_{r_n,m}$. Furthermore, by our construction, $\overline{\beta}^{n}_{m-1,r_n}=o_p(1)$ and $\overline{\beta}^{n}_{r_n,m}$ has the same asymptotic distribution as $\beta^n_m$, which can be derived from the asymptotic distributions of $n^{1/4}\overline{U}^n_m$ and $n^{1/4}\overline{W}^n_m$, and the independence assumption between $X$ and $U$. By the Mean Value Theorem, we have $$\begin{aligned} \conexp{\myp{\beta^n_m }^r-\myp{\overline{\beta}^n_{r_n,m} }^r}{\infor{\frac{m-1}{M_n}}} = \conexp{r\myp{\overline{\beta}^n_{r_n,m}}^{r-1}\myp{\overline{\beta}^n_{m-1,r_n}} }{\infor{\frac{m-1}{M_n}}} + o_p(1). $$ The moment conditions and an application of Cauchy-Schwarz inequality yields $$\conexp{\myp{\overline{\beta}^n_{r_n,m}}^{r-1}\myp{\overline{\beta}^n_{m-1,r_n}} } {\infor{\frac{m-1}{M_n}}}=o_p(1).$$ Thus, $$\label{eq:r_big_blk_op(1)_1} \conexp{\myp{\beta^n_m }^r}{\infor{\frac{m-1}{M_n}}} = \conexp{\myp{\overline{\beta}^n_{r_n,m} }^r}{\infor{\frac{m-1}{M_n}}} + o_p(1).$$ For any $l\leq r$, define $\overline{U}^{n,l}_{r_n,m}:=\myp{n^{1/4}\overline{U}^n_{r_n,m}}^{l}$, and let $$C_l: = \expect{\myp{\conexp{\overline{U}^{n,l}_{r_n,m}}{\infor{\frac{m-1}{M_n}}} - \expect{\overline{U}^{n,l}_{r_n,m}}}^2}.$$ By the JS-Lemma, we have $C_l\leq Cr_n^{-v}$. 
Let $$\Lambda_l: = \frac{\conexp{\overline{U}^{n,l}_{r_n,m}}{\infor{\frac{m-1}{M_n}}} - \expect{\overline{U}^{n,l}_{r_n,m}}}{\sqrt{C_l}};$$ note that $\expect{\Lambda_l^2} = 1$. Thus, $$\conexp{\overline{U}^{n,l}_{r_n,m}}{\infor{\frac{m-1}{M_n}}}=\expect{\overline{U}^{n,l}_{r_n,m}} + \sqrt{C_l}\Lambda_l.$$ Therefore, we can substitute the conditional moments by the unconditional moments and we obtain the following ($C^k_r =\frac{r!}{k!(r-k)!}$ denotes the binomial coefficient): $$\begin{aligned} &\conexp{\myp{\overline{\beta}^n_{r_n,m} }^r}{\infor{\frac{m-1}{M_n}}}\\ =& \conexp{\sum_{k=0}^{r} C^k_r\sigma^k_{\frac{m-1}{M_n}} \myp{n^{1/4}\overline{W}^n_{r_n,m}}^k \myp{n^{1/4}\overline{U}^n_{r_n,m}}^{r-k}}{\infor{\frac{m-1}{M_n}}} \\ =& \sum_{k=0}^{r} C^k_r\sigma^k_{\frac{m-1}{M_n}} \conexp{\myp{n^{1/4}\overline{W}^n_{r_n,m}}^k}{\sigma_{\frac{m-1}{M_n}}} \conexp{\myp{n^{1/4}\overline{U}^n_{r_n,m}}^{r-k}}{\infor{\frac{m-1}{M_n}}}\\ =& \conexp{\myp{\overline{\beta}^n_{r_n,m} }^r}{{\sigma_{\frac{m-1}{M_n}}}} +\sum_{k=0}^{r} C_k^r\sigma^k_{\frac{m-1}{M_n}} \conexp{\myp{n^{1/4}\overline{W}^n_{r_n,m}}^k}{\sigma_{\frac{m-1}{M_n}}} \sqrt{C_{r-k}}\Lambda_{r-k}.\end{aligned}$$ Clearly, the last term is $o_p(1)$, and together with , we have $$\label{eq:key_condi_approxi} \begin{split} \conexp{\myp{\beta^n_m }^r}{\infor{\frac{m-1}{M_n}}} & = \conexp{\myp{\overline{\beta}^n_{r_n,m} }^r}{\sigma_{\frac{m-1}{M_n}}} +o_p(1)\\ & = \mu_r\myp{\frac{2c}{3}\sigma^2_{\frac{m-1}{M_n}} + \frac{2\sigma^2_U}{c}}^{\frac{r}{2}}+o_p(1). \end{split}$$ The last equality is a consequence of the asymptotic distribution of $\beta^n_m$. 4. Now follows from  and . (iii) Following Proposition 2.2.8 in [@jacod2011discretization], we see that the Riemann approximation converges: $$\frac{1}{M_n}\sum_{m=1}^{M_n}\widetilde{\Pbv^n}\Pconverge \Pbv(Y,r). 
\label{eq:Riem}$$ Recall that we already proved that $$\Pbv(Y,r)_n\ - \frac{1}{M_n}\Pbv^n\Pconverge 0;\quad\mathrm{and}\quad \frac{1}{M_n}\Pbv^n-\frac{1}{M_n}\widetilde{\Pbv}^n\Pconverge 0;$$ in previous steps. It is now immediate to conclude that $$\Pbv(Y,r)_n\Pconverge \Pbv(Y,r).$$ This completes the proof of Theorem \[thm:consistency\]. Robustness to Irregular Sampling {#sec:IrregularSampling} ================================ In this section, we show that the consistency results for integrated volatility in Theorem \[thm:consistency\] and Corollary \[corollary:consistency\_IV\] can be extended to irregular sampling times for the case $r=2$, by adapting the approach in Appendix C of [@christensen2014jump-high-fre] to allow for serially dependent noise in our general setting (recall $Y^n_i = X_{t^n_i} + U^n_i$). Let $f:[0,1]\to[0,1]$ be a strictly increasing map with a Lipschitz continuous first derivative. Let $f(0)=0$ and $f(1)=1$. Suppose that the observation times are $\{t^n_i=f(i/n):0\leq i\leq n \}$. Let $C_f' = \max_{x\in[0,1]}\abs{f'(x)}$. Note that $C'_f<\infty$ by the continuity of $f'$. First, we note that the asymptotic results related to the noise process we derived so far still hold under irregular sampling, because the noise is indexed by $i$ rather than by $t_{i}$ in our setting. The proof then proceeds in several steps: 1. We first provide the analogs of Lemma \[lemma:stochastic\_Order\_X\_Y\] and step (i) in the proof of Theorem \[thm:consistency\]. Assume $q\geq 1$.
Then, $$\begin{aligned} \expect{\abs{\xi^n_m}^q}& = \expect{\abs{\frac{n^{1/4}}{k_n + 1}\sum_{i=(2m-2)k_n}^{(2m-1)k_n}X^n_{i+k_n} - X^n_{i} -\sigma_{t^n_{(2m-2)k_n}} \myp{W^n_{i+k_n} - W^n_{i}} }^q }\\ &\leq\frac{n^{\frac{q}{4}}}{k_n+1}\sum_{i=(2m-2)k_n}^{(2m-1)k_n}\expect{\abs{X^n_{i+k_n} - X^n_{i} -\sigma_{t^n_{(2m-2)k_n}} \myp{W^n_{i+k_n} - W^n_{i}}}^q}\\ &=\frac{n^{\frac{q}{4}}}{k_n+1}\sum_{i=(2m-2)k_n}^{(2m-1)k_n}\expect{\abs{ \int_{t^n_i}^{t^n_{i+k_n}} \myp{\alpha_s\diff s + \myp{\sigma_s - \sigma_{t^n_{(2m-2)k_n}} }\diff W_s}}^q}\\ &\leq C_\alpha (C_f')^q n^{-\frac{q}{4}}+ \frac{C_qn^{\frac{q}{4}}}{k_n+1}\sum_{i=(2m-2)k_n}^{(2m-1)k_n}\expect{\abs{ \int_{t^n_i}^{t^n_{i+k_n}} \myp{\sigma_s - \sigma_{t^n_{(2m-2)k_n}} }\diff W_s}^q}\\ &\leq C+ \frac{C_qn^{\frac{q}{4}}}{k_n+1}\sum_{i=(2m-2)k_n}^{(2m-1)k_n}\expect{\myp{ \int_{t^n_i}^{t^n_{i+k_n}} \abs{\sigma_s - \sigma_{t^n_{(2m-2)k_n}} }^2\diff s}^{q/2}}\\ &\leq C.\end{aligned}$$ The second inequality follows from the boundedness of $\alpha$ and $C_f'$. The third inequality is an application of the Burkholder-Davis-Gundy inequality. The last inequality follows from the fact that $\sigma$ is bounded. Similarly, we can prove that $\expect{\abs{n^{1/4}\preavg{X}}^q}$ is bounded. For $q\in(0,1)$, the result is immediate using Jensen’s inequality. Now the boundedness of $\expect{\abs{n^{1/4}\preavg{Y}}^q}$, $q\in (0,2r+\varepsilon)$, is obvious as the asymptotic distribution of the pre-averaged noise (which is indexed by $i$) does not change under irregular sampling. 2. Next, we prove the analog of step (ii) item (a) in the proof of Theorem \[thm:consistency\]. 
We have that $$\begin{aligned} &\expect{\abs{\xi^n_m}^2}\\ \leq& \sum_{i=(2m-2)k_n}^{(2m-1)k_n}\frac{\expect{\abs{n^{\frac{1}{4}}\myp{ \myp{X^n_{i+k_n} - X^n_{i} } -\sigma_{t_{(2m-2)k_n}^n}\myp{W^n_{i+k_n} - W^n_{i} }} }^2}}{k_n+1}\\ =& \sum_{i=(2m-2)k_n}^{(2m-1)k_n}\frac{\expect{\abs{n^{\frac{1}{4}}\myp{ \int_{t^n_i}^{t^n_{i+k_n}} \alpha_s\diff s + \int_{t^n_i}^{t^n_{i+k_n}} \myp{\sigma_s -\sigma_{t_{(2m-2)k_n}^n}} \diff W_s }} }^2}{k_n+1}\\ \leq &\sum_{i=(2m-2)k_n}^{(2m-1)k_n}\frac{2\expect{n^{\frac{1}{2}}\myp{ \int_{t^n_i}^{t^n_{i+k_n}} \alpha_s\diff s}^2 +n^{1/2} \int_{t^n_i}^{t^n_{i+k_n}} \myp{\sigma_s -\sigma_{t_{(2m-2)k_n}^n}}^2 \diff s } }{k_n+1}\\ \leq &\frac{{C_f'}^2C_\alpha}{\sqrt{n}} + 2n^{1/2}\expect{\int_{t^n_{(2m-2)k_n}}^{t_{2mk_n}^{n}} \myp{\sigma_s -\sigma_{t_{(2m-2)k_n}^n}}^2 \diff s } .\end{aligned}$$ The second inequality is due to Cauchy's inequality and Itô's isometry. The third inequality is a consequence of the boundedness of $\alpha,|f'|$ and our choice of $k_n$; it is obtained by taking $i$ to be the lower and upper bounds. Now we have $$\begin{aligned} \frac{1}{M_n}\sum_{m=1}^{M_n}\expect{\abs{\xi^n_m}^2}&\leq O(1/\sqrt{n}) + \frac{2n^{1/2}}{M_n}\sum_{m=1}^{M_n}\expect{\int_{t^n_{(2m-2)k_n}}^{t_{2mk_n}^{n}} \myp{\sigma_s -\sigma_{t_{(2m-2)k_n}^n}}^2 \diff s }\\ & = O(1/\sqrt{n}) + 4c\int_{0}^1\expect{ \myp{\sigma_s -\sigma_{\frac{\lfloor M_ns \rfloor}{M_n}}}^2} \diff s.\end{aligned}$$ Since $\sigma_{\frac{\lfloor M_ns \rfloor}{M_n}}\rightarrow \sigma_s$ a.s., and $\sigma$ is bounded, upon applying Lebesgue’s Dominated Convergence Theorem, we obtain the analog of . We note that the analog of item (b) of step (ii) in the proof of Theorem \[thm:consistency\] is directly obtained because (6.10) in [@podolskij2009pre-averaging-1] holds. 3. We now provide the analog of . First, we note that all the steps in proving  hold except those pertaining to the conditional variance of the pre-averaging Brownian motion.
Next, we show that $$\var{n^{1/4}\preavg{W}} = f'((2m-2)k_n/n)\frac{2c}{3}+o(1).$$ By the Lipschitz continuity of $f'$ we obtain: $$\begin{aligned} &\var{\sum_{i=2(m-1)k_n}^{(2m-1)k_n}\myp{W^n_{i+k_n} - W^n_{i}}}\\ = &\sum_{i=2(m-1)k_n}^{(2m-1)k_n}\var{W^n_{i+k_n} - W^n_{i}} + \sum_{i\neq j}\cov{W^n_{i+k_n} - W^n_{i},W^n_{j+k_n} - W^n_{j}}\\ = &\sum_{i=(2m-2)k_n}^{(2m-1)k_n}(t^n_{i+k_n} - t^n_i) + 2\sum_{i=(2m-2)k_n}^{(2m-1)k_n-1}\sum_{j>i}(t^n_{i+k_n}-t^n_j)\\ = &\sum_{i=(2m-2)k_n}^{(2m-1)k_n}\myp{f'(i/n)\frac{k_n}{n}+o(k_n/n)} + 2\sum_{i=(2m-2)k_n}^{(2m-1)k_n-1}\sum_{j>i} \myp{f'(j/n)\frac{i+k_n-j}{n}+o(k_n/n)}\\ = & f'\myp{\frac{(2m-2)k_n}{n}}\left(\sum_{i=(2m-2)k_n}^{(2m-1)k_n}\myp{\frac{k_n}{n}+o(k_n/n)} \right.\\ & \left.+ 2\sum_{i=(2m-2)k_n}^{(2m-1)k_n-1}\sum_{j>i} \myp{\frac{i+k_n-j}{n}+o(k_n/n)}\right)\\ =& f'\myp{\frac{(2m-2)k_n}{n}}\frac{2c^3\sqrt{n}}{3} + o(\sqrt{n}).\end{aligned}$$ Now the analog of  (with $r=2$) is $$\conexp{(\beta^n_m)^2}{\infor{t^n_{(2m-2)k_n}}} = \myp{f'\myp{\frac{(2m-2)k_n}{n}}\sigma^2_{f\myp{\frac{(2m-2)k_n}{n}}}\frac{2c}{3} + \frac{2\sigma^2_U}{c}}+o_p(1).$$ 4. Finally, Riemann integrability yields the analog of : $$\begin{aligned} \Pbv(Y,2)_n\Pconverge \int_{0}^{1}\myp{f'(s)\sigma^2_{f(s)}\frac{2c}{3} + \frac{2\sigma^2_U}{c}} \diff s = \int_{0}^{1}\myp{\frac{2c}{3}\sigma^2_t + \frac{2\sigma^2_U}{c}}\diff t.\end{aligned}$$ The last equality is due to the change of variable $f(s) = t$. Proof of Theorem \[thm:CLT\] {#appendix:thm_CLT} ============================ We will first prove three lemmas. Then Theorem \[thm:CLT\] follows as a consequence. 
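The constant $2c/3$ arising in the variance computation for the pre-averaged Brownian increments can be checked numerically from first principles. The sketch below (the helper name is ours, not part of the paper) evaluates the exact variance of $n^{1/4}\preavg{W}$ under regular sampling $t^n_i = i/n$ (so $f' \equiv 1$), using only $\operatorname{Cov}(W_s, W_t) = s \wedge t$:

```python
import math

def preavg_bm_variance(n, c):
    """Exact variance of n^{1/4} * (k_n+1)^{-1} * sum_{i=0}^{k_n} (W_{t_{i+k_n}} - W_{t_i})
    under regular sampling t_i = i/n, with k_n = floor(c * sqrt(n))."""
    k = int(c * math.sqrt(n))

    def cov(i, j):
        # Cov of the two Brownian increments = length of the overlap
        # of the intervals [i/n, (i+k)/n] and [j/n, (j+k)/n]
        return max(0, min(i, j) + k - max(i, j)) / n

    total = sum(cov(i, j) for i in range(k + 1) for j in range(k + 1))
    return math.sqrt(n) * total / (k + 1) ** 2

var_approx = preavg_bm_variance(100_000, 0.5)
```

For $c = 0.5$ the limit $2c/3$ is $1/3$, and already at $n = 10^5$ the exact value is roughly $0.332$, within one percent of the limit.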
\[lemma:1st\_approxi\] We have that $$\begin{aligned} \label{eq:CLT1stapproxi} &\conexp{\myp{\beta^n_m}^2}{\infor{\frac{m-1}{M_n}}} = \myp{\frac{2c}{3}\sigma^2_{\frac{m-1}{M_n}}+\frac{2}{c}\sigma^2_U} + o_p(n^{-1/4}).\end{aligned}$$ Let $r_n$ satisfy $$\label{eq:r_n_Asy_cond} r_n \asymp n^{\vartheta}, \quad \frac{1}{4v}<\vartheta<\frac{1}{4}.$$ To simplify notation, we let $s^n_m:=(2m-2)k_n+r_n$, and we recall our earlier notation used in the proof of Theorem \[thm:consistency\]: $$\begin{aligned} \overline{\beta}^{n}_{m-1,r_n} & = \frac{n^{1/4}}{k_n+1}\myp{\sum_{i=(2m-2)k_n}^{(2m-2)k_n+r_n}\sigma_{\frac{m-1}{M_n}}\myp{W^n_{i+k_n} - W^n_{i}} +\myp{U^n_{i+k_n} -U^n_{i} }}\\ &=: n^{1/4}\myp{\sigma_{\frac{m-1}{M_n}} \overline{W}^{n}_{m-1,r_n} + \overline{U}^{n}_{m-1,r_n} };\\ \overline{\beta}^{n}_{r_n,m} & = \frac{n^{1/4}}{k_n+1}\myp{\sum_{i=(2m-2)k_n+r_n+1}^{(2m-1)k_n}\sigma_{\frac{m-1}{M_n}}\myp{W^n_{i+k_n} - W^n_{i}} +\myp{U^n_{i+k_n} -U^n_{i} }}\\ &=: n^{1/4}\myp{\sigma_{\frac{m-1}{M_n}} \overline{W}^{n}_{r_n,m} + \overline{U}^{n}_{r_n,m} },\end{aligned}$$ where $\overline{\beta}^{n}_{m-1,r_n}+\overline{\beta}^{n}_{r_n,m}=\beta^n_m $. The proof consists of three steps: 1. 
We start by showing that $$\begin{aligned} \label{eq:small_BLk_n=o_p(n^-1/4)} \conexp{\myp{\beta^n_m}^2}{\infor{\frac{m-1}{M_n}}}-\conexp{\myp{\overline{\beta}^{n}_{r_n,m}}^2}{\infor{\frac{m-1}{M_n}}} = o_p(n^{-1/4}).\end{aligned}$$ To prove , we first prove that $$\label{eq:CLT_small_blk2_op(n^-1/4)} \conexp{\myp{\overline{\beta}^{n}_{m-1,r_n}}^2}{\infor{\frac{m-1}{M_n}}}=o_p(n^{-1/4}).$$ For this purpose, we show the following for any $k\leq i< j$: $$\label{eq:conditional_cov_decay_exp} \expect{\abs{\conexp{U^n_{i} U^n_{j} }{\infor{\frac{k}{n}}}}}\leq C\myp{j-i}^{-v/2}.$$ To see this, we apply JS-Lemma to obtain that $$c_{ij}:=\expect{\myp{\conexp{U^n_j}{\infor{\frac{i}{n}}}}^2}\leq C\myp{j-i}^{-v}.$$ Then, $$\begin{aligned} \expect{\abs{\conexp{U^n_{i}U^n_{j}}{\infor{\frac{k}{n}}}}}& \leq \sqrt{C\myp{j-i}^{-v}} \expect{\abs{\conexp{U^n_i\frac{\conexp{U^n_j}{\infor{\frac{i}{n}}}}{\sqrt{c_{ij}}} }{\infor{\frac{k}{n}}}}}.\end{aligned}$$ Now applying the Cauchy-Schwarz inequality and using the fact that the variance of noise is bounded, we obtain . From  and some simple algebra we find that $$\conexp{\myp{\sum_{i=(2m-2)k_n}^{s^n_m} \sigma_{\frac{m-1}{M_n}} \myp{W^n_{i+k_n} - W^n_{i} } }^2}{\infor{\frac{m-1}{M_n}}}$$ is asymptotically much smaller than $$\begin{aligned} \conexp{\myp{\sum_{i=(2m-2)k_n}^{s^n_m} \myp{U^n_{i+k_n} - U^n_{i} } }^2}{\infor{\frac{m-1}{M_n}}} =O_p(r_n)=o_p(n^{1/4}), \label{eq:CLT_small_blk2_noise_op(n^-1/4)}\end{aligned}$$ whence  holds. Next, we prove that $$\label{eq:CLT_cross_product_big&small_blk} \conexp{\myp{\overline{\beta}^{n}_{r_n,m}}\myp{\overline{\beta}^{n}_{m-1,r_n}}}{\infor{\frac{m-1}{M_n}}}=o_p(n^{-1/4}).$$ (Note that the left-hand side of is equal to the left-hand side of plus twice the left-hand side of ). 
To show that $$\begin{aligned} \frac{n^{1/2}}{(k_n+1)^2} \conexp{ \myp{\sum_{i=(2m-2)k_n}^{s^n_m}U^n_{i+k_n} -U^n_{i} } \myp{\sum^{(2m-1)k_n}_{i=s^n_m+1}U^n_{i+k_n} -U^n_{i}}}{\infor{\frac{m-1}{M_n}}}=o_p(n^{-1/4}),\end{aligned}$$ we first evaluate $$\begin{aligned} & \frac{n^{1/2}}{(k_n+1)^2}\abs{\conexp{\myp{\sum_{i=(2m-2)k_n}^{s^n_m}U^n_{i+k_n} }\myp{\sum^{(2m-1)k_n}_{j=s^n_m+1}U^n_{j+k_n} } }{\infor{\frac{m-1}{M_n}}}}\\ \leq&\frac{n^{1/2}}{(k_n+1)^2}\sum_{i=(2m-2)k_n}^{s^n_m}\sum^{(2m-1)k_n}_{j=s^n_m+1}\abs{\conexp{U^n_{i+k_n}U^n_{j+k_n}}{\infor{\frac{m-1}{M_n}}}}.\end{aligned}$$ Now applying  and using the fact that $v>4$, we have $$\begin{aligned} \sum_{i=(2m-2)k_n}^{s^n_m}\sum^{(2m-1)k_n}_{j=s^n_m+1}\expect{\abs{\conexp{U^n_{i+k_n}U^n_{j+k_n}}{\infor{\frac{m-1}{M_n}}}}}&\overset{\eqref{eq:conditional_cov_decay_exp}}{\leq } \sum_{i=(2m-2)k_n}^{s^n_m}\sum^{(2m-1)k_n}_{j=s^n_m+1}C (j-i)^{-v/2}\\ &\leq C\sum_{\ell = 1}^{r_n}\ell^{1-\frac{v}{2}}\leq C.\end{aligned}$$ Similarly, we can prove that the other three cross products have the same order. It is also easy to verify that $$\frac{\sqrt{n}}{(k_n+1)^2}\expect{\sum_{i=(2m-2)k_n}^{s^n_m}(W^n_{i+k_n}-W^n_{i}) \sum^{(2m-1)k_n}_{j=s^n_m+1}(W^n_{j+k_n}-W^n_{j})} = O(r_n/\sqrt{n}).$$ Now  is proved and consequently  follows from  and . 2. 
Next, we prove that $$\begin{aligned} \label{eq:Diff_cond_uncond_o_p(n^-1/4)} \conexp{\myp{\overline{\beta}^n_{r_n,m}}^2 }{\infor{\frac{m-1}{M_n}}} - \conexp{\myp{\overline{\beta}^n_{r_n,m}}^2}{\sigma_{\frac{m-1}{M_n}}} = o_p(n^{-1/4}).\end{aligned}$$ For this purpose, we note that $$\begin{aligned} &\frac{(k_n+1)^2}{\sqrt{n}} \abs{\conexp{\myp{\overline{\beta}^n_{r_n,m}}^2 }{\infor{\frac{m-1}{M_n}}} - \conexp{\myp{\overline{\beta}^n_{r_n,m}}^2}{\sigma_{\frac{m-1}{M_n}}}} \\ =&\abs{\myp{\conexp{\myp{\sum_{i=s^n_m+1}^{(2m-1)k_n} \myp{U^n_{i+k_n} -U^n_{i} }}^2}{\infor{\frac{m-1}{M_n}}}-\expect{\myp{\sum_{i=s^n_m+1}^{(2m-1)k_n} \myp{U^n_{i+k_n} -U^n_{i} }}^2}}}.\end{aligned}$$ Applying again the JS-Lemma, we find that $$\conexp{\myp{\overline{\beta}^n_{r_n,m}}^2 }{\infor{\frac{m-1}{M_n}}} - \conexp{\myp{\overline{\beta}^n_{r_n,m}}^2}{\sigma_{\frac{m-1}{M_n}}} =O_p(r_n^{-v}),$$ whence  follows from . 3. Finally, we show that $$\begin{aligned} \label{eq:Diff_uncond_o_p(n^-1/4)} \conexp{\myp{\overline{\beta}_{r_n,m}}^2}{\sigma_{\frac{m-1}{M_n}}} = \myp{\frac{2c}{3}\sigma^2_{\frac{m-1}{M_n}}+\frac{2}{c}\sigma^2_U} + o_p(n^{-1/4}).\end{aligned}$$ This follows from the following equalities, which are straightforward: $$\begin{aligned} \conexp{\myp{\frac{n^{1/4}}{k_n+1}\myp{\sum_{i=s^n_m+1}^{(2m-1)k_n}\sigma_{\frac{m-1}{M_n}}\myp{W^n_{i+k_n} - W^n_{i}}}}^2}{\sigma_{\frac{m-1}{M_n}}}= \frac{2c}{3}\sigma^2_{\frac{m-1}{M_n}}+o_p(n^{-1/4}),\end{aligned}$$ $$\begin{aligned} \conexp{\myp{\frac{n^{1/4}}{k_n+1}\myp{\sum_{i=s^n_m+1}^{(2m-1)k_n}\myp{U^n_{i+k_n} - U^n_{i}}}}^2}{\sigma_{\frac{m-1}{M_n}}}= \frac{2\sigma ^2_U}{c}+o_p(n^{-1/4}).\end{aligned}$$ Now  follows from ,  and , and the proof is complete. 
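The polynomial decay $c_{ij} \leq C(j-i)^{-v}$ furnished by the JS-Lemma in the proof above holds with room to spare for concrete mixing noise. For a stationary AR(1) process $U_j = \rho U_{j-1} + \varepsilon_j$ one has $\conexp{U_j}{\infor{i/n}} = \rho^{j-i} U_i$, so $c_{ij}$ decays geometrically and is eventually far below any fixed polynomial rate. A small sketch (the helper name and the constant $C$ below are ours, for illustration only):

```python
def ar1_cij(rho, lag, var_eps=1.0):
    # c_{ij} = E[(E[U_j | F_i])^2] with j - i = lag: conditioning gives
    # E[U_j | F_i] = rho^lag * U_i, and Var(U) = var_eps / (1 - rho^2)
    # for the stationary AR(1) process U_j = rho * U_{j-1} + eps_j.
    var_u = var_eps / (1.0 - rho * rho)
    return rho ** (2 * lag) * var_u

# geometric decay satisfies the polynomial bound C * lag^{-v} with v = 5
v, C = 5.0, 300.0
bounds = [ar1_cij(0.7, L) <= C * L ** (-v) for L in range(1, 201)]
```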
Let $$\begin{aligned} L_n & :=n^{-1/4} \sum_{m=1}^{M_n}\myp{\myp{\beta^n_m}^2 -\conexp{\myp{\beta^n_m}^2}{\infor{\frac{m-1}{M_n}}}}.\end{aligned}$$ Then, we have the following stable convergence in law: $$\begin{aligned} \label{eq:CLT_stable_convergence} L_n\LsConverge \sqrt{\frac{1}{c}}\int_0^1 \myp{\frac{2c}{3}\sigma_s^2+\frac{2\sigma^2_U}{c}}\diff W_s',\end{aligned}$$ where $W'$ is a standard Wiener process independent of $\mathcal{F}$. Let $\theta^n_m := n^{-1/4}\myp{\myp{{\beta^n_m}}^2 -\myp{\frac{2c}{3}\sigma^2_{\frac{m-1}{M_n}}+\frac{2}{c}\sigma^2_U} }.$ Then, $$\begin{aligned} L_n = \sum_{m=1}^{M_n}\theta_m + o_p(1),\end{aligned}$$ by Lemma \[lemma:1st\_approxi\]. We also have $$\begin{aligned} \label{eq:CLT_sum=P=>0} \sum_{m=1}^{M_n}\conexp{\theta^n_m}{\infor{\frac{m-1}{M_n}}}\Pconverge 0,\end{aligned}$$ again by Lemma \[lemma:1st\_approxi\] and $$\begin{aligned} \sum_{m=1}^{M_n}\conexp{\myp{\theta^n_m}^2}{\infor{\frac{m-1}{M_n}}} =& \frac{1}{2cM_n}\sum_{m=1}^{M_n} \conexp{\myp{{\beta^n_m}}^4}{\infor{\frac{m-1}{M_n}}}+\frac{1}{2cM_n}\sum_{m=1}^{M_n} \myp{\frac{2c}{3}\sigma^2_{\frac{m-1}{M_n}} +\frac{2\sigma^2_U}{c}}^2 \\ & - \frac{1}{cM_n}\sum_{m=1}^{M_n}\conexp{\myp{{\beta^n_m}}^2}{\infor{\frac{m-1}{M_n}}}\myp{\frac{2c}{3}\sigma^2_{\frac{m-1}{M_n}} +\frac{2\sigma^2_U}{c}}.\end{aligned}$$ Now it follows from  and a Riemann approximation that $$\begin{aligned} \label{eq:CLT_sum=P=>Ct} \sum_{m=1}^{M_n}\conexp{\myp{\theta^n_m}^2}{\infor{\frac{m-1}{M_n}}}\Pconverge \frac{1}{c}\int_{0}^{1}\myp{\frac{2c}{3}\sigma^2_u+\frac{2\sigma^2_U}{c}}^2\diff u.\end{aligned}$$ Next, denote $\widebar{\Delta^n_m V} = V^n_{(2m-1)k_n} - V^n_{{2(m-1)k_n}}$, for any process $V$. 
We will show that $$\sum_{m=1}^{M_n}\conexp{\theta^n_m\widebar{\Delta^n_m N}}{\mathcal{F}^n_{2(m-1)k_n} }\Pconverge 0, \label{eq:CLT_MGL_ortho}$$ for any bounded martingale $N$ defined on the same probability space, where $\mathcal{F}^n_i = \mathcal{F}_{i/n}$ whence $\mathcal{F}^n_{2(m-1)k_n} = \infor{\frac{m-1}{M_n}}$. To complete the proof, it is convenient to specify the respective probability spaces as follows. (We can always extend the probability space — whether the noise process and the efficient price process are defined on the same probability space or not — see e.g., the detailed arguments in [@jacod2013StatisticalPropertyMMN].) The efficient price process lives on $(\Omega', \mathcal{F}', (\mathcal{F}'_t)_{t\in\reals},\mathbb{P}')$. The noise process $(U_i)_{i\in\mathbb{N}}$ is defined on $(\Omega^{''}, \mathcal{F}^{''}, (\mathcal{F}^{''}_i)_{i\in\mathbb{N}},\mathbb{P}^{''})$, where the filtration is defined by $\mathcal{F}^{''}_i = \sigma\myp{U_j,j\leq i, j\in\mathbb{N}}$ and $\mathcal{F}^{''}=\bigvee_{i\in\mathbb{N}}\mathcal{F}^{''}_i$. Let $$\Omega=\Omega'\times\Omega^{''},\quad \mathcal{F} = \mathcal{F}'\otimes\mathcal{F}^{''},\quad \mathbb{P}\myp{\diff \omega',\diff\omega^{''}} = \mathbb{P}'\myp{\diff \omega'}\mathbb{P}^{''}\myp{\diff \omega^{''}}.$$ For a realization of observation times $(t^n_i)_{0\leq i\leq n}$, we introduce $\mathcal{F}^n_i = \mathcal{F}'_{t^n_i}\otimes\mathcal{F}^{''}_i$. According to [@jacod2009pre-averaging-2] and the proof of Theorem IX 7.28 of [@jacod1987limit] it suffices to consider martingales in $\mathcal{N}^0$ or $\mathcal{N}^1$, where $\mathcal{N}^0$ is the set of all bounded martingales on $(\Omega',\mathcal{F}',\mathbb{P}')$, orthogonal to $W$, and $\mathcal{N}^1$ is the set of all martingales having a limit $N_\infty = f(Y_{t_1},\ldots,Y_{t_q})$, where $f$ is any bounded Borel function on $\reals^q$, $t_1<\ldots<t_q$ and $q\geq 1$. 
First, let $N\in\mathcal{N}^0$ and let $\mathcal{\widetilde{F}}'_t=\bigcap_{s>t}\mathcal{F}'_s\otimes\mathcal{F}''$. Then, for any $t>\frac{m-1}{M_n}$, $\overline{\theta}^n_m(t):=\conexp{\theta_m^n}{\mathcal{\widetilde{F}'}_t}$, conditional on $\sigma_{\frac{m-1}{M_n}}$, is a martingale with respect to the filtration generated by $\{W_t-W_{\frac{m-1}{M_n}}|t>\frac{m-1}{M_n}\}$. By the martingale representation theorem, we have $\overline{\theta}^n_m(t)=\overline{\theta}^n_m(\frac{m-1}{M_n}) + \int_{\frac{m-1}{M_n}}^{t}\gamma_u\diff W_u$ for some predictable process $\gamma$. Now it follows from the orthogonality of $W,N$ and the martingale property of $N$ that $$\begin{aligned} \conexp{\theta^n_m\widebar{\Delta^n_mN}}{\widetilde{\mathcal{F}}'_{\frac{m-1}{M_n}}} = \conexp{\myp{\theta^n_m - \overline{\theta}^n_m\myp{\frac{m-1}{M_n}}}\widebar{\Delta^n_mN} + \overline{\theta}^n_m\myp{\frac{m-1}{M_n}}\widebar{\Delta^n_mN}}{\widetilde{\mathcal{F}}'_{\frac{m-1}{M_n}}}=0,\end{aligned}$$ which leads to $$\begin{aligned} \conexp{\theta^n_m\widebar{\Delta^n_mN}}{ {\mathcal{F}}^n_{2(m-1)k_n}}= 0,\end{aligned}$$ since $\mathcal{F}_t\subset\widetilde{\mathcal{F}}'_t$. Next, assume that $N\in\mathcal{N}^1$. It can be shown (see [@jacod2009pre-averaging-2]) that there exists some $\hat{f}_t$ such that for $t\in[t_l,t_{l+1})$, $N_t = \hat{f}_t(Y_{t_0}, Y_{t_1},\ldots,Y_{t_{l}})$ with $t_0=0,t_{q+1}=\infty$, and such that it is measurable in $(Y_{t_1},\ldots,Y_{t_l})$. Hence, $\widebar{\Delta^n_m N}=0$ if the corresponding interval does not cover any of the points $t_1,\ldots,t_{q+1}$. But such intervals (to compute $\widebar{\Delta^n_m N}$) that contain any of $t_1,\ldots,t_{q+1}$ are at most finite in number. 
Furthermore, by the boundedness of $N$ and the conditional Cauchy-Schwarz inequality, we have the following: $$\begin{aligned} \conexp{\abs{\theta^n_m\widebar{\Delta^n_m N}}}{\mathcal{F}^n_{2(m-1)k_n}}\leq \sqrt{\conexp{\myp{\theta^n_m}^2}{\mathcal{F}^n_{2(m-1)k_n}}}\sqrt{\conexp{\myp{\widebar{\Delta^n_m N}}^2}{\mathcal{F}^n_{2(m-1)k_n}}}=O_p(n^{-1/4}).\end{aligned}$$ Now  follows since there are at most finitely many such intervals. The following is also trivial: $$\begin{aligned} \conexp{\theta^n_m\widebar{\Delta^n_m W}}{\mathcal{F}^n_{2(m-1)k_n}} = 0, \label{eq:CLT_W_ortho}\end{aligned}$$ since $\theta^n_m$ is an even functional of $U$ and $W$ and $(U,W)$ are distributed symmetrically. From , we know that $(\theta^n_m)^2\idfun{\abs{\theta^n_m}>\varepsilon}=o_p(n^{-1/2})$ for any $\varepsilon>0$. We then have $$\begin{aligned} \label{eq:CLT_tight} \sum_{m=1}^{M_n}\conexp{(\theta^n_m)^2\idfun{\abs{\theta^n_m}>\varepsilon}}{ \mathcal{F}^n_{2(m-1)k_n} }\Pconverge0.\end{aligned}$$ Now the proof is complete in view of , , ,  and , and Theorem IX 7.28 of [@jacod1987limit]. We have that $$\label{eq:third_approxi} \sum_{m=1}^{M_n}\myp{\preavg{Y}}^2- \frac{1}{\sqrt{n}}\sum_{m=1}^{M_n}\myp{\beta^n_m}^2 = o_p(n^{-1/4}).$$ Denote $$\widetilde{Y}^{n}_m = \sigma_{\frac{m-1}{M_n}}\preavg{W}+\preavg{U}.$$ Then, $$\begin{aligned} \expect{\abs{\sum_{m=1}^{M_n}\myp{\preavg{Y}}^2- \frac{1}{\sqrt{n}}\sum_{m=1}^{M_n}\myp{\beta^n_m}^2}}\leq \sum_{m=1}^{M_n}\sqrt{\expect{\myp{\preavg{Y}-\widetilde{Y}^n_m}^2}}\sqrt{\expect{\myp{\preavg{Y}+\widetilde{Y}^n_m}^2}}.\end{aligned}$$ Since $\sqrt{\expect{\myp{\preavg{Y}+\widetilde{Y}^n_m}^2}}=O(n^{-1/4})$, the result follows if $$\begin{aligned} \sum_{m=1}^{M_n}\sqrt{\expect{\myp{\preavg{Y}-\widetilde{Y}^n_m}^2}}\rightarrow 0.\end{aligned}$$ But this follows directly from Lemma 7.8 in [@barndorff2006central]. \[Proof of Theorem \[thm:CLT\]\] Now the proof of Theorem \[thm:CLT\] is complete in view of  and , and our consistency result in . 
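The consistency statement underlying these proofs, $\frac{1}{M_n}\sum_{m}(\beta^n_m)^2\Pconverge\int_0^1\myp{\frac{2c}{3}\sigma^2_t+\frac{2\sigma^2_U}{c}}\diff t$, can be illustrated with a minimal Monte Carlo sketch. Everything below is our own simplification, not the authors' code: constant $\sigma$, i.i.d. Gaussian noise, regular sampling, and non-overlapping blocks of length $2k_n$:

```python
import math, random

def preavg_stat(n, c, sigma, sigma_u, seed=1):
    """(1/M_n) * sum_m (n^{1/4} * pre-averaged Y over block m)^2 for
    Y_i = sigma * W_{i/n} + U_i, with k_n = floor(c * sqrt(n)) and
    non-overlapping blocks [2(m-1)k_n, 2m k_n]."""
    rng = random.Random(seed)
    k = int(c * math.sqrt(n))
    dt = 1.0 / n
    w, y = 0.0, []
    for _ in range(n + 1):
        y.append(sigma * w + rng.gauss(0.0, sigma_u))  # observed price
        w += rng.gauss(0.0, math.sqrt(dt))             # efficient Brownian part
    acc, M = 0.0, n // (2 * k)
    for m in range(M):
        a = 2 * m * k
        bar = sum(y[i + k] - y[i] for i in range(a, a + k + 1)) / (k + 1)
        acc += (n ** 0.25 * bar) ** 2
    return acc / M

est = preavg_stat(160_000, c=0.5, sigma=1.0, sigma_u=0.1)
# limit: (2c/3)*sigma^2 + 2*sigma_u^2/c = 1/3 + 0.04
```

With these parameters the estimate should land close to the limiting value $1/3 + 0.04 \approx 0.373$, up to Monte Carlo error of order $M_n^{-1/2}$.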
Simulation Study under Stochastic Volatility {#sec:SVsimu} ============================================ In this section, we provide additional simulation results in the presence of stochastic volatility. We simulate the microstructure noise process employing various combinations of dependence structure and sampling frequency. We assume that the efficient log-price is generated by the following dynamics: $$\diff X_t = -\delta (X_t-\mu_1)\diff t + \sigma_t\diff W_t, \qquad \diff \sigma^2_t = \kappa\myp{ \mu_2- \sigma^2_t}\diff t + \gamma \sigma_{t}\diff B_t, $$ where $B$ is a standard Brownian motion and its quadratic covariation with the standard Brownian motion $W$ is $\varrho t$. We set the parameters as follows: $\delta = 0.5$, $\mu_1 = 1.6,\kappa =5/252,\mu_ 2 = 0.04/252 ,\gamma =0.05/252$, and $\varrho = -0.5$. We employ the same noise process as in . We set $\expect{V^2} = 1.9\times 10^{-7}$, and $\expect{\epsilon^2} = 1.3\times 10^{-7}$. Note that these parameters are slightly different from those in Section \[sec:simulation\], which were based on [@Ait-Sahalia2011DependentNoise]. They are chosen to mimic the results of our empirical studies. Figure \[fig:SVsimu\] presents the estimates of the second moments of noise. Clearly, the bias correction can be important, potentially yielding significantly improved results. Turning to the estimation of the integrated volatility using $\widehat{\rm IV}_{\rm step1}$, $\widehat{\rm IV}_{n}$, $\widehat{\rm IV}_{\rm step2}$ and $\widehat{\rm IV}_{\rm step3}$, we observe from Table \[tab:SVsimu\] similar results under stochastic volatility as in our previous simulation studies that assumed deterministic volatility: the two-step estimators of the integrated volatility have much smaller bias and only slightly larger standard deviations when noise is dependent. One more iteration of bias corrections further improves the performance when noise is serially correlated. 
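The stochastic-volatility dynamics above can be generated by a standard Euler scheme with correlated Brownian increments. The sketch below is our own discretization (not the authors' simulation code), with the parameter values stated in the text; the squared volatility is kept nonnegative by reflection:

```python
import math, random

def simulate_sv(n, delta=0.5, mu1=1.6, kappa=5/252, mu2=0.04/252,
                gamma=0.05/252, rho=-0.5, seed=0):
    """Euler scheme on [0,1] for
    dX = -delta (X - mu1) dt + sigma_t dW,
    d(sigma^2) = kappa (mu2 - sigma^2) dt + gamma sigma dB,
    with quadratic covariation d<W, B>_t = rho dt."""
    rng = random.Random(seed)
    dt, sdt = 1.0 / n, math.sqrt(1.0 / n)
    x, v = mu1, mu2                 # start sigma^2 at its mean-reversion level
    path = [x]
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        dw = sdt * z1
        db = sdt * (rho * z1 + math.sqrt(1 - rho * rho) * z2)  # Corr(dW, dB) = rho
        x += -delta * (x - mu1) * dt + math.sqrt(v) * dw
        v = abs(v + kappa * (mu2 - v) * dt + gamma * math.sqrt(v) * db)
        path.append(x)
    return path

path = simulate_sv(10_000)
```

As a sanity check, the realized variance of the simulated efficient price should be close to $\mu_2$, since $\sigma^2_t$ barely moves away from its mean-reversion level over $[0,1]$ with this small vol-of-vol.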
They also deliver reliable estimates when noise turns out to be independent.

| $\rho,\Delta_n$ | $\rho=0.7$, $\Delta_n=0.2$ sec | $\rho=0$, $\Delta_n=1$ sec | $\rho=-0.7$, $\Delta_n=0.4$ sec |
|---|---|---|---|
| $\widehat{\rm IV}_{\rm step1}-\int_{0}^{1}\sigma^2_t\diff t$ | 5.02e-5 (1.10e-5) | 4.33e-7 (1.32e-5) | -1.50e-5 (9.97e-6) |
| $\widehat{\rm IV}_{n}-\int_{0}^{1}\sigma^2_t\diff t$ | -1.64e-5 (1.09e-5) | -7.82e-5 (1.18e-5) | -3.17e-5 (9.77e-6) |
| $\widehat{\rm IV}_{\rm step2}-\int_{0}^{1}\sigma^2_t\diff t$ | 4.32e-6 (1.20e-5) | 9.94e-7 (1.79e-5) | -3.15e-6 (1.17e-5) |
| $\widehat{\rm IV}_{\rm step3}-\int_{0}^{1}\sigma^2_t\diff t$ | -2.32e-7 (1.21e-5) | 1.27e-6 (2.06e-5) | -8.05e-7 (1.21e-5) |

: Estimation of the integrated volatility in the presence of stochastic volatility and under various combinations of noise dependence structure and sampling frequency. We report the means of the bias of the four integrated volatility estimators: $\widehat{\rm IV}_{\rm step1}-\int_{0}^{1}\sigma^2_t\diff t, \widehat{\rm IV}_{n}-\int_{0}^{1}\sigma^2_t\diff t$, $\widehat{\rm IV}_{\rm step2}-\int_{0}^{1}\sigma^2_t\diff t$ and $\widehat{\rm IV}_{\rm step3}-\int_{0}^{1}\sigma^2_t\diff t$, based on $1\mathord{,}000$ simulations with standard deviations between parentheses. From the left to the right, the three combinations of $\rho,\Delta_n$ mimic transaction time sampling, regular time sampling (at 1 sec scale), and tick time sampling. The tuning parameters are set as follows: $j_n=20$, $i_n=10$ and $c=0.2$.[]{data-label="tab:SVsimu"}

Empirical Study of Transaction Data for General Electric {#sec:GE_Empirical} ======================================================== We collect $2\mathord{,}721\mathord{,}475$ transaction prices of General Electric (GE) over the month January 2011. On average there are 5.8 observations per second. 
In contrast to the analysis of Citigroup transaction prices in Sections \[subsec:estimate\_2nd\_moments\_noise\_Citi\] and \[subsec:IV\_Citi\], bias correction plays a very pronounced role here. Despite the high data frequency, the finite sample bias can be very significant if the underlying noise-to-signal ratio is small (recall Remark \[rmk:why\_correct\_bias\]). This is indeed the case as Figure \[fig:GE\_C\_obv\_n2s\] reveals: compared with Citigroup, the data frequency of the General Electric sample is typically lower but the noise-to-signal ratio is also (much) smaller. While the data frequency is immediately available, the noise-to-signal ratio is latent. Therefore, one should always be wary of relying solely on asymptotic theory in practice. The top panel of Figure \[fig:GE\_autocorr\_IVs\] shows that both the realized volatility (RV) and local averaging (LA) estimators indicate that the noise is strongly autocorrelated, while the bias corrected realized volatility (BCRV) estimator reveals that the noise is only weakly dependent. Such a pattern also appears in our simulation study, where we have seen that it is the finite sample bias that induces this discrepancy. The bottom panel of Figure \[fig:GE\_autocorr\_IVs\] plots two estimators of the integrated volatility, $\widehat{\rm IV}_{n}$ and $\widehat{\rm IV}_{\rm step2}$, to illustrate that the finite sample bias correction is particularly essential. If one were to rely solely on asymptotic theory, one would end up with much lower estimates and narrow confidence intervals that may well exclude the true values! [^1]: Corresponding author. University of Amsterdam, Amsterdam School of Economics, PO Box 15867, 1001 NJ Amsterdam, The Netherlands. Email: <Z.Merrick.Li@gmail.com>. Phone: +31 (0)20 5254252. [^2]: University of Amsterdam, Amsterdam School of Economics, PO Box 15867, 1001 NJ Amsterdam, The Netherlands. Email: <R.J.A.Laeven@uva.nl>. Phone: +31 (0)20 5254219. 
[^3]: University of Amsterdam, Amsterdam School of Economics, PO Box 15867, 1001 NJ Amsterdam, The Netherlands. Email: <M.H.Vellekoop@uva.nl>. Phone: +31 (0)20 5254210. [^4]: In this paper, “price” always refers to the “logarithmic price”. [^5]: Indeed, while high-frequency data in principle facilitate the asymptotic and empirical analysis of volatility estimators, the pronounced presence of microstructure noise at high frequency subverts the desirable properties of traditional estimators such as realized volatility. [^6]: The ratio of the variance of noise and the integrated volatility. [^7]: Da and Xiu maintain a website to provide up-to-date daily annualized volatility estimates for all S&P 1500 index constituents, see <http://dachxiu.chicagobooth.edu/#risklab>. [^8]: In their empirical studies, [@da2017moving] only consider tick time sampling. [^9]: The *mixing coefficients* constitute a sequence satisfying $$\abs{\prob{A\cap B}-\prob{A}\prob{B}}\leq \alpha_h,$$ for all $A\in\sigma\myp{U_0,\dots,U_k},B\in\sigma\myp{U_{k+h},U_{k+h+1},\dots}$, where $\sigma(A)$ is the $\sigma$-algebra generated by $A$. We refer to [@bradley2007strongMixing] or Chapter VIII of [@jacod1987limit] for further details on and properties of mixing sequences. [^10]: Under this sampling scheme, $Y^n_i$ (resp. $X^n_i,U^n_i$) is the observed log-price (resp. efficient log-price, microstructure noise) associated with the $i$-th trade. The observation times $(t_{i}^{n})_{0\leq i\leq n}$ can, in general, be deterministic or random, and regular or irregular. [^11]: Under this sampling scheme, $Y^n_i$ (resp. $X^n_i,U^n_i$) is the observed log-price (resp. efficient log-price, microstructure noise) at regular time $i\Delta_n$, with $\Delta_n = 1/n$ in the main text. [^12]: Tick time sampling removes all zero returns; see [@Ait-Sahalia2011DependentNoise] and [@griffin2008sampling]. Hence, $Y^n_i$ is by definition different from $Y^n_{i-1}$ and $Y^n_{i+1}$ under this sampling scheme. 
[^13]: This applies to the local averaging estimators developed in [@jacod2013StatisticalPropertyMMN] as well; see Footnote \[fn:lafs\] for further details. [^14]: \[fn:lafs\]The finite sample bias corrected local averaging estimators of the noise covariances are given by $$\widehat{R}(j)_n = \frac{1}{n}U((0,j))_n - \frac{K_n}{n}\myp{\frac{4}{3}\hat{\sigma}^2},$$ where $U((0,j))_n/n$ is the local averaging estimator of the $j$-th covariance without bias correction and $\hat{\sigma}^2$ is an estimator of the integrated volatility; see [@jacod2013StatisticalPropertyMMN] for more details. While [@jacod2013StatisticalPropertyMMN] provide a finite sample bias correction when developing their local averaging estimators of noise covariances, they don’t consider the feedback between, and unified treatment of, asymptotic and finite sample biases, which is a key interest in this paper. [^15]: The numbers are multiplied by $10^5$. [^16]: We restrict attention to the lags up to $j=15$. The logarithmic autocorrelations at higher lags are very volatile since the autocorrelations are close to zero. [^17]: The autocorrelation decay rate would be slower without unified treatment of the bias corrections, which may explain the polynomial dependence in noise found in [@jacod2013StatisticalPropertyMMN] and questioned by these authors themselves. [^18]: For a tractable analysis, one may consider AR(1) noise processes. Let $\rho\in(0,1)$ be the absolute value of the AR(1) coefficient. When the noise is positively autocorrelated, the asymptotic bias $\sigma^2_U$ corrected by $\widehat{\rm IV}_{\rm step1}$ and $\widehat{\rm IV}_{\rm step2}$ is $(1-\rho)\var{U}$ and $\frac{1+\rho}{1-\rho}\var{U}$, respectively; when the noise is negatively autocorrelated, it is $(1+\rho)\var{U}$ and $\frac{1-\rho}{1+\rho}\var{U}$. Consider $\rho = 0.8$. 
Then, $(1-\rho)\var{U} = 0.2\var{U}$ and $\frac{1+\rho}{1-\rho}\var{U} = 9\var{U}$ while $(1+\rho)\var{U} = 1.8\var{U}$ and $\frac{1-\rho}{1+\rho}\var{U} = \frac{1}{9}\var{U}$. Therefore, the difference in the asymptotic bias is smaller when the noise is negatively autocorrelated; consequently, the integrated volatility estimates by $\widehat{\rm IV}_{\rm step1}$ and $\widehat{\rm IV}_{\rm step2}$ are close. See also Tables \[tab:IV\_est\_delta=1s\] and \[tab:IV\_est\_delta=0.1s\] in our simulation study. [^19]: It is the probability that a buy (sell) order follows another sell (buy) order. [^20]: To obtain the Citigroup tick time sample and the 1-second regular time sample, we delete roughly 70% and 90% of the original transaction data, respectively.
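The AR(1) bias arithmetic in the footnote above can be verified mechanically. For illustration we identify (this identification is ours) $(1\mp\rho)\var{U}$ with $\var{U}-\gamma(1)$ and $\frac{1\pm\rho}{1\mp\rho}\var{U}$ with the long-run variance $\sum_{j\in\mathbb{Z}}\gamma(j)$, where $\gamma(j)=\phi^{|j|}\var{U}$ is the autocovariance of an AR(1) process with coefficient $\phi=\pm\rho$:

```python
def ar1_gamma(phi, j, var_u=1.0):
    # autocovariance of a stationary AR(1) process with coefficient phi
    return (phi ** abs(j)) * var_u

def first_lag_correction(phi):
    # Var(U) - gamma(1)
    return ar1_gamma(phi, 0) - ar1_gamma(phi, 1)

def long_run_variance(phi, terms=10_000):
    # truncated sum_{j in Z} gamma(j)
    return ar1_gamma(phi, 0) + 2.0 * sum(ar1_gamma(phi, j) for j in range(1, terms))

# rho = 0.8: positive autocorrelation is phi = +0.8, negative is phi = -0.8
pos = (first_lag_correction(0.8), long_run_variance(0.8))    # (1-rho)Var U and (1+rho)/(1-rho) Var U
neg = (first_lag_correction(-0.8), long_run_variance(-0.8))  # (1+rho)Var U and (1-rho)/(1+rho) Var U
```

This reproduces the four values quoted in the footnote: $0.2\var{U}$, $9\var{U}$, $1.8\var{U}$ and $\frac{1}{9}\var{U}$.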
--- abstract: 'In this article, we give, under the Riemann hypothesis, an upper bound for the exponential moments of the imaginary part of the logarithm of the Riemann zeta function on the critical line. Our result, which gives information on the fluctuations of the distribution of the zeros of $\zeta$, has the same accuracy as the result obtained by Soundararajan in [@Sound] for the moments of $|\zeta|$.' author: - 'Joseph <span style="font-variant:small-caps;">Najnudel</span> [^1]' title: Exponential moments of the argument of the Riemann zeta function on the critical line --- Introduction ============ The behavior of the Riemann zeta function on the critical line has been intensively studied, in particular in relation with the Riemann hypothesis. A natural question concerns the order of magnitude of the moments of $\zeta$: $$\mu_k(T) := \frac{1}{T} \int_0^T |\zeta (1/2 + it)|^{2k} dt = \mathbb{E} [ |\zeta (1/2 + iUT)|^{2k}],$$ where $k$ is a positive real number and $U$ is a uniform random variable in $[0,1]$. It is believed that the order of magnitude of $\mu_k(T)$ is $(\log T)^{k^2}$ for fixed $k$ and $T$ tending to infinity. More precisely, it is conjectured that there exists $C_k > 0$ such that $$\mu_k(T) \sim_{T \rightarrow \infty} C_k (\log T)^{k^2}.$$ An explicit expression of $C_k$ has been predicted by Keating and Snaith [@bib:KSn], using an expected analogy between $\zeta$ and the characteristic polynomials of random unitary matrices. The conjecture has only been proven for $k =1$ by Hardy and Littlewood and for $k = 2$ by Ingham (see Chapter VII of [@Tit]). The weaker conjecture $\mu_k(T) = T^{o(1)}$ for fixed $k > 0$ and $T \rightarrow \infty$ is equivalent to the Lindelöf hypothesis which states that $|\zeta(1/2 + it) | \leq t^{o(1)}$ when $t$ goes to infinity. The Lindelöf hypothesis is a still open conjecture which can be deduced from the Riemann hypothesis. 
Under the Riemann hypothesis, it is known that $(\log T)^{k^2}$ is the right order of magnitude for $\mu_k(T)$. In [@Sound], Soundararajan proves that $$\mu_k(T) = (\log T)^{k^2 + o(1)}$$ for fixed $k > 0$ and $T$ tending to infinity. In [@Harper], Harper improves this result by showing that $$\mu_k(T) \ll_k (\log T)^{k^2},$$ for all $k > 0$ and $T$ large enough, the notation $A \ll_x B$ meaning that there exists $C > 0$ depending only on $x$ such that $|A| \leq C B$. On the other hand, the lower bound $$\mu_k(T) \gg_k (\log T)^{k^2}$$ has been proven by Ramachandra [@Ra78; @Ra95] and Heath-Brown [@HB81], assuming the Riemann hypothesis, and, for $k \geq 1$, by Radziwiłł and Soundararajan [@RS13], unconditionally. The moment $\mu_k(T)$ can be written as follows: $$\mu_k(T) = \mathbb{E} [ \exp ( 2k \Re \log \zeta(1/2 + iUT))].$$ Here $\log \zeta$ denotes the unique determination of the logarithm which is well-defined and continuous everywhere except to the left of the zeros and the pole of $\zeta$, and which is real on the interval $(1, \infty)$. It is now natural to also look at similar moments written in terms of the imaginary part of $\log \zeta$: $$\nu_k (T) = \mathbb{E} [ \exp ( 2k \Im \log \zeta(1/2 + iUT))].$$ Note that $\Im \log \zeta$ is directly related to the fluctuations of the distribution of the zeros of $\zeta$ with respect to their “expected distribution”: we have $$N(t) = \frac{t}{2 \pi} \log \frac{t}{2 \pi e} + \frac{1}{\pi} \Im \log \zeta(1/2 + it) + \mathcal{O}(1),$$ where $N(t)$ is the number of zeros of $\zeta$ with imaginary part between $0$ and $t$. In the present article, we prove, conditionally on the Riemann hypothesis, an upper bound on $\nu_k(T)$ with the same accuracy as the upper bound on $\mu_k(T)$ obtained by Soundararajan in [@Sound]. 
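The relation between $\Im \log \zeta$ and the zero-counting function can be made concrete numerically. Writing $S(t) := \frac{1}{\pi}\Im \log \zeta(1/2+it)$, one has the exact relation $N(t) = \frac{\theta(t)}{\pi} + 1 + S(t)$ (see e.g. Chapter 9 of [@Tit]), where $\theta$ is the Riemann–Siegel theta function, so $S(t)$ can be read off from a zero count. A sketch using the `mpmath` library (assuming it is available; `nzeros` and `siegeltheta` are built-in `mpmath` routines):

```python
from mpmath import mp, siegeltheta, nzeros

mp.dps = 15

def S(t):
    """S(t) = (1/pi) * Im log zeta(1/2 + it), computed from the exact
    counting relation N(t) = theta(t)/pi + 1 + S(t)."""
    return nzeros(t) - siegeltheta(t) / mp.pi - 1

n100 = int(nzeros(100))            # classical value: 29 zeros with 0 < Im(rho) <= 100
s_vals = [S(t) for t in (30, 50, 100)]
```

At these heights $S(t)$ stays well inside $(-1, 1)$, consistent with the $\mathcal{O}(1)$ fluctuation term in the counting formula above.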
The general strategy is similar, by integrating estimates on the tail of the distribution of $\Im \log \zeta$, obtained by using bounds on moments of sums on primes coming from the logarithm of the Euler product of $\zeta$. The main difference with the paper by Soundararajan [@Sound] is that we do not have an upper bound of $\Im \log \zeta$ which is similar to the upper bound of $\log |\zeta|$ given in his Proposition. On the other hand, from the link between $\Im \log \zeta$ and the distribution of the zeros of $\zeta$, we can deduce that $\Im \log \zeta(1/2 + it)$ cannot decrease too fast when $t$ increases. We intensively use this fact in order to estimate $\Im \log \zeta$ in terms of sums on primes. The precise statement of our main result is the following: Under the Riemann hypothesis, for all $k \in \mathbb{R}$, $\varepsilon> 0$, $$\mathbb{E} [\exp(2 k \Im \log \zeta(1/2 + iTU))] \ll_{k, \varepsilon} (\log T)^{k^2 + \varepsilon},$$ where $U$ is a uniform variable on $[0,1]$. The proof of this result is divided into two main parts. In the first part, we bound the tail of the distribution of $\Im \log \zeta(1/2 + iTU)$ in terms of the tail of an averaged version of this random variable. In the second part, we show that this averaged version is close to a sum on primes, whose tail is estimated from bounds on its moments. Combining this estimate with the results of the first part gives a proof of the main theorem. Comparison of $\Im \log \zeta$ with an averaged version ======================================================= The imaginary part of $\log \zeta$ varies in a smooth and well-controlled way on the critical line when there are no zeros, and has positive jumps of $\pi$ when there is a zero. We deduce that it cannot decrease too fast. 
More precisely, the following holds: For $2 \leq t_1 \leq t_2$, we have $$\Im \log \zeta(1/2 + it_2) \geq \Im \log \zeta(1/2 + i t_1) - (t_2 - t_1) \log t_2 + \mathcal{O}(1).$$ It is an easy consequence of Theorem 9.3 of Titchmarsh [@Tit]: for example, see Proposition 4.1 of [@bib:Naj] for details. We will now define some averaging of $\Im \log \zeta$ around points of the critical line. From the previous proposition, if $\Im \log \zeta$ is large at some point $1/2 + it_0$ of the critical line, then it remains large on some segment $[1/2 + it_0,1/2 + i(t_0 + \delta)]$, which tends to also give a large value of an average of $\Im \log \zeta(1/2 + it)$ for $t$ around $t_0$. Our precise way of averaging is the following. We fix a function $\varphi$ satisfying the following properties: $\varphi$ is real, nonnegative, even, dominated by any negative power at infinity, and its Fourier transform is compactly supported, takes values in $[0,1]$, is even and equal to $1$ at zero. The Fourier transform is normalized as follows: $$\widehat{\varphi}(\lambda) = \int_{-\infty}^{\infty} \varphi(x) e^{-i \lambda x} dx.$$ For $H> 0$ we define an averaged version of $\Im \log \zeta$ as follows: $$I(\tau,H) := \int_{-\infty}^{\infty} \Im \log \zeta (1/2 + i (\tau + t H^{-1}))\varphi (t) dt.$$ The following result holds: Let $\varepsilon \in (0,1/2)$. Then, there exist $a, K > 1$, depending only on $\varphi$ and $\varepsilon$, satisfying the following property. 
For $T > 100$, $\tau \in [\sqrt{T},T]$, $ K < V < \log T$, $H := K V^{-1} \log T $, the inequalities $$\Im \log \zeta(1/2 + i \tau) \geq V,$$ $$\Im \log \zeta(1/2 + i (\tau - e^r H^{-1})) \geq - 2 V$$ for all integers $r$ between $0$ and $\log \log T$, together imply $$I (\tau + a H^{-1}, H) \geq (1-\varepsilon) V.$$ Similarly, the inequalities $$\Im \log \zeta(1/2 + i \tau) \leq -V,$$ $$\Im \log \zeta(1/2 + i (\tau + e^r H^{-1})) \leq 2 V$$ for all integers $r$ between $0$ and $\log \log T$, together imply $$I (\tau - a H^{-1}, H) \leq -(1-\varepsilon) V.$$ First, we observe that $H > 1$ since $K > 1$ and $V < \log T$. We deduce that for all the values of $s$ such that $\Im \log \zeta (1/2 + is)$ is explicitly written in the proposition, $ \sqrt{T} - \log T \leq s \leq T + \log T$: in particular $s > 2$ since $T > 100$, and we can apply the previous proposition to compare these values of $\Im \log \zeta$. If $\Im \log \zeta (1/2 + i \tau) \geq V$, then for all $t \geq 0$, $$\begin{aligned} & \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \geq V - t H^{-1} \log (\tau + tH^{-1}) + \mathcal{O}(1) \\ & \geq V - t K^{-1} V (\log T)^{-1} \log (\tau + t H^{-1}) + \mathcal{O}(1).\end{aligned}$$ Since $H > 1$ and $\tau \leq T$, $$\Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \geq V ( 1 - t K^{-1} (\log T)^{-1} \log(T + t)) + \mathcal{O}(1).$$ We have $$\log (T + t) = \log T + \log ( 1 + t/T) \leq \log T + \log (1+t),$$ and then, integrating against $\varphi(t-a)$ from $0$ to $\infty$, $$\begin{aligned} & \int_0^{\infty} \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \\ & \geq V \left[ \int_{0}^{\infty} \varphi(t-a) dt - K^{-1} \int_{0}^{\infty} t \varphi(t-a) dt \right] + \mathcal{O}_{a,\varphi}(1),\end{aligned}$$ since $$V K^{-1} (\log T)^{-1} \int_0^{\infty} t\varphi(t-a) \log (1 + t) dt = \mathcal{O}_{a, \varphi}(1),$$ because $V K^{-1} (\log T)^{-1} < 1$ and $\varphi$ is integrable against $t \log (1+t)$ (it is rapidly decaying at infinity). 
We deduce, since $ V K^{-1} > 1$ and then $\mathcal{O}_{a, \varphi}(1) = \mathcal{O}_{a, \varphi}( V K^{-1})$, and since the integral of $\varphi$ on $\mathbb{R}$ is $\widehat{\varphi}(0) = 1$, $$\int_0^{\infty} \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \geq V ( 1 - o_{\varphi, a \rightarrow \infty}(1) - \mathcal{O}_{a, \varphi}( K^{-1})).$$ If we take $a$ large enough depending on $\varphi$ and $\varepsilon$, and then $K$ large enough depending on $\varphi$, $a$ and $\varepsilon$, we deduce $$\int_0^{\infty} \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \geq V (1 - \varepsilon/2).$$ Now, let us consider the same integral between $-\infty$ and $0$. For an integer $0 \leq r \leq \log \log T$ and $u \in [0, e^{r} - e^{r-1}]$ if $r \geq 1$, $u \in [0,1]$ if $r = 0$, $$\Im \log \zeta (1/2 + i (\tau - (e^r - u) H^{-1})) - \Im \log \zeta (1/2 + i (\tau - e^r H^{-1})) \geq - u H^{-1} \log T,$$ and then $$\Im \log \zeta (1/2 + i (\tau - (e^r - u) H^{-1})) \geq - 2 V - u K^{-1} V.$$ Since $u \leq e^r$, $K > 1$, and $ 1 + e^r - u \geq e^{r-1}$, $$\Im \log \zeta (1/2 + i (\tau - (e^r - u) H^{-1})) \geq - 2 V - e (1 + e^r - u) V,$$ and then, for all $t \in [- e^{\lfloor \log \log T \rfloor}, 0]$, $$\Im \log \zeta (1/2 + i (\tau + t H^{-1})) \geq - \mathcal{O}( V (1+|t|)).$$ If $K$ is large enough depending on $\varphi$, this estimate remains true for $t < - e^{\lfloor \log \log T \rfloor}$, since by Titchmarsh [@Tit], Theorem 9.4, and by the fact that $|t| \geq e^{\log \log T -1 } \gg \log T$, $$\begin{aligned} |\Im \log \zeta (1/2 + i (\tau + t H^{-1})) | & \ll \log ( 2 + \tau + |t| H^{-1}) \ll \log (T + |t|) \\ & = \log T + \log ( 1 + |t|/T) \leq \log T + |t|/T \ll |t|,\end{aligned}$$ whereas $V > K > 1$.
Integrating against $\varphi(t-a)$, we get $$\int_{-\infty}^0 \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \geq - \mathcal{O} ( I V ),$$ where $$I = \int_{-\infty}^0 (1 + |t|) \varphi(t-a) dt \leq \int_{-\infty}^{-a} (1 + |s|) \varphi(s) ds \underset{a \rightarrow \infty}{\longrightarrow} 0.$$ Hence, for $a$ large enough depending on $\varepsilon$ and $\varphi$, $$\int_{-\infty}^0 \Im \log \zeta (1/2 + i (\tau + t H^{-1}) ) \varphi(t-a) dt \geq - \varepsilon V/2.$$ Adding this integral to the same integral on $[0, \infty)$, we deduce the first part of the proposition. The second part is proven in the same way, up to minor modifications which are left to the reader. In the previous proposition, if we take $\tau$ random and uniformly distributed in $[0,T]$, we deduce the following result: For $\varepsilon \in (0,1/2)$, $K$ as in the previous proposition (depending on $\varepsilon$ and $\varphi$), $T > 100$, $K < V < \log T$, $H = KV^{-1} \log T$, $U$ uniformly distributed on $[0,1]$, $$\begin{aligned} \mathbb{P} [ | & \Im \log \zeta (1/2 + i UT)| \geq V] \leq \mathbb{P} [ |I(UT, H)| \geq (1-\varepsilon) V ] \\ & + ( 1 + \log \log T) \mathbb{P} [ |\Im \log \zeta (1/2 + i UT)| \geq 2 V] + \mathcal{O}_{\varepsilon,\varphi} (T^{-1/2}).\end{aligned}$$ We have immediately, by taking $\tau = UT$, $$\begin{aligned} \mathbb{P} [ |\Im \log \zeta (1/2 + i UT)| & \geq V] \leq \mathbb{P} [ I(UT + a H^{-1}, H) \geq (1-\varepsilon) V ] \\ & + \mathbb{P} [ I(UT - a H^{-1}, H) \leq -(1-\varepsilon) V ] \\ & + \sum_{r = 0}^{ \lfloor \log \log T \rfloor} \mathbb{P} [ \Im \log \zeta (1/2 + i (UT + e^r H^{-1}) ) \geq 2 V] \\ & + \sum_{r = 0}^{ \lfloor \log \log T \rfloor} \mathbb{P} [ \Im \log \zeta (1/2 + i (UT - e^r H^{-1}) ) \leq - 2 V] \\ & + \mathcal{O}(T^{-1/2}),\end{aligned}$$ the last term being used to discard the event $UT \leq \sqrt{T}$. 
Now, for $u \in \mathbb{R}$, the symmetric difference between the uniform laws on $[0,T]$ and $[u H^{-1}, T + u H^{-1}]$ is dominated by a measure of total mass $\mathcal{O} ( |u| H^{-1} T^{-1})$. Hence, in the previous expression, we can replace $UT + u H^{-1}$ by $UT$ in each event, at the cost of an error term $\mathcal{O} ( |u| H^{-1} T^{-1}) = \mathcal{O}(|u| T^{-1})$. The values of $|u|$ which are involved are less than $\max(a, \log T)$, and there are $\mathcal{O}(\log \log T)$ of them. Hence, we get an error term $\mathcal{O} (T^{-1}(a + \log T) \log \log T) = \mathcal{O}_{\varepsilon, \varphi} (T^{-1/2})$ since $a$ depends only on $\varepsilon$ and $\varphi$. We can now iterate the proposition, applying it to $V, 2V, 4V, \ldots$ After a few manipulations, it gives the following: \[sumV\] For $\varepsilon \in (0,1/2)$, $K$ as in the previous proposition (depending on $\varepsilon$ and $\varphi$), $T > 100$, $K< V < \log T$, $H = KV^{-1} \log T$, $U$ uniformly distributed on $[0,1]$, $$\begin{aligned} \mathbb{P} [ | & \Im \log \zeta (1/2 + i UT)| \geq V] \\ & \leq \sum_{r = 0}^{p-1} (1 +\log \log T)^r \mathbb{P} [ |I(UT, 2^{-r} H)| \geq (1-\varepsilon) 2^r V ] + \mathcal{O}_{\varepsilon,\varphi} (T^{-1/3}),\end{aligned}$$ where $p$ is the first integer such that $2^p V \geq \log T$. We iterate the formula until the value of $V$ reaches $\log T$. The number of steps is dominated by $\log \log T - \log K \leq \log \log T$. Each step gives an error term of at most $\mathcal{O}_{\varepsilon, \varphi} ( ( 1+ \log \log T)^{\mathcal{O} (\log \log T)} T^{-1/2})$. Hence, the total error is $ \mathcal{O}_{\varepsilon,\varphi} (T^{-1/3})$.
We deduce $$\begin{aligned} \mathbb{P} [ |\Im & \log \zeta (1/2 + i UT)| \geq V] \leq \sum_{r = 0}^{p-1} (1 +\log \log T)^r \mathbb{P} [ |I(UT, 2^{-r} H)| \geq (1-\varepsilon) 2^r V ] \\ & + (1 +\log \log T)^p \, \mathbb{P} [ | \Im \log \zeta (1/2 + i UT)| \geq 2^p V] + \mathcal{O}_{\varepsilon, \varphi} (T^{-1/3}).\end{aligned}$$ Under the Riemann hypothesis, Theorem 14.13 of Titchmarsh [@Tit] shows that $| \Im \log \zeta (1/2 + i UT)| \ll (\log \log T)^{-1} \log T$. Hence the probability that $|\Im \log \zeta|$ is larger than $2^p V \geq \log T$ is equal to zero if $T$ is large enough, which can be assumed (for small $T$, we can absorb everything in the error term).

Tail distribution of the averaged version of $\Im \log \zeta$ and proof of the main theorem
===========================================================================================

The averaged version $I(\tau, H)$ of $\Im \log \zeta$ can be written in terms of sums indexed by primes: \[Naj32\] Let us assume the Riemann hypothesis. There exists $\alpha > 0$, depending only on the function $\varphi$, such that for all $\tau \in \mathbb{R}$, $0 < H < \alpha \log (2+|\tau|)$, $$I (\tau, H) = \Im \sum_{p \in \mathcal{P}} p^{-1/2 - i \tau} \widehat{\varphi} (H^{-1} \log p) + \frac{1}{2} \Im \sum_{p \in \mathcal{P}} p^{-1- 2 i \tau} \widehat{\varphi} (2 H^{-1} \log p) + \mathcal{O}_{\varphi} (1),$$ $\mathcal{P}$ being the set of primes. This result is an immediate consequence of Proposition 3.2 of [@bib:Naj], which is itself deduced from Lemma 5 of Tsang [@Tsang86]. We will now estimate the tail distribution of $I(UT, H)$, where $U$ is uniformly distributed on $[0,1]$, by using upper bounds on the moments of the sums over primes involved in the previous proposition.
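Everything is thus reduced to Dirichlet polynomials over the primes. As a quick numerical sanity check (our own illustration, with a handful of small primes and test coefficients $a(p) = 1$, not taken from the paper), the second moment of such a prime sum over a long interval is governed by its diagonal term $T \sum_p |a(p)|^2/p$:

```python
import numpy as np

# Numerical sanity check of the k = 1 case of the mean value estimate
#     \int_T^{2T} | sum_{p <= x} a(p) p^{-1/2 - it} |^2 dt  <<  T sum_p |a(p)|^2 / p.
# For k = 1 the diagonal terms contribute exactly T * sum |a(p)|^2 / p,
# and the off-diagonal terms oscillate and average out over [T, 2T].
primes = np.array([2, 3, 5, 7, 11, 13], dtype=float)
a = np.ones_like(primes)                 # test coefficients a(p) = 1 (our choice)
T = 1e5
t = np.linspace(T, 2 * T, 400_001)
S = np.zeros_like(t, dtype=complex)
for p, ap in zip(primes, a):
    S += ap * p ** -0.5 * np.exp(-1j * t * np.log(p))
lhs = np.mean(np.abs(S) ** 2) * T        # approximates the integral over [T, 2T]
rhs = T * np.sum(a ** 2 / primes)        # diagonal main term
ratio = lhs / rhs
```

With these parameters the ratio is close to $1$; the moment bounds used below control exactly this kind of average, for higher powers $2k$ as well.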
We use Lemma 3 of Soundararajan [@Sound], which is presented as a standard mean value estimate by the author (a similar result can be found in Lemma 3.3 of Tsang’s thesis [@Tsang84]), and which can be stated as follows: \[Lemma3Sound\] For $T$ large enough and $2 \leq x \leq T$, for $k$ a natural number such that $x^k \leq T/\log T$, and for any complex numbers $a(p)$ indexed by the primes, we have $$\int_{T}^{2T} \left| \sum_{p \leq x, p \in \mathcal{P}} \frac{a(p)}{p^{1/2 + it}} \right|^{2k} dt \ll T k! \left( \sum_{p \leq x, p \in \mathcal{P}} \frac{|a(p)|^2}{p} \right)^k.$$ From Propositions \[Naj32\] and \[Lemma3Sound\], we can deduce the following tail estimate: For $T$ large enough, $\varepsilon \in (0,1/10)$, $K > 1$ depending only on $\varepsilon$ and $\varphi$, $0 < V < \log T$, $H = K V^{-1} \log T$, we have $$\mathbb{P} [ |I(UT, H)| \geq (1-\varepsilon) V] \ll_{\varepsilon, \varphi} e^{- (1- 3 \varepsilon) V^2 / \log \log T} + e^{- b_{\varepsilon, \varphi} V \log V} + T^{-1/2},$$ where $b_{\varepsilon, \varphi} > 0$ depends only on $\varepsilon$ and $\varphi$. We can assume $V \geq 10 \sqrt{ \log \log T}$ and $V$, $T$ larger than any given quantity depending only on $\varepsilon$ and $\varphi$: otherwise the upper bound is trivial. In particular, if we choose $\alpha$ as in Proposition \[Naj32\], it depends only on $\varphi$, and then, for $K > 1$ depending only on $\varepsilon$ and $\varphi$, we can assume $V > 2K/\alpha$. From this inequality, we deduce $H < ( \alpha/2) \log T$, which gives $H < \alpha \log (2 + UT)$ with probability $ 1 - \mathcal{O} (T^{-1/2})$. 
Under this condition, Proposition \[Naj32\] applies to $\tau = UT$, and we deduce: $$I(UT, H) = \Im S_1 + \Im S_2 + \Im S_3 + \mathcal{O}_{\varphi}(1),$$ where $$S_1 := \sum_{p \in \mathcal{P}, p \leq T^{1/( V \log \log T)} } p^{-1/2 - i UT} \widehat{\varphi} (H^{-1} \log p),$$ $$S_2 := \sum_{p \in \mathcal{P}, p > T^{1/( V \log \log T)} } p^{-1/2 - i UT} \widehat{\varphi} (H^{-1} \log p),$$ $$S_3 := \frac{1}{2} \sum_{p \in \mathcal{P} } p^{-1 - 2 i UT} \widehat{\varphi} (2 H^{-1} \log p).$$ Since we can assume $V$ large depending on $\varepsilon$ and $\varphi$, we can suppose that the term $ \mathcal{O}_{\varphi}(1)$ is smaller than $\varepsilon V/20$: $$|I(UT, H)| \leq |S_1 |+ |S_2| + |S_3 |+ \varepsilon V/20,$$ with probability $ 1 - \mathcal{O} (T^{-1/2})$. We deduce $$\mathbb{P} [ |I(UT, H)| \geq (1-\varepsilon) V] \leq \mathbb{P} [ |S_1| \geq (1- 1.1 \varepsilon) V]$$ $$+ \mathbb{P} [| S_2 |\geq \varepsilon V/100] + \mathbb{P} [| S_3 | \geq \varepsilon V/100] + \mathcal{O}(T^{-1/2}).$$ We estimate the tail of these sums by applying Markov's inequality to their moments of order $2k$, $k$ being a suitably chosen integer. Since $\widehat{\varphi}$ is compactly supported, all the sums have finitely many non-zero terms, and we can apply, for $T$ large, the lemma to all values of $k$ up to $\gg_{\varphi} \log (T /\log T) / H$, i.e. $\gg_\varphi V/K$, and then $\gg_{\varepsilon, \varphi} V$. For the sum $S_1$, we can even go up to $(V \log \log T)/2$. For $S_2$, we can take $k = \lfloor c_{\varepsilon, \varphi} V \rfloor$, for a suitable $c_{\varepsilon, \varphi} > 0$ depending only on $\varepsilon$ and $\varphi$. The moment of order $2k$ is $$\ll k! \left(\sum_{T^{1/(V \log \log T)} < p \leq e^{\mathcal{O}_\varphi (H)}} p^{-1} \right)^k \leq k^k ( \log \log \log T + \mathcal{O}_{\varepsilon, \varphi}(1) )^k.$$ Hence, $$\mathbb{P} [ | S_2 |\geq \varepsilon V/100] \leq (\varepsilon V/100)^{- 2 \lfloor c_{\varepsilon, \varphi} V \rfloor} ( c_{\varepsilon, \varphi} V ( \log \log \log T + \mathcal{O}_{\varepsilon, \varphi}(1) ) )^{ \lfloor c_{\varepsilon, \varphi} V \rfloor }.$$ Since we have assumed $V \geq 10 \sqrt{\log \log T}$, we have $$(\varepsilon V/100)^{-2} c_{\varepsilon, \varphi} V ( \log \log \log T + \mathcal{O}_{\varepsilon, \varphi}(1) ) \leq V^{-0.99}$$ and $$\lfloor c_{\varepsilon, \varphi} V \rfloor \geq 0.99 \, c_{\varepsilon, \varphi} V,$$ for $T$ large enough depending on $\varepsilon$ and $\varphi$. Hence $$\mathbb{P} [ | S_2| \geq \varepsilon V/100] \leq V^{-0.98 c_{\varepsilon, \varphi} V},$$ which is acceptable. An entirely similar proof applies to $S_3$, since we even get a $2k$-th moment bounded by $k! (\mathcal{O}(1))^k$. For $S_1$, the $2k$-th moment is $$\ll k! (\log \log T + \mathcal{O}_{\varepsilon, \varphi}(1))^k$$ for $k \leq (V \log \log T)/2$. Hence, the probability that $|S_1| \geq W := (1 - 1.1 \varepsilon)V$ is $$\ll W^{-2k} k! (\log \log T + \mathcal{O}_{\varepsilon, \varphi}(1))^k.$$ We approximately optimize this expression in $k$. If $V \leq (\log \log T)^2/2$, we can take $k = \lfloor W^2/ \log \log T \rfloor$, since this choice of $k$ is smaller than $V \log \log T/2$. Notice that since $V \geq 10 \sqrt{\log \log T}$ and $\varepsilon < 1/10$, we have $W \geq 8 \sqrt{\log \log T}$ and $k$ is strictly positive. The probability that $|S_1| \geq W$ is then $$\ll [W^{-2} (k/e) (\log \log T + \mathcal{O}_{\varepsilon, \varphi}(1))]^k \sqrt{k}.$$ The quantity inside the bracket is smaller than $e^{-(1- (\varepsilon/100))}$ for $T$ large enough depending on $\varepsilon$ and $\varphi$.
Hence, in this case, the probability is $$\leq e^{- (1- (\varepsilon/100)) k} \sqrt{k} \ll_{\varepsilon} e^{- (1- (\varepsilon/50)) k} \ll e^{ - (1- (\varepsilon/50)) ( 1- 1.1 \varepsilon)^2 V^2 / \log \log T}.$$ This is acceptable. If $V > (\log \log T)^2/2$, we take $k = \lfloor V \log \log T/2 \rfloor$. We again get a probability $$\ll [W^{-2} (k/e) (\log \log T + \mathcal{O}_{\varepsilon, \varphi}(1))]^k \sqrt{k}.$$ Inside the bracket, the quantity is bounded, for $T$ large enough depending on $\varepsilon$ and $\varphi$, by $$\begin{aligned} & W^{-2} (V \log \log T/2e) (1.001 \log \log T) \leq W^{-2} V (\log \log T)^2/5.4 \\ & = V^{-1} (\log \log T)^2 (1-1.1\varepsilon)^{-2}/5.4 \leq 2 (1-1.1\varepsilon)^{-2}/5.4 \leq 1/2.\end{aligned}$$ Hence, we get a probability $$\ll 2^{-k} \sqrt{k} \ll e^{-k/2} \ll e^{- V \log \log T/4} \ll e^{-V \log V/4},$$ the last inequality coming from the fact that $V < \log T$ by assumption. This is again acceptable. We then get the following bounds for the tail of $\Im \log \zeta$, which easily imply the main theorem by integrating against $e^{2 k V}$: For all $\varepsilon \in (0,1/10)$, $V > 0$, $$\mathbb{P} [ | \Im \log \zeta(1/2 + iUT)| \geq V] \ll_{\varepsilon} e^{(\log \log \log T)^3} e^{- (1-\varepsilon) V^2/ \log \log T} + e^{- c_{\varepsilon} V \log V},$$ where $c_{\varepsilon} > 0$ depends only on $\varepsilon$. We fix a function $\varphi$ satisfying the assumptions given at the beginning: this function will be considered universal, and we will then drop all the dependencies on $\varphi$ in this proof. From Theorem 14.13 of Titchmarsh [@Tit], we can assume $V \ll (\log \log T)^{-1} \log T$ and then $V < \log T$ for $T$ large (otherwise the probability is zero). We can then also assume $T$ larger than any given quantity depending only on $\varepsilon$ (if $T$ is small, $V$ is small), and $V \geq 10 \sqrt{\log \log T}$.
Under these assumptions, we can suppose $V > K$ if $K > 0$ depends only on $\varepsilon$, which allows us to apply Proposition \[sumV\]. The error term $\mathcal{O}_{\varepsilon}(T^{-1/3})$ can be absorbed in $\mathcal{O}_{\varepsilon}( e^{-c_{\varepsilon} V \log V})$ since $V \ll (\log \log T)^{-1} \log T$. The sum in $r$ is, by the previous proposition, dominated by $$\begin{aligned} \sum_{r = 0}^{p-1} (1 + \log \log T)^{r} e^{- (1-3 \varepsilon) (2^r V)^2/ \log \log T} & + \sum_{r = 0}^{p-1} (1 + \log \log T)^{r} e^{- b_{\varepsilon} (2^r V) \log (2^r V)} \\ & + T^{-1/2} \sum_{r = 0}^{p-1} (1 + \log \log T)^{r},\end{aligned}$$ where $b_{\varepsilon} > 0$ depends only on $\varepsilon$. We can assume $V$ large, and then the exponent in the last exponential decreases by at least $b_{\varepsilon} V$ when $r$ increases by $1$, hence by more than $\log ( 2( 1+ \log \log T))$ when $T$ is large enough depending on $\varepsilon$, since $V \geq 10 \sqrt{ \log \log T}$. Hence, each term of the sum is less than half the previous one and the sum is dominated by its first term. This is absorbed in $\mathcal{O}_{\varepsilon}( e^{-c_{\varepsilon} V \log V})$. For the last sum, we observe that $2^{p-1} V < \log T$ by definition of $p$, and then (since we can assume $V > 1$), $p \ll \log \log T$, which gives a term $$\ll T^{-1/2} ( \log \log T) (1 + \log \log T)^{ \mathcal{O} (\log \log T)} \ll T^{-1/3},$$ which can again be absorbed in $\mathcal{O}_{\varepsilon}( e^{-c_{\varepsilon} V \log V})$ since we can assume $V \ll (\log \log T)^{-1} \log T$. For the first sum, we separate the terms for $r \leq 10 \log \log \log T$, and for $r > 10 \log \log \log T$.
For $T$ large, the sum of the terms for $r$ small is at most $$\begin{aligned} & \sum_{r = 0}^{\lfloor 10 \log \log \log T \rfloor} ( 1 + \log \log T)^{r} e^{-(1-3 \varepsilon) V^2/ \log \log T} \\ & \ll (1 + 10 \log \log \log T) e^{ 10 \log \log \log T \log ( 1+ \log \log T)} e^{-(1-3 \varepsilon) V^2/ \log \log T} \\ & \ll e^{(\log \log \log T)^3} e^{- (1- 3 \varepsilon) V^2/ \log \log T}.\end{aligned}$$ This term is acceptable after changing the value of $\varepsilon$. When $r > 10 \log \log \log T$, we have (for $T$ large, $V \geq 10 \sqrt{\log \log T}$ and $\varepsilon < 1/10$) $$(1-3\varepsilon) (2^r V)^2 / \log \log T \geq 2^{2r} \geq e^{ 20 (\log 2) \log \log \log T} \geq (\log \log T)^{13}.$$ The exponent is multiplied by $4$ when $r$ increases by $1$, hence the term is divided by more than $e^{3 (\log \log T)^{13}}$, whereas the prefactor is multiplied by $1 + \log \log T$. Hence, the term $r = \lfloor 10 \log \log \log T \rfloor + 1$ dominates the sum of all the terms $r > 10 \log \log \log T$, and its order of magnitude is acceptable.

A. J. Harper. Sharp conditional bounds for moments of the Riemann zeta function. Preprint, 2013.
D. R. Heath-Brown. Fractional moments of the Riemann zeta function. J. London Math. Soc. (2), 24(1):65–78, 1981.
J. P. Keating and N. C. Snaith. Random matrix theory and $\zeta(1/2+it)$. Comm. Math. Phys., 214:57–89, 2000.
J. Najnudel. On the extreme values of the Riemann zeta function on random intervals of the critical line. Preprint, 2017.
M. Radziwiłł and K. Soundararajan. Continuous lower bounds for moments of zeta and $L$-functions. Mathematika, 59(1):119–128, 2013.
K. Ramachandra. Some remarks on the mean value of the Riemann zeta function and other Dirichlet series. I. Hardy-Ramanujan J., 1:15, 1978.
K. Ramachandra. On the Mean-Value and Omega-Theorems for the Riemann Zeta-Function, volume 85 of Tata Institute of Fundamental Research Lectures on Mathematics and Physics. Published for the Tata Institute of Fundamental Research, Bombay; by Springer-Verlag, Berlin, 1995.
K. Soundararajan. Moments of the Riemann zeta function. Ann. of Math. (2), 170(2):981–993, 2009.
E. C. Titchmarsh. The Theory of the Riemann Zeta-Function. The Clarendon Press, Oxford University Press, New York, second edition, 1986. Edited and with a preface by D. R. Heath-Brown.
K.-M. Tsang. The Distribution of the Values of the Riemann Zeta-Function. ProQuest LLC, Ann Arbor, MI, 1984. Thesis (Ph.D.)–Princeton University.
K.-M. Tsang. Some $\Omega$-theorems for the Riemann zeta-function. Acta Arith., 46(4):369–395, 1986.

[^1]: `joseph.najnudel@bristol.ac.uk`
---
abstract: |
  In the domain of *Computing with words* (CW), fuzzy linguistic approaches are known to be relevant in many decision-making problems. Indeed, they allow us to model human reasoning by replacing words, assessments, preferences, choices, wishes$\ldots$ with *ad hoc* variables, such as fuzzy sets or more sophisticated variables. This paper focuses on a particular model: Herrera & Martínez’ 2-tuple linguistic model and their approach to deal with unbalanced linguistic term sets. It is interesting since the computations are accomplished without loss of information while the results of the decision-making processes always refer to the initial linguistic term set. They propose a fuzzy partition which distributes data on the axis by using linguistic hierarchies to manage the non-uniformity. However, the required input (especially the density around the terms) taken by their fuzzy partition algorithm may be considered too demanding in a real-world application, since the density is not always easy to determine. Moreover, in some limit cases (especially when two terms are semantically very close to each other), the partition does not comply with the data themselves; it is not faithful to reality. Therefore we propose to modify the required input in order to offer a simpler and more faithful partition. We have added an extension to the package jFuzzyLogic and to the corresponding script language FCL. This extension supports both 2-tuple models: Herrera & Martínez’ and ours. In addition to the partition algorithm, we present two aggregation algorithms: the arithmetic mean and the addition. We also discuss these kinds of 2-tuple models.
author:
- 'Mohammed-Amine ABCHIR and Isis TRUCK'
bibliography:
- 'AbchirTruck.bib'
nocite: '\nocite{}'
title: 'Towards an extension of the 2-tuple linguistic model to deal with unbalanced linguistic term sets'
---

Introduction {#intro}
============

Decision making is one of the most central human activities.
The need to choose between solutions in our complex world implies setting priorities on them, considering multiple criteria such as benefits, risk, feasibility… The interest shown by scientists in Multi Criteria Decision Making (MCDM) problems, as the survey of Bana e Costa shows [@COSTA90], has led to the development of many MCDM approaches such as the Utility Theory, Bayesian Theory, Outranking Methods and the Analytic Hierarchy Process (AHP). But the main shortcoming of these approaches is that they represent the preferences of the decision maker about a real-world problem in a crisp mathematical model. As we are dealing with human reasoning and preference modeling, qualitative data and linguistic variables may be more suitable to represent linguistic preferences and their underlying aspects [@Cha10]. Martínez *et al.* have presented in [@Mar10] a wide list of applications to show the usability and the advantages that linguistic information (using various linguistic computational models) produces in decision making. The preference extraction can be done thanks to elicitation strategies performed through User Interfaces (UIs) [@Boo89] and Natural Language Processing (NLP) [@Amb97], in a stimulus-response application for instance. In the literature, many approaches make it possible to model linguistic preferences and their interpretation, such as the classical fuzzy approach of Zadeh [@Zad75]. Zadeh has introduced the notions of linguistic variable and *granule* [@Zad97] as basic concepts that underlie human cognition. In [@HACH09], the authors review computing with words in decision making and explain that a granule “which is the denotation of a word (…) is viewed as a fuzzy constraint on a variable”. Among the existing models, there is one that makes it possible to deal with granularity and with linguistic assessments in a fuzzy way with a simple and regular representation: the fuzzy linguistic 2-tuples introduced by Herrera and Martínez [@Her00a].
Moreover, this model enables the representation of unbalanced linguistic data (*i.e.* the fuzzy sets representing the terms are not symmetrically and uniformly distributed on their axis). However, in practice, the resulting fuzzy sets do not match human preferences exactly. Now we know how crucial the selection of the membership functions is to determine the validity of a CW approach [@Mar10]. That is why an intermediate representation model is needed when we are dealing with data that are “very unbalanced” on the axis. The aim of this paper is to introduce another kind of fuzzy partition for unbalanced term sets, based on the fuzzy linguistic 2-tuple model. Using the levels of linguistic hierarchies, a new algorithm is presented to improve the matching of the fuzzy partitioning. This paper is structured as follows. First, we briefly recall the fuzzy linguistic approach and the 2-tuple fuzzy linguistic representation model by Herrera & Martínez. In Section \[sec:ouralgo\] we introduce a variant of fuzzy linguistic 2-tuples and the corresponding partitioning algorithm before presenting aggregation operators (Section \[sec:aggreg\]). Then in Section \[sec:discussion\] another extension of the model and a prospective application of this new kind of 2-tuples are discussed. We finally conclude with some remarks.

The 2-tuple fuzzy linguistic representation model {#sec:stateart}
=================================================

In this section we remind readers of the fuzzy linguistic approach, the 2-tuple fuzzy linguistic representation model and some related works. We also review some studies on the use of natural language processing in human computer interfaces.

2-tuples linguistic model and fuzzy partition
---------------------------------------------

Among the various fuzzy linguistic representation models, the approach that best fits our needs is the one introduced by Herrera and Martínez in [@Her00a].
This model represents linguistic information by means of a pair $(s, \alpha)$, where $s$ is a label representing the linguistic term and $\alpha$ is the value of the symbolic translation. The membership function of $s$ is a triangular fuzzy set. Let us note that in this paper we call a linguistic *term* a word (*e.g.* tall) and a *label* a symbol on the axis (*i.e.* an $s$). The computational model developed for this representation model includes comparison, negation and aggregation operators. By default, all triangular fuzzy sets are uniformly distributed on the axis, but the targeted aspects are not usually uniform. In such cases, the representation should be enhanced with tools such as *unbalanced* linguistic term sets, which are not uniformly distributed on the axis [@Herrera08afuzzy]. To support the non-uniformity of the terms (we recall that the term set shall be unbalanced), the authors have chosen to change the scale granularity instead of modifying the shape of the fuzzy sets. The key element that manages multigranular linguistic information is the *level* of a *linguistic hierarchy*, composed of an odd number of triangular fuzzy sets of the same shape, equally distributed on the axis, as a fuzzy partition in Ruspini’s sense [@Rus69]. A linguistic hierarchy $(LH)$ is composed of several label sets of different levels (*i.e.*, with different granularities). Each level of the hierarchy is denoted $l(t,n(t))$ where $t$ is the level number and $n(t)$ the number of labels (see Figure \[fig:2tuples\]). Thus, a linguistic label set $S^{n(t)}$ belonging to a level $t$ of a linguistic hierarchy $LH$ can be denoted $S^{n(t)} = \{ s_0^{n(t)},\dots,s_{n(t)-1}^{n(t)} \}$. In Figure \[fig:2tuples\], it should be noted that $s_5^2$ (bottom, plain and dotted line) is a *bridge unbalanced label* because it is not symmetric.
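As a minimal illustration of the pair $(s, \alpha)$ itself (our own sketch, not code from [@Her00a]), the symbolic translation and its inverse can be written as follows: a value $\beta$ on a scale with labels $s_0, \dots, s_g$ is mapped to the closest label index together with the residual shift $\alpha \in [-0.5, 0.5)$.

```python
# Minimal sketch (our illustration) of the 2-tuple symbolic translation:
# Delta(beta) = (s_i, alpha) with i the closest label index and
# alpha = beta - i, and Delta^{-1}(s_i, alpha) = i + alpha (ties at .5 aside).
def to_two_tuple(beta: float):
    i = int(round(beta))        # closest label index  (Delta)
    return i, beta - i          # (label index, symbolic translation alpha)

def from_two_tuple(i: int, alpha: float) -> float:
    return i + alpha            # Delta^{-1}: back to a value on the scale

label, alpha = to_two_tuple(3.25)   # s_3 shifted by +0.25
```

The round trip is lossless, which is exactly the "computation without loss of information" property mentioned in the abstract.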
Actually each label has two sides: the upside (left side), denoted $\overline{s_i}$, and the downside (right side), denoted $\underline{s_i}$. Between two levels there are *jumps*, so we have to bridge the unbalanced term to obtain a fuzzy partition. Both sides of a bridge unbalanced label belong to two different levels of hierarchy. Linguistic hierarchies are unions of levels and satisfy the following properties [@HER01]:

- levels are ordered according to their granularity;
- the linguistic label sets have an odd number $n(t)$ of labels;
- the membership functions of the labels are all triangular;
- labels are uniformly and symmetrically distributed on $[0,1]$;
- the first level is $l(1,3)$, the second is $l(2,5)$, the third is $l(3,9)$, etc.

Using the hierarchies, Herrera and Martínez have developed an algorithm that partitions data in a convenient way. This algorithm needs two inputs: the linguistic term set $\mathcal{S}$[^1] (composed of the medium term denoted $\mathcal{S}_C$, the set of terms on its left denoted $\mathcal{S}_L$ and the set of terms on its right denoted $\mathcal{S}_R$) and the density of term distribution on each side. The density can be *middle* or *extreme* according to the user’s choice. For example, the description of $\mathcal{S} = \{A,B,C,D,E,F,G,H,I\}$ is $\{(2,extreme),1,(6,extreme)\}$ with $\mathcal{S}_L = \{A,B\}$, $\mathcal{S}_C = \{C\}$ and $\mathcal{S}_R = \{D,E,F,G,H,I\}$.

Drawbacks of the 2-tuple linguistic model fuzzy partition in our context
------------------------------------------------------------------------

First, the main problem of this algorithm is the density. Since the user is not an expert, how could he manage to provide it? He would need to understand notions of granularity and unbalanced scales. Second, it is compulsory to have an odd number of terms (*cf.* $n(t)$) in order to define a middle term (*cf.* $\mathcal{S}_C$). But it may happen that this parity constraint cannot be fulfilled.
For example, when talking about a GPS battery we can consider four levels: full, medium, low and empty. Last, the final result may be quite different from what was initially expected because only a “small unbalance” is allowed. It means that even if the *extreme* density is chosen, it does not guarantee a very fine granularity. Only two levels of density are allowed (*middle* or *extreme*), which can be a problem when considering distances such as: arrived, very close, close, out of reach. “Out of reach” needs a level of granularity quite different from the level for the terms “arrived”, “very close” and “close”. As the fuzzy partition obtained by this approach does not always fit reality, we proposed in [@MAA11] a first draft of an approach to overcome this problem. This is further described in [@MAAIT11], where we mainly focus on the industrial context (geolocation) and the underlying problems addressed by our specific constraints. The implementations and tests made for this work are based on the jFuzzyLogic library. It is the fuzzy logic package most widely used by Java developers. It implements the Fuzzy Control Language (FCL) specification (IEC 61131-7) and is available under the GNU Lesser General Public License (LGPL). Even if it is not the main point of this paper, one part of our work is to provide an interactive tool in the form of a natural language dialogue interface. This dialogue, through an elicitation strategy, helps extract human preferences. We use NLP techniques to represent the grammatical, syntactical and semantic relations between the words used during the interaction. Moreover, to be able to interpret these words, the NLP is associated with fuzzy linguistic techniques. Thus, fuzzy semantics are associated with each word supported by the interactive tool (especially adjectives such as “long”, “short”, “low”, “high”, etc.) and can be used at interpretation time.
This NLP-fuzzy linguistic association also makes it possible to assign different semantics to the same word depending on the user’s criteria (business domain, context, etc.). It then allows the words used in the dialogue interface to be unified across different use cases by simply switching between their semantics. Another interesting aspect of this NLP-fuzzy linguistic association lies in the possibility of an automatic semantic generation, in a sort of autocompletion mode. For example, in a geolocation application, if the question is “*When do you want to be notified?*”, a user’s answer can be “*I want to be notified when the GPS battery level is **low***”. Here the user says *low*, so we propose a semantic distribution of the labels of the term set according to the number of synonyms of this term. Indeed, the semantic relations between words introduced by NLP (synonyms, homonyms, opposites, etc.) can be used to highlight words semantically associated with the term *low* and then to construct a linguistic label set around it. The more related words are found for a term, the higher the density of labels around it. In comparison with the 2-tuple fuzzy linguistic model introduced by Herrera et al., this amounts to deducing the *density* (in Herrera & Martínez’ sense) according to the number of synonyms of a term. In practice, thanks to a synonym dictionary, it is possible to compute a semantic distance between the terms given by the geolocation expert. If two terms are considered synonymous, they will share the same $LH$. Moreover, a word with few (or no) synonyms will be represented in a coarse-grained hierarchy while a word with many synonyms will be represented in a fine-grained hierarchy. We can see here how relevant unbalanced linguistic label sets can be in many situations. Coupling NLP techniques with fuzzy linguistic models seems very promising.
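Before moving to our proposal, the hierarchy levels recalled above can be made concrete with a small sketch (our own illustration, assuming the level rule $n(t+1) = 2n(t) - 1$ implied by $l(1,3), l(2,5), l(3,9), \dots$). The triangular labels $s_i^{n(t)}$ of a level peak at $i/(n(t)-1)$ on $[0,1]$, and exact rational arithmetic shows that each level refines the previous one.

```python
from fractions import Fraction

# Sketch (our illustration) of the levels l(t, n(t)): n(1) = 3 and
# n(t+1) = 2 n(t) - 1, so the levels are l(1,3), l(2,5), l(3,9), ...
# level_peaks(t) returns the peak positions i/(n(t)-1) of the triangular
# labels of level t, as exact fractions.
def level_peaks(t: int):
    n = 3
    for _ in range(t - 1):
        n = 2 * n - 1              # n(t+1) = 2 n(t) - 1
    return [Fraction(i, n - 1) for i in range(n)]

l1, l2, l3 = level_peaks(1), level_peaks(2), level_peaks(3)
# every peak of a level is also a peak of the next, finer level
```

This nesting is what lets both sides of a bridge unbalanced label live in two different levels while the whole family still forms a fuzzy partition.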
Towards another kind of 2-tuples linguistic model {#sec:ouralgo} ================================================= Starting from a running example, we now present our proposal, which aims at avoiding the drawbacks mentioned above. Running example --------------- Herrera & Martínez’ methodology needs a term set $\mathcal{S}$ and an associated description with two densities. For instance, when considering the blood alcohol concentration (BAC, in percentage) in the USA, we can focus on five main values: $0\%$ means no alcohol, $.05\%$ is the legal limit for drivers under 21, $.065\%$ is an intermediate value (illegal for young drivers but legal for the others), $.08\%$ is the legal limit for drivers older than 21, and $.3\%$ is considered the BAC level at which risk of death becomes possible. In particular, the ideal partition should comply with the data and with the gaps between values (see Figure \[fig:idealPart\], which simply proposes triangular fuzzy sets without any real semantics, obtained directly from the input values). But this prevents us from using the advantages of Herrera & Martínez’ method, which are mainly to keep the original semantics of the terms, *i.e.* to keep the same terms as in the original linguistic term set. The question is how to express the results of the computations linguistically if the partition does not fulfill “good” properties such as those of the 2-tuple linguistic model?
![The ideal fuzzy partition for the BAC example.[]{data-label="fig:idealPart"}](figures/BAC_V1.jpg){width="12cm"} Extension of jFuzzyLogic and preliminary definitions {#ssec:model} ---------------------------------------------------- With Herrera & Martínez’ method, we have\ $\mathcal{S} = \{\textit{NoAlcohol},$ $\textit{YoungLegalLimit},$ $\textit{Intermediate},$ $\textit{LegalLimit},$ $\textit{RiskOfDeath}\}$ and its description is $\{(3,extreme),1,(1,extreme)\}$ with $\mathcal{S}_L = \{\textit{NoAlcohol}$, $\textit{YoungLegalLimit}$, $\textit{Intermediate}\}$, $\mathcal{S}_C = \{\textit{LegalLimit}\}$ and $\mathcal{S}_R = \{\textit{RiskOfDeath}\}$. Our jFuzzyLogic extension (to which we have added support for Herrera & Martínez’ 2-tuple linguistic model) helps model this information, and we obtain the following FCL script: `VAR_INPUT`\ ` BloodAlcoholConcentration : LING;`\ `END_VAR`\ `FUZZIFY BloodAlcoholConcentration`\ ` TERM S := ling NoAlcohol YoungLegalLimit`\ ` Intermediate | LegalLimit | RiskOfDeath,`\ ` extreme extreme;`\ `END_FUZZIFY`\ The resulting fuzzy partition is quite different from what was initially expected (see Figure \[fig:partLuis\] compared to Figure \[fig:idealPart\]; notice that the label unbalance is not really respected). We recall that each label $s_i$ has two sides. For instance, the label $s_i$ associated with *NoAlcohol* has a downside and no upside, while the term $s_j$ associated with *RiskOfDeath* has an upside and no downside. ![Fuzzy partition generated by Herrera & Martínez’ approach.[]{data-label="fig:partLuis"}](figures/BAC_V2.jpg){width="12cm"} Two problems appear: the use of densities is not always obvious for end users, and the gaps between values (especially between *LegalLimit* and *RiskOfDeath*) are not respected.
To avoid the use of the densities, which can be hard to obtain from the user (*e.g.*, see the specific geolocation industrial context explained in [@MAAIT11]), we sketched in [@MAA11] a tentative approach offering a simpler way to retrieve unbalanced linguistic terms. The aim was to accept any kind of description of the terms coming from the user. This is why we propose an extension of jFuzzyLogic to handle linguistic 2-tuples, together with an enrichment of the FCL language specification. Consequently, we suggest another way to define a `TERM` with a new type of variable called `LING` (see the example below).\ `VAR_INPUT`\ ` BloodAlcoholConcentration : LING;`\ `END_VAR`\ `FUZZIFY BloodAlcoholConcentration`\ ` TERM S := ling (NoAlcohol,0.0) (YoungLegalLimit,0.05)`\ ` (Intermediate,0.065) (LegalLimit,0.08) (RiskOfDeath,0.3);`\ `END_FUZZIFY`\ It should be noted that each linguistic value is composed of a pair $(\mathsf{s,v})$ where $\mathsf{s}$ is a linguistic term (*e.g.*, *LegalLimit*) and $\mathsf{v}$ is a number giving the position of $\mathsf{s}$ on the axis (*e.g.*, $0.08$). Several definitions can now be given. \[def:SRonde\] Let $\mathcal{S}$ be an unbalanced ordered linguistic term set and $U$ be the numerical universe onto which the terms are projected. Each linguistic value is defined by a unique pair $(\mathsf{s,v}) \in \mathcal{S} \times U$. The numerical distance between $\mathsf{s}_i$ and $\mathsf{s}_{i+1}$ is denoted by $d_i$, with $d_i=\mathsf{v}_{i+1}-\mathsf{v}_i$. Let $S=\{s_0,\ldots,s_p\}$ be an unbalanced linguistic label set and $(s_i,\alpha)$ be a linguistic 2-tuple. To support the unbalance, $S$ is extended to several balanced linguistic label sets, each one denoted $S^{n(t)}=\{s_0^{n(t)},\ldots,s_{n(t)-1}^{n(t)}\}$ (obtained from the algorithm of [@HER01]) and defined in the level $t$ of a linguistic hierarchy $LH$ with $n(t)$ labels.
There is a unique way to go from $\mathcal{S}$ (Definition \[def:SRonde\]) to $S$, according to Algorithm \[algo\_pa\]. Let $l(t,n(t))$ be a level from a linguistic hierarchy. The *grain* $g$ of $l(t,n(t))$ is defined as the distance between two consecutive 2-tuples $(s_i^{n(t)},\alpha)$, and is obtained as $g_{l(t,n(t))}=1/(n(t)-1)$. Indeed, $g$ is the distance between $(s_i^{n(t)},\alpha)$ and $(s_{i+1}^{n(t)},\alpha)$, *i.e.*, between the kernels of the associated triangular fuzzy sets, because $\alpha$ equals $0$. Since the hierarchy is normalized on $[0,1]$, this distance is easy to compute using the $\Delta^{-1}$ operator from [@HER01], where $\Delta^{-1}(s_i^{n(t)},\alpha)=\frac{i}{n(t)-1}+\alpha=\frac{i}{n(t)-1}$. As a result, $g_{l(t,n(t))}=\frac{i+1}{n(t)-1}-\frac{i}{n(t)-1} = 1/(n(t)-1)$. For instance, the grain of the second level is $g_{l(2,5)}=.25$. \[2\*grain\] The grain $g$ of a level $l(t-1,n(t-1))$ is twice the grain of the level $l(t,n(t))$: $g_{l(t-1,n(t-1))} = 2g_{l(t,n(t))}$. This comes from the following property of the linguistic hierarchies: if $l(t,n(t))$ is a level, its successor is defined as $l(t+1,2n(t)-1)$ (see [@Herrera08afuzzy]). A new partitioning {#ssec:newpart} ------------------ The aim of the partitioning is to assign a label $s_i^{n(t)}$ (indeed one or two) to each term $\mathsf{s}_k$. The selection of $s_i^{n(t)}$ depends on both the distance $d_k$ and the numerical value $\mathsf{v}_k$. We look for the nearest level — they are all known in advance, see Table 1 in [@Herrera08afuzzy] — *i.e.*, for the level whose grain is closest to $d_k$. Then the right $s_i^{n(t)}$ is chosen to match $\mathsf{v}_k$ with the best accuracy: $i$ has to minimize the quantity $|\Delta^{-1}(s_i^{n(t_k)},0)-\mathsf{v}_k|$. By default, the linguistic hierarchies are distributed on $[0,1]$, so a scaling is needed in order for them to match the universe $U$.
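To make the successor rule and Proposition \[2\*grain\] concrete, here is a small Python sketch (ours, independent of the jFuzzyLogic implementation) that lists the first levels of a linguistic hierarchy together with their grain:

```python
# Sketch (ours): the first levels l(t, n(t)) of a linguistic hierarchy,
# with n(1) = 3 and the successor rule n(t+1) = 2*n(t) - 1, and their grain.

def hierarchy_levels(depth):
    """Return a list of (t, n(t), grain) for the first `depth` levels."""
    levels, n = [], 3
    for t in range(1, depth + 1):
        levels.append((t, n, 1.0 / (n - 1)))   # grain g = 1/(n(t) - 1)
        n = 2 * n - 1                          # successor level
    return levels

for t, n, g in hierarchy_levels(5):
    print(f"l({t},{n})  grain = {g}")
# l(2,5) has grain .25, and each grain is exactly twice the next level's
```

One can read off that $g_{l(2,5)}=.25$ and that each grain is exactly twice the grain of the next level, as stated in Proposition \[2\*grain\].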
The detail of these steps is given in Algorithm \[algo\_pa\]. Note that there is *no condition* on the *parity* of the number of terms. Moreover, the function returns a set of bridging unbalanced linguistic 2-tuples whose level of granularity may differ between the upside and the downside.

**Input:** $\langle(\mathsf{s}_0,\mathsf{v}_0), \ldots, (\mathsf{s}_{p-1},\mathsf{v}_{p-1})\rangle$, $p$ pairs of $\mathcal{S} \times U$; $t, t_0, \ldots, t_{p-1}$, levels of hierarchies.

1. Scale the linguistic hierarchies on $[0,\mathsf{v}_{\textit{max}}]$, with $\mathsf{v}_{\textit{max}}$ the maximum $\mathsf{v}$ value, and precompute $\eta$ levels and their grain $g$ ($\eta \geq 6$).
2. For each $k$, set $d_k \gets \mathsf{v}_{k+1} - \mathsf{v}_{k}$ and choose the level $t_k \gets t$ whose grain is closest to $d_k$.
3. Find the index $j$ minimizing $\textit{tmp} = |\Delta^{-1}(s_i^{n(t_k)},0)-\mathsf{v}_k|$ over $i$ (initializing $\textit{tmp}$ to $\mathsf{v}_{\textit{max}}$).
4. Assign $\underline{s_k^{n(t_k)}} \gets \underline{s_{j}^{n(t_k)}}$ and $\overline{s_{k+1}^{n(t_k)}} \gets \overline{s_{j+1}^{n(t_k)}}$.
5. Depending on the level, set $\underline{\alpha_k}=\mathsf{v}_k-\Delta^{-1}(s_j^{n(t_k)},0)$ or $\overline{\alpha_{k+1}}=\mathsf{v}_{k+1}+\Delta^{-1}(s_{j+1}^{n(t_{k})},0)$.

**Output:** the set $\{ (\underline{s_{0}^{n(t_0)}}, \underline{\alpha_0}), (\overline{s_{1}^{n(t_0)}}, \overline{\alpha_1}), (\underline{s_{1}^{n(t_1)}}, \underline{\alpha_1}),\ldots,$ $(\underline{s_{p-2}^{n(t_{p-2})}}, \underline{\alpha_{p-2}}), (\overline{s_{p-1}^{n(t_{p-2})}}, \overline{\alpha_{p-1}})\}$.

Herrera & Martínez’ partitioning does not follow the user’s wishes exactly, because it transforms them into a model with many properties, such as the Ruspini conditions [@Rus69]. We instead try to match the wishes as closely as possible by adding lateral translations $\alpha$ to the labels $s_i^{n(t)}$. As a result, the previous properties may no longer hold; for instance, what we obtain is not a fuzzy partition.
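The core of the level and label selection can be sketched in a few lines of Python (function and variable names are ours; this is an illustration, not the actual jFuzzyLogic extension):

```python
# Sketch (ours) of the level/label selection at the heart of the partitioning:
# for each gap d_k pick the level whose (scaled) grain is closest to d_k,
# then the label index i that best matches v_k on the axis.

def pick_level_and_label(v_k, d_k, v_max, depth=6):
    # precompute `depth` levels l(t, n(t)) with n(1) = 3, n(t+1) = 2*n(t) - 1
    levels, n = [], 3
    for t in range(1, depth + 1):
        levels.append((t, n))
        n = 2 * n - 1
    grain = lambda m: v_max / (m - 1)   # grain of a level with m labels, scaled to [0, v_max]
    t, n = min(levels, key=lambda tn: abs(grain(tn[1]) - d_k))
    i = min(range(n), key=lambda j: abs(j * grain(n) - v_k))
    alpha = v_k - i * grain(n)          # symbolic translation, not normalized
    return t, n, i, alpha

# BAC example: YoungLegalLimit at v = .05, gap d_0 = .05, v_max = .3
print(pick_level_and_label(0.05, 0.05, 0.3))   # level l(3,9), label s_1^9, alpha = .0125
```

On the BAC example, with $\mathsf{v}_{\textit{max}}=.3$ and a gap of $.05$, this selects level $l(3,9)$ and the 2-tuple $(s_1^9,.0125)$.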
But we accept doing without these conditions, since the goal is to totally cover the universe. This is guaranteed by the *minimal covering property*. \[appartMinimale\] The 2-tuples $(s_i^{n(t)},\alpha)$ (from several levels $l(t,n(t))$) obtained from our partitioning algorithm are triangular fuzzy sets that cover the entire universe $U$. Indeed, the distance between any pair $\langle(\underline{s_k^{n(t)}},\underline{\alpha_k}), (\overline{s_{k+1}^{n(t)}},\overline{\alpha_{k+1}})\rangle$ is always strictly smaller than twice the grain of the corresponding level. By definition and construction, $d_k$ is used to choose the convenient level $t$ for this pair. We recall that when $t$ decreases, $g_{l(t,n(t))}$ increases. As a result, we have: $$\label{eq:dists} g_{l(t,n(t))} \leq d_k < g_{l(t-1,n(t-1))}$$ After having applied the steps of the assignation process, we obtain two linguistic 2-tuples $(\underline{s_k^{n(t)}}, \underline{\alpha_k})$ and $(\overline{s_{k+1}^{n(t)}}, \overline{\alpha_{k+1}})$ representing the downside and upside of labels $s_k^{n(t)}$ and $s_{k+1}^{n(t)}$ respectively. Thanks to the symbolic translations $\alpha$, the distance between the kernels of these two 2-tuples is $d_k$. Then, according to Proposition \[2\*grain\] and to Equation \[eq:dists\], we conclude that: $$\label{eq:lhProperty} d_k < 2g_{l(t,n(t))}$$ which means that, for each value in $U$, this fuzzy partition has a minimum membership value $\varepsilon$ strictly greater than 0. Denoting by $\mu_{s_i^{n(t)}}$ the membership function associated with a label $s_i^{n(t)}$, this property reads: $$\label{eq:coverage} \forall u \in U, \ \ \ \ \ \mu_{s_0^{n(t_0)}}(u) \vee \dots \vee \mu_{s_i^{n(t_i)}}(u) \vee \dots \vee \mu_{s_{p-1}^{n(t_{p-1})}}(u) \geq \varepsilon > 0$$ To illustrate this work, we return to the running example concerning the BAC.
The set of pairs $(\mathsf{s,v})$ is the following: $\{(\textit{NoAlcohol},.0)$, $(\textit{YoungLegalLimit},.05)$, $(\textit{Intermediate},.065)$, $(\textit{LegalLimit},.08)$, $(\textit{RiskOfDeath},.3)\}$. It should be noted that our algorithm requires adding another hierarchy level: $l(0,2)$. We denote by $L$ and $R$ the upside and downside of labels respectively. Table 1 shows the results, with $\alpha$ values not normalized. To normalize them, they simply have to be multiplied by $1/.3$, because $\mathsf{v}_\textit{max}=.3$. linguistic term level 2-tuple ---------------------- ------------ ------------------- *NoAlcohol\_R* $l(3,9)$ $(s_0^9,0)$ *YoungLegalLimit\_L* $l(3,9)$ $(s_1^9,.0125)$ *YoungLegalLimit\_R* $l(5,33)$ $(s_5^{33},.003)$ *Intermediate\_L* $l(5,33)$ $(s_6^{33},0)$ *Intermediate\_R* $l(4, 17)$ $(s_3^{17},0)$ *LegalLimit\_L* $l(4, 17)$ $(s_4^{17},.005)$ *LegalLimit\_R* $l(1, 3)$ $(s_1^{3},-.07)$ *RiskOfDeath\_L* $l(1, 3)$ $(s_1^3,0)$ : The 2-tuple set for the BAC example. See Figure \[fig:ourPart\] for a graphical representation of the fuzzy partition. ![Fuzzy partition generated by our algorithm for the BAC example.[]{data-label="fig:ourPart"}](figures/BAC_V3.jpg){width="12cm"} Aggregation with our 2-tuples {#sec:aggreg} ============================= Arithmetic mean {#ssec:aggreg} --------------- As our representation model is based on the 2-tuple fuzzy linguistic one, we can use the aggregation operators (weighted average, arithmetic mean, etc.) of the unbalanced linguistic computational model introduced in [@Herrera08afuzzy]. The functions $\Delta$, $\Delta^{-1}$, $\mathcal{LH}$ and $\mathcal{LH}^{-1}$ used in our aggregation are derived from the same functions in Herrera & Martínez’ computational model. In the aggregation process, we have to deal with linguistic terms $(\mathsf{s}_k, \mathsf{v}_k)$ belonging to a linguistic term set $\mathcal{S}$.
After the assignation process, these terms are associated with one or two 2-tuples $(s_i^{n(t)}, \alpha_i)$ (remember the upside and downside of a label) of a level from a linguistic hierarchy $LH$. We recall two definitions taken from [@Herrera08afuzzy]. $\mathcal{LH}^{-1}$ is the transformation function that associates with each linguistic 2-tuple expressed in $LH$ its respective unbalanced linguistic 2-tuple. Let $S= \{s_0,\ldots,s_g\}$ be a linguistic label set and $\beta \in [0,g]$ a value supporting the result of a symbolic aggregation operation. Then the linguistic 2-tuple that expresses the information equivalent to $\beta$ is obtained with the function $\Delta : [0,g] \longrightarrow S \times [-.5,.5)$, such that $$\Delta(\beta)=(s_i,\alpha), \quad \text{with } i=\mathit{round}(\beta) \text{ and } \alpha=\beta - i \in [-.5, .5),$$ where $s_i$ is the label whose index is closest to $\beta$ and $\alpha$ is the value of the symbolic translation. Thus the aggregation process (arithmetic mean) can be summarized by the three following steps: 1. Apply the aggregation operator to the $\mathsf{v}$ values of the linguistic terms. Let $\beta$ be the result of this aggregation. 2. Use the $\Delta$ function to obtain the $(s_q^r, \alpha_q)$ 2-tuple of $LH$ corresponding to $\beta$. 3. In order to express the resulting 2-tuple in the initial linguistic term set $\mathcal{S}$, use the $\mathcal{LH}^{-1}$ function as defined in [@Herrera08afuzzy] to obtain the linguistic pair $(\mathsf{s}_l, \mathsf{v}_l)$. To illustrate the aggregation process, we suppose that we want to aggregate two terms (two pairs $(\mathsf{s},\mathsf{v})$) of our running example concerning the BAC: (*YoungLegalLimit*, .05) and (*LegalLimit*, .08). In this example we use the arithmetic mean as aggregation operator.
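The numeric core of these three steps can be sketched as follows (a Python illustration with our own naming; translations are kept on the $\mathsf{v}$ axis rather than normalized):

```python
# Sketch (ours) of the Delta step on the scaled axis [0, v_max]: express an
# aggregated value beta as a 2-tuple (s_i^n, alpha) of the chosen level.

def delta(beta, n, v_max):
    grain = v_max / (n - 1)        # grain of the level with n labels on [0, v_max]
    i = round(beta / grain)        # closest label index
    alpha = beta - i * grain       # lateral translation, kept on the v axis
    return i, alpha

# arithmetic mean of v = .05 (YoungLegalLimit) and v = .08 (LegalLimit),
# expressed in the finest level involved, l(5,33), with v_max = .3
beta = (0.05 + 0.08) / 2
print(delta(beta, 33, 0.3))        # label s_7^33 with a small negative translation
```

For $\beta=.065$ this selects $s_7^{33}$ with a small negative translation, matching the worked example below up to rounding.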
Using our representation algorithm, the term (*YoungLegalLimit*, .05) is associated with $(\underline{s_{1}^9}, .0125)$ and $(\overline{s_5^{33}}, .003)$, and (*LegalLimit*, .08) is associated with $(\underline{s_{4}^{17}}, .005)$ and $(\overline{s_1^{3}}, -.07)$. First, we apply the arithmetic mean to the $\mathsf{v}$ values of the two terms. As these values are on an absolute scale, the computations are simplified. The result of the aggregation is $\beta = .065$. The second step is to represent the aggregation result $\beta$ by a linguistic label expressed in $LH$. For the representation we choose, among the levels associated with the two labels, the one with the finest grain. In our example it is $l(5,33)$ (fifth level of $LH$, with $n(t)=33$). Then we apply the $\Delta$ function to $\beta$ to obtain the result: $\Delta(.065) = (s_7^{33}, -.001)$. Finally, in order to express the above result in the initial linguistic term set $\mathcal{S}$, we apply the $\mathcal{LH}^{-1}$ function, which maps a linguistic 2-tuple in $LH$ to its corresponding linguistic term in $\mathcal{S}$. Thus, we obtain the final result $\mathcal{LH}^{-1}((s_7^{33}, -.001)) =$ (*YoungLegalLimit*, .005). Given that countries have different rules concerning the BAC for drivers, the aggregation of such linguistic information can be relevant for computing an average value of allowed and prohibited blood alcohol concentration levels over a set of countries (Europe, Africa, etc.). Addition {#ssec:autreOper} -------- As we are using an absolute scale on the axis for our linguistic terms, the approach for other operators is the same as the one described above for the arithmetic mean. We first apply the operator to the $\mathsf{v}$ values of the linguistic terms, and then we use the $\Delta$ and $\mathcal{LH}^{-1}$ functions successively to express the result in the original term set.
If, for instance, we now need to add the two following terms: (*YoungLegalLimit*, .05) and (*LegalLimit*, .08), we write $(\emph{YoungLegalLimit}, .05) \oplus (\emph{LegalLimit}, .08)$ and proceed as follows: - We add the two $\mathsf{v}$ values $.05$ and $.08$ to obtain $\beta = .13$. - We then apply the $\Delta$ function to express $\beta$ in $LH$: $\Delta(0.13) = (s_{14}^{33}, -.001)$. - Finally, we apply the $\mathcal{LH}^{-1}$ function to obtain the result expressed in the initial linguistic term set $\mathcal{S}$: $\mathcal{LH}^{-1}((s_{14}^{33}, -.001)) =$ (*LegalLimit*, .05). This $\oplus$ addition looks like a fuzzy addition operator (see *e.g.* [@Her00a]) used as a basis for many aggregation processes (combining experts’ preferences, etc.). Actually, the $\oplus$ operator can be seen as an extension (in the sense of Zadeh’s extension principle) of the addition to our 2-tuples. The same approach can be applied to other operators; this will be further explored in future work. Discussions {#sec:discussion} =========== Towards a fully linguistic model -------------------------------- When dealing with linguistic tools, the aim is to avoid asking the user for precise numbers, since they are not always able to supply them. Thus, in the pair $(\mathsf{s},\mathsf{v})$ that describes the data, it may happen that the user does not know the exact position $\mathsf{v}$. For instance, considering five grades $(A,B,C,D,E)$, the user knows that (i) $D$ and $E$ are fail grades, (ii) $A$ is the best one, (iii) $B$ is not far away, (iv) $C$ is in the middle. If we replace $\mathsf{v}$ by a linguistic term, that is, a *stretch factor*, the five pairs in the previous example could be: $(A,\textit{VeryStuck}); (B,\textit{Far}); (C,\textit{Stuck}); (D,\textit{ModeratelyStuck});$ $(E,$N/A$)$ (see Figure \[fig:StretchFactor\]). $(A,\textit{VeryStuck})$ means that $A$ is very stuck to its next label.
$(E,$N/A$)$ means that $E$ is the last label (the $\mathsf{v}$ value is not applicable). This improvement makes it possible to ask the user for: - either the pairs $(\mathsf{s},\mathsf{v})$, with $\mathsf{v}$ a linguistic term (stretch factor); - or only the labels $\mathsf{s}$, placing them on a visual scale (*i.e.*, the stretch factors are automatically computed to obtain the pairs $(\mathsf{s},\mathsf{v})$); - or the pairs $(\mathsf{s},\mathsf{v})$, with $\mathsf{v}$ a numerical value, as proposed above. It should be noted that the first case ensures that we deal with fully linguistic pairs $(\mathsf{s},\mathsf{v})$. It should also be noted that our stretch factor looks like Herrera & Martínez’ densities, but in our case it allows constructing a more accurate representation of the terms. Towards a simplification of binary trees ---------------------------------------- The linguistic 2-tuple model that uses the pair $(s_i^{n(t)},\alpha)$ and its corresponding level of linguistic hierarchy can be seen as another way to express the various nodes of a tree. There is a parallel to draw between the node depth and the level of the linguistic hierarchy. Indeed, let us consider a binary tree, to simplify. The root node belongs to the first level, that is $l(1,3)$ according to [@HER01]. Then its children belong to the second one ($l(2,5)$), knowing that each level is obtained from its predecessor as $l(t+1,2n(t)-1)$. And so on, for each node, until there is no node left. In the simple case of a binary tree (*i.e.*, a node has two children or no child), it is easy to give the position — the 2-tuple $(s_i^{n(t)},\alpha)$ — of each node: this position is unique; the left child is on the left of its parent in the next level (resp. the right child on the right). The algorithm that converts a binary tree into a linguistic 2-tuple set is now given (see Algorithm \[algo\_sa\]).
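The parent-to-child indexing just described can be sketched in a few lines of Python (ours; the tree shape used below is our reading of the example that follows, not code from the paper):

```python
# Sketch (ours) of the tree-flattening recursion on a nested-tuple binary
# tree (name, left, right), with None for a missing child; each node is
# mapped to an index pair (i, n) standing for the 2-tuple (s_i^n, 0).

def flatten(tree, i=1, n=3, out=None):
    if out is None:
        out = {}
    if tree is None:
        return out
    name, left, right = tree
    out[name] = (i, n)
    flatten(left,  2 * i - 1, 2 * n - 1, out)   # left child, next level l(t+1, 2n-1)
    flatten(right, 2 * i + 1, 2 * n - 1, out)   # right child
    return out

# our reconstruction of the example tree from its listed 2-tuples
tree = ("a",
        ("b", None, None),
        ("c",
         ("d", ("f", None, None), ("g", None, None)),
         ("e", None, None)))
print(flatten(tree))   # a -> (1,3), b -> (1,5), c -> (3,5), d -> (5,9), ...
```

Each node depth indeed corresponds to one hierarchy level, with the root on $l(1,3)$.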
If we consider the graphical example of Figure \[fig:simplif\], the linguistic 2-tuple set we obtain is the following (ordered by level):\ $\{(s_{1}^{3},0),(s_{1}^{5},0),(s_{3}^{5},0),(s_{5}^{9},0),(s_{7}^{9},0),(s_{9}^{17},0),(s_{11}^{17},0)\}$, where $a \gets (s_{1}^{3},0)$, $b \gets (s_{1}^{5},0)$, $c \gets (s_{3}^{5},0)$, $d \gets (s_{5}^{9},0)$, $e \gets (s_{7}^{9},0)$, $f \gets (s_{9}^{17},0)$ and $g \gets (s_{11}^{17},0)$. The last graph of the figure shows the semantics obtained, using the representation algorithm described in [@Herrera08afuzzy].

**Input:** a binary tree $T$; $o$ denotes a node and $o^{\prime}$ the root node of $T$.

1. $o^{\prime} \gets (s_1^3,0)$.
2. For each other node $o$, let $(s_i^j,k)$ be the 2-tuple assigned to the parent of $o$; if $o$ is a left child, $o \gets (s_{2i-1}^{2j-1},0)$, otherwise $o \gets (s_{2i+1}^{2j-1},0)$.

**Output:** the set of linguistic 2-tuples, one per node.

In a way, this algorithm allows flattening a binary tree into a 2-tuple set, which can be useful to express distances between nodes. The converse is also true: a linguistic term set can be expressed through a binary tree. One advantage of this flattening is that it introduces a new dimension in the data of a given problem: the distance between the possible outcomes (the nodes, which can be decisions, choices, preferences, etc.) of the problem, which would allow for a ranking of the outcomes, as if we had a B-tree. The fact that the level of the linguistic hierarchy differs with the node depth is interesting, since it gives a different granularity level and, as with Zadeh’s granules, it connects a position in the tree with a precision level. Concluding remarks ================== In this paper, we have formally introduced and discussed an approach to deal with unbalanced linguistic term sets.
Our approach is inspired by the 2-tuple fuzzy linguistic representation model of Herrera and Martínez, but we take full advantage of the symbolic translations $\alpha$, which become a very important element for generating the data set. The 2-tuples of our linguistic model are twofold. Indeed, except for the first and last ones of the partition, which have the shape of right-angled triangles, they are all composed of two *half* 2-tuples: an upside and a downside 2-tuple. The upside and downside of a 2-tuple are not necessarily expressed in the same hierarchy or level. Regarding the partitioning phase, there is no need to have all the symbolic translations equal to zero, which allows expressing the non-uniformity of the data much better. Despite the changes we made, the minimal covering property is fulfilled and proven. Moreover, the aggregation operators that we redefine give consistent and satisfactory results. The next steps in future work will be to study other operators, such as comparison, negation, aggregation, implication, etc. ACKNOWLEDGEMENT {#acknowledgement .unnumbered} =============== This work is partially funded by the French National Research Agency (ANR) under grant number ANR-09-SEGI-012. [^1]: When talking about linguistic terms, $\mathcal{S}$ (calligraphic font) is used, otherwise $S$ (normal font) is used.
--- author: - | [^1]\ Institute for Theoretical Physics, University of Regensburg, Germany\ E-mail: - | Johannes Mahr\ Institute for Theoretical Physics, University of Regensburg, Germany\ E-mail: - | Sebastian Schmalzbauer\ Institute for Theoretical Physics, University of Regensburg, Germany\ E-mail: bibliography: - 'biblio.bib' title: 'Complex Langevin in low-dimensional QCD: the good and the not-so-good' --- Introduction ============ Monte Carlo simulations of QCD at nonzero chemical potential are strongly hindered by the *sign problem*, as the complex fermion determinant prohibits the use of importance sampling methods. Most known methods to circumvent the sign problem in QCD have a computational cost that grows exponentially with the volume. An alternative that has recently caught a lot of attention is the complex Langevin (CL) method [@Sexty:2013ica]. The CL stochastic differential equation uses the drift generated by the complex fermion action to evolve the complexified gauge configurations in the SL(3,$\mathbf{C}$) gauge group. After equilibration, and if a number of conditions are met [@Aarts:2011ax], the time evolution of these configurations should reproduce the correct QCD results for gauge invariant observables. Although the sign problem in QCD is particularly serious in four dimensions, it is already present in lower dimensions. In this talk we present a study on the viability of the CL method in 0+1d, where the sign problem is mild, and in 1+1d in the strong coupling limit where the sign problem is quite large in some regions of parameter space depending on the chemical potential, quark mass, temperature and spatial volume. 
Partition function and Dirac operator ===================================== We consider the strong coupling partition function $$\begin{aligned} Z=\int\!\left[\prod_{x}\prod_\nu d U_{x,\nu}\right]\, \det D(\{U_{x,\nu}\}) \label{ZQCD}\end{aligned}$$ with $d$-dimensional staggered Dirac operator $$\begin{aligned} D_{xy} &= m \, \delta_{xy} + \frac12 \left[ e^{\mu} U_{x,0}\delta_{x+\hat{0},y} -e^{-\mu} U_{y,0}^{-1}\delta_{x-\hat{0},y}\right] +\frac12\sum_{i=1}^{d-1}\eta_i(x)\big[U_{x,i}\delta_{x+\hat i,y}-U_{y,i}^{-1}\delta_{x-\hat i,y}\big] \label{eq:Dirac}\end{aligned}$$ for a quark of mass $m$ at chemical potential $\mu$ and antiperiodic boundary conditions in the temporal direction. The staggered phases are $\eta_\nu=(-1)^{x_0+x_1+\ldots+x_{\nu-1}}$, $\hat\nu$ is a unit step in direction $\nu$, and we set the lattice spacing $a=1$. At zero $\mu$ the determinant of the Dirac operator is real and positive in SU(3), but at nonzero real $\mu$ the operator is no longer antihermitian, as ${D}(\mu)^\dagger=-{D}(-\mu)$, and its determinant becomes complex. Complex Langevin evolution {#sec:0+1d-CLE} ========================== We represent the gauge links using the Gell-Mann parameterization $$\begin{aligned} U = \exp\left[i\sum_a z_a \lambda_a \right] ,\end{aligned}$$ with Gell-Mann matrices $\lambda_a$ and eight complex parameters $z_a$ for $U\in \text{SL}(3,\mathbf{C})$. 
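A minimal numerical sketch of this parameterization (ours, not code from the simulations; a simple eigendecomposition stands in for a proper matrix exponential):

```python
# Sketch (ours) of the Gell-Mann parameterization U = exp(i sum_a z_a lambda_a):
# real parameters z_a give U in SU(3), complex ones give U in SL(3,C).
import numpy as np

gell_mann = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    np.diag([1, 1, -2]) / np.sqrt(3),
]]

def expm(A):
    """Matrix exponential via eigendecomposition (fine for generic matrices)."""
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

def link(z):
    """U = exp(i sum_a z_a lambda_a) for a length-8 parameter vector z."""
    return expm(1j * sum(za * la for za, la in zip(z, gell_mann)))

U = link(np.arange(1.0, 9.0) / 10 + 0.05j)   # complex z_a: U in SL(3,C)
print(np.isclose(np.linalg.det(U), 1))       # traceless generators => det U = 1
```

Since the generators are traceless, $\det U = 1$ automatically, which is why the complexified links stay in SL(3,$\mathbf{C}$).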
According to the CL equation, the discrete time evolution of $U_{x,\nu}$ in SL(3,$\mathbf{C}$) is given by the rotation [@Sexty:2013ica] $$\begin{aligned} U_{x,\nu}(t+1) = R_{x,\nu} (t) \: U_{x,\nu}(t) ,\end{aligned}$$ where in the stochastic Euler discretization $R_{x,\nu} \in \text{SL}(3,\mathbf{C})$ is given by $$\begin{aligned} R_{x,\nu} = \exp\left[i\sum_a \lambda_a ({\varepsilon}K_{a,x,\nu} + \sqrt{{\varepsilon}}\,\eta_{a,x,\nu})\right] ,\end{aligned}$$ with drift term $$\begin{aligned} K_{a,x,\nu} &= - D_{a,x,\nu}(S) = - \partial_\alpha S(U_{x,\nu}\to e^{i\alpha\lambda_a} U_{x,\nu})|_{\alpha=0} ,\end{aligned}$$ real Gaussian noise $\eta_{a,x,\nu}$ with variance 2, fermion action $S=-\log \det D$ and discrete Langevin time step ${\varepsilon}$. Gauge cooling {#sec:gc} ============= Previous studies using the CL method have shown that incorrect results are obtained when the simulation wanders off too far in the imaginary direction. In gauge theories it was suggested to counter this problem using *gauge cooling*, where the SL(3,$\mathbf{C}$) gauge invariance of the theory is used to keep the trajectories as close as possible to the SU(3) group [@Seiler:2012wz]. A general gauge transformation of the link $U_{x,\nu}$ is given by $$\begin{aligned} U_{x,\nu} \to G_x \, U_{x,\nu} \, G_{x+\hat{\nu}}^{-1} \label{gc}\end{aligned}$$ with $G_x \in \text{SL(3,$\mathbf{C}$})$. Gauge cooling corresponds to the minimization of the unitarity norm $$\begin{aligned} ||\mathcal{U}|| = \sum_{x,\nu} \operatorname{tr}\left[ U_{x,\nu}^{\dagger}U_{x,\nu} + \left(U_{x,\nu}^{\dagger}U_{x,\nu}\right)^{-1} \!\!-2\right] \label{unorm}\end{aligned}$$ over all $G_x$, which is usually done via steepest descent. Clearly, observables are *invariant* under gauge transformations and so is the drift term in the CL equation.
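For a single link, the unitarity norm can be sketched as follows (our illustration, not code from the simulations):

```python
# Sketch (ours) of the unitarity norm of a single link,
# tr[U†U + (U†U)^{-1} - 2]: it vanishes exactly when U is unitary
# and grows as the configuration drifts into SL(3,C) away from SU(3).
import numpy as np

def unitarity_norm(U):
    H = U.conj().T @ U
    return np.trace(H + np.linalg.inv(H) - 2 * np.eye(3)).real

print(unitarity_norm(np.eye(3, dtype=complex)))                   # vanishes for a unitary link
print(unitarity_norm(np.diag([2.0, 0.5, 1.0]).astype(complex)))   # positive: det = 1 but not unitary
```

The second example has unit determinant but is not unitary, so its norm is strictly positive, which is exactly what gauge cooling tries to reduce.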
However, as the noise distribution in the CL equation is *not invariant* under SL(3,$\mathbf{C}$) gauge transformations, the gauge cooling and Langevin steps do not commute, which leads to different trajectories in configuration space when cooling is introduced. Although this is exactly the aim of the cooling procedure, it is still an open question whether or under what conditions this procedure leads to the correct QCD expectation values (see also [@Nagata:2015uga] for recent developments). QCD in 0+1 dimensions ===================== We first consider 0+1d QCD, where the determinant of the Dirac operator can be reduced to the determinant of a $3\times 3$ matrix [@Bilic:1988rw] $$\begin{aligned} \det D \propto \det \left[ e^{\mu/T} P+e^{-\mu/T} P^{-1} + 2\cosh\left(\mu_c/T\right)\,{\mathbbm{1}}_3\right]\end{aligned}$$ with Polyakov line $P= \prod_t U(t)$ and effective mass $\mu_c = \operatorname{arsinh}(m)$. The partition function is then a one-link integral of $\det D$ over $P$ without gauge action. As analytic results [@Bilic:1988rw; @Ravagli:2007rw], as well as numerical solutions using subsets [@Bloch:2013ara], are available in this case, the correctness of the numerical results obtained with the CL method can be verified. Note that some modified models for 0+1d QCD were already solved using the CL method, including a one-link formulation with mock-gauge action [@Aarts:2008rr] and a U($N_c$) theory in the spectral representation [@Aarts:2010gr]. In 0+1d QCD gauge transformations simplify to $$\begin{aligned} P \to G P G^{-1} ,\end{aligned}$$ depending only on a single $G \in \text{SL(3,$\mathbf{C}$)}$. It is easy to show that in this case maximal cooling, i.e., minimizing the unitarity norm, is achieved by the similarity transformation diagonalizing $P$. We found that cooling typically reduces the unitarity norm by about two orders of magnitude. In Fig. \[fig3\] we show the results for the quark density and chiral condensate as a function of $\mu/T$ for $m=0.1$.
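The reduced $3\times 3$ determinant above can be evaluated directly; here is a small sketch (ours, not code from the simulations; we write $1/T = N_t$ in lattice units):

```python
# Sketch (ours) of the reduced 0+1d determinant
#   det[ e^{mu/T} P + e^{-mu/T} P^{-1} + 2 cosh(mu_c/T) 1_3 ],  mu_c = arsinh(m),
# with 1/T = N_t in lattice units and a diagonal SU(3) Polyakov line.
import numpy as np

def reduced_det(P, mu_over_T, m, Nt):
    mu_c = np.arcsinh(m)
    M = (np.exp(mu_over_T) * P
         + np.exp(-mu_over_T) * np.linalg.inv(P)
         + 2 * np.cosh(mu_c * Nt) * np.eye(3))
    return np.linalg.det(M)

# diagonal SU(3) line with angles theta_1, theta_2 (det P = 1)
t1, t2 = 0.4, -0.9
P = np.diag(np.exp(1j * np.array([t1, t2, -t1 - t2])))
print(reduced_det(P, 0.0, 0.1, 4))   # real and positive at mu = 0
print(reduced_det(P, 1.0, 0.1, 4))   # complex at mu > 0: the sign problem
```

At $\mu=0$ the determinant is real and positive for any SU(3) line, while at $\mu>0$ it picks up an imaginary part, which is the sign problem in miniature.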
Below the data we show the statistical significance of the deviation between the numerical result $y$ and the analytical result $y_\text{th}$, i.e. $|y-y_\text{th}|/\sigma_y$. For the uncooled results the deviation is far too large to be attributed to statistical fluctuations and we conclude that the CL method introduces a systematic error. After cooling, however, the CL results are in agreement with the theoretical predictions within the statistical accuracy (except for $\mu \approx 0.7$ where the deviation is still somewhat too large for the chiral condensate). Gauge cooling seems absolutely necessary to get the correct result, even in this one-dimensional gauge theory. ![Density of the value of the determinant in the complex plane for $\mu/T=1$ and $m=0.1$ for the uncooled (left) and cooled (right) cases.[]{data-label="fig4"}](plots/1d_det_uncooled.pdf "fig:"){height="0.25\linewidth"} ![Density of the value of the determinant in the complex plane for $\mu/T=1$ and $m=0.1$ for the uncooled (left) and cooled (right) cases.[]{data-label="fig4"}](plots/1d_det_cooled.pdf "fig:"){height="0.25\linewidth"} To illustrate the effect of gauge cooling on the SL(3, $\mathbf{C}$) trajectories we show how the density of the determinant in the complex plane is affected by cooling in Fig. \[fig4\]. The effect is quite dramatic, as the origin, which is inside the distribution without cooling, is clearly avoided when cooling is applied. Avoiding the singular drift at the origin could be a necessary condition for the complex Langevin to yield the correct result [@Aarts:2011ax; @Nishimura:2015pba]. 
![ Distribution of the real part of the chiral condensate (left) and quark number density (right) at $\mu/T=1$ and $m=0.1$ in the uncooled (red) and cooled (blue) cases.[]{data-label="fig5"}](plots/1d_chiral_condensate_real "fig:"){width="32.00000%"} ![ Distribution of the real part of the chiral condensate (left) and quark number density (right) at $\mu/T=1$ and $m=0.1$ in the uncooled (red) and cooled (blue) cases.[]{data-label="fig5"}](plots/1d_qn_real "fig:"){width="32.00000%"} Another known signal for problems in the CL method is the existence of skirts in the distribution of the (real part of the) observables [@Aarts:2013uxa]. This is illustrated in Fig. \[fig5\]. Without cooling the observables have very wide skirts, hinting at a polynomial decay, while after cooling very sharp exponential fall-offs are observed. Note that in the 0+1d case we can also parameterize the Polyakov line in its diagonal representation with two complex parameters. The numerical results obtained with the CL method in this representation agree with the analytical predictions. This is consistent with the above results, as gauge cooling also brings the Polyakov line to its diagonal form in the Gell-Mann parameterization. QCD in 1+1 dimensions ===================== A more stringent test of the CL method is provided by 1+1d QCD, where the sign problem is more severe. The staggered Dirac operator is given in Eq. (\[eq:Dirac\]), and in this work we restrict our simulations to the strong coupling case. The complex Langevin equations are given in Sec. \[sec:0+1d-CLE\] and the gauge cooling procedure in Sec. \[sec:gc\]. All results shown here are preliminary and were obtained on a $4\times 4$ lattice. We are currently performing further evaluation runs for lattices of size $N_s\times N_t=4\times\{2,6,8,10\}$, $6\times\{2,4,6,8\}$ and $8\times\{2,4,6,8\}$. ![Quark density (left) and chiral condensate (right) as a function of the chemical potential for $m=0.1,0.5,1.0,2.0$.
We compare uncooled (red) and cooled (blue) CL results with subset results (line).[]{data-label="fig8"}](plots/qn "fig:"){width="32.00000%"} ![Quark density (left) and chiral condensate (right) as a function of the chemical potential for $m=0.1,0.5,1.0,2.0$. We compare uncooled (red) and cooled (blue) CL results with subset results (line).[]{data-label="fig8"}](plots/chi "fig:"){width="32.00000%"} To validate the CL method we compare our CL measurements with results obtained using the subset method [@Bloch:2013qva; @Bloch:2015iha]. As can be seen in Fig. \[fig8\], the results of the bare or uncooled CL are not consistent with the subset data, in all cases considered. After cooling the situation is much improved and the large mass CL simulations agree very well with the subset results over the complete $\mu$-range. However, for the smallest mass value ($m=0.1$), even the cooled CL does not produce correct results over a large range of $\mu$ values. Furthermore, a close inspection of the $m=0.5$ results also shows a significant deviation. Clearly, the CL does not work for light quarks even after applying full gauge cooling. 
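The gauge cooling applied above reduces the distance of a configuration from the unitary manifold by non-unitary gauge transformations. As a minimal numerical illustration (not the gradient-based procedure of Sec. \[sec:gc\]), a single SL(3, $\mathbf{C}$) matrix — a stand-in for the Polyakov line, which transforms by similarity — can be cooled by a naive random search that only accepts transformations lowering the unitarity norm:

```python
import numpy as np

rng = np.random.default_rng(1)

def expm(a):
    # matrix exponential via eigendecomposition (adequate for generic matrices)
    w, v = np.linalg.eig(a)
    return v @ np.diag(np.exp(w)) @ np.linalg.inv(v)

def unitarity_norm(u):
    # distance from the unitary manifold: zero iff U U^dagger = 1
    m = u @ u.conj().T
    return np.real(np.trace(m + np.linalg.inv(m))) - 2 * u.shape[0]

# a random SL(3,C) "link": exponential of a traceless complex matrix
x = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
x -= np.trace(x) / 3 * np.eye(3)
u = expm(0.3 * x)
n_init = unitarity_norm(u)

# naive cooling: similarity transforms g = exp(eps*H) with traceless Hermitian
# H (so det g = 1), kept only when they lower the unitarity norm
eps = 0.05
for _ in range(2000):
    h = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    h = (h + h.conj().T) / 2
    h -= np.real(np.trace(h)) / 3 * np.eye(3)
    g = expm(eps * h)
    u_new = g @ u @ np.linalg.inv(g)
    if unitarity_norm(u_new) < unitarity_norm(u):
        u = u_new
n_final = unitarity_norm(u)
```

The norm decreases toward its orbit minimum but cannot reach zero unless all eigenvalues of $U$ lie on the unit circle — mirroring the fact that cooling can only reduce, not remove, the non-unitary excursions of the CL process.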
![Density distribution of the determinant value for $\mu=0.07$ (top) and $\mu=0.25$ (bottom) in the uncooled (left) and cooled (right) case.[]{data-label="fig10"}](plots/det_0,07_uncooled "fig:"){height="0.25\linewidth"} ![Density distribution of the determinant value for $\mu=0.07$ (top) and $\mu=0.25$ (bottom) in the uncooled (left) and cooled (right) case.[]{data-label="fig10"}](plots/det_0,07_cooled "fig:"){height="0.25\linewidth"}\ ![Density distribution of the determinant value for $\mu=0.07$ (top) and $\mu=0.25$ (bottom) in the uncooled (left) and cooled (right) case.[]{data-label="fig10"}](plots/det_0,25_uncooled "fig:"){height="0.25\linewidth"} ![Density distribution of the determinant value for $\mu=0.07$ (top) and $\mu=0.25$ (bottom) in the uncooled (left) and cooled (right) case.[]{data-label="fig10"}](plots/det_0,25_cooled "fig:"){height="0.25\linewidth"} To investigate why gauge cooling does not work for small masses, we look at its effect on the density distribution of the determinant for $m=0.1$. In Fig. \[fig10\] we see that for $\mu=0.07$, where the CL results seem correct, cooling significantly changes the distribution: it squeezes the density along the real axis while also pushing it away from the origin. For $\mu=0.25$, however, cooling has very little effect: the fireball is somewhat shifted to the right but its shape remains approximately unchanged and the origin is still inside the distribution. The CL results are thus incorrect when cooling is unable to change the distribution substantially, such that it still contains the origin and remains broad in the imaginary direction. We also looked at the effect of cooling on the distribution of the observables. This is illustrated in Fig. \[fig11\], which shows the distribution of the real part of the chiral condensate for increasing chemical potential, without and with cooling for $m=0.1$. The uncooled distribution always displays skirts, with a decay that is fairly independent of $\mu$. 
With gauge cooling we observe that the skirt vanishes for small chemical potentials, but as the chemical potential is increased the skirts gradually reappear, signaling that the results of the CL method gradually become untrustworthy for light quarks, even in the presence of cooling, as we move into the region that has a substantial sign problem. Conclusions =========== In this work we have shown that in [0+1d QCD]{} the results obtained with the complex Langevin method deviate significantly from the analytical predictions when no gauge cooling is applied. After introducing gauge cooling the correct results are recovered. In [1+1d QCD]{} at strong coupling the uncooled CL method yields wrong results for any mass and chemical potential. When applying gauge cooling the results are rectified for heavy quarks, but for light quarks the results remain incorrect for a significant range of the chemical potential. Gauge cooling seems absolutely necessary, although not sufficient in some cases. As the quarks get lighter and the sign problem larger, gauge cooling no longer works properly. The results were validated by comparing with subset measurements, but signals for wrong convergence are also available within the CL method itself, such as skirts in observable distributions, and distributions of the determinant that contain the origin and are broad in the imaginary direction, even after cooling. From our study we conclude that much remains to be understood about the complex Langevin method and its applicability to QCD. Several suggestions presented in the literature, such as changes of variables [@Mollgaard:2014mga] or using different cooling norms or gauge fixing conditions [@KeitaroNagata2015], should be investigated. We are currently validating the CL method for light quarks on larger lattices and in the presence of a gauge action, as it is believed that the CL method performs better in the weak coupling regime.
Several unanswered questions remain: does the CL method work when the sign problem is large, and how much can we trust its results, given that its degradation seems to happen gradually rather than in an on-off way? [^1]: Supported by DFG
--- abstract: 'In Mod. Phys. Lett. A **9,** 3119 (1994), one of us (R.D.S) investigated a formulation of quantum mechanics as a generalized measure theory. Quantum mechanics computes probabilities from the absolute squares of complex amplitudes, and the resulting interference violates the (Kolmogorov) sum rule expressing the additivity of probabilities of mutually exclusive events. However, there is a higher order sum rule that quantum mechanics does obey, involving the probabilities of three mutually exclusive possibilities. We could imagine a yet more general theory by assuming that it violates the next higher sum rule. In this paper, we report results from an ongoing experiment that sets out to test the validity of this second sum rule by measuring the interference patterns produced by three slits and all the possible combinations of those slits being open or closed. We use attenuated laser light combined with single photon counting to confirm the particle character of the measured light.' 
author: - Urbasi Sinha - Christophe Couteau - Zachari Medendorp - 'Immo S[ö]{}llner' - Raymond Laflamme - Rafael Sorkin - Gregor Weihs bibliography: - 'confproc.bib' title: | Testing Born’s Rule in Quantum Mechanics with a\ Triple Slit Experiment --- [ address=[Institute for Quantum Computing, University of Waterloo, 200 University Ave W,\ Waterloo, Ontario N2L 3G1, Canada]{}, email=[usinha@iqc.ca]{} ]{} [ address=[Institute for Quantum Computing, University of Waterloo, 200 University Ave W,\ Waterloo, Ontario N2L 3G1, Canada]{}]{} [ address=[Institute for Quantum Computing, University of Waterloo, 200 University Ave W,\ Waterloo, Ontario N2L 3G1, Canada]{}]{} [ address=[Institute for Quantum Computing, University of Waterloo, 200 University Ave W,\ Waterloo, Ontario N2L 3G1, Canada]{}, altaddress=[Institut für Experimentalphysik, Universität Innsbruck, Technikerstrasse 25, 6020 Innsbruck, Austria]{}]{} [ address=[Institute for Quantum Computing, University of Waterloo, 200 University Ave W,\ Waterloo, Ontario N2L 3G1, Canada]{}, altaddress=[Perimeter Institute for Theoretical Physics, 31 Caroline St, Waterloo, Ontario N2L 2Y5, Canada]{}]{} [ address=[Department of Physics, Syracuse University, Syracuse, NY 13244-1130]{}, altaddress=[Perimeter Institute for Theoretical Physics, 31 Caroline St, Waterloo, Ontario N2L 2Y5, Canada]{}]{} [ address=[Institute for Quantum Computing, University of Waterloo, 200 University Ave W,\ Waterloo, Ontario N2L 3G1, Canada]{}, altaddress=[Institut für Experimentalphysik, Universität Innsbruck, Technikerstrasse 25, 6020 Innsbruck, Austria]{}, email=[gregor.weihs@uibk.ac.at]{} ]{} Introduction and Motivation =========================== Quantum Mechanics has been one of the most successful tools in the history of Physics. It has revolutionized Modern Physics and helped explain many phenomena. 
However, in spite of all its successes, there are still some gaps in our understanding of the subject and there may be more to it than meets the eye. This makes it very important to have experimental verifications of all the fundamental postulates of Quantum Mechanics. In this paper, we aim to test Born’s interpretation of probability [@Born26a], which states that if a quantum mechanical state is specified by the wavefunction $\psi (\mathbf r,t)$ [@Schrodinger26], then the probability $p(\mathbf r,t)$ that a particle lies in the volume element $d^{3}r$ located at $\mathbf r$ and at time $t$, is given by: $$p(\mathbf r,t) = \psi^{*}(\mathbf r,t) \psi(\mathbf r,t) d^{3}r = |\psi (\mathbf r,t)|^{2} d^{3}r$$ Although this definition of probability has been assumed to be true in describing several experimental results, no experiment has ever been performed to specifically test this definition alone. Already in his Nobel lecture in 1954, Born raised the issue of proving his postulate. Yet, 54 years have passed without there being a dedicated attempt at such a direct experimental verification, although the overwhelming majority of experiments indirectly verify the postulate when they show results that obey quantum mechanics. In this paper, we report the results of an ongoing experiment that directly tests Born’s rule. The 3-slit experiment ===================== In Ref. [@Sorkin94a], one of us (R.D.S) proposed a triple slit experiment motivated by the “sum over histories” approach to Quantum Mechanics. According to this approach, Quantum theory differs from classical mechanics not so much in its kinematics, but in its dynamics, which is stochastic rather than deterministic. But if it differs from deterministic theories, it also differs from previous stochastic theories through the new phenomenon of [*interference*]{}.
Although the quantum type of randomness is thus non-classical, the formalism closely resembles that of classical probability theory when expressed in terms of a sum over histories. Each set A of histories is associated with a non-negative real number $p_A=|A|$ called the “quantum measure”, and this measure can in certain circumstances be interpreted as a probability (but not in all circumstances because of the failure of the classical sum rules as described below). It is this measure (or the corresponding probability) that enters the sum rules we are concerned with. Details of the quantum measure theory following a sum over histories approach can be found in [@Sorkin94a; @Sorkin97a]. Interference expresses a deviation from the classical additivity of the probabilities of mutually exclusive events. This additivity can be expressed as a “sum rule” $I=0$ which says that the interference between arbitrary pairs of alternatives vanishes. In fact, however, one can define a whole hierarchy of interference terms and corresponding sum rules, as given by the following equations. They measure not only pairwise interference, but also higher types involving three or more alternatives, types which could in principle exist, but which quantum mechanics does not recognize. $$\label{zero} I_A = p_A$$ $$\label{one} I_{AB} = p_{AB} - p_A - p_B$$ $$\label{two} I_{ABC} = p_{ABC} - p_{AB} - p_{BC} - p_{CA} + p_A + p_B + p_C$$ Equations (\[zero\]), (\[one\]), and (\[two\]) refer to the zeroth, first, and second sum rule respectively. Here, $p_{ABC}$ means the probability of the disjoint union of the sets A, B, and C. A physical system in which such probability terms appear would be a system with three classes of paths [@Weihs96a], for example three slits A, B and C in an opaque aperture.
For particles incident on the slits, $p_A$ would refer to the probability of a particle being detected at a chosen detector position having traveled through slit A, and $p_B$ and $p_C$ would refer to similar probabilities through slits B and C. The zeroth sum rule needs to be violated ($ I_A \neq 0 $) for a non-trivial measure. If the first sum rule holds, i.e. $I_{AB} = 0$, it leads to regular probability theory, as for example for classical stochastic processes. Violation of the first sum rule ($ I_{AB} \neq 0$) is consistent with Quantum Mechanics. If a given sum rule holds, then all higher sum rules in the hierarchy hold as well. However, since the first sum rule is violated in Quantum Mechanical systems, one needs to go on to check the second sum rule. In known systems, triadditivity of mutually exclusive probabilities is true, i.e., the second sum rule holds, $ I_{ABC} = 0$. This follows from algebra as shown below and is based on the assumption that Born’s rule holds. $$\begin{aligned} \label{algebra} p_{ABC} &=& | \psi_A + \psi_B + \psi_C |^{2} \nonumber \\ &=& | \psi_A |^2 + | \psi_B |^2 + | \psi_C |^2 + \psi_{A}^{*} \psi_B + \psi_{B}^{*} \psi_A + \psi_{B}^{*} \psi_C + \psi_{C}^{*} \psi_B + \psi_{A}^{*} \psi_C + \psi_{C}^{*} \psi_A \nonumber \\&=& p_{A} + p_{B} + p_{C} + I_{AB} + I_{BC} + I_{CA} \nonumber \\&=& p_{A} + p_{B} + p_{C} + (p_{AB} - p_A - p_B) + (p_{BC} - p_B - p_C) + (p_{CA} - p_C - p_A) \nonumber \\&=& p_{AB} + p_{BC} + p_{CA} - p_A - p_B - p_C \\\Rightarrow I_{ABC} &\equiv& p_{ABC} - p_{AB} - p_{BC} - p_{CA} + p_A + p_B + p_C =0\end{aligned}$$ ![Pictorial representation of how the different probability terms are measured. The leftmost configuration has all slits open, whereas the rightmost has all three slits blocked. The black bars represent the slits, which are never changed or moved throughout the experiment.
The thick grey bars represent the opening mask, which is moved in order to make different combinations of openings overlap with the slits, thus switching between the different combinations of open and closed slits.[]{data-label="fig:slitsandopenings"}](slitsandopenings.eps){width="50.00000%"} If, however, there is a higher order correction to Born’s rule (however small that correction might be), equation (\[algebra\]) will lead to a violation of the second sum rule. The triple slit experiment proposes to test the second sum rule, or in more physical language, to look for a possible “three way interference” beyond the pairwise interference seen in quantum mechanics. For this purpose we define a quantity $\epsilon$ as $$\label{epsilon} \epsilon = p_{ABC} - p_{AB} - p_{BC} - p_{CA} + p_A + p_B + p_C -p_0.$$ Figure \[fig:slitsandopenings\] shows how the various probabilities could be measured in a triple slit configuration. As opposed to the ideal formulation where empty sets have zero measure, we need to provide for a non-zero $p_0$, the background probability of detecting particles when all paths are closed. This takes care of any experimental background, such as detector dark counts. For better comparison between possible realizations of such an experiment, we further define a normalized variant of $\epsilon$ called $\rho$, $$\begin{aligned} \rho &=& \frac {\epsilon}{\delta}\mbox{, where} \\ \delta &=& | I_{AB} | + | I_{BC} | + | I_{CA} | \nonumber \\ &=& | p_{AB} - p_A - p_B + p_0| + | p_{BC} - p_B - p_C + p_0 | + | p_{CA} - p_C - p_A + p_0 |.\end{aligned}$$ Since $\delta$ is a measure of the regular interference contrast, $\rho$ can be seen as the ratio of the violation of the second sum rule versus the expected violation of the first sum rule. (If $\delta=0$ then $\epsilon=0$ trivially, and we really are not dealing with quantum behavior at all, but only classical probabilities.)
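The quantities $\epsilon$, $\delta$ and $\rho$ defined above are straightforward to evaluate. A small sketch (with hypothetical detector-position amplitudes of our own choosing) confirms that Born-rule intensities make $\epsilon$, and hence $\rho$, vanish identically while $\delta$ stays finite:

```python
import numpy as np

def sum_rule_terms(p, p0=0.0):
    # epsilon and delta as defined in the text, from the eight measured values
    eps = (p['ABC'] - p['AB'] - p['BC'] - p['CA']
           + p['A'] + p['B'] + p['C'] - p0)
    delta = (abs(p['AB'] - p['A'] - p['B'] + p0)
             + abs(p['BC'] - p['B'] - p['C'] + p0)
             + abs(p['CA'] - p['C'] - p['A'] + p0))
    return eps, delta

# hypothetical Born-rule amplitudes at one detector position
a, b, c = 1.0, 0.8 * np.exp(0.7j), 0.6 * np.exp(2.1j)
P = lambda *amps: abs(sum(amps)) ** 2     # Born's rule: |sum of amplitudes|^2
p = {'A': P(a), 'B': P(b), 'C': P(c),
     'AB': P(a, b), 'BC': P(b, c), 'CA': P(c, a), 'ABC': P(a, b, c)}
eps, delta = sum_rule_terms(p)
rho = eps / delta
# Born's rule forces eps (and hence rho) to zero, while delta stays finite
```

Any higher-order correction to Born's rule would show up as a nonzero `eps` in this computation, which is exactly what the experiment looks for.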
In the following sections we will describe how we implemented the measurements of all the terms that compose $\rho$ and analyze our results. Making the slits ---------------- ![Different ways of measuring the eight intensities. The LHS shows a schematic of a 3 slit pattern. In the center, the first blocking scheme is demonstrated, in which the slits are blocked according to the terms being measured. The whole glass plate is thus transparent with only the blocking portions opaque. The RHS shows the second blocking scheme in which the slits are opened up as needed on a glass plate which is completely opaque except for the unblocking openings.[]{data-label="blocking"}](makingtheslits.eps){height=".25\textheight"} Our first step in designing the experiment was to find a way to reliably block and unblock the slits, which we expected to be very close together, so that simple shutters wouldn’t work. Therefore we decided to use a set of two plates, one containing the slit pattern and one containing patterns to block or unblock the slits. The slits were fabricated by etching them into a material which covered a glass plate. The portion of the material which had the slits etched in would be transparent to light and the rest of the glass plate which was still covered would be opaque. However, not all materials exhibit the same degree of opacity to infra-red light and this leads to spurious transmission through portions of the glass plate which should be opaque in theory. Various types of materials were used for etching the slits and each modification led to a decrease in spurious transmission through the glass plate. At first, a photo-emulsion plate was used which had a spurious transmission of around 5%. This was followed by a glass plate with a chromium layer on top. This had a spurious transmission of around 3%. The plate currently in use has an aluminium layer of 500 nm thickness on top.
Aluminium is known to have a very high absorption coefficient for infrared light and this led to a spurious transmission of less than 0.1%. The blocking patterns were etched on a different glass plate covered with the same material as the first glass plate. Figure \[blocking\] shows an example of a set of blocking patterns which would give rise to the eight intensities corresponding to the probability terms related to the 3-slit open, 2-slit open and 1-slit open configurations as discussed in the previous section. Another way of achieving the eight intensities would be to open up the right number and position of slits instead of blocking them off. This is also shown in Figure \[blocking\] and leads to a big change in the appearance of the second glass plate. In the first instance, when the slits were being blocked for the different cases, the rest of the glass plate was transparent and only the portions which were being used for blocking off the slits in the first glass plate were opaque to light. This led to spurious effects as a lot of light was being let through the glass plate this way, which caused background features in the diffraction patterns. However, with the second design, the whole plate was covered with the opaque material and only portions which were being used to open up slits allowed light to go through, thus reducing background effects. The experimental set-up ----------------------- ![Schematic of experimental set-up[]{data-label="slitsetup"}](slitsetup.eps){width=".6\textwidth"} Figure \[slitsetup\] shows a schematic of the complete experimental set-up. The He-Ne laser beam passes through an arrangement of mirrors and collimators before being incident on a 50/50 beam splitter. In the near future we will replace the laser by a heralded single photon source [@Bocquillon08].
The beam then splits into two: one of the beams is used as a reference arm for measuring fluctuations in laser power whereas the other beam is incident on the glass plate, which has the slit pattern etched on it. The beam height and waist are adjusted so that it is incident on a set of three slits, the slits being centered on the beam. There is another glass plate in front which has the corresponding blocking designs on it such that one can measure the seven probabilities in equation (\[two\]). The slit plate remains stationary whereas the blocking plate is moved up and down in front of the slits to yield the various combinations of opened slits needed to measure the seven probabilities. As mentioned above, in our experimental set-up, we also measure an eighth probability which corresponds to all three slits being closed in order to account for dark counts and any background light. Figure \[fig:slitsandopenings\] shows this pictorially. There is a horizontal microscope (not shown in Figure \[slitsetup\]) for initial alignment between the slits and the corresponding openings. A multi-mode optical fiber is placed at a point in the diffraction pattern and connected to an avalanche photo-diode (APD) which measures the photon counts corresponding to the various probabilities. Using a single photon detector confirms the particle character of light at the detection level. The optical fiber can be moved to different positions in the diffraction pattern in order to obtain the value of $\rho$ at different positions in the pattern. Figure \[diff\] shows a measurement of the eight diffraction patterns corresponding to the eight configurations of open and closed slits as required by equation (\[epsilon\]). ![Diffraction patterns of the eight combinations of open and closed slits including all slits closed (“0”), measured using a He-Ne laser.
The vertical axis is in units of 1000 photocounts.[]{data-label="diff"}](diffraction.eps){width=".6\textwidth"} ![Overnight measurement of $\rho$. Each data point corresponds to approximately 5 min of total measurement time. A slight drift of the mean is visible. The error bars are the size of the standard deviation of the $\rho$ values.[]{data-label="fig:rhodata"}](rhodata.eps){width=".6\textwidth"} Results ------- In a null experiment like ours, where we try to prove the existence or absence of an effect, proper analysis of possible sources of errors is of utmost importance. It is essential to have a good estimation of both the random errors in experimental quantities and the potential sources of systematic errors. For each error mechanism we calculate within the framework of some accepted theory how much of a deviation from the ideally expected value it will cause. Drifts in time or with repetition can often be corrected by better stabilization of the apparatus, but any errors that do not change in time can only be characterized by additional measurements. We have measured $\rho$ for various detector points using a He-Ne laser. Initially, the value of $\rho$ showed strong variations with time and this was solved by having better temperature control in the lab and also by enclosing the set-up in a black box so that it is not affected by stray photons in the lab. Fig. \[fig:rhodata\] shows a recent overnight run which involved measuring $\rho$ around a hundred times at a position near the center of the diffraction pattern. Only a slight drift in the mean can be discerned. The typical value of $\rho$ is in the range of $10^{-2} \pm 10^{-3}$. The random error is the standard error of the mean of $\rho$ and obviously it is too small to explain the deviation of the mean of $\rho$ from zero.
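The statistical part of this statement can be made precise with the standard error of the mean. A sketch with synthetic numbers chosen by us to mimic the quoted range $10^{-2} \pm 10^{-3}$ (not the actual data):

```python
import numpy as np

def significance(samples):
    # mean, standard error of the mean, and deviation from zero in sigma units
    m = samples.mean()
    sem = samples.std(ddof=1) / np.sqrt(len(samples))
    return m, sem, abs(m) / sem

# hypothetical overnight run: ~100 values of rho around 1e-2 with spread 1e-3
rng = np.random.default_rng(2)
m, sem, nsig = significance(rng.normal(1.0e-2, 1.0e-3, size=100))
# nsig comes out of order 100: far too large for a statistical fluctuation
```

With a hundred samples the standard error is an order of magnitude below the spread, so a mean offset of $10^{-2}$ is many standard errors away from zero; this is why the analysis must turn to systematic errors.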
Next we analyze some systematic errors which may affect our experiment to see if these can be big enough to explain the deviation of $\rho$ from the zero expected from Born’s rule. Analysis of some possible sources of systematic errors ====================================================== By virtue of the definition of the measured quantity $\epsilon$ (or its normalized variant $\rho$) some potential sources of errors do not play a role. For example, it is unimportant whether the three slits in the aperture have the same size, shape, open transmission, or closed light leakage. However, in the current set-up we are measuring the eight different combinations of open and closed slits using a blocking mechanism that does not block individual slits but instead changes a global unblocking mask. Also, the measurements of the different combinations occur sequentially, which makes the experiment prone to the effects of fluctuations and drifts. In the following we will analyze the effects of three systematic error mechanisms: power drifts or uneven mask transmission, spurious mask transmission combined with misalignment, and detector nonlinearities. The power of a light source is never perfectly stable and the fact that we measure the eight individual combinations at different times leads to a difference in the total energy received by a certain aperture combination over the time interval it is being measured for. Since in practice we don’t know how the power will change, and because we may choose a random order of our measurements, we can effectively convert this systematic drift into a random error. Conversely, if in the experiment we found that the power was indeed drifting slowly in one direction, then randomization of the measurement sequence would mitigate a non-zero mean.
Let us therefore assume a stationary mean power $P$ and a constant level of fluctuations $\Delta P$ around that power for an averaging time that is equal to the time we take to measure one of the eight combinations. Let the relative fluctuation $\Delta p = \Delta P/P$. Using Gaussian error propagation, the fluctuation $\Delta\rho$ of $\rho$, whose quantum theoretical mean is zero, is then given by $$\begin{aligned} (\Delta\rho)^2 &=& \frac{1}{\delta^2} \left[ P_{ABC}^2 + (1+s_{BC}\rho) P_{BC}^2 + (1+s_{AC}\rho) P_{AC}^2 + (1+s_{AB}\rho) P_{AB}^2 + \right. \\ \nonumber && \;\;\;\;\;\;\; (1+(s_{BC}+s_{AC})\rho) P_{C}^2 + (1+(s_{BC}+s_{AB})\rho) P_{B}^2 + (1+(s_{AC}+s_{AB})\rho) P_{A}^2 + \\ \nonumber && \;\;\;\;\;\;\; \left.(1+(s_{BC}+s_{AC}+s_{AB})\rho) P_{0}^2 \right] (\Delta p)^2, \label{eq:powererror}\end{aligned}$$ where the quantities $s$ are the signs of the binary interference terms that appear in $\delta$, e.g. $s_{AB}=\mathrm{sign}(I(A,B))$. Fig. \[fig:powererror\] shows a plot of $\Delta \rho/\Delta p$ as a function of the position in the diffraction pattern. The curve has divergences wherever $\delta$ has a zero. These are the only points that have to be avoided. Otherwise, the relative power stability of the source translates with factors close to unity into the relative error of $\rho$. Obviously, Eq. \[eq:powererror\] is also exactly the formula for the propagation of independent random errors of any origin in the measurements, if they are all of the same relative magnitude. However, if we use a photon counting technique, then the random error of each measurement follows from the Poissonian distribution of the photocounts. In this case, the (relative) random error of $P_x$ is proportional to $1/\sqrt{P_x}$, where $x$ is any of the eight combinations. As a consequence the random error of $\rho$ will be proportional to the same expression with all the $P_x^2$ replaced by $P_x$. 
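The Poissonian propagation just described can be checked by direct simulation. The sketch below (hypothetical amplitudes, background level and count scale of our own choosing) draws independent Poisson counts for the eight combinations and looks at the spread of the resulting $\rho$ values:

```python
import numpy as np

rng = np.random.default_rng(3)

def rho_of(n):
    # epsilon / delta computed from the eight count values
    eps = (n['ABC'] - n['AB'] - n['BC'] - n['CA']
           + n['A'] + n['B'] + n['C'] - n['0'])
    delta = (abs(n['AB'] - n['A'] - n['B'] + n['0'])
             + abs(n['BC'] - n['B'] - n['C'] + n['0'])
             + abs(n['CA'] - n['C'] - n['A'] + n['0']))
    return eps / delta

# Born-rule mean counts for hypothetical amplitudes, plus a flat background
a, b, c, bg, scale = 1.0, 0.8 * np.exp(0.7j), 0.6 * np.exp(2.1j), 0.01, 1e6
P = lambda *amps: abs(sum(amps)) ** 2
mean = {'A': P(a), 'B': P(b), 'C': P(c), 'AB': P(a, b), 'BC': P(b, c),
        'CA': P(c, a), 'ABC': P(a, b, c), '0': 0.0}
mean = {k: scale * (v + bg) for k, v in mean.items()}

# shot noise only: each combination is an independent Poisson count
rhos = np.array([rho_of({k: rng.poisson(v) for k, v in mean.items()})
                 for _ in range(2000)])
# mean(rhos) is compatible with zero; std(rhos) shrinks like 1/sqrt(scale)
```

Note that the flat background drops out of `eps` (its coefficients sum to $1-3+3-1=0$), which is exactly the role of the $p_0$ measurement in the definition of $\epsilon$.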
While it appears that drifting or fluctuating power can be mitigated, a worse problem is that in our realization of the unblocking of slits every pattern could potentially have slightly different transmission. Possible reasons for this are dirt, or incomplete etching of the metal layer, or inhomogeneities in the glass substrate or the antireflection coating layers. In order to avoid any of these detrimental possibilities the next implementation of the slits will be air slits in a steel membrane. ![Fluctuation $\Delta\rho$ of $\rho$ caused by fluctuating source power $\Delta p$ (solid line). The horizontal axis is the spatial coordinate in the far field of the three slits. The dotted line shows a scaled three-slit diffraction pattern as a position reference.[]{data-label="fig:powererror"}](powererror.eps){width="80.00000%"} As a second source of systematic errors we have identified the unwanted transmission of supposedly opaque parts of the slit and blocking mask. This by itself would not cause a non-zero $\rho$, but combined with small errors in the alignment of the blocking mask it will yield a slightly different aperture transmission function for each of the eight combinations, rather than simply the same open and closed slits every time. If the slits were openings in a perfectly opaque mask there would be no effect, since they are not being moved between the measurements of different combinations. In practice, we found that all our earlier masks had a few percent of unwanted transmission as opposed to the current one which has an unwanted transmission smaller than 0.1%. Fig. \[fig:blockingerror\] shows the results of a simulation assuming the parameters of the current mask, which seems to be good enough to avoid this kind of systematic error at the current level of precision.
![Value of $\rho$ in the diffraction pattern of three slits for the following set of parameters: $30\;\mu\mathrm m$ slit size, $100\;\mu\mathrm m$ slit separation, 800 nm wavelength, $100\;\mu\mathrm m$ opening size, 5% unwanted mask transmission, and a set of displacements of the blocking mask uniformly chosen at random from the interval \[0,$10\;\mu\mathrm m$\][]{data-label="fig:blockingerror"}](blockingerror.eps){width="80.00000%"} Finally, there is a source of systematic error, which is intrinsically linked to the actual objective of this measurement. We set out to check the validity of Born’s rule, that probabilities are given by absolute squares of amplitudes. Yet, any real detector will have some nonlinearity. In a counting measurement the effect of dead-time will limit the linearity severely, even at relatively low average count rates. A typical specification for an optical power meter is 0.5% nonlinearity within a given measurement range. The measurement of all eight combinations involves a large dynamic range. From the background intensity to the maximum with all three slits open, this could be as much as six orders of magnitude. Fig. \[fig:nlerror\] shows that 1% nonlinearity translates into a non-zero value of $\rho$ of up to 0.007. For the measurements shown above the mean count rate was about 80,000 counts per second. Given a specified dead time of our detector of 50 ns, we expect the deviation from linearity to be about 0.4% and the resulting apparent value of $\rho \approx 0.003$. ![Value of $\rho$ in the diffraction pattern of three slits for a 0.5% nonlinear detector, where the ratio between the maximum detected power and the minimum detector power is 100.[]{data-label="fig:nlerror"}](nlerror.eps){width="90.00000%"} All of these systematics are potential contributors to a non-zero mean $\rho$. 
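The effect of a small detector nonlinearity can be reproduced in a few lines. The sketch below (hypothetical amplitudes and a simple quadratic saturation model of our own choosing, not the actual detector response) shows how a sub-percent nonlinearity alone generates an apparent nonzero $\rho$:

```python
import numpy as np

def rho_of(p):
    # epsilon / delta with ideal zero background
    eps = p['ABC'] - p['AB'] - p['BC'] - p['CA'] + p['A'] + p['B'] + p['C']
    delta = (abs(p['AB'] - p['A'] - p['B'])
             + abs(p['BC'] - p['B'] - p['C'])
             + abs(p['CA'] - p['C'] - p['A']))
    return eps / delta

# ideal Born-rule intensities for hypothetical amplitudes
a, b, c = 1.0, 0.8 * np.exp(0.7j), 0.6 * np.exp(2.1j)
P = lambda *amps: abs(sum(amps)) ** 2
ideal = {'A': P(a), 'B': P(b), 'C': P(c), 'AB': P(a, b),
         'BC': P(b, c), 'CA': P(c, a), 'ABC': P(a, b, c)}

# quadratic saturation model of the detector: measured = P (1 - alpha P / Pmax)
alpha, pmax = 0.005, max(ideal.values())
meas = {k: v * (1 - alpha * v / pmax) for k, v in ideal.items()}
rho_ideal, rho_meas = rho_of(ideal), rho_of(meas)
# rho_ideal vanishes, while the 0.5% nonlinearity alone fakes a nonzero rho_meas
```

Because the nonlinear term is quadratic in the intensity, it does not cancel in the alternating sum that defines $\epsilon$, so even a detector well within a typical linearity specification mimics a violation of the second sum rule.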
From the above calculations and our efforts to stabilize the incident power and improvements in the mask properties, we conclude that while detector nonlinearities may have contributed something, the main source of systematic error must be the inhomogeneities in the unblocking mask. Hopefully air slits will bring a significant improvement. Discussion, conclusion and future work ====================================== In this experiment, we have attempted to test Born’s rule for probabilities. This is a null experiment, but due to experimental inaccuracies we have measured a value of $\rho$ which is about $10^{-2} \pm 10^{-3}$. We have analyzed some major sources of systematic errors that could affect our experiments and we will try to reduce their influence in future implementations. Further, we plan to replace the laser source by a heralded single photon source [@Bocquillon08]. This will ensure the particle nature of light both during emission and detection and give us the advantage that we can count the exact number of particles entering the experiment. At this point we don’t know of any other experiment that has tried to test Born’s rule using three-path interference, therefore we cannot judge how well we are doing. However, our collaborators [@Cory08a] are undertaking an interferometric experiment using neutrons, which will perform the test in a completely different system. These two approaches are complementary and help us in our quest to estimate the extent of the validity of the Born interpretation of the wavefunction. Acknowledgements ================ Research at IQC and Perimeter Institute was funded in part by the Government of Canada through NSERC and by the Province of Ontario through MRI. Research at IQC was also funded in part by CIFAR. This research was partly supported by NSF grant PHY-0404646. U.S. thanks Aninda Sinha for useful discussions.
--- abstract: 'We consider a scenario where $N$ users send packets to a common access point. The receiver decodes the message of each user by treating the other users’ signals as noise. Associated with each user is its channel state and a finite queue which varies with time. Each user allocates his power and the admission control variable dynamically to maximize his expected throughput. Each user is unaware of the states, and actions taken, by the other users. This problem is formulated as a Markov game for which we show the existence of equilibrium and an algorithm to compute the equilibrium policies. We then show that when the number of users exceeds a particular threshold, the throughput of every user is the same at all the equilibria. Furthermore the equilibrium policies of the users are invariant as long as the number of users remains above the latter threshold. We also show that each user can compute these policies using a sequence of linear programs which does not depend upon the parameters of the other users. Hence, these policies can be computed by each user without any information or feedback from the other users. We then provide numerical results which verify our theoretical results.' author: - bibliography: - 'GLOBECOMM\_REFRENCES.bib' title: Large Player games on Wireless Networks --- \[sec:Introduction\]Introduction ================================ There has been a tremendous growth of wireless communication systems over the last few years. The success of wireless systems is primarily due to the efficient use of their resources. The users are able to obtain their quality of service efficiently in a time varying radio channel by adjusting their own transmission powers. Distributed control of resources is an interesting area of study since its alternative involves high system complexity and large infrastructure due to the presence of a central controller.
Non cooperative game theory serves as a natural tool to design and analyze wireless systems with distributed control of resources [@NonCop]. In [@Lai], a distributed resource allocation problem using game theory on the multiple access channel (MAC) is considered. The authors considered the problem where each user maximizes their own transmission rate in a selfish manner, while knowing the channel gains of all other users. Scutari et al. [@Scutari1; @Scutari2] analyzed competitive maximization of mutual information on the multiple access channel subject to power constraints. They provide sufficient conditions for the existence of a unique Nash equilibrium. In a similar setup, [@Qiao] showed that for maximizing the effective capacity of each user, there exists a unique Nash equilibrium. Heikkinen [@Heikkinen] analyzed distributed power control problems via potential games. In [@Debbah], the authors consider a MAC model where each user knows only their own channel gain and only the statistics of the channel gains of other users. The problem is formulated as a Bayesian game, for which they show the existence of a unique Nash equilibrium. Altman et al. [@Uplink] studied the problem of maximizing the throughput of saturated users (a user always has a packet to transmit) who have a Markov modeled channel and are subject to power constraints. They considered both the centralized scenario where the base station chooses the transmission power levels for all users as well as the decentralized scenario where each user chooses its own power level based on the condition of its radio channel. In [@Wiopt1], the authors showed the convergence of the iterative algorithm proposed in [@Uplink]. Altman et al. [@Globe] later considered the problem of maximizing the throughput of competitive users in a distributed manner subject to both power and buffer constraints. The works considered so far compute equilibrium policies for games with a fixed number of users.
As the number of users increases, the corresponding equilibrium policies of the users change and the complexity of computing these policies also increases. To overcome these problems, population games [@sandholm2010population] model the number of users present in the system as being significantly large, such that each user can be modeled as a selfish player playing against a continuum of players. One then employs techniques such as evolutionary dynamics to compute the Nash equilibrium policy of a user in this model. Using the framework of population games, [@Chandramani] models a mobile cellular system, where users adjust their base station associations and dynamically control their transmitter power to adapt to their time varying radio channels. Another technique to overcome the latter problems was developed in [@MFG_Lasry; @MFG_Lions]. Here each user interacts with other players only through their average behavior, called the mean field. Note that all the users in these models are considered to be interchangeable [@sandholm2010population; @MFG_Lasry; @MFG_Lions]. An application of mean field modeling in resource allocation was considered in [@Meriaux], where each user maximizes their own signal to interference and noise ratio (SINR). The authors showed that this problem, as the number of users tends to infinity, can be modeled as a mean field game. The authors in [@Wiopt1] consider a different form of analysis for a large number of users. In the model they considered, they showed that once the number of players in their game exceeds some fixed threshold, the Nash equilibrium policies of each user become fixed and can be precomputed in linear time. The authors refer to such a policy as an Infinitely invariant Nash equilibrium (IINE) policy. They further showed that each user requires no information or feedback from other users to compute these policies. However the saturated model considered in [@Wiopt1] does not take into account the rate at which data packets arrive from the higher layer.
Hence using policies which are optimal in the saturated model may cause arbitrarily large queues at the transmitter. The long delays produced by these policies significantly reduce the quality of service of wireless systems. To mitigate the effect of the latter problem, we consider that each user has a finite buffer where the incoming packets are stored before transmission. We also impose an average queue constraint on each user. The user then must dynamically allocate power and control the number of packets arriving at its buffer to satisfy its average power as well as queue constraints. Thus unlike the saturated scenario, the user's actions affect his state transitions. We model this problem as a Constrained Markov decision game with independent state information [@Proof]. Besides providing an algorithm to compute Nash equilibria of this game, we also prove the existence of IINE policies. In the saturated scenario, the IINE was computed using a greedy algorithm. Here the IINE is computed by solving a finite number of linear programs (LPs), where at each stage the current LP requires the solutions of the LPs of the previous stages. The method of proving this result is different and more general as compared to the saturated scenario. [*Notations:*]{} Let $g_{i}$ denote an element of the set ${\mathcal{G}}_{i}$ of possible values of a certain parameter associated with the $i$th user. The set ${\mathcal{G}}=\prod_{i=1}^{N}{\mathcal{G}}_{i}$ denotes the Cartesian product of these sets. We represent $g=(g_{1},\cdots,g_{N})$, $g\in\mathcal{G}$ as an element of the set $\mathcal{G}$. The set ${\mathcal{G}}_{-i}=\prod_{j=1,j\neq i}^{N}{\mathcal{G}}_{j}$ denotes the Cartesian product of the sets other than ${\mathcal{G}}_{i}$. Any element of this set is represented by $g_{-i}=(g_{1},\cdots,g_{i-1},g_{i+1},\cdots,g_{N})$, $g_{-i}\in\mathcal{G}_{-i}$. $|{\mathcal{G}}|$ denotes the cardinality of the set ${\mathcal{G}}$.
\[sec:System-model\]System model ================================ We denote by $n$ the time index of the $n$th time slot of a discrete time system model. We represent by ${\mathcal{N}}=\{1,2,3,\cdots,N\}$ the set of users sending messages to a common receiver over a wireless medium. We assume that the fading channel gain remains constant over each time slot. We represent by $h_{i}[n]$ the fading channel gain of the $i$th user in the $n$th time slot. The channel gain belongs to a finite, non-negative, ordered set ${\mathcal{H}}_{i}=\{h_{i}^{0},h_{i}^{1},\cdots,h_{i}^{K}\}$, where $|{\mathcal{H}}_{i}|=K+1$. The finite set of discrete channel gains is obtained by the quantization of the channel state information [@Uplink; @Globe; @Wiopt1]. We assume that the fading channel gain process $h_{i}[n]$ is stationary and ergodic. The $i$th user transmits with power $p_{i}[n]$ in the $n$th time slot and the value $p_{i}[n]$ belongs to a finite ordered set ${\mathcal{P}}_{i}=\{p_{i}^{0},p_{i}^{1},\cdots,p_{i}^{L}\}$, where $|{\mathcal{P}}_{i}|=L+1$. The set ${\mathcal{P}}_{i}$ is obtained by quantization of the transmit power levels [@Uplink; @Globe]. The set ${\mathcal{P}}_{i}$ includes zero, i.e., $p_{i}^{0}=0$, as the user may not transmit any message in a time slot. At time slot $n$, user $i$ can transmit up to $q_{i}[n]$ packets from his finite buffer of size $Q_{i}$, i.e., the value $q_{i}$ belongs to the set ${\mathcal{Q}}_{i}=\{0,1,\cdots,Q_{i}\}$. Also at time slot $n$, user $i$ receives $w_{i}[n]$ packets from the higher layer, according to a given independent and identically distributed (i.i.d.) distribution $F_{i}$. The incoming packets may be accepted or rejected by the user, which is indicated by the variable $c_{i}[n]\in\{0,1\}$, where $c_{i}=1$ and $c_{i}=0$ indicate acceptance and rejection respectively. Each user can accept packets until the buffer is full, while the remaining packets are dropped.
We assume that in a given time slot, all arrivals from the upper layer occur after transmission. The queue process $q_{i}[n]$ evolves as, $$\begin{aligned} q_{i}[n+1]=\min([q_{i}[n]+c_{i}[n]w_{i}[n]-1_{\{p_{i}[n]>0\}}]^{+},Q_{i}),\end{aligned}$$ where $1_{\mathcal{E}}$ denotes the indicator function of the event $\mathcal{E}$ and $x^{+}$ indicates max$(x,0)$. The set of states ${\mathcal{X}}_{i}$ of user $i$ is the Cartesian product of the set of channel states ${\mathcal{H}}_{i}$ and the set of queue states ${\mathcal{Q}}_{i}$, i.e. ${\mathcal{X}}_{i}:={\mathcal{H}}_{i}\times{\mathcal{Q}}_{i}$. The set of actions ${\mathcal{A}}_{i}$ of user $i$ is the Cartesian product of the set $\{0,1\}$ and the set of transmit powers ${\mathcal{P}}_{i}$, i.e. ${\mathcal{A}}_{i}:=\{0,1\}\times{\mathcal{P}}_{i}$. Any elements of the sets ${\mathcal{X}}_{i}$ and ${\mathcal{A}}_{i}$ are represented as $x_{i}:=(h_{i},q_{i})$ and $a_{i}:=(c_{i},p_{i}),\;c_{i}\in\{0,1\}$ respectively. Each user has average power and average queue constraints of $\overline{P_{i}}$ and $\overline{Q_{i}}$ respectively. We assume that each user knows their instantaneous channel gain and queue state but is not aware of the channel gains, queue states and transmit powers of the other users. The message of each user is decoded by treating the signals of the other users as noise. We assume that only user $i$ and the receiver have complete information about the number of packets in his buffer, and that the arrival process $w_{i}[n]$ of user $i$ is independent of his fading process $h_{i}[n]$. The reward function associated with user $i$, when $h_{i}$, $q_{i}$ and $p_{i}$ are the instantaneous channel gain, queue state and transmit power of the $i$th user respectively, is given by, $$\begin{aligned} t_{i}(x,a)\triangleq\log_{2}\biggl(1+\frac{h_{i}p_{i}\cdot1_{\{q_{i}>0\}}}{N_{0}+\sum_{j=1,j\neq i}^{N}h_{j}p_{j}\cdot1_{\{q_{j}>0\}}}\biggr),\end{aligned}$$ where $N_{0}$ is the receiver noise variance.
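The buffer recursion above is straightforward to simulate. The following sketch (with hypothetical parameter values; one packet is served per transmitting slot, as in the indicator term of the recursion) implements one slot of the update:

```python
import random

def queue_step(q, c, w, p, Q_max):
    """One slot of the buffer recursion: one packet leaves if the user
    transmits (p > 0), c*w admitted arrivals enter, and the result is
    clamped to [0, Q_max]."""
    return min(max(q + c * w - (1 if p > 0 else 0), 0), Q_max)

# A short sample path with a hypothetical buffer of size Q_max = 2.
random.seed(0)
q = 0
for _ in range(5):
    w = random.randint(0, 2)   # arrivals from the higher layer
    c = 1 if q < 2 else 0      # admit only while the buffer has room
    p = 1 if q > 0 else 0      # transmit whenever the queue is non-empty
    q = queue_step(q, c, w, p, Q_max=2)
```

Note that the admission and transmission rules in the loop are illustrative; in the model they are drawn from the user's (randomized) policy.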
\[sec:Problem-Formulation\]Problem Formulation ============================================== Here we define the queue and power allocation policies for each user. Furthermore we define the time average rewards and constraints for each user. We then formulate the latter as a Markov game, and show the existence of Nash equilibria for the game. Each user utilizes a stationary policy $z_{i}(a_{i}|x_{i}),$ which represents the conditional probability of using action $a_{i}\in\mathcal{A}_{i}$ at state $x_{i}\in\mathcal{X}_{i}$. Corresponding to each stationary policy $u_{i}$ and initial distribution $\beta_{i}$ of user $i$ over the set of states ${\mathcal{X}}_{i}$, we obtain a probability distribution called the occupation measure $z_{i}(x_{i},a_{i})$ on the Cartesian set $\mathcal{X}_{i}\times\mathcal{A}_{i}$. It is defined as, $$\begin{aligned} z_{i}(\beta_{i},u_{i};x_{i},a_{i}):=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{n=1}^{T}\lambda_{\beta_{i}}^{u_{i}}(x_{i}[n]=x_{i},a_{i}[n]=a_{i}).\label{Def_Occ_msr}\end{aligned}$$ Under the assumption that the MDP is unichain, the occupation measure $z_{i}(\beta_{i},u_{i};x_{i},a_{i})$ is well defined for a stationary policy $u_{i}$, is independent of the initial distribution $\beta_{i}$ (Theorem $4.1$, [@CMDPBOOK]) and is related to the corresponding stationary policy as, $$\begin{aligned} z_{i}(a_{i}|x_{i})=\frac{z_{i}(x_{i},a_{i})}{\sum_{a_{i}\in{\mathcal{A}}_{i}}z_{i}(x_{i},a_{i})},\text{\,}a_{i}\in\mathcal{A}_{i},\text{\,\ensuremath{x_{i}}\ensuremath{\in\mathcal{X}_{i}}. }\label{Calc_stat}\end{aligned}$$ It can be verified that the above MDP is unichain. In this work, we shall consider only the occupation measures, as the stationary policy can be obtained from them using (\[Calc\_stat\]), and refer to the two interchangeably.
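Equation (\[Calc\_stat\]) is a pointwise normalization: each entry of the occupation measure is divided by the marginal mass of its state. A minimal sketch, using a toy occupation measure on hypothetical state labels:

```python
def stationary_policy(z):
    """Recover z_i(a|x) from an occupation measure z[(x, a)]: divide each
    entry by the marginal mass of its state x, as in (Calc_stat)."""
    marginal = {}
    for (x, _), mass in z.items():
        marginal[x] = marginal.get(x, 0.0) + mass
    return {(x, a): mass / marginal[x]
            for (x, a), mass in z.items() if marginal[x] > 0}

# Toy occupation measure: state "h0" splits its mass between two actions.
z = {("h0", 0): 0.25, ("h0", 1): 0.25, ("h1", 1): 0.5}
policy = stationary_policy(z)
# policy: {("h0",0): 0.5, ("h0",1): 0.5, ("h1",1): 1.0}
```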
Given policies $z_{i}$, the average rate obtained by user $i$ is, $$\begin{aligned} T_{i}(z)=\sum_{x_{i},a_{i}}R_{i}^{z_{-i}}(x_{i},a_{i})z_{i}(x_{i},a_{i}),\label{eq:Avg_Rwrd}\end{aligned}$$ where the instantaneous rate $R_{i}^{z_{-i}}(x_{i},a_{i})$ of user $i$ is defined as, $$\begin{aligned} R_{i}^{z_{-i}}(x_{i},a_{i})=\sum_{x_{-i}}\sum_{a_{-i}}\bigl(\prod_{j=1,j\neq i}^{N}z_{j}(x_{j},a_{j})\bigr)t_{i}(x,a).\label{eq:Instant_Rwrd}\end{aligned}$$ Similarly, we define the average power and average queue length under policy $z_{i}$ for user $i$ respectively as, $$P_{i}(z_{i})=\sum_{x_{i},a_{i}}p_{i}\cdotp z_{i}(x_{i},a_{i}),\ Q_{i}(z_{i})=\sum_{x_{i},a_{i}}q_{i}\cdotp z_{i}(x_{i},a_{i}).\label{eq:Power_cost}$$ Any policy $z_{i}$ which satisfies the user’s queue and transmit power constraints is called a feasible policy. Hence, we define the set of feasible policies $\mathcal{Z}_{i}$ as, $$\begin{aligned} \mathcal{Z}_{i}= & \Biggl\{ z_{i}(x_{i},a_{i}),\,x_{i}\in\mathcal{X}_{i},\,a_{i}\in\mathcal{A}_{i}\Bigg|\sum_{(x_{i},a_{i})}z_{i}(x_{i},a_{i})=1,\nonumber \\ & \sum_{(x_{i},a_{i})}[1_{y_{i}}(x_{i})-P_{x_{i}a_{i}y_{i}}]z_{i}(x_{i},a_{i})=0,\;\forall\;y_{i}\in\mathcal{X}_{i},\nonumber \\ & P_{i}(z_{i})\leq\overline{P}_{i},\,Q_{i}(z_{i})\leq\overline{Q}_{i},\label{eq:Polyhedron}\\ & \;z_{i}(x_{i},a_{i})\geq0,\;\forall\;(x_{i},a_{i})\in\mathcal{X}_{i}\times\mathcal{A}_{i}\Biggr\}.\nonumber \end{aligned}$$ Each user selects a feasible policy to maximize his average rate (\[eq:Avg\_Rwrd\]). Hence, we model this problem as a Non cooperative Markov game. A feasible policy $z_{i}^{*}\in\mathcal{Z}_{i}$ of user $i$ is called a best response policy if, $$\begin{aligned} T_{i}(z_{i}^{*},z_{-i})-T_{i}(z_{i},z_{-i})\geq0,\;\forall z_{i}\in\mathcal{Z}_{i}.\label{eq:Best_Rep_LP}\end{aligned}$$ We represent the set of all such policies as $\mathcal{B}_{i}(z_{-i})$.
This Markov game is represented as the following tuple, $$\begin{aligned} {\scriptstyle \Gamma_{\mathcal{N}}=\Biggr[\{N\},\{\mathcal{X}_{i}\}_{i\in\mathcal{N}},\{\mathcal{A}_{i}\}_{i\in\mathcal{N}},\{t_{i}\}_{i\in\mathcal{N}},\{\overline{P}_{i}\}_{i\in\mathcal{N}},\{\overline{Q}_{i}\}_{i\in\mathcal{N}},\{F_{i}\}_{i\in\mathcal{N}}\Biggr]}.\end{aligned}$$ We define an $\epsilon-$Nash equilibrium for this game as follows. \[def:-Nash\_eqb\_defn\]A feasible policy $z^{*}\in\mathcal{Z}$ for all users is called an $\epsilon-$Nash equilibrium ($\epsilon-$NE) if for each user $i\in\mathcal{N}$ and for any feasible policy $v_{i}\in\mathcal{Z}_{i}$, we have $$\begin{aligned} T_{i}(z^{*})-T_{i}(v_{i},z_{-i}^{*})\geq-\epsilon.\label{eq:NashEqb_defn}\end{aligned}$$ A policy is called a Nash equilibrium if $\epsilon=0.$ In the next section, we prove the existence of a Nash equilibrium and provide an iterative best response algorithm to compute it. Existence and Computation of Nash equilibria. ============================================= The existence of Nash equilibria for this game has been proved in [@Globe]. We now propose an iterative best response algorithm to compute an equilibrium policy. **Set** iteration index $k=0$. **Initialize** $z(0)\in \mathcal{Z}$ and **set** $\epsilon>0$. **Repeat:** for each user $i$, set $z_i(k+1)\in \mathcal{B}_i(z_{-i}(k))$ and $k \gets k+1$, until no user can improve his reward by more than $\epsilon$. **Return** $z(k)$. Potential games --------------- \[Ptnl\_fnc\_dfn\]A potential function $\hat{T}:\mathcal{Z}\longmapsto\mathbb{R}$ for the Markov game $\Gamma_{\mathcal{N}}$ is a function which for all users $i\in\mathcal{N}$, any pair of policies $(z_{i},\hat{z}_{i})$ of user $i$ and for any multi-policy $z_{-i}$ of users other than user $i$ satisfies, $$\begin{aligned} T_{i}(z_{i},z_{-i})-T_{i}(\hat{z}_{i},z_{-i})=\hat{T}(z_{i},z_{-i})-\hat{T}(\hat{z}_{i},z_{-i}).\label{Pot_Defn}\end{aligned}$$ The next condition can be used to check whether the game $\Gamma_{\mathcal{N}}$ has a potential function.
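The iterative best response dynamics can be sketched in a few lines. In the following toy example the LP best response over $\mathcal{Z}_{i}$ is replaced by a brute-force search over a small hypothetical set of candidate power levels (gains, candidate sets, and the absence of constraints are all illustrative assumptions); the loop structure is the same as in the algorithm above:

```python
import math

def best_response_dynamics(policy_sets, reward, eps=1e-9, max_iter=100):
    """Cycle through the users, replacing each user's policy with a best
    response to the others, until no user gains more than eps."""
    z = [ps[0] for ps in policy_sets]
    for _ in range(max_iter):
        improved = False
        for i, ps in enumerate(policy_sets):
            best = max(ps, key=lambda zi: reward(i, zi, z))
            if reward(i, best, z) > reward(i, z[i], z) + eps:
                z[i] = best
                improved = True
        if not improved:
            break
    return z

# Two users with hypothetical gains; candidate powers {0, 1, 2}; each
# user's reward is its rate with the other's signal treated as noise.
h = [1.0, 0.5]
def rate(i, p_i, z):
    interference = sum(h[j] * z[j] for j in range(len(z)) if j != i)
    return math.log2(1 + h[i] * p_i / (1.0 + interference))

eq = best_response_dynamics([[0, 1, 2], [0, 1, 2]], rate)
```

Without power or queue constraints the rate is increasing in a user's own power, so the dynamics settle at the maximal power pair; the constrained game of the paper is richer, but the iteration is identical in shape.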
\[Ver\_ptnl\] Suppose there exists a function $t$ such that for any user $i\in{\mathcal{N}}$, for all state-action pairs $(x_{-i},a_{-i})$ of the other users and any pair of state-action pairs $(x_{i},a_{i})$ and $(\hat{x}_{i},\hat{a}_{i})$ of user $i$, the function $t_{i}(x,a)$ satisfies, $$\begin{aligned} t_{i}(x_{i},a_{i},x_{-i},a_{-i})-t_{i}(\hat{x}_{i},\hat{a}_{i},x_{-i},a_{-i})=t(x_{i},a_{i},x_{-i},a_{-i})-t(\hat{x}_{i},\hat{a}_{i},x_{-i},a_{-i}).\label{Pot_cndtn}\end{aligned}$$ Then there exists a potential function for the Markov game $\Gamma_{\mathcal{N}}$. Furthermore, the function $$\begin{aligned} \hat{T}(z)=\sum_{x\in{\mathcal{X}}}\sum_{{a}\in{\mathcal{A}}}{\biggl[\prod_{l=1}^{N}{z_{l}(x_{l},a_{l})}\biggr]t(x,a)}\end{aligned}$$ is a potential function for the Markov game. The next result shows that the Markov game $\Gamma_{\mathcal{N}}$ has a potential function. \[Chk\_ptnl\] The game $\Gamma_{\mathcal{N}}$ has a potential function. Define the function $t$ as, $$t(x,a)\triangleq\log_{2}\biggl(1+\frac{\sum_{j=1}^{N}h_{j}p_{j}\cdot1_{\{q_{j}>0\}}}{N_{0}}\biggr).$$ The reader can verify that this choice of $t$ satisfies condition (\[Pot\_cndtn\]). The proof then follows from Theorem \[Ver\_ptnl\]. Using the latter results, we show that the iterative best response algorithm (Algorithm \[Best\_resp\_algo\]) converges to an $\epsilon-$Nash equilibrium in finitely many iterations. \[thm:Algorithm\_convergence\_THm\] 1. When the error approximation is $\epsilon=0$ and the best response policy provided by the best response algorithm at each stage is a vertex of the polyhedron $\mathcal{Z}_{i}$, the algorithm computes a Nash equilibrium in a finite number of iterations. 2. When the error approximation is $\epsilon>0$, the best response algorithm computes an $\epsilon-$NE in finitely many steps. The proof is the same as that of Theorem $3$ in [@Arxiv2].
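Condition (\[Pot\_cndtn\]) holds here because $t_{i}(x,a)-t(x,a)=\log_{2}\frac{N_{0}}{N_{0}+\sum_{j\neq i}h_{j}p_{j}1_{\{q_{j}>0\}}}$ does not depend on user $i$'s own state and action. A quick numerical check of this invariance, with hypothetical received-power values:

```python
import math

N0 = 1.0  # hypothetical noise variance

def t_i(i, hp):
    """Rate of user i; hp[j] is the effective received power h_j p_j 1{q_j>0}."""
    other = sum(hp[j] for j in range(len(hp)) if j != i)
    return math.log2(1 + hp[i] / (N0 + other))

def t_pot(hp):
    """Candidate potential: rate of the sum signal against the noise floor."""
    return math.log2(1 + sum(hp) / N0)

others = [0.7, 1.3]           # fixed h_j p_j 1{q_j>0} of the other users
diffs = set()
for own in [0.0, 0.5, 2.0]:   # vary user 0's own state-action
    hp = [own] + others
    diffs.add(round(t_i(0, hp) - t_pot(hp), 12))
assert len(diffs) == 1        # the difference ignores user 0's choice
```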
Although the number of linear programs (LPs) required to compute the NE policies using the iterative best response algorithm is small, each such computation requires evaluating the objective function of the LP, which quickly becomes computationally expensive. Indeed, the order complexity of computing the objective function is $O\left(\left(2(Q_{i}+1)(L+1)(K+1)\right)^{N-1}\right)$. Hence even for a moderate number of users, the iterative best response algorithm becomes infeasible in practical amounts of time. In the next section, we overcome this problem by introducing the concept of an Infinitely invariant Nash equilibrium (IINE). Games with large number of users. ================================= We observe that as the number of users tends to infinity, the equilibrium policies of each user become fixed. Indeed, the equilibrium policies eventually belong to a fixed set, which we shall characterize in this section. To do so, we first give the definition of an Infinitely invariant Nash equilibrium (IINE) [@Wiopt1]. \[def:IINE\_defn\] A policy $z_{i}^{*}$ of the $i$th user is referred to as an *Infinitely invariant Nash equilibrium* *(IINE) policy* if for some natural number $N^{*}$ and every finite subset of users $\mathcal{N}\subseteq\mathbb{Z}^{+}$ such that $|\mathcal{N}|\geq N^{*}$, the policy $z_{i}^{*}$ is a Nash equilibrium policy for the game $\Gamma_{\mathcal{N}}$, for all users $i\in\mathcal{N}$. The existence of an IINE ensures that the equilibrium policy of each user remains the same as long as the number of users remains beyond the threshold $N^{*}$. In the next theorems, we show the existence of an IINE under the assumption of interchangeability of users [@MFG_Lions; @MFG_Lasry]. We first proceed by defining sets which contain such policies.
We define iteratively the set of $k$th sensitive policies as, $$\mathcal{S}_{i}^{k}=\arg\max\big\{ l_{i}^{k}(z_{i})\ \big|\ z_{i}\in\mathcal{S}_{i}^{k-1}\big\},\label{eq:Kth_sensitive_set}$$ where the set $\mathcal{S}_{i}^{0}=\mathcal{Z}_{i}$ and the linear function $$l_{i}^{k}(z_{i})=\sum_{x_{i},a_{i}}(-1)^{k+1}(h_{i}p_{i})^{k}\cdot z_{i}(x_{i},a_{i}).\label{eq:Kth_sensitive_function}$$ We now define the set of infinitely sensitive policies $\mathcal{S}_{i}$ of user $i$ as $$\mathcal{S}_{i}=\cap_{k=1}^{\infty}\mathcal{S}_{i}^{k}.\label{eq:Infty_Sensitive_set}$$ From (\[eq:Kth\_sensitive\_set\]), we observe that $$\mathcal{S}_{i}^{k}\subseteq\mathcal{S}_{i}^{k-1}.\label{eq:Sensitive_set_prop}$$ Hence one can restate (\[eq:Infty\_Sensitive\_set\]) as $\mathcal{S}_{i}=\lim_{k\rightarrow\infty}\mathcal{S}_{i}^{k}.$ In the next theorem, we show that the set containing all the IINE policies of user $i$ is precisely the set $\mathcal{S}_{i}$. \[thm:NAS\_IINE\] Suppose the set $\mathbb{Z}^{+}$ of strictly positive integers can be partitioned into finite sets $\mathcal{N}_{1},\mathcal{N}_{2},\cdots,\mathcal{N}_{k}$ such that for all users $i$ and $j$ of a set $\mathcal{N}_{l},\,1\leq l\leq k$, we have $\overline{P}_{i}=\overline{P}_{j}$, $\overline{Q}_{i}=\overline{Q}_{j}$, $\mathcal{H}_{i}=\mathcal{H}_{j}$, $\mathcal{P}_{i}=\mathcal{P}_{j}$, $\mathcal{Q}_{i}=\mathcal{Q}_{j}$ and $F_{i}=F_{j}$. Furthermore, assume there exists a policy $z_{i}\in\mathcal{Z}_{i}$ such that $l_{i}^{1}(z_{i})>0$. Then all the IINE policies of user $i$ belong to the set $\mathcal{S}_{i}$. Conversely, every policy of the set $\mathcal{S}_{i}$ is an IINE policy. The previous theorem shows that an IINE policy exists whenever the set $\mathcal{S}_{i}$ is non-empty. We show this in the next theorem and also provide a *finite* sequence of iterated linear programs to compute one such policy.
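Over a finite candidate set (in the paper, the optima of a sequence of LPs over $\mathcal{Z}_{i}$), the nesting $\mathcal{S}_{i}^{k}\subseteq\mathcal{S}_{i}^{k-1}$ can be computed by an iterated argmax. A sketch with hypothetical state-action SNR values $h_{i}p_{i}$ and two toy candidate policies:

```python
def sensitive_sets(candidates, hp, K):
    """Iterate S^k = argmax over S^(k-1) of l^k(z), where
    l^k(z) = sum_s (-1)^(k+1) * hp[s]**k * z[s], as in the text.
    candidates: list of dicts mapping a state-action label s to z(s)."""
    S = list(candidates)
    for k in range(1, K + 1):
        def lk(z, k=k):
            return sum((-1) ** (k + 1) * hp[s] ** k * m for s, m in z.items())
        best = max(lk(z) for z in S)
        S = [z for z in S if abs(lk(z) - best) < 1e-12]
    return S

hp = {"a": 1.0, "b": 3.0, "c": 5.0}   # hypothetical values of h_i p_i
zA = {"b": 1.0}                # l^1 = 3, l^2 = -9
zB = {"a": 0.5, "c": 0.5}      # l^1 = 3, l^2 = -13
survivors = sensitive_sets([zA, zB], hp, K=2)
```

Here the two candidates tie at the first stage ($l^{1}=3$ for both), and only the second-stage objective separates them, which is exactly why deeper sensitivity levels are needed.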
\[thm:Existence\_IINE\] The set of infinitely sensitive policies $\mathcal{S}_{i}$ is nonempty. Furthermore, if we define $M$ to be the number of distinct elements in the set $\{h_{i}p_{i}|h_{i}\in\mathcal{H}_{i}\,,\,p_{i}\in\mathcal{P}_{i}\}$, then $\mathcal{S}_{i}=\mathcal{S}_{i}^{M}.$ Theorem (\[thm:Existence\_IINE\]) shows that rather than solving infinitely many linear programs as suggested by condition (\[eq:Infty\_Sensitive\_set\]), we need only solve $M$ (finitely many) linear programs. The integer $M$ is the number of distinct values of the SNR random variable $X_{i}$, which takes values in the set $\{h_{i}p_{i}|h_{i}\in\mathcal{H}_{i},p_{i}\in\mathcal{P}_{i}\}$. The objective functions (\[eq:Kth\_sensitive\_function\]) are only functions of the user's states and actions and do not depend upon any parameters of the other users. Hence these policies can be precomputed by each user without knowing any information from other users. When the number of users in the system crosses the threshold $N^{*},$ these policies are indeed Nash equilibrium policies. Hence there is no need to use the iterative best response algorithm. Indeed, for a large number of users, the complexity of computing the equilibrium policies becomes linear. Now, consider the scenario where $N\geq N^{*}$ and another new player joins the system. The new user employs his IINE policy, while the old users employ their previous IINE policies. Again by the definition of an IINE (\[def:IINE\_defn\]), these policies constitute an equilibrium for the resulting game of $N+1$ players. Once again, the use of IINE policies yields a significant reduction in complexity. The next theorem shows that all the IINE policies are interchangeable. \[thm:-Interchangeability\_IINE\] Let $z_{i}$ and $z_{i}^{*}$ represent two distinct IINE policies for each user $i$.
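For instance, with hypothetical quantized sets $\mathcal{H}_{i}=\{0,0.5,1\}$ and $\mathcal{P}_{i}=\{0,1,2\}$, several of the nine products $h_{i}p_{i}$ coincide, so far fewer than nine LPs are needed:

```python
H = [0.0, 0.5, 1.0]   # hypothetical channel gains
P = [0, 1, 2]         # hypothetical power levels
M = len({h * p for h in H for p in P})   # distinct SNR values h*p
# products collapse to {0.0, 0.5, 1.0, 2.0}, so only M = 4 linear programs
```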
Then, for each user $i,$ $T_{i}(z)=T_{i}(z^{*}).$ The latter theorem indicates that the users can employ any one of their IINE policies. Indeed, multiple IINE policies may exist, but all are equivalent in the sense that they provide the same reward to each user. Numerical Results ================= In this section, we validate our theoretical results using simulations. We denote the largest power index and the largest channel index by $L$ and $K$ respectively. The sets of channel states and power values are the same for every user $i$ and equal to $\left\{ 0,\frac{1}{K},\frac{2}{K},\cdots,1\right\}$ and $\left\{ 0,1,\cdots,L\right\}$ respectively. We consider a Markov fading model with channel state transition probabilities given by $P(0/0)=\frac{1}{2}$, $P(1/0)=\frac{1}{2}$, $P(K-1/K)=\frac{1}{2}$, $P(K/K)=\frac{1}{2}$ and $P(k-1/k)=\frac{1}{2}$, $P(k+1/k)=\frac{1}{2}$ $\left(1\leq k\leq K-1\right)$. The noise variance for each simulation is fixed to be 1. The maximum number of admissible packets in the buffer of every user is fixed to be $Q$. Hence the set of queue states for each user $i$ is $\left\{ 0,1,\cdots,Q\right\}.$ The arrival distribution of every user is Poisson with parameter $\lambda$. The power and queue constraints for each user are the same and are denoted as $\hat{P}$ and $\hat{Q}$ respectively. For each fixed set of parameters (scenario), we use the best response algorithm (\[Best\_resp\_algo\]) to compute a Nash equilibrium policy for $N_{max}$ games, where the number of players in the $N$th game $\left(1\leq N\leq N_{max}\right)$ is $N$ itself. We fix $N_{max}=4$ for all scenarios.
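The fading model above is a reflecting random walk on $\{0,\ldots,K\}$. A sketch that builds its transition matrix and approximates the stationary distribution by power iteration (for this particular chain, detailed balance gives the uniform distribution):

```python
def fading_chain(K):
    """Transition matrix of the reflecting random-walk fading model:
    boundary states hold or move inward w.p. 1/2 each; interior states
    move up or down w.p. 1/2 each."""
    P = [[0.0] * (K + 1) for _ in range(K + 1)]
    P[0][0] = P[0][1] = 0.5
    P[K][K] = P[K][K - 1] = 0.5
    for k in range(1, K):
        P[k][k - 1] = P[k][k + 1] = 0.5
    return P

def stationary(P, iters=10_000):
    """Approximate the stationary distribution by power iteration,
    starting from a point mass on state 0."""
    n = len(P)
    pi = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(fading_chain(3))   # converges to the uniform distribution
```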
\[Table1\] $Scenario$ $K$ $L$ $Q$ $\hat{P}$ $\hat{Q}$ $\lambda$ $M$ $N^{*}$ ------------ ----- ----- ----- ----------- ----------- ----------- ----- --------- $1$ $2$ $2$ $1$ $.50000$ $.500$ $.49$ $4$ $3$ $2$ $2$ $3$ $1$ $.95000$ $.500$ $.49$ $6$ $3$ $3$ $2$ $3$ $2$ $1.5500$ $1.00$ $.90$ $6$ $3$ $4$ $3$ $3$ $2$ $1.2800$ $.650$ $.60$ $7$ $3$ $5$ $3$ $3$ $3$ $2.1000$ $1.60$ $1.5$ $7$ $4$ $6$ $2$ $3$ $2$ $1.5500$ $.900$ $1.0$ $6$ $2$ $7$ $2$ $3$ $2$ $1.7000$ $.900$ $1.0$ $6$ $1$ : Simulation Parameters We list the parameters considered in the various scenarios in Table \[Table1\]. The last two columns include the number of linear programs $\left(M\right)$ required to compute the IINE policy and the minimum number of users $\left(N^{*}\right)$ at which the IINE policy becomes an equilibrium policy. In Figure \[Figure1\], we plot the $l_{2}$ norm distance between the NE policies and the IINE policy of user $1$, as the number of users $(N)$ varies from $1$ to $N_{max}.$ At each value of $N$, the best response algorithm is used to compute a NE policy ($z_{1}(N)$) of user $1$ for each scenario. Then in Figure \[Figure1\], we plot $||z_{1}(N)-z_{1}^{*}||_{2}$ for each scenario versus $N$, as $N$ varies from $1$ to $N_{max}$. $z_{1}^{*}$ denotes the invariant policy of user $1$ and is calculated by solving a sequence of linear programs as given in Theorem (\[thm:Existence\_IINE\]).
The $l_{2}$ norm between two policies $z_{1}(N)$ and $z_{1}^{*}$ is defined as $$||z_{1}(N)-z_{1}^{*}||_{2}=\sqrt{\sum_{x_{1},a_{1}}\left(z_{1}\left(x_{1},a_{1}\right)-z_{1}^{*}\left(x_{1},a_{1}\right)\right)^{2}}.$$ ![Plot of the $l_{2}$ norm distance between the NE policy and the IINE policy of the first user against the total number of users as it varies from $1$ to $N_{max}$.[]{data-label="Figure1"}](Two_norm_diff4){width="50.00000%" height="7cm"} We observe from Figure \[Figure1\] that the NE policy of user $1$ has become equal to the IINE policy when the number of users exceeds $N^{*}$ as shown in Table \[Table1\]. Thus beyond $N^{*}$, there is no need to use the computationally expensive iterative best response algorithm; rather, we can compute the IINE policy directly. The same result is reinforced in Figure \[Figure2\]. Here we plot the absolute difference between the time average rate of user $1$ when all the $N$ users use their NE policies and the time average rate of user $1$ when all the $N$ users use their IINE policies, against the number of users $N.$ That is, we plot $|T_{1}(z(N))-T_{1}(z^{*}(N))|$ versus $N$, where $z(N)=\left(z_{1}(N),\cdots,z_{N}(N)\right)$ represents the NE policies of all the $N$ users when there are $N$ players in the game and $z^{*}(N)=\left(z_{1}^{*}(N),\cdots,z_{N}^{*}(N)\right)$ represents the IINE policies of the $N$ users. Here also we can see that once $N\geq N^{*}$, the equilibrium reward $(T_{1}(z(N)))$ of user $1$ is the same as the reward $(T_{1}(z^{*}(N)))$ when all the $N$ users employ their IINE policies. Indeed, as in these scenarios the NE policies have become equal to the IINE policies as shown in Figure \[Figure1\], the rewards then also become the same. From Table \[Table1\], $N^*$ is $1$ for scenario $7$.
This implies that the IINE is a NE policy when the total number of users exceeds $1$. As there has to be at least one user, for this scenario the IINE is an optimal solution to the single user problem where the user maximizes their own rate subject to power and queue constraints. Furthermore, as shown in Figure \[Figure1\] and Figure \[Figure2\], this policy remains a NE policy for each user irrespective of the number of users in the system. ![Plot of the absolute difference between the reward of user $1$ when all users use their NE policies and the reward of user $1$ when all users use their IINE policies against the total number of users as it varies from $1$ to $N_{max}$.[]{data-label="Figure2"}](Reward_diff4){width="50.00000%" height="7cm"} In Figure \[Figure3\], we plot the absolute difference of the One-sensitive rewards of user $1$ computed at the Nash equilibrium policies and at the IINE policy of user $1$ versus the number of users $(N)$. Recall that the One-sensitive reward when user $1$ employs policy $z_1$ is $$l_{1}^{1}(z_{1})=\sum_{x_{1},a_{1}}h_{1}p_{1}\cdot z_{1}(x_{1},a_{1}).\label{eq:1th_sensitive_function}$$ This is simply the time average SNR of user $1$ at policy $z_1$ and, from Theorem \[thm:Existence\_IINE\], the IINE maximizes the time average SNR. In Figure \[Figure3\], we plot $|l_{1}^{1}(z_1(N))-l_{1}^{1}(z_1^*)|$ versus the number of users $N$. As observed in Figure \[Figure3\], the NE policies maximize the One-sensitive reward of user $1$ once the number of users crosses $N^*$. Indeed, once the number of users crosses $N^*$, the NE policies are infinitely invariant and hence maximize the time average SNR. Also in scenario $7$, irrespective of the number of users, these policies are always One-sensitive optimal, as here $N^*=1$.
![Plot of the absolute difference between the One sensitive reward of user $1$ when user $1$ employs its NE policy for the game with $N$ users and the One sensitive reward of user $1$ when user $1$ employs its IINE policy against the total number of users as $N$ varies from $1$ to $N_{max}$.[]{data-label="Figure3"}](One_sensitive_rwrd_diff1){width="60.00000%" height="8cm"} Conclusions =========== In this paper, we analyzed the scenario where multiple transmitters can each send at most one packet to a single receiver simultaneously over the multiple access channel. We model this problem as a Constrained Markov game with independent state information. That is, each user knows his own states and actions but only the statistics of the states and actions of the other users. Each user selfishly maximizes their own rate by choosing a power and queue allocation policy subject to power and queue constraints. We showed the existence of a Nash equilibrium in this setup and provided an iterative best response algorithm to compute this equilibrium for any number of users. We showed that under the assumption of “finitely symmetric users”, there exists an infinitely invariant Nash equilibrium; that is, when the total number of users crosses a particular threshold ($N^*$), the Nash equilibrium policies of each user remain the same. We then showed that an IINE can be computed by solving a finite sequence of linear programs.
Proof of Theorem \[Ver\_ptnl\]\[sec:Proof-of-Potential\_thm\] ============================================================= We first observe that $$\begin{aligned} & T_{i}(z_{i},z_{-i})-T_{i}(\hat{z}_{i},z_{-i})\\ =^{1} & \sum_{x_{-i},a_{-i}}\prod_{l\neq i}z_{l}(x_{l},a_{l})\Bigl(\sum_{x_{i},a_{i}}t_{i}(x_{i},a_{i},x_{-i},a_{-i})\\ \cdot & \bigl(z_{i}(x_{i},a_{i})-\hat{z}_{i}(x_{i},a_{i})\bigr)\Bigr)\\ =^{2} & \sum_{x_{-i},a_{-i}}\prod_{l\neq i}z_{l}(x_{l},a_{l})\cdot\Biggl[\sum_{(x_{i},a_{i})\neq(\hat{x}_{i},\hat{a}_{i})}t_{i}(x_{-i},a_{-i},x_{i},a_{i})\\ \cdot & \bigl(z_{i}(x_{i},a_{i})-\hat{z}_{i}(x_{i},a_{i})\bigr)\\ + & t_{i}(x_{-i},a_{-i},\hat{x}_{i},\hat{a}_{i})\bigl(z_{i}(\hat{x}_{i},\hat{a}_{i})-\hat{z}_{i}(\hat{x}_{i},\hat{a}_{i})\bigr)\Biggr]\\ =^{3} & \sum_{x_{-i},a_{-i}}\prod_{l\neq i}z_{l}(x_{l},a_{l})\cdot\Biggl[\sum_{(x_{i},a_{i})\neq(\hat{x}_{i},\hat{a}_{i})}\Bigl(t_{i}(x_{-i},a_{-i},x_{i},a_{i})\\ - & t_{i}(x_{-i},a_{-i},\hat{x}_{i},\hat{a}_{i})\Bigr)\bigl(z_{i}(x_{i},a_{i})-\hat{z}_{i}(x_{i},a_{i})\bigr)\Biggr]\\ =^{4} & \sum_{x_{-i},a_{-i}}\prod_{l\neq i}z_{l}(x_{l},a_{l})\cdot\Biggl[\sum_{(x_{i},a_{i})\neq(\hat{x}_{i},\hat{a}_{i})}\Bigl(t(x_{-i},a_{-i},x_{i},a_{i})\\ - & t(x_{-i},a_{-i},\hat{x}_{i},\hat{a}_{i})\Bigr)\bigl(z_{i}(x_{i},a_{i})-\hat{z}_{i}(x_{i},a_{i})\bigr)\Biggr]\\ =^{5} & T(z_{i},z_{-i})-T(\hat{z}_{i},z_{-i}).\end{aligned}$$ Equality ($3$) follows from the observation that $\sum_{x_{i},a_{i}}z_{i}(x_{i},a_{i})=\sum_{x_{i},a_{i}}\hat{z}_{i}(x_{i},a_{i})$, and equality ($5$) follows by applying the same steps to $T$ in place of $T_{i}$. Proof of Theorem \[thm:NAS\_IINE\]\[sec:Proof\_NAS\_IINE\] ========================================================== Before providing the proof of Theorem \[thm:NAS\_IINE\], we give some definitions and notation which are used repeatedly. 
We use asymptotic notation: given two real-valued functions $f(n)$ and $g(n)$ on the set of natural numbers, we write $f(n)=o(g(n))$, $f(n)=O(g(n))$ and $f(n)=\Theta(g(n))$ whenever there exist constants $c_{1}$ and $c_{2}$, both strictly greater than $0$, and a natural number $N_{0}$ such that for all $n\geq N_{0}$, $f(n)>c_{1}g(n)$, $f(n)<c_{2}g(n)$ or $c_{1}g(n)<f(n)<c_{2}g(n)$, respectively. We also denote $$\mu=\inf_{i\geq0}\mathbb{E}(X_{i})\quad\text{and}\quad\beta=\sup_{i\geq0}\mathbb{E}(X_{i}),\label{eq:UPPER=000026LOWER_MEAN}$$ where the random variable $X_{i}$ is defined by $$X_{i}=h_{i}p_{i}\,\text{ w.p. }\,\sum_{q_{i},c_{i}}z_{i}^{*}(h_{i},p_{i},c_{i},q_{i}),\label{eq:SNR_RandomVariable}$$ where $z_{i}^{*}$ is an IINE policy of user $i$. Note that, under the assumptions of Theorem \[thm:NAS\_IINE\], we show in Lemma \[lem:Set\_of\_all\_best\_resp\] that $\mu>0$ and $\beta<\infty$. The following well-known inequality (Hoeffding’s inequality) is also used heavily in our work; we state it without proof. \[thm:HOEFFDING\_THM\] There exists a constant $c>0$ such that $$P\left(\left|\frac{\sum_{j=2}^{N+1}\left(X_{j}-\mathbb{E}(X_{j})\right)}{N}\right|\geq t\right)\leq2\exp(-cNt^{2}).\label{eq:HOEFFDING}$$ We mention that certain lemmas required in the proof of Theorem \[thm:NAS\_IINE\] are given after the proof itself. $\smallskip$ We first show that if $z_{1}^{*}\in\mathcal{S}_{1}$, then $z_{1}^{*}$ is an IINE policy. We have from Lemma \[lem:FEASIBLE\_POLICY\] that $l_{1}^{1}(z_{1}^{*})>0$. Let $z_{1}$ denote any feasible policy of user $1$ that is a vertex of the polyhedron $\mathcal{Z}_{1}$ with $z_{1}\notin\mathcal{S}_{1}$. We shall prove that there exists a positive number $N_{1}$ such that for all $N\geq N_{1}$, $T_{1}(z_{1}^{*},z_{-1}^{*})-T_{1}(z_{1},z_{-1}^{*})>0$. 
$$\begin{aligned} & \left(T_{1}(z_{1}^{*},z_{-1}^{*})-T_{1}(z_{1},z_{-1}^{*})\right)N^{k}\nonumber \\ = & \sum_{x_{1},a_{1}}\Bigg[\mathbb{E}\left[N^{k}\log_{2}\left(1+\frac{h_{1}p_{1}1_{q_{1}>0}}{\sum_{j=2}^{N+1}X_{j}+N_{0}}\right)\right]\big(z_{1}^{*}(x_{1},a_{1})\label{eq:Converse_1}\\ - & z_{1}(x_{1},a_{1})\big)\Bigg].\nonumber \end{aligned}$$ As $z_{1}\notin\mathcal{S}_{1}$, there exists a positive integer $k$ such that $z_{1}\notin\mathcal{S}_{1}^{k}$. Let $k$ denote the smallest such integer; then $z_{1}\in\mathcal{S}_{1}^{m}$ for $1\leq m\leq k-1$. Then, using a Taylor series expansion in the previous expression (\[eq:Converse\_1\]), we have for some constant $c$, $$\begin{aligned} & \left(T_{1}(z_{1}^{*},z_{-1}^{*})-T_{1}(z_{1},z_{-1}^{*})\right)N^{k}\nonumber \\ = & \mathbb{E}\left[\frac{N^{k}}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k}}\right]\Bigg[\sum_{x_{1},a_{1}}\left(-1\right)^{k+1}\left(h_{1}p_{1}1_{q_{1}>0}\right)^{k}\nonumber \\ \cdot & \left(z_{1}^{*}(x_{1},a_{1})-z_{1}(x_{1},a_{1})\right)\Bigg]+c\,\mathbb{E}\left[\frac{N^{k}}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k+1}}\right]\nonumber \\ \cdot & \sum_{x_{1},a_{1}}\left(-1\right)^{k}\left(h_{1}p_{1}1_{q_{1}>0}\right)^{k+1}\left(z_{1}^{*}(x_{1},a_{1})-z_{1}(x_{1},a_{1})\right).\label{eq:Converse_2}\end{aligned}$$ Using that $z_{1}\notin\mathcal{S}_{1}^{k}$ while $z_{1}^{*}\in\mathcal{S}_{1}^{m}$ for all $m$, together with Lemma \[lem:HOEFFDING\_BD\_APPLICATION\], we obtain in (\[eq:Converse\_2\]), for some positive constant $c_{1}>0$ and some constant $c_{2}$, $$\begin{aligned} \left(T_{1}(z_{1}^{*},z_{-1}^{*})-T_{1}(z_{1},z_{-1}^{*})\right)N^{k} & >c_{1}-\frac{c_{2}}{N}.\end{aligned}$$ Hence there exists a positive number $N_{1}$ such that for all $N\geq N_{1}$, $T_{1}(z_{1}^{*},z_{-1}^{*})-T_{1}(z_{1},z_{-1}^{*})>0$. 
Let $N_{1,j}$ denote a positive number such that for all $N\geq N_{1,j}$, $T_{1}(z_{1}^{*},z_{-1}^{*})-T_{1}(z_{1,j},z_{-1}^{*})>0$, where $z_{1,j}$ denotes a vertex of the polyhedron $\mathcal{Z}_{1}$ with $z_{1,j}\notin\mathcal{S}_{1}$. We claim that $T_{1}(z_{1}^{*},z_{-1}^{*})-T_{1}(z_{1},z_{-1}^{*})\geq0$ for all $z_{1}\in\mathcal{Z}_{1}$ and all $N\geq N^{*}=\max_{j}N_{1,j}$. Indeed, to verify (\[eq:Best\_Rep\_LP\]) it suffices to consider only the vertices of the polyhedron $\mathcal{Z}_{1}$, as the optimization problem (\[eq:Best\_Rep\_LP\]) is a linear program. Clearly, for $N\geq N^{*}$ we have $T_{1}(z_{1}^{*},z_{-1}^{*})-T_{1}(z_{1,j},z_{-1}^{*})>0$ for every vertex $z_{1,j}$ of the polyhedron $\mathcal{Z}_{1}$ with $z_{1,j}\notin\mathcal{S}_{1}$. Now consider those points $\hat{z}_{1}$ which belong to the set $\mathcal{S}_{1}$ and are also vertices of the polyhedron $\mathcal{Z}_{1}$. Then we have, from (\[eq:Infty\_Sensitive\_set\]), $l_{1}^{k}(z_{1}^{*})=l_{1}^{k}(\hat{z}_{1})$ for all $k$. Hence, if we define the distribution $$\hat{P}\left(X_{1}=h_{1}p_{1}\right)=\sum_{q_{1},c_{1}}\hat{z}_{1}(h_{1},p_{1},q_{1},c_{1}),$$ then the SNR random variable $X_{1}$ has the same moments under the distributions $P$ and $\hat{P}$. Hence, by the method of moments, $P=\hat{P}$. It can now be shown that $T_{1}(z_{1}^{*},z_{-1}^{*})=T_{1}(\hat{z}_{1},z_{-1}^{*})$ for any $\mathcal{N}$; in particular, $T_{1}(z_{1}^{*},z_{-1}^{*})=T_{1}(\hat{z}_{1},z_{-1}^{*})$ for all $N\geq N^{*}$. Hence $T_{1}(z_{1}^{*},z_{-1}^{*})\geq T_{1}(z_{1},z_{-1}^{*})$ for all $z_{1}\in\mathcal{Z}_{1}$ and all $N\geq N^{*}$; thus $z_{1}^{*}$ is an IINE policy. We now prove the converse. Let $z_{1}^{*}$ denote an IINE policy for user $1$; we shall prove that $z_{1}^{*}\in\mathcal{S}_{1}$ by induction. 
As $z_{1}^{*}$ is an IINE policy, we have from Definition \[def:IINE\_defn\] and (\[eq:SNR\_RandomVariable\]) that for any policy $z_{1}\in\mathcal{Z}_{1}$ and all $N\geq N^{*}$, $$\begin{aligned} & \sum_{x_{1},a_{1}}\Bigg[\mathbb{E}\left[N\log_{2}\left(1+\frac{h_{1}p_{1}1_{q_{1}>0}}{\sum_{j=2}^{N+1}X_{j}+N_{0}}\right)\right]\big(z_{1}^{*}(x_{1},a_{1})\label{eq:Condition_1}\\ - & z_{1}(x_{1},a_{1})\big)\Bigg]\,\geq\,0.\nonumber \end{aligned}$$ For sufficiently large $N$, we have by a Taylor series expansion, for some constant $c>0$, $$\begin{aligned} & \mathbb{E}\left[N\log_{2}\left(1+\frac{h_{1}p_{1}1_{q_{1}>0}}{\sum_{j=2}^{N+1}X_{j}+N_{0}}\right)\right]\\ = & \mathbb{E}\left[\frac{Nh_{1}p_{1}1_{q_{1}>0}}{\sum_{j=2}^{N+1}X_{j}+N_{0}}\right]+c\,\mathbb{E}\left[N\cdot O\left(\frac{1}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{2}}\right)\right].\end{aligned}$$ Using Lemma \[lem:HOEFFDING\_BD\_APPLICATION\], the latter is $$=\Theta(1)h_{1}p_{1}1_{q_{1}>0}+\Theta\left(\frac{1}{N}\right).\label{eq:Asymptotic_k=00003D1}$$ Applying the bound (\[eq:Asymptotic\_k=00003D1\]) to obtain an upper bound for (\[eq:Condition\_1\]), we have for some constants $c_{1},c_{2}$, both strictly greater than $0$, $$\sum_{x_{1},a_{1}}c_{1}h_{1}p_{1}1_{q_{1}>0}\left(z_{1}^{*}(x_{1},a_{1})-z_{1}(x_{1},a_{1})\right)+\frac{c_{2}}{N}\geq0.$$ Letting $N\rightarrow\infty$, we get that $z_{1}^{*}\in\mathcal{S}_{1}^{1}$. We now assume that the result holds up to $k-1$, i.e., $z_{1}^{*}\in\mathcal{S}_{1}^{m}$ for $1\leq m\leq k-1$, and show that $z_{1}^{*}\in\mathcal{S}_{1}^{k}$. By a Taylor series expansion, $$\begin{aligned} & \mathbb{E}\left[N^{k}\log_{2}\left(1+\frac{h_{1}p_{1}1_{q_{1}>0}}{\sum_{j=2}^{N+1}X_{j}+N_{0}}\right)\right]\nonumber \\ = & c\,\mathbb{E}\left[\sum_{l=1}^{k}-N^{k}\left(\frac{-h_{1}p_{1}1_{q_{1}>0}}{\sum_{j=2}^{N+1}X_{j}+N_{0}}\right)^{l}\right]\nonumber \\ + & 
c\,\mathbb{E}\left[N^{k}\cdot O\left(\frac{1}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k+1}}\right)\right]\nonumber \\ = & c\,\mathbb{E}\left[\sum_{l=1}^{k-1}-N^{k}\left(\frac{-h_{1}p_{1}1_{q_{1}>0}}{\sum_{j=2}^{N+1}X_{j}+N_{0}}\right)^{l}\right]\nonumber \\ + & \Theta(1)\left(-1\right)^{k+1}\left(h_{1}p_{1}1_{q_{1}>0}\right)^{k}+\Theta\left(\frac{1}{N}\right),\label{eq:Asymptotic_N=00003Dk}\end{aligned}$$ where the last equality follows from Lemma \[lem:HOEFFDING\_BD\_APPLICATION\]. As $z_{1}^{*}$ is an IINE policy, we have from Definition \[def:IINE\_defn\] and (\[eq:SNR\_RandomVariable\]) that for any policy $z_{1}\in\mathcal{Z}_{1}$ and all $N\geq N^{*}$, $$\begin{aligned} & \sum_{x_{1},a_{1}}\Bigg[\mathbb{E}\left[N^{k}\log_{2}\left(1+\frac{h_{1}p_{1}1_{q_{1}>0}}{\sum_{j=2}^{N+1}X_{j}+N_{0}}\right)\right]\big(z_{1}^{*}(x_{1},a_{1})\label{eq:Condition_k}\\ - & z_{1}(x_{1},a_{1})\big)\Bigg]\,\geq\,0.\nonumber \end{aligned}$$ Using (\[eq:Asymptotic\_N=00003Dk\]), we get an upper bound for (\[eq:Condition\_k\]): for some constants $c_{1},c_{2}$, both strictly greater than $0$, $$\begin{aligned} & \sum_{x_{1},a_{1}}\Bigg[c\,\mathbb{E}\left[\sum_{l=1}^{k-1}-N^{k}\left(\frac{-h_{1}p_{1}1_{q_{1}>0}}{\sum_{j=2}^{N+1}X_{j}+N_{0}}\right)^{l}\right]\big(z_{1}^{*}(x_{1},a_{1})\nonumber \\ & -z_{1}(x_{1},a_{1})\big)\Bigg]\nonumber \\ + & \sum_{x_{1},a_{1}}c_{1}\left(-1\right)^{k+1}\left(h_{1}p_{1}1_{q_{1}>0}\right)^{k}\left(z_{1}^{*}(x_{1},a_{1})-z_{1}(x_{1},a_{1})\right)\label{eq:Condition_k_1}\\ & +\frac{c_{2}}{N}\geq0.\nonumber \end{aligned}$$ As $z_{1}^{*}\in\mathcal{S}_{1}^{m}$ for $1\leq m\leq k-1$, it follows from (\[eq:Sensitive\_set\_prop\]) that the first term in the above expression (\[eq:Condition\_k\_1\]) is $0$; hence (\[eq:Condition\_k\_1\]) becomes $$\begin{aligned} = & 
\sum_{x_{1},a_{1}}c_{1}\left(-1\right)^{k+1}\left(h_{1}p_{1}1_{q_{1}>0}\right)^{k}\left(z_{1}^{*}(x_{1},a_{1})-z_{1}(x_{1},a_{1})\right)\label{eq:Condition_k_2}\\ & +\frac{c_{2}}{N}\geq0.\nonumber \end{aligned}$$ Letting $N\rightarrow\infty$, we get that $z_{1}^{*}\in\mathcal{S}_{1}^{k}$. Hence, by induction, $z_{1}^{*}\in\mathcal{S}_{1}$. \[lem:HOEFFDING\_BD\_APPLICATION\]We have, for each natural number $k$, $$\mathbb{E}\left[\frac{1}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k}}\right]=\Theta\left(\frac{1}{N^{k}}\right),\label{eq:Bound_2}$$ $$\mathbb{E}\left[\frac{N^{k}}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k}}\right]=\Theta(1).\label{eq: Bound_1}$$ We first prove statement (\[eq:Bound\_2\]). Using $X_{j}\leq h_{j}^{k}p_{j}^{l}$, we have that $$\mathbb{E}\left[\frac{1}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k}}\right]=o\left(\frac{1}{N^{k}}\right).\label{eq:BOUND2_1}$$ For the other direction, we have $$\begin{aligned} & \mathbb{E}\left[\frac{1}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k}}\right]\\ = & \mathbb{E}\left[\frac{1_{\left\{ \sum_{j=2}^{N+1}X_{j}\geq\frac{1}{2}N\mu\right\} }+1_{\left\{ \sum_{j=2}^{N+1}X_{j}<\frac{1}{2}N\mu\right\} }}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k}}\right]\\ \leq & \frac{2^{k}}{\left(N\mu\right)^{k}}+\frac{1}{N_{0}^{k}}P\left(\frac{\sum_{j=2}^{N+1}X_{j}}{N}<\frac{\mu}{2}\right)\\ \leq & \frac{2^{k}}{\left(N\mu\right)^{k}}+\frac{1}{N_{0}^{k}}\exp\left(-c_{1}N\mu^{2}\right),\end{aligned}$$ where the last inequality follows from Hoeffding’s inequality (\[eq:HOEFFDING\]). Hence, $$\mathbb{E}\left[\frac{1}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k}}\right]=O\left(\frac{1}{N^{k}}\right),\label{eq:BOUND2.2}$$ and (\[eq:Bound\_2\]) follows from (\[eq:BOUND2\_1\]) and (\[eq:BOUND2.2\]). We now prove statement (\[eq: Bound\_1\]). 
Using the same proof technique as for (\[eq:Bound\_2\]), we have $$\begin{aligned} \frac{1}{\left(h_{j}^{k}p_{j}^{l}\right)^{k}}< & \mathbb{E}\left[\frac{N^{k}}{\left(\sum_{j=2}^{N+1}X_{j}+N_{0}\right)^{k}}\right]<\\ & \left(\frac{2}{\mu}\right)^{k}+\left(\frac{N}{N_{0}}\right)^{k}\exp\left(-c_{1}N\mu^{2}\right).\end{aligned}$$ Hence we have (\[eq: Bound\_1\]). In the next lemma we show that, under the assumptions that the channel provides a positive reward for transmission $\left(\pi(h_{i}=0)<1\right)$ and that there is always data to transmit $\left(F_{i}(0)<1\right)$, there always exists some policy $z_{i}$ such that $l_{i}^{1}(z_{i})>0$. \[lem:FEASIBLE\_POLICY\] Assume $F_{i}(0)<1$ and $\pi(h_{i}=0)<1$. Consider the stationary policy $u_{i}$ for user $i$, $$\begin{aligned} u_{i}\left(p_{i},c_{i}/h_{i},q_{i}\right) & =\begin{cases} 1 & c_{i}=0,p_{i}=p_{i}^{1},q_{i}\neq0,\forall h_{i}\in\mathcal{H}_{i},\\ 1-s & c_{i}=0,p_{i}=0,q_{i}=0,\forall h_{i}\in\mathcal{H}_{i},\\ s & c_{i}=1,p_{i}=0,q_{i}=0,\forall h_{i}\in\mathcal{H}_{i},\\ 0 & \text{otherwise}, \end{cases}\end{aligned}$$ where $p_{i}^{1}=\inf\left\{ p_{i}\,|\,p_{i}>0\right\}$. Given any values of the queue constraint $\overline{Q}_{i}$ and the power constraint $\overline{P}_{i}$, there exists some value of $s$ under which both constraints are satisfied. Moreover, under the same value of $s$, we have $l_{i}^{1}(z_{i})>0$. Let $z_{i}$ represent the occupation measure corresponding to the stationary policy $u_{i}$. Let $z_{i}(h_{i})=\sum_{q_{i},a_{i}}z_{i}(h_{i},q_{i},a_{i})$, $z_{i}(q_{i})=\sum_{h_{i},a_{i}}z_{i}(h_{i},q_{i},a_{i})$, $z_{i}(p_{i})=\sum_{x_{i},c_{i}}z_{i}(p_{i},c_{i},x_{i})$ and $z_{i}(h_{i},q_{i})=\sum_{a_{i}}z_{i}(h_{i},q_{i},a_{i})$. It can easily be verified that under the policy $u_{i}$ the fading process $h_{i}[n]$ and the queue process $q_{i}[n]$ are independent. 
Also, the queue process $q_{i}[n]$ is ergodic with a single communicating class consisting of the whole set $\mathcal{Q}$. We denote by $\pi(h_{i})$, $\pi(q_{i})$ and $\pi(h_{i},q_{i})$ the stationary probability of being in channel state $h_{i}$, the stationary probability of queue state $q_{i}$, and the joint stationary probability of the state $\left(q_{i},h_{i}\right)$, respectively. Then $\pi(h_{i},q_{i})=\pi(h_{i})\pi(q_{i})$, $z_{i}(h_{i})=\pi(h_{i})$, $z_{i}(q_{i})=\pi(q_{i})$ and $z_{i}(h_{i},q_{i})=\pi(h_{i},q_{i})$. The transition probability of the queue process under the policy $u_{i}$ is given by $$P\left(q_{2}/q_{1}\right)=\begin{cases} sF_{i}(0)+1-s & q_{2}=0,q_{1}=0,\\ sF_{i}(j) & q_{2}=j,q_{1}=0,\,1\leq j\leq Q-1,\\ s\sum_{j=Q}^{\infty}F_{i}(j) & q_{2}=Q,q_{1}=0,\\ 1 & q_{2}=j-1,q_{1}=j,1\leq j\leq Q,\\ 0 & \text{otherwise.} \end{cases}$$ Using the steady state equations $\pi=\pi P$ for the queue process, we can show that $$\begin{aligned} \pi(k) & =s\pi(0)\left(1-\sum_{j=0}^{k-1}F_{i}(j)\right),\quad1\leq k\leq Q,\\ \pi(0) & =\frac{1}{1+sc}\,,\end{aligned}$$ where $c=\sum_{k=1}^{Q}\left(1-\sum_{j=0}^{k-1}F_{i}(j)\right)\geq0$. Note that $c=0$ if and only if $F_{i}(0)=1$, and hence $\pi(0)=1$ if and only if $F_{i}(0)=1$. 
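The closed-form stationary distribution derived above can be sanity-checked numerically. The following sketch (with an illustrative arrival distribution $F_{i}$, queue bound $Q$ and parameter $s$, not values from the paper) compares the closed form against power iteration of $\pi=\pi P$ for the queue chain induced by the policy $u_{i}$.

```python
# Illustrative numerical check of the stationary distribution
# pi(k) = s*pi(0)*(1 - sum_{j<k} F(j)), pi(0) = 1/(1+s*c).
# F, Q, s are example values, not taken from the paper.

def stationary_queue(F, Q, s):
    """Closed-form stationary distribution of the queue chain."""
    tail = [1.0 - sum(F[:k]) for k in range(1, Q + 1)]  # 1 - sum_{j=0}^{k-1} F(j)
    c = sum(tail)
    pi0 = 1.0 / (1.0 + s * c)
    return [pi0] + [s * pi0 * t for t in tail]

def stationary_by_iteration(F, Q, s, iters=20000):
    """Power-iterate pi <- pi P for the transition kernel of u_i."""
    n = Q + 1
    P = [[0.0] * n for _ in range(n)]
    P[0][0] = s * F[0] + 1.0 - s           # queue stays empty
    for j in range(1, Q):
        P[0][j] = s * F[j]                  # fetch j packets when empty
    P[0][Q] = s * (1.0 - sum(F[:Q]))        # fetch >= Q packets (truncated)
    for j in range(1, Q + 1):
        P[j][j - 1] = 1.0                   # transmit one packet when nonempty
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

F = [0.5, 0.3, 0.1, 0.1]   # arrival distribution F(0..3), sums to 1
Q, s = 3, 0.4
closed = stationary_queue(F, Q, s)
iterated = stationary_by_iteration(F, Q, s)
assert all(abs(a - b) < 1e-8 for a, b in zip(closed, iterated))
```

The check confirms that the deterministic "serve when nonempty, fetch with probability $s$ when empty" dynamics reproduce the stated closed form.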
The average queue length under the policy $u_{i}$ is $$\begin{aligned} Q_{i}(z_{i}) & =\sum_{x_{i},a_{i}}q_{i}z_{i}\left(x_{i},a_{i}\right)\nonumber \\ = & \sum_{q_{i}\neq0}q_{i}\pi(q_{i})\nonumber \\ = & \frac{s}{1+sc}\sum_{k=1}^{Q}k\left(1-\sum_{j=0}^{k-1}F_{i}(j)\right)\nonumber \\ \leq & \frac{sQ^{2}}{1+sc}\nonumber \\ \leq & sQ^{2}.\label{eq:AVG_QUEUE_LENGTH}\end{aligned}$$ The average power expenditure under policy $u_{i}$ is $$\begin{aligned} P_{i}(z_{i}) & =\sum_{x_{i},a_{i}}p_{i}z_{i}(x_{i},a_{i})\\ = & \sum_{q_{i}\neq0}p_{i}^{1}\pi(q_{i})\\ = & \frac{sp_{i}^{1}}{1+sc}\sum_{k=1}^{Q}\left(1-\sum_{j=0}^{k-1}F_{i}(j)\right)\\ \leq & \frac{sQp_{i}^{1}}{1+sc}\\ \leq & sQp_{i}^{1}.\end{aligned}$$ Now, by choosing $0<s<\min\left\{ \frac{\overline{Q}_{i}}{Q^{2}},\frac{\overline{P_{i}}}{Qp_{i}^{1}},1\right\}$, we can ensure that the average queue and power constraints are satisfied. We now compute $l_{i}^{1}(z_{i})$ for the policy $u_{i}$. Define $h_{i}^{1}=\inf\left\{ h_{i}\,|\,h_{i}>0\right\}$. Then $$\begin{aligned} l_{i}^{1}(z_{i}) & =\sum_{x_{i},a_{i}}h_{i}p_{i}z_{i}(x_{i},a_{i})\nonumber \\ \geq & h_{i}^{1}p_{i}^{1}\left(\sum_{q_{i}\neq0}\sum_{h_{i}\neq0}\sum_{p_{i}\neq0}\sum_{c_{i}=0}^{1}z_{i}(h_{i},q_{i},p_{i},c_{i})\right)\nonumber \\ = & h_{i}^{1}p_{i}^{1}\left(\sum_{q_{i}\neq0}\sum_{h_{i}\neq0}\sum_{p_{i}\neq0}z_{i}(h_{i},q_{i},p_{i})\right).\label{eq:One_sensitve_rwrd_calculation_1}\end{aligned}$$ As user $i$ always transmits when his queue is not empty, we have $z_{i}(h_{i},q_{i},p_{i}=0)=0$ for all $q_{i}\neq0$. 
Thus, in (\[eq:One\_sensitve\_rwrd\_calculation\_1\]) we have $$\begin{aligned} l_{i}^{1}(z_{i}) & \geq h_{i}^{1}p_{i}^{1}\left(\sum_{q_{i}\neq0}\sum_{h_{i}\neq0}\sum_{p_{i}}z_{i}(h_{i},q_{i},p_{i})\right)\\ = & h_{i}^{1}p_{i}^{1}\left(\sum_{q_{i}\neq0}\sum_{h_{i}\neq0}z_{i}(h_{i},q_{i})\right)\\ = & h_{i}^{1}p_{i}^{1}\left(1-\pi(q_{i}=0)\right)\left(1-\pi(h_{i}=0)\right).\end{aligned}$$ Thus, if we ensure $\pi(q_{i}=0)<1$ (or equivalently $F_{i}(0)<1$) and $\pi(h_{i}=0)<1$, then $l_{i}^{1}(z_{i})>0$. \[lem:Set\_of\_all\_best\_resp\] Let $z_{i}^{*}$ denote an IINE policy of user $i$ and let the random variable $X_{i}$ be given by $$X_{i}=h_{i}p_{i}\,\text{ w.p. }\,\sum_{q_{i},c_{i}}z_{i}^{*}(h_{i},p_{i},c_{i},q_{i}).\label{eq:SNR_RandomVariable-1}$$ Then $$\mu=\inf_{i\geq0}\mathbb{E}(X_{i})>0\quad\text{and}\quad\beta=\sup_{i\geq0}\mathbb{E}(X_{i})<\infty.\label{eq:UPPER=000026LOWER_MEAN-1}$$ As the IINE policy is a NE policy, we shall prove that $l_{i}^{1}(z_{i})=\sum_{x_{i},a_{i}}h_{i}p_{i}z_{i}(x_{i},a_{i})>0$ for any best response policy $z_{i}$. To do so, we first define the set of all best responses of user $i$ as $${\mathcal{B}}_{i}=\bigcup_{{\mathcal{N}}\subseteq{\mathbb{Z}}^{+}}\bigcup_{z_{-i}\in{\mathcal{Z}}_{-i}^{\mathcal{N}}}{\mathcal{B}}_{i}(z_{-i}),\label{eq:Set_ALL_best_responses}$$ where ${\mathcal{Z}}_{-i}^{\mathcal{N}}$ denotes the set of all policies of the users other than $i$ in the game $\Gamma_{\mathcal{N}}$, the set ${\mathcal{N}}$ containing $i$. Let ${\mathcal{C}}({\mathcal{Z}}_{i})$ denote the class of all subsets of the set ${\mathcal{Z}}_{i}$ which are obtained as the convex closure of finitely many vertices of the polyhedron ${\mathcal{Z}}_{i}$. As the polyhedron ${\mathcal{Z}}_{i}$ has finitely many vertices, the class ${\mathcal{C}}({\mathcal{Z}}_{i})$ is itself finite. 
We note that the set ${\mathcal{B}}_{i}(z_{-i})$ contains all the best response policies of user $i$ when the users other than $i$ play the multi-policy $z_{-i}$. As this set is the solution set of a linear program, it is the convex closure of finitely many points, each point being a vertex of the set ${\mathcal{Z}}_{i}$ of feasible occupation measures of user $i$. This implies that the best response set ${\mathcal{B}}_{i}(z_{-i})$ belongs to the class ${\mathcal{C}}({\mathcal{Z}}_{i})$ for each multi-policy $z_{-i}$ of the users other than $i$. As the class ${\mathcal{C}}({\mathcal{Z}}_{i})$ is finite, the set ${\mathcal{B}}_{i}$ is a finite union of compact sets; hence ${\mathcal{B}}_{i}$ is compact. As the IINE belongs to the set ${\mathcal{B}}_{i}$, it suffices to show that $\inf_{z_{i}\in{\mathcal{B}}_{i}}l_{i}^{1}(z_{i})>0$. As the set ${\mathcal{B}}_{i}$ is compact, it is enough to show that $l_{i}^{1}(z_{i}^{*})>0$ for any policy $z_{i}^{*}\in{\mathcal{B}}_{i}$. Let $z_{i}^{*}\in{\mathcal{B}}_{i}$ be any best response policy of user $i$ for some game $\Gamma_{\mathcal{N}}$ when the other users employ the multi-policy $z_{-i}$, and let $z_{i}$ denote an arbitrary policy of user $i$. Then $T_{i}(z_{i},z_{-i})>0$ if and only if $l_{i}^{1}(z_{i})>0$. By Lemma \[lem:FEASIBLE\_POLICY\], there is a policy $z_{i}^{1}$ such that $l_{i}^{1}(z_{i}^{1})>0$; hence $T_{i}(z_{i}^{*},z_{-i})\geq T_{i}(z_{i}^{1},z_{-i})>0$, and thus $l_{i}^{1}(z_{i}^{*})>0$ for every policy $z_{i}^{*}\in{\mathcal{B}}_{i}$. This shows that $\mu>0$. Finally, $\beta<\infty$ follows since each $X_{i}$ takes values in the finite set $\left\{ h_{i}p_{i}\,|\,h_{i}\in\mathcal{H}_{i}\,,\,p_{i}\in\mathcal{P}_{i}\right\}$. 
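The role of $\mu>0$ and $\beta<\infty$ in Lemma \[lem:HOEFFDING\_BD\_APPLICATION\] can be illustrated numerically: when the SNRs are bounded away from $0$ and $\infty$, the quantity $N^{k}/(\sum_{j}X_{j}+N_{0})^{k}$ is sandwiched between constants independent of $N$. A minimal sketch follows; the values of $a$, $b$, $N_{0}$ and $k$ are illustrative assumptions, not values from the paper.

```python
# Illustration: if each X_j lies in [a, b] with a > 0, then
# N^k / (sum_j X_j + N0)^k is Theta(1): it is bounded below by
# (N / (N*b + N0))^k and above by (1/a)^k, independently of N.
import random

random.seed(0)
a, b, N0, k = 0.5, 2.0, 1.0, 3
for N in (10, 100, 1000):
    xs = [random.uniform(a, b) for _ in range(N)]   # sample SNRs in [a, b]
    val = N**k / (sum(xs) + N0) ** k
    # Deterministic sandwich: sum(xs) is between N*a and N*b.
    assert (N / (N * b + N0)) ** k <= val <= (1.0 / a) ** k
```

The lower bound tends to $(1/b)^{k}$ as $N$ grows, which is the deterministic analogue of the lower estimate in the lemma; the concentration argument via Hoeffding sharpens this to bounds involving $\mu$.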
Proof of Theorem \[thm:Existence\_IINE\] and Theorem \[thm:-Interchangeability\_IINE\]\[sec:Appendix\_C\] ========================================================================================================= We note that by the assumptions of Theorem \[thm:NAS\_IINE\], the set $\mathcal{Z}_{i}$ is non-empty; hence there exists a point $z_{i}\in\mathcal{S}_{i}^{1}$. Hence, by statement (\[eq:Kth\_sensitive\_set\]), the sets $\mathcal{S}_{i}^{k}$ are nonempty, and by property (\[eq:Sensitive\_set\_prop\]), the set $\mathcal{S}_{i}$ is non-empty. Thus, by Theorem \[thm:NAS\_IINE\], there exists an IINE. We now show that $\mathcal{S}_{i}=\mathcal{S}_{i}^{M}$, where $M=\#\left(\left\{ h_{i}p_{i}\,|\,h_{i}\in\mathcal{H}_{i}\,,\,p_{i}\in\mathcal{P}_{i}\right\} \right)$. Let $z_{i}$ and $\hat{z}_{i}$ represent two distinct policies belonging to the sets $\mathcal{S}_{i}^{M}$ and $\mathcal{S}_{i}$ respectively; hence both policies belong to $\mathcal{S}_{i}^{M}$. We shall now show that $z_{i}\in\mathcal{S}_{i}$. Let $X_{i}$ denote the SNR random variable, which takes values in the set $\left\{ h_{i}p_{i}\,|\,h_{i}\in\mathcal{H}_{i}\,,\,p_{i}\in\mathcal{P}_{i}\right\}$. We order the set as $\left\{ x_{1},x_{2},\cdots,x_{M}\right\}$, with $x_{j}<x_{j+1}$, $x_{1}=0$ and $x_{M}=h_{i}^{k}p_{i}^{l}$. Define two probability distributions $P$ and $\hat{P}$ by $$\hat{P}\left(X_{i}=h_{i}p_{i}\right)=\sum_{q_{i},c_{i}}\hat{z}_{i}(h_{i},p_{i},q_{i},c_{i})\,\text{and}$$ $$P\left(X_{i}=h_{i}p_{i}\right)=\sum_{q_{i},c_{i}}z_{i}(h_{i},p_{i},q_{i},c_{i}).\label{eq:SNR_distbn_defn}$$ Let $m_{k}$ and $\hat{m}_{k}$ represent the $k$th moments of the random variable $X_{i}$ with respect to the two distributions $P$ and $\hat{P}$. 
As $z_{i}$ and $\hat{z}_{i}$ both belong to the set $\mathcal{S}_{i}^{M}$, we have $m_{k}=\hat{m}_{k}$ for all $1\leq k\leq M$. Define the matrix $V$ of size $(M-1)\times(M-1)$ with entries $V_{k,l}=\left(x_{k+1}\right)^{l}$, $1\leq k,l\leq M-1$, and the $(M-1)$-vectors $\hat{y}=\left(\hat{P}(x_{2}),\hat{P}(x_{3}),\cdots,\hat{P}(x_{M})\right)$ and $y=\left(P(x_{2}),P(x_{3}),\cdots,P(x_{M})\right)$. Then $V\left(\hat{y}-y\right)=0$. However, as $V$ is an invertible (Vandermonde-type) matrix, we have $\hat{y}=y$, and since both distributions sum to $1$, $\hat{P}=P$. Now, as the distributions are the same, the moments $m_{k}$ and $\hat{m}_{k}$ coincide for all $k$. As $l_{i}^{k}(z_{i})=\left(-1\right)^{k+1}m_{k}$ and $l_{i}^{k}(\hat{z}_{i})=\left(-1\right)^{k+1}\hat{m}_{k}$, we have that $z_{i}\in\mathcal{S}_{i}$, and hence $\mathcal{S}_{i}^{M}\subseteq\mathcal{S}_{i}$. Let $z_{i}$ and $\hat{z}_{i}$ represent two IINE policies for each user $i$. Define two probability distributions $P$ and $\hat{P}$ as in (\[eq:SNR\_distbn\_defn\]). One can show by a direct computation that for each set $\mathcal{N}$, $$T_{i}(z_{i},z_{-i})=\mathbb{E}\left[\log_{2}\left(1+\frac{X_{i}}{N_{0}+\sum_{j\neq i}X_{j}}\right)\right],$$ where the $X_{i}$ are SNR random variables taking values in the set $\left\{ h_{i}p_{i}\,|\,h_{i}\in\mathcal{H}_{i}\,,\,p_{i}\in\mathcal{P}_{i}\right\}$. As $z_{i}$ and $\hat{z}_{i}$ are both IINE policies for user $i$, by an argument similar to that in the proof of Theorem \[thm:Existence\_IINE\], we have that $P=\hat{P}$, and hence $T_{i}(z_{i},z_{-i})=T_{i}(\hat{z}_{i},\hat{z}_{-i})$. Thus the IINE policies are interchangeable.
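The moment-matching step above can be made concrete with a small numerical sketch: a distribution supported on $\{0,x_{2},\ldots,x_{M}\}$ with distinct support points is determined by its first $M-1$ moments, precisely because the Vandermonde-type matrix is invertible. The support values and probabilities below are illustrative, not from the paper, and we take $M=3$ so the linear system can be solved by Cramer's rule.

```python
# Recover an M = 3 point distribution on {0, x2, x3} from its first two
# moments, mirroring the invertibility of V_{k,l} = x_{k+1}^l above.
x2, x3 = 1.0, 3.0                       # distinct nonzero support points
P = {0.0: 0.2, x2: 0.5, x3: 0.3}        # "unknown" distribution
m1 = sum(x * p for x, p in P.items())   # first moment
m2 = sum(x**2 * p for x, p in P.items())  # second moment
# Solve [x2 x3; x2^2 x3^2] [p2; p3] = [m1; m2] by Cramer's rule.
det = x2 * x3**2 - x3 * x2**2
p2 = (m1 * x3**2 - m2 * x3) / det
p3 = (x2 * m2 - x2**2 * m1) / det
assert abs(p2 - 0.5) < 1e-12 and abs(p3 - 0.3) < 1e-12
assert abs((1 - p2 - p3) - 0.2) < 1e-12  # mass at 0 from normalization
```

The point mass at $x_{1}=0$ does not enter the moment equations, which is why only $M-1$ unknowns remain and normalization recovers the last one.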
--- abstract: | We consider systems of word equations and their solution sets. We discuss some fascinating properties of those, namely the size of a maximal independent set of word equations, and proper chains of solution sets of those. We recall the basic results, extend some known results and formulate several fundamental problems of the topic. Keywords: word equations, independent systems, solution chains author: - | Juhani Karhumäki and Aleksi Saarela\ Department of Mathematics and Statistics\ University of Turku, 20014 Turku, Finland\ date: | Originally published in 2011\ Note added in 2015 title: 'On Maximal Chains of Systems of Word Equations [^1] ' --- Introduction ============ The theory of word equations is a fundamental part of combinatorics on words. It is a challenging topic of its own which has a number of connections and applications, e.g., in pattern unification and group representations. There have also been several fundamental achievements in the theory over the last few decades. The decidability of the existence of a solution of a given word equation is one fundamental result, due to Makanin [@Ma77]. This is in contrast to the same problem for Diophantine equations, which is undecidable [@Ma70]. Although the complexity of the above *satisfiability problem* for word equations is not known, a nontrivial upper bound has been proved: it is in PSPACE [@Pl04]. Another fundamental property of word equations is the so-called *Ehrenfeucht compactness property*. It guarantees that any system of word equations is equivalent to some of its finite subsystems. The proofs (see [@AlLa85ehrenfeucht] and [@Gu86]) are based on a transformation of word equations into Diophantine equations and then an application of Hilbert’s basis theorem. Although we have this finiteness property, we do not know any upper bound, if one exists, for the size of an equivalent subsystem in terms of the number of unknowns. This holds even for systems of equations with three unknowns. 
In free monoids an equivalent formulation of the compactness property is that each *independent* system of word equations is finite, independent meaning that the system is not equivalent to any of its proper subsystems. We analyze in this paper the size of maximal independent systems of word equations. As a related problem we define the notion of *decreasing chains* of word equations. Intuitively, this asks how long chains of word equations exist such that the set of solutions always properly diminishes when a new element of the chain is added to the system. Or, more intuitively, how many proper constraints can we define such that each constraint reduces the set of words satisfying these constraints. It is essentially the above compactness property which guarantees that these chains are finite. Another fundamental property of word equations is the result of Hmelevskii [@Hm71] stating that for each word equation with three unknowns the solution set is *finitely parameterizable*. This result is not directly related to our considerations, but its intricacy gives, we believe, a strong explanation and support for our view that our open problems, even the simplest-looking ones, are not trivial. Hmelevskii’s argument is simplified in the extended abstract [@KaSa08dlt], and used in [@Sa09dlt] to show that the satisfiability problem for three unknown equations is in NP. A full version of these two conference articles has been submitted [@KaSa15]. The goal of this note is to analyze the above maximal independent systems of equations and maximal decreasing chains of word equations, as well as to search for relations between them. An essential part is to propose open problems in this area. The most fundamental problem asks whether the maximal size of an independent system of word equations with $n$ unknowns is bounded by some function of $n$. Amazingly, the same problem is open even for equations with three unknowns, although in this case we do not know any independent system of more than three equations. 
Systems and Chains of Word Equations ==================================== The topics of this paper are independent systems and chains of equations in semigroups. We are mostly interested in free monoids; in this case the equations are constant-free word equations. We present some questions about the sizes of such systems and chains, state existing results, give some new ones, and list open problems. Let $S$ be a semigroup and $\Xi$ be an alphabet of variables. We consider equations $U=V$, where $U,V \in \Xi^+$. A morphism $h: \Xi^+ \to S$ is a *solution* of this equation if $h(U) = h(V)$. (If $S$ is a monoid, we can use $\Xi^*$ instead of $\Xi^+$.) A system of equations is *independent* if it is not equivalent to any of its proper subsystems. In other words, equations $E_i$ form an independent system of equations if for every $i$ there is a morphism $h_i$ which is not a solution of $E_i$ but which is a solution of all the other equations. This definition works for both finite and infinite systems of equations. We define *decreasing chains* of equations. A finite sequence of equations $E_1, \dots, E_m$ is a decreasing chain if for every $i \in \{0, \dots, m-1\}$ the system $E_1, \dots, E_i$ is inequivalent to the system $E_1, \dots, E_{i+1}$. An infinite sequence of equations $E_1, E_2, \dots$ is a decreasing chain if for every $i \geq 0$ the system $E_1, \dots, E_i$ is inequivalent to the system $E_1, \dots, E_{i+1}$. Similarly we define *increasing chains* of equations. A sequence of equations $E_1, \dots, E_m$ is an increasing chain if for every $i \in \{1, \dots, m\}$ the system $E_i, \dots, E_m$ is inequivalent to the system $E_{i+1}, \dots, E_m$. An infinite sequence of equations $E_1, E_2, \dots$ is an increasing chain if for every $i \geq 1$ the system $E_i, E_{i+1}, \dots$ is inequivalent to the system $E_{i+1}, E_{i+2}, \dots$. Now $E_1, \dots, E_m$ is an increasing chain if and only if $E_m, \dots, E_1$ is a decreasing chain. 
However, for infinite chains these concepts are essentially different. Note that a chain can be both decreasing and increasing, for example, if the equations form an independent system. We will consider the *maximal* sizes of independent systems of equations and chains of equations. If the number of unknowns is $n$, then the maximal size of an independent system is denoted by ${ \mathrm{IS} }(n)$. We use two special symbols ${ \mathrm{ub} }$ and ${ \infty }$ for the infinite cases: if there are infinite independent systems, then ${ \mathrm{IS} }(n) = { \infty }$, and if there are only finite but unboundedly large independent systems, then ${ \mathrm{IS} }(n) = { \mathrm{ub} }$. We extend the order relation of numbers to these symbols: $k < { \mathrm{ub} }< { \infty }$ for every integer $k$. Similarly the maximal size of a decreasing chain is denoted by ${ \mathrm{DC} }(n)$, and the maximal size of an increasing chain by ${ \mathrm{IC} }(n)$. Often we are interested in the finiteness of ${ \mathrm{DC} }(n)$, or its asymptotic behaviour when $n$ grows. However, if we are interested in the exact value of ${ \mathrm{DC} }(n)$, then some technical remarks about the definition are in order. First, the case $i=0$ means that there is a solution which is not a solution of the first equation $E_1$; that is, $E_1$ cannot be a trivial equation like $U = U$. If this condition was removed, then we could always add a trivial equation in the beginning, and ${ \mathrm{DC} }(n)$ would be increased by one. Second, we could add the requirement that there must be a solution which is a solution of all the equations $E_1, \dots, E_m$, and the definition would remain the same in the case of free monoids. However, if we consider free semigroups, then this addition would change the definition, because then $E_m$ could not be an equation with no solutions, like $xx = x$ in free semigroups. This would decrease ${ \mathrm{DC} }(n)$ by one. 
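The definitions of this section can be made concrete with a small bounded search. The following sketch (our own illustration, not code from the paper) checks whether a morphism into a free monoid solves a constant-free equation, and tests the independence of a finite system by searching for the required witness morphisms over all words up to a fixed length; a positive answer is a genuine proof of independence, while a negative answer only means no witness was found within the search bound.

```python
# Word equations over a free monoid: an equation is a pair (U, V) of words
# over the variable alphabet; a morphism is a dict from variables to words.
from itertools import product

def apply_morphism(h, w):
    """Apply the morphism h letter by letter to the word w of variables."""
    return "".join(h[c] for c in w)

def solves(h, eq):
    """h is a solution of U = V iff h(U) == h(V)."""
    u, v = eq
    return apply_morphism(h, u) == apply_morphism(h, v)

def is_independent(eqs, variables, alphabet="ab", max_len=2):
    """Bounded check: for each equation, search for a morphism solving all
    the other equations but not this one (words up to length max_len)."""
    words = [""] + ["".join(p) for n in range(1, max_len + 1)
                    for p in product(alphabet, repeat=n)]
    for i, eq in enumerate(eqs):
        witness = any(
            not solves(dict(zip(variables, vals)), eq)
            and all(solves(dict(zip(variables, vals)), e)
                    for j, e in enumerate(eqs) if j != i)
            for vals in product(words, repeat=len(variables)))
        if not witness:
            return False
    return True

# {x = y, x = yy} is independent: each equation excludes solutions of the
# other.  Adding xy = yx to {x = y} is not, since x = y forces xy = yx.
assert is_independent([("x", "y"), ("x", "yy")], "xy")
assert not is_independent([("x", "y"), ("xy", "yx")], "xy")
```

Decreasing and increasing chains can be tested the same way, by comparing the solution sets of the prefixes or suffixes of the sequence over the bounded word set.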
Relations Between Systems and Chains ==================================== Independent systems of equations are a well-known topic (see, e.g., [@HaKaPl02]). Chains of equations have been studied less, so we prove here some elementary results about them. The following theorem states the most basic relations between ${ \mathrm{IS} }$, ${ \mathrm{DC} }$ and ${ \mathrm{IC} }$. \[thm:basic\] For every $n$, $ \is(n) \leq \dc(n), \ic(n) .$ If ${ \mathrm{DC} }(n) < { \mathrm{ub} }$ or ${ \mathrm{IC} }(n) < { \mathrm{ub} }$, then $ \dc(n) = \ic(n).$ Every independent system of equations is also a decreasing and increasing chain of equations, regardless of the order of the equations. This means that $ \is(n) \leq \dc(n), \ic(n) .$ A finite sequence of equations is a decreasing chain if and only if the reverse of this sequence is an increasing chain. Thus $ \dc(n) = \ic(n),$ if ${ \mathrm{DC} }(n) < { \mathrm{ub} }$ or ${ \mathrm{IC} }(n) < { \mathrm{ub} }$. A semigroup has the *compactness property* if every system of equations has an equivalent finite subsystem. Many results on the compactness property are collected in [@HaKaPl02]. In terms of chains, the compactness property turns out to be equivalent to the property that every decreasing chain is finite. \[thm:cp\_dc\] A semigroup has the compactness property if and only if ${ \mathrm{DC} }(n) \leq { \mathrm{ub} }$ for every $n$. Assume first that the compactness property holds. Let $E_1, E_2, \dots$ be an infinite decreasing chain of equations. As a system of equations, it is equivalent to some finite subsystem $E_{i_1}, \dots, E_{i_k}$, where $i_1 < \dots < i_k$. But now $E_1, \dots E_{i_k}$ is equivalent to $E_1, \dots, E_{i_k + 1}$. This is a contradiction. Assume then that ${ \mathrm{DC} }(n) \leq { \mathrm{ub} }$. Let $E_1, E_2, \dots$ be an infinite system of equations. 
If there is an index $N$ such that $E_1, \dots, E_i$ is equivalent to $E_1, \dots, E_{i+1}$ for all $i \geq N$, then the whole system is equivalent to $E_1, \dots, E_N$. If there is no such index, then let $i_1 < i_2 < \dots$ be all indexes such that $E_1, \dots E_{i_k}$ is not equivalent to $E_1, \dots, E_{i_k + 1}$. But then $E_{i_1}, E_{i_2}, \dots$ is an infinite decreasing chain, which is a contradiction. The next example shows that the values of ${ \mathrm{IS} }$, ${ \mathrm{DC} }$ and ${ \mathrm{IC} }$ can differ significantly. We give an example of a monoid where ${ \mathrm{IS} }(1) = 1$, ${ \mathrm{DC} }(1) = { \mathrm{ub} }$ and ${ \mathrm{IC} }(1) = { \infty }$. The monoid is $$\langle a_1, a_2, \dots \ | \ a_i a_j = a_j a_i , \ a_i^{i+1} = a_i^i \rangle .$$ Now every equation on one unknown is of the form $x^i = x^j$. If $i<j$, then this is equivalent to $x^i = x^{i+1}$. So all nontrivial equations are, up to equivalence, $$x = 1, \ x^2 = x, \ x^3 = x^2, \ \dots ,$$ and these have strictly increasing solution sets. Thus ${ \mathrm{IC} }(1) = { \infty }$, ${ \mathrm{DC} }(1) = { \mathrm{ub} }$ and ${ \mathrm{IS} }(1) = 1$. Free Monoids ============ From now on we will consider free monoids and semigroups. The bounds related to free monoids are denoted by ${ \mathrm{IS} }$, ${ \mathrm{DC} }$ and ${ \mathrm{IC} }$, and the bounds related to free semigroups, by ${ \mathrm{IS} }_+$, ${ \mathrm{DC} }_+$ and ${ \mathrm{IC} }_+$. We give some definitions related to word equations and make some easy observations about the relations between maximal sizes of independent systems and chains, assuming these are finite. A solution $h$ is *periodic* if there exists a $t \in S$ such that every $h(x)$, where $x \in \Xi$, is a power of $t$. Otherwise $h$ is *nonperiodic*. An equation $U=V$ is *balanced* if every variable occurs as many times in $U$ as in $V$. 
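The notions of balanced equations and periodic solutions are easy to operationalize. A brute-force Python sketch (our own helper names; the periodicity test uses the fact that all images are powers of a common word iff they share the same primitive root):

```python
from collections import Counter

def is_balanced(U, V):
    """An equation U = V is balanced iff every variable occurs
    as many times in U as in V."""
    return Counter(U) == Counter(V)

def primitive_root(w):
    """Shortest word t such that w is a power of t (brute force over divisors)."""
    n = len(w)
    for d in range(1, n + 1):
        if n % d == 0 and w[:d] * (n // d) == w:
            return w[:d]
    return w

def is_periodic(h):
    """A solution h is periodic iff every image is a power of a common word t."""
    words = [w for w in h.values() if w]  # the empty word is a power of anything
    if not words:
        return True
    t = primitive_root(words[0])
    return all(primitive_root(w) == t for w in words)

print(is_balanced("xyz", "zyx"))             # True
print(is_periodic({"x": "abab", "y": "ab"})) # True
print(is_periodic({"x": "a", "y": "b"}))     # False
```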
The maximal size of an independent system in a free monoid having a nonperiodic solution is denoted by ${ \mathrm{IS} }'(n)$. The maximal size of a decreasing chain having a nonperiodic solution is denoted by ${ \mathrm{DC} }'(n)$. Similar notation can be used for free semigroups. Every independent system of equations $E_1, \dots, E_m$ is also a chain of equations, regardless of the order of the equations. If the system has a nonperiodic solution, then we can add an equation that forces the variables to commute. If the equations in the system are also balanced, then we can add equations $x_i = 1$ for all variables $x_1, \dots, x_n$, and thus get a chain of length $m+n+1$. If they are not balanced, then we can add at least one of these equations. In all cases we obtain the inequalities ${ \mathrm{IS} }'(n) \leq { \mathrm{IS} }(n) \leq { \mathrm{IS} }'(n) + 1$ and ${ \mathrm{DC} }'(n) + 2 \leq { \mathrm{DC} }(n) \leq { \mathrm{DC} }'(n) + n + 1$, as well as ${ \mathrm{IS} }(n) + 1 \leq { \mathrm{DC} }(n)$ and ${ \mathrm{IS} }'(n) \leq { \mathrm{DC} }'(n)$. In the case of free semigroups we derive similar inequalities. Thus ${ \mathrm{IS} }'$ and ${ \mathrm{DC} }'$ are basically the same as ${ \mathrm{IS} }$ and ${ \mathrm{DC} }$, if we are only interested in their finiteness or asymptotic growth. It was conjectured by Ehrenfeucht in a language theoretic setting that the compactness property holds for free monoids. This conjecture was reformulated in terms of equations in [@CuKa83], and it was proved independently by Albert and Lawrence [@AlLa85ehrenfeucht] and by Guba [@Gu86]. \[thm:compactness\] ${ \mathrm{DC} }(n) \leq { \mathrm{ub} }$, and hence also ${ \mathrm{IS} }(n) \leq { \mathrm{ub} }$. The proofs are based on Hilbert’s basis theorem. The compactness property means that ${ \mathrm{DC} }(n) \leq { \mathrm{ub} }$ for every $n$. No better upper bounds are known, when $n > 2$. 
Even the seemingly simple question about the size of ${ \mathrm{IS} }'(3)$ is still completely open; the only thing that is known is that $2 \leq { \mathrm{IS} }'(3) \leq { \mathrm{ub} }$. The lower bound is given by the example $xyz=zyx, xyyz=zyyx$. Three and Four Unknowns ======================= The cases of three and four variables have been studied in [@Cz08]. The article gives examples showing that ${ \mathrm{IS} }'_+(3) \geq 2$, ${ \mathrm{DC} }_+(3) \geq 6$, ${ \mathrm{IS} }'_+(4) \geq 3$ and ${ \mathrm{DC} }_+(4) \geq 9$. We are able to give better bounds for ${ \mathrm{DC} }_+(3)$ and ${ \mathrm{DC} }(4)$. First we assume that there are three unknowns $x$, $y$, $z$. There are trivial examples of independent systems of three equations, for example, $x^2=y, y^2=z, z^2=x$, so ${ \mathrm{IS} }_+(3) \geq 3$. There are also easy examples of independent pairs of equations having a nonperiodic solution, like $xyz=zyx, xyyz=zyyx$, so ${ \mathrm{IS} }'_+(3) \geq 2$. Amazingly, no other bounds are known for ${ \mathrm{IS} }_+(3)$, ${ \mathrm{IS} }'_+(3)$, ${ \mathrm{IS} }(3)$ or ${ \mathrm{IS} }'(3)$. [The following chain of equations shows that ${ \mathrm{DC} }(3) \geq 7$: $$\begin{aligned} xyz &= zxy,& x&=a, \ y=b, \ z=abab\\ xy xzy z &= z xzy xy,& x&=a, \ y=b, \ z=ab\\ xz &= zx,& x&=a, \ y=b, \ z=1\\ xy &= yx,& x&=a, \ y=a, \ z=a\\ x &= 1,& x&=1, \ y=a, \ z=a\\ y &= 1,& x&=1, \ y=1, \ z=a\\ z &= 1,& x&=1, \ y=1, \ z=1 .\end{aligned}$$ Here the second column gives a solution which is not a solution of the equation on the next row but is a solution of all the preceding equations. (Note that on the row $x = 1$ the witness must use commuting values for $y$ and $z$, since with $x = 1$ the first equation reduces to $yz = zy$.)
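The independence of the pair $xyz=zyx$, $xyyz=zyyx$ mentioned above can be checked mechanically. The following sketch (the witnesses are our own choices) exhibits, for each equation, a nonperiodic solution of that equation which fails the other one:

```python
def h(word, assignment):
    """Apply a morphism, given as a dict, to a word of variables."""
    return "".join(assignment[x] for x in word)

def solves(assignment, eq):
    lhs, rhs = eq
    return h(lhs, assignment) == h(rhs, assignment)

E1 = ("xyz", "zyx")
E2 = ("xyyz", "zyyx")

# A nonperiodic solution of E1 that is not a solution of E2 ...
h1 = {"x": "aba", "y": "b", "z": "a"}
# ... and a nonperiodic solution of E2 that is not a solution of E1.
h2 = {"x": "abba", "y": "b", "z": "a"}

print(solves(h1, E1), solves(h1, E2))  # True False
print(solves(h2, E2), solves(h2, E1))  # True False
```

Since each equation has a solution that the other lacks, neither is a consequence of the other, so the pair is an independent system with a nonperiodic solution.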
Also ${ \mathrm{DC} }_+(3) \geq 7$, as shown by the chain $$\begin{aligned} xxyz &= zxyx,& &x=a, \ y=b, \ z=aabaaba\\ xxyx zy z &= z zy xxyx,& &x=a, \ y=b, \ z=aaba\\ xz &= zx,& &x=a, \ y=b, \ z=a\\ xy &= yx,& &x=a, \ y=aa, \ z=a\\ x &= y,& &x=a, \ y=a, \ z=aa\\ x &= z,& &x=a, \ y=a, \ z=a\\ xx &= x,& &\text{no solutions}.\end{aligned}$$]{} If there are three variables, then every independent pair of equations having a nonperiodic solution consists of balanced equations (see [@HaNo03]). It follows that ${ \mathrm{IS} }'(3) + 4 \leq { \mathrm{DC} }(3)$. There are also some other results about the structure of equations in independent systems on three unknowns (see [@CzKa07] and [@CzPl09]). [If we add a fourth unknown $t$, then we can trivially extend any independent system by adding the equation $t=x$. This gives ${ \mathrm{IS} }_+(4) \geq 4$ and ${ \mathrm{IS} }'_+(4) \geq 3$. For chains the improvements are nontrivial. The following chain of equations shows that ${ \mathrm{DC} }(4) \geq 12$: $$\begin{aligned} xyz &= zxy,& x&=a, \ y=b, \ z=abab, \ t=a\\ xyt &= txy,& x&=a, \ y=b, \ z=abab, \ t=abab\\ xy xzy z &= z xzy xy,& x&=a, \ y=b, \ z=ab, \ t=abab\\ xy xty t &= t xty xy,& x&=a, \ y=b, \ z=ab, \ t=ab\\ xy xzty zt &= zt xzty xy,& x&=a, \ y=b, \ z=ab, \ t=1\\ xz &= zx,& x&=a, \ y=b, \ z=1, \ t=ab\\ xt &= tx,& x&=a, \ y=b, \ z=1, \ t=1\\ xy &= yx,& x&=a, \ y=a, \ z=a, \ t=a\\ x &= 1,& x&=1, \ y=a, \ z=a, \ t=a\\ y &= 1,& x&=1, \ y=1, \ z=a, \ t=a\\ z &= 1,& x&=1, \ y=1, \ z=1, \ t=a\\ t &= 1,& x&=1, \ y=1, \ z=1, \ t=1.\end{aligned}$$]{} The next theorem sums up the new bounds given in this section. ${ \mathrm{DC} }_+(3) \geq 7$ and ${ \mathrm{DC} }(4) \geq 12$. Lower Bounds ============ In [@KaPl96] it is proved that ${ \mathrm{IS} }(n) = \Omega(n^4)$ and ${ \mathrm{IS} }_+(n) = \Omega(n^3)$. The former is proved by a construction that uses $n = 10m$ variables and gives a system of $m^4$ equations. Thus ${ \mathrm{IS} }(n)$ is asymptotically at least $n^4/10000$. 
We present here a slightly modified version of this construction. By "reusing" some of the unknowns we get a bound that is asymptotically $n^4/1536$. If $n = 4m$, then ${ \mathrm{IS} }'(n) \geq m^2(m-1)(m-2)/6$. We use unknowns $x_i, y_i, z_i, t_i$, where $1 \leq i \leq m$. The equations in the system are $$E(i,j,k,l): x_i x_j x_k y_i y_j y_k z_i z_j z_k t_l = t_l x_i x_j x_k y_i y_j y_k z_i z_j z_k ,$$ where $i,j,k,l \in \{1, \dots, m\}$ and $i<j<k$. If $i,j,k,l \in \{1, \dots, m\}$ and $i<j<k$, then $$\begin{aligned} x_r &= \begin{cases} ab, & \text{if $r \in \{i,j,k\}$} \\ 1, & \text{otherwise} \end{cases} \quad y_r = \begin{cases} a, & \text{if $r \in \{i,j,k\}$} \\ 1, & \text{otherwise} \end{cases} \\ z_r &= \begin{cases} ba, & \text{if $r \in \{i,j,k\}$} \\ 1, & \text{otherwise} \end{cases} \quad t_r = \begin{cases} ababa, & \text{if $r=l$} \\ 1, & \text{otherwise} \end{cases}\end{aligned}$$ is not a solution of $E(i,j,k,l)$, but is a solution of all the other equations. Thus the system is independent. The idea behind this construction (both the original and the modified) is that $ (ababa)^k = (ab)^k a^k (ba)^k$ holds for $k<3$, but not for $k=3$. It was noted in [@Pl03] that if we could find words $u_i$ such that $ (u_1 \dots u_m)^k = u_1^k \dots u_m^k$ holds for $k<K$, but not for $k=K$, then we could prove that ${ \mathrm{IS} }(n) = \Omega(n^{K+1})$. However, it has been proved that such words do not exist for $K \geq 5$ (see [@Ho01]), and conjectured that such words do not exist for $K=4$. [For small values of $n$ it is better to use ideas from the constructions showing that ${ \mathrm{DC} }(3) \geq 7$ and ${ \mathrm{DC} }(4) \geq 12$. This gives ${ \mathrm{IS} }'(n) \geq (n^2 - 5n + 6)/2$ and ${ \mathrm{DC} }(n) \geq (n^2 + 3n - 4)/2$. The equations in the system are $$x y x z_i z_j y z_i z_j = z_i z_j x z_i z_j y x y ,$$ where $i, j \in \{1, \dots, n-2\}$ and $i<j$.
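The key identity behind the construction can be checked directly. With three of the blocks set to $ab$, $a$, $ba$, the left factor of $E(i,j,k,l)$ becomes $(ab)^3 a^3 (ba)^3 \neq (ababa)^3$, which consequently does not commute with $t_l = ababa$:

```python
def holds(k):
    """Check whether (ababa)^k == (ab)^k a^k (ba)^k."""
    return "ababa" * k == "ab" * k + "a" * k + "ba" * k

print([holds(k) for k in range(4)])  # [True, True, True, False]

# Hence (ab)^3 a^3 (ba)^3 does not commute with ababa:
P = "ab" * 3 + "a" * 3 + "ba" * 3
print(P + "ababa" == "ababa" + P)  # False
```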
The equations in the chain are $$\begin{aligned} x y z_k &= z_k x y ,\\ x y x z_k y z_k &= z_k x z_k y x y ,\\ x y x z_i z_j y z_i z_j &= z_i z_j x z_i z_j y x y ,\\ x z_k &= z_k x ,\\ x y &= y x ,\\ x &= 1 ,\\ y &= 1 ,\\ z_k &= 1 ,\end{aligned}$$ where $i, j \in \{1, \dots, n-2\}$, $i<j$ and $k \in \{1, \dots, n-2\}$. Here we should first take the equations on the first row in some order, then the equations on the second row in some order, and so on.]{} We conclude this section by mentioning a related question. It is well known that any nontrivial equation on $n$ variables forces a defect effect; that is, the values of the variables in any solution can be expressed as products of $n-1$ words (see [@HaKa04] for a survey on the defect effect). If a system has only periodic solutions, then the system can be said to force a maximal defect effect, so ${ \mathrm{IS} }'(n)$ is the maximal size of an independent system not doing that. But how large can an independent system be if it forces only the minimal defect effect, that is, the system has a solution in which the variables cannot be expressed as products of $n-2$ words? In [@KaPl96] it is proved that there are such systems of size $\Omega(n^3)$ in free monoids and of size $\Omega(n^2)$ in free semigroups. Again, no upper bounds are known. Concluding Remarks and Open Problems ==================================== To summarize, we list a few fundamental open problems about systems and chains of equations in free monoids. 1. Is ${ \mathrm{IS} }(3)$ finite? \[q1\] 2. Is ${ \mathrm{DC} }(3)$ finite? \[q2\] 3. Is ${ \mathrm{IS} }(n)$ finite for every $n$? \[q3\] 4. Is ${ \mathrm{DC} }(n)$ finite for every $n$? \[q4\] A few remarks on these questions are in order. First we know that each of these values is at most ${ \mathrm{ub} }$. 
Second, if the answer to any of the questions is "yes", a natural further question is: What is an upper bound for this value, or more sharply, what is the best upper bound, that is, the exact value? For the lower bounds, the best that is known, to our knowledge, is the following: 1. ${ \mathrm{IS} }(3) \geq 3$, 2. ${ \mathrm{DC} }(3) \geq 7$, 3. ${ \mathrm{IS} }(n) = \Omega(n^4)$, 4. ${ \mathrm{DC} }(n) = \Omega(n^4)$. A natural sharpening of Question \[q3\] (and \[q4\]) asks whether these values are exponentially bounded. A question related to Question \[q1\] is the following amazing open problem from [@CuKa83] (see, e.g., [@Cz08] and [@CzKa07] for an extensive study of it): 1. Does there exist an independent system of three equations with three unknowns having a nonperiodic solution? As a summary we make the following remarks. As we see it, Question \[q3\] is a really fundamental question on word equations, or even on combinatorics on words as a whole. Its intriguing nature is revealed by Question \[q1\]: we do not know the answer even in the case of three unknowns. This becomes truly remarkable when we recall that the best known lower bound is still only 3! To conclude, we have considered equations over word monoids and semigroups. All of the questions can be stated in any semigroup, and the results would be different. For example, in commutative monoids the compactness property (Theorem \[thm:compactness\]) holds, but in this case the maximal size of an independent system of equations is ${ \mathrm{ub} }$ (see [@KaPl96]).
Decreasing chains were called ascending chains and Theorem \[thm:cp\_dc\] was proved in the case of free monoids. Most of the paper was devoted to descending chains and test sets. The following conjecture, which we state here using our notation, was given in [@Ho99]: In free monoids ${ \mathrm{IC} }(n) \leq { \mathrm{ub} }$ for all $n$. This appears to be a very interesting and difficult problem. Proofs of Ehrenfeucht’s conjecture are ultimately based on the fact that ideals in polynomial rings satisfy the ascending chain condition. As pointed out in [@Ho99], the same is not true for the descending chain condition, so the above conjecture could be expected to be significantly more difficult to prove than Ehrenfeucht’s conjecture was. Of course, if ${ \mathrm{DC} }(n) < { \mathrm{ub} }$, then ${ \mathrm{IC} }(n) = { \mathrm{DC} }(n)$ by Theorem \[thm:basic\]. M. H. Albert and J. Lawrence. A proof of Ehrenfeucht’s conjecture. , 41(1):121–123, 1985. Karel Culik, II and Juhani Karhumäki. Systems of equations over a free monoid and Ehrenfeucht’s conjecture. , 43(2–3):139–153, 1983. Elena Czeizler. Multiple constraints on three and four words. , 391(1–2):14–19, 2008. Elena Czeizler and Juhani Karhumäki. On non-periodic solutions of independent systems of word equations over three unknowns. , 18(4):873–897, 2007. Elena Czeizler and Wojciech Plandowski. On systems of word equations over three unknowns with at most six occurrences of one of the unknowns. , 410(30–32):2889–2909, 2009. V. S. Guba. Equivalence of infinite systems of equations in free groups and semigroups to finite subsystems. , 40(3):321–324, 1986. Tero Harju and Juhani Karhumäki. Many aspects of defect theorems. , 324(1):35–54, 2004. Tero Harju, Juhani Karhumäki, and Wojciech Plandowski. Independent systems of equations. In M. Lothaire, editor, *Algebraic Combinatorics on Words*, pages 443–472. Cambridge University Press, 2002. Tero Harju and Dirk Nowotka. On the independence of equations in three variables. , 307(1):139–172, 2003. Ju. I. Hmelevskiĭ. . American Mathematical Society, 1976. Translated by G. A. Kandall from the Russian original: Trudy Mat. Inst. Steklov. 107 (1971). Štěpán Holub. Local and global cyclicity in free semigroups. , 262(1–2):25–36, 2001. Juha Honkala. On chains of word equations and test sets. , 68:157–160, 1999. Juhani Karhumäki and Wojciech Plandowski. On the size of independent systems of equations in semigroups. , 168(1):105–119, 1996. Juhani Karhumäki and Aleksi Saarela. An analysis and a reproof of Hmelevskii’s theorem. In *Proceedings of the 12th DLT*, volume 5257 of *LNCS*, pages 467–478. Springer, 2008. Juhani Karhumäki and Aleksi Saarela. Hmelevskii’s theorem and its complexity. Submitted. G. S. Makanin. The problem of the solvability of equations in a free semigroup. , 103(2):147–236, 1977. English translation in Math. USSR Sb. 32:129–198, 1977. Y. Matijasevic. Enumerable sets are diophantine (Russian). , 191:279–282, 1970. Translation in Soviet Math Doklady, Vol 11, 1970. Wojciech Plandowski. Test sets for large families of languages. In *Proceedings of the 7th DLT*, volume 2710 of *LNCS*, pages 75–94. Springer, 2003. Wojciech Plandowski. Satisfiability of word equations with constants is in PSPACE. , 51(3):483–496, 2004. Aleksi Saarela. On the complexity of Hmelevskii’s theorem and satisfiability of three unknown equations. In *Proceedings of the 13th DLT*, volume 5583 of *LNCS*, pages 443–453. Springer, 2009. [^1]: Supported by the Academy of Finland under grant 121419
--- abstract: 'Attribute guided face image synthesis aims to manipulate attributes on a face image. Most existing methods for image-to-image translation can either perform a fixed translation between any two image domains using a single attribute or require training data with the attributes of interest for each subject. Therefore, these methods could only train one specific model for each pair of image domains, which limits their ability to deal with more than two domains. Another disadvantage of these methods is that they often suffer from the common problem of mode collapse, which degrades the quality of the generated images. To overcome these shortcomings, we propose an attribute guided face image generation method using a single model, which is capable of synthesizing multiple photo-realistic face images conditioned on the attributes of interest. In addition, we adopt the proposed model to increase the realism of simulated face images while preserving the face characteristics. Compared to existing models, synthetic face images generated by our method exhibit good photorealistic quality on several face datasets. Finally, we demonstrate that the generated facial images can be used for synthetic data augmentation, and improve the performance of the classifier used for facial expression recognition.'
address: - | Signal Processing Laboratory (LT55), École Polytechnique Fédérale de Lausanne (EPFL-STI-IEL-LT55), Station 11,\ 1015 Lausanne, Switzerland - 'Istanbul Technical University, Istanbul, Turkey' - 'Department of Radiology, University Hospital Center (CHUV), University of Lausanne (UNIL), Lausanne, Switzerland' author: - Behzad - Mohammad Saeed - 'Hazım Kemal Ekenel' - 'Jean-Philippe' bibliography: - 'egbib.bib' title: Learn to synthesize and synthesize to learn --- Attribute guided face image synthesis, generative adversarial network, facial expression recognition Introduction {#sec:intro} ============ In this work, we are interested in the problem of synthesizing realistic faces by controlling the facial attributes of interest (e.g. expression, pose, lighting condition) without affecting the identity properties (see Fig. \[fig:1\]). In addition, this paper investigates learning from synthetic facial images for improving expression recognition accuracy. Synthesizing photo-realistic facial images has applications in human-computer interaction, facial animation and, more importantly, in facial identity or expression recognition. However, this task is challenging since image-to-image translation is an ill-defined problem and it is difficult to collect images of varying attributes for each subject (e.g. images of different facial expressions for the same subject). The most notable progress has come from recent breakthroughs in generative models. In particular, Generative Adversarial Network (GAN) [@goodfellow2014generative] variants have achieved state-of-the-art results for the image-to-image translation task. These GAN models can be trained both with paired training data [@isola2017image] and with unpaired training data [@kim2017learning; @zhu2017unpaired].
Most existing GAN models [@shen2017learning; @zhu2017unpaired] are proposed to synthesize images of a single attribute, which makes their training inefficient in the case of multiple attributes, since a separate model is needed for each attribute. In addition, GAN based approaches often suffer from the common problem of mode collapse, which degrades the quality of the generated images. To overcome these challenges, our objective is to use a single model to synthesize multiple photo-realistic images from the same input image with varying attributes simultaneously. Our proposed model, namely Learn to Synthesize and Synthesize to Learn (LSSL), is based on an encoder-decoder structure operating on the image latent representation, in which we model the shared latent representation across image domains. Therefore, during the inference step, by changing the input face attributes, we can generate plausible face images possessing the attribute of interest. We introduce bidirectional learning for the latent representation; we have found that this loss term prevents generator mode collapse. Moreover, we propose to use an additional face parsing loss to generate high-quality face images. ![image](Fig1.jpg){height="10.5cm" width="16.5cm"} Our paper makes the following contributions: 1. This paper investigates domain adaptation using simulated face images for improving expression recognition accuracy. We show how the proposed approach can be used to generate photo-realistic frontal facial images using a synthetic face image and unlabeled real face images as the input. We compare our results with the SimGAN method [@shrivastava2017learning] in terms of expression recognition accuracy to assess the improvement in the realism of frontal faces. The source code is available at [<https://github.com/CreativePapers/Learn-to-Synthesize-and-Synthesize-to-Learn>]{}. 2.
We show that the use of our method leads to realistic generated images that help improve expression recognition accuracy despite having only a small number of real training images. Further, compared to other variants of GAN models [@zhu2017unpaired; @perarnau2016invertible; @choi2018stargan], we show that better performance can be attained with the proposed method when it is used for the data augmentation process; 3. Unlike most existing GAN based methods [@perarnau2016invertible], which are trained with a large number of labeled and matching image pairs, the proposed method is designed for unpaired image-to-image translation. In fact, the proposed method transfers the learnt characteristics between different classes; 4. The proposed method is capable of learning image-to-image translation among multiple domains using a single model. We introduce bidirectional learning for the image latent representation to additionally enforce the latent representation to capture shared features of different attribute categories and to prevent generator mode collapse. By doing so, we can synthesize face photos with a desired attribute and translate an input image into another domain image[^1]. Besides, we present a face parsing loss and an identity loss that help to preserve the face image local details and identity. Related work {#sec:related} ============ Recently, GAN based models [@goodfellow2014generative] have achieved impressive results in many image synthesis applications, including image super-resolution [@ledig2017photo], image-to-image translation (pix2pix) [@isola2017image] and CycleGAN [@zhu2017unpaired]. We summarize the contributions of a few important related works below: #### Applications of GANs to Face Generation [@taigman2016unsupervised] proposed a domain transfer network to tackle the problem of emoji generation for a given facial image.
[@lu2018attribute] proposed attribute-guided face generation to translate low-resolution face images to high-resolution face images. [@huang2017beyond] proposed a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic face synthesis by simultaneously considering local face details and global structures. #### Image-to-Image Translation Using GANs Many existing image-to-image translation methods, e.g. [@isola2017image; @shrivastava2017learning], formulated GANs in the supervised setting, where example image pairs are available. However, collecting paired training data can be difficult. On the other side, there are other GAN based methods which do not require matching pairs of samples. For example, CycleGAN [@zhu2017unpaired] is capable of learning transformations from a source to a target domain without one-to-one mapping between the two domains' training data. [@li2016deep] proposed a deep convolutional network model for Identity-Aware Transfer (DIAT) of facial attributes. However, these GAN based methods could only train one specific model for each pair of image domains. Unlike the aforementioned approaches, we use a single model to learn to synthesize multiple photo-realistic images, each having a specific attribute. More recently, IcGAN [@perarnau2016invertible] and StarGAN [@choi2018stargan] proposed image editing using AC-GAN [@odena2017conditional] with conditional information. However, we perform domain adaptation by adding realism to the simulated faces, and these methods offer no such solution. Similar to [@perarnau2016invertible], Fader Networks [@lample2017fader] proposed an image synthesis model without needing to apply a GAN to the decoder output. However, these methods impose constraints on the image latent space to enforce it to be independent from the attributes of interest, which may result in loss of information when generating attribute guided images.
#### GANs for Facial Frontalization and Expression Transfer [@zhang2018joint] proposed a method that disentangles the attributes (expression and pose) for simultaneous pose-invariant facial expression recognition and face image synthesis. Instead, we seek to learn attribute-invariant information in the latent space by imposing an auxiliary classifier to classify the generated images. [@qiao2018geometry] proposed a Geometry-Contrastive Generative Adversarial Network (GC-GAN) for transferring continuous emotions across different subjects. However, this requires training data with expression information, which may be expensive to obtain. Alternatively, our self-supervised approach automatically learns the required factors of variation by transferring the learnt characteristics between different emotion classes. [@zhu2018emotion] investigated GANs for data augmentation for the task of emotion classification. [@lai2018emotion] proposed a multi-task GAN-based network that learns to synthesize frontal face images from profile face images. However, they require paired training data of frontal and profile faces. Instead, we seek to add realism to synthetic frontal face images without requiring real frontal face images during training. Our method can produce synthesized faces using synthetic frontal faces and real faces with arbitrary poses as input. Methods {#sec:approach} ======= We first introduce our proposed multi-domain image-to-image translation model in Section \[subsec:attribute\]. Then, we explain learning from simulated data by adding realism to simulated face images in Section \[subsec:posenormalization\]. Finally, we discuss our implementation details and experimental results in Section \[implementation\] and Section \[subsec:experimentalresults\], respectively.
Learn to Synthesize {#subsec:attribute} ------------------- Let $\mathcal{X}$ and $\mathcal{S}$ denote the original image and side conditional image domains, respectively, and $\mathcal{Y}$ the set of possible facial attributes, where we consider attributes including facial expression, head pose and lighting (see Fig. \[fig:2\]). As the training set, we have $m$ triple inputs $\left (x_{i}\in \mathcal{X}, s_{i}\in \mathcal{S}, y_{i}\in \mathcal{Y} \right )$, where $x_{i}$ and $y_{i}$ are the $i^{th}$ input face image and binary attribute, respectively, and $s_{i}$ represents the $i^{th}$ conditional side image as additional information to guide photo-realistic face synthesis. Then, for any categorical attribute vector $y$ from the set of possible facial attributes $\mathcal{Y}$, the objective is to train a model that will generate a photo-realistic version (${x}'$ or ${s}'$) of the inputs ($x$ and $s$) from image domains $\mathcal{X}$ and $\mathcal{S}$ with desired attributes $y$. ![Examples of facial attribute transfer. (a) Generating images with varying poses ranging from 0 to 45 degrees (yaw angle) in 15 degree steps. (b) Generating face images with three different lighting conditions using a face image with normal illumination as input: normal illumination (reconstruction), weak illumination and dark illumination, respectively.[]{data-label="fig:2"}](Fig2_1.jpg "fig:"){width="\linewidth" height="1.1\linewidth"} ![image](Fig2_2.jpg "fig:"){width="\linewidth" height="1.1\linewidth"} Our model is based on the encoder-decoder architecture with domain adversarial training.
As the input to our expression synthesis method (see Fig. \[fig:3\_1\]), we propose to incorporate individual-specific facial shape model as the side conditional information $s$ in addition to the original input image $x$. The shape model can be extracted from the configuration of the facial landmarks, where the facial geometry varies with different individuals. Our goal is then to train a single generator $G$ with encoder $G_{enc}$ – decoder $G_{dec}$ networks to translate the input pair $\left ( x,s \right )$ from source domains into their corresponding output images $\left ( {x}',{s}' \right )$ in the target domain conditioned on the target domain attribute $y$ and the inputs latent representation $G_{enc}\left ( x,s \right )$, $G_{dec}\left ( G_{enc}\left ( x,s \right ),y \right )\rightarrow {x}',{s}'$. The encoder $G_{enc}:\left ( \mathcal{X}^{source}, \mathcal{S}^{source} \right )\rightarrow \mathbb{R}^{n\times \frac{h}{16}\times \frac{w}{16}}$ is a fully convolutional neural network with parameters $\theta _{enc}$ that encodes the input images into a low-dimensional feature space $G_{enc}\left ( x,s \right )$, where $n, h, w$ are the number of the feature channels and the input images dimensions, respectively. The decoder $G_{dec}:\left ( \mathbb{R}^{n\times \frac{h}{16}\times \frac{w}{16}},\mathcal{Y} \right )\rightarrow \mathcal{X}^{target}, \mathcal{S}^{target}$ is the sub-pixel [@shi2016real] convolutional neural network with parameters $\theta _{dec}$ that produce realistic images with target domain attribute $y$ and given the latent representation $G_{enc}\left ( x,s \right )$. The precise architectures of the neural networks are described in Section \[networkarchitechure\]. During training, we randomly use a set of target domain attributes $y$ to make the generator more flexible in synthesizing images. In the following, we introduce the objectives for the proposed model optimization. 
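The text above does not spell out how the target attribute vector $y$ is injected into $G_{dec}$; a common choice in such conditional architectures (e.g. StarGAN-style models) is to tile $y$ over the spatial grid of the latent representation and concatenate it along the channel axis. A NumPy sketch of this conditioning step (the function name and shapes are our own illustrative assumptions):

```python
import numpy as np

def concat_attributes(latent, y):
    """Tile the attribute vector y (shape (c,)) over the spatial grid of the
    latent representation (shape (n, h/16, w/16)) and concatenate along the
    channel axis, giving a decoder input of shape (n + c, h/16, w/16)."""
    c = y.shape[0]
    _, hh, ww = latent.shape
    y_map = np.broadcast_to(y.reshape(c, 1, 1), (c, hh, ww))
    return np.concatenate([latent, y_map], axis=0)

latent = np.zeros((256, 8, 8))   # hypothetical G_enc output for 128x128 inputs
y = np.array([1.0, 0.0, 0.0])    # one-hot target attribute
print(concat_attributes(latent, y).shape)  # (259, 8, 8)
```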
[0.90]{} ![image](Fig3_1.png){width="100.00000%"} [0.90]{} ![image](Fig3_2.png){width="100.00000%"}

#### GAN Loss

We introduce a model that discovers cross-domain image translation with GANs. At inference time, the model should be able to generate diverse facial images by changing only the attribute of interest. To this end, we seek to learn attribute-invariant information in the latent space, representing the features shared by images sampled with different attributes. That is, if the original and target domains are semantically similar (e.g., facial images with different expressions), we expect the features common to both domains to be captured by the same latent representation. The decoder must then use the target attribute to perform image-to-image translation from the original domain to the target domain. However, this learning process is unsupervised: for each training image from the source domain, its counterpart in the target domain with attribute $y$ is unknown. Therefore, we train an additional neural network, the discriminator $D$ (with parameters $\theta _{dis}$), in an adversarial formulation, not only to distinguish between real and fake generated images, but also to classify images into their corresponding attribute categories. We use the Wasserstein GAN objective [@arjovsky2017wasserstein] with a gradient penalty loss $\mathcal{L}_{gp}$ [@gulrajani2017improved], formulated as: $$\label{eq1} \begin{split} \mathcal{L} _{GAN}=\mathbb{E}_{x,s}\left [ D_{src}\left ( x,s \right ) \right ]-\mathbb{E}_{x,s,y}\left [ D_{src}\left ( G_{dec}\left ( G_{enc}\left ( x,s \right ),y \right ) \right ) \right ]\\ -\lambda_{gp} \thinspace \mathcal{L}_{gp}\left ( D_{src} \right ). \end{split}$$ The term $D_{src}\left ( \cdot \right )$ denotes a probability distribution over image sources given by $D$. The hyper-parameter $\lambda_{gp}$ balances the GAN objective against the gradient penalty.
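A minimal PyTorch sketch of this critic objective with the gradient penalty is given below. Here `D` is assumed to return one scalar score per sample, and the penalty is applied on random interpolates between real and generated samples, as in WGAN-GP; the function names are ours, not from the LSSL code:

```python
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """WGAN-GP term: penalize (||grad_x D(x_hat)|| - 1)^2 on interpolates."""
    alpha = torch.rand(real.size(0), 1, 1, 1)           # per-sample mixing weight
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_hat = D(x_hat)
    grad = torch.autograd.grad(d_hat.sum(), x_hat, create_graph=True)[0]
    return lambda_gp * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def d_gan_loss(D, real, fake):
    # The critic maximizes E[D(real)] - E[D(fake)] minus the penalty,
    # i.e. it minimizes the negative of Eq. (1).
    fake = fake.detach()  # do not backprop into the generator here
    return -(D(real).mean() - D(fake).mean()) + gradient_penalty(D, real, fake)

def g_gan_loss(D, fake):
    # The generator maximizes E[D(fake)].
    return -D(fake).mean()
```

In practice `real` and `fake` would be the channel-wise concatenation of the image and side image, matching $D_{src}(x,s)$ above.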
The generator (encoder-decoder networks) in our model plays two roles: it learns an attribute-invariant representation of the input images, and it is trained to maximally fool the discriminator in a *min-max* game. The discriminator, in turn, simultaneously seeks to identify the fake examples of each attribute.

#### Attribute Classification Loss

We deploy a classifier, realized as an additional output of the discriminator, to perform the auxiliary task of classifying the synthesized and real facial images into their respective attribute categories. The attribute classification loss of real images $\mathcal{L}_{cls_{r}}$, used to optimize the discriminator parameters $\theta _{dis}$, is defined as follows: $$\label{eq2} \begin{split} \min\limits_{\theta _{dis}}\mathcal{L}_{cls_{r}}& =\mathbb{E}_{x,s,{y}'}\left [ \ell_{r}\left ( x,s,{y}' \right ) \right ],\\ \ell_{r}\left ( x,s,{y}' \right )& =\sum_{i=1}^{m}-{y_{i}}'\log D_{cls}\left ( x,s \right )-\left ( 1-{y_{i}}' \right )\log\left ( 1-D_{cls}\left ( x,s \right ) \right ). \end{split}$$ Here, ${y}'$ denotes the original attribute categories of the real images, and $\ell_{r}$ is the sum of the binary cross-entropy losses over all attributes. In addition, the attribute classification loss of fake images $\mathcal{L}_{cls_{f}}$, used to optimize the generator parameters $\left ( \theta _{enc},\theta _{dec} \right )$, is formulated as follows: $$\label{eq3} \begin{split} \min\limits_{\theta _{enc},\theta _{dec}}\mathcal{L}_{cls_{f}} =\mathbb{E}_{x,s,{y}'}\left [ \ell_{f}\left ( {x}',{s}',y \right ) \right ],\\ \ell_{f}\left ( {x}',{s}',y \right ) =\sum_{i=1}^{m}-y_{i}\log D_{cls}\left ( {x}',{s}' \right )\\ -\left ( 1-y_{i} \right )\log\left ( 1-D_{cls}\left ( {x}',{s}' \right ) \right ), \end{split}$$ where ${x}'$ and ${s}'$ are the generated image and auxiliary output, which should correctly carry the target domain attributes $y$, and $\ell_{f}$ sums the cross-entropy losses over all fake images.
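Both classification terms reduce to a multi-label binary cross-entropy on the discriminator's attribute head. A sketch, assuming $D_{cls}$ returns one logit per binary attribute (using the numerically stable logits form rather than the explicit $\log$/$\log(1-\cdot)$ of Eqs. \[eq2\]-\[eq3\]):

```python
import torch
import torch.nn.functional as F

def cls_loss_real(d_cls_logits, y_true):
    """L_cls_r: BCE of the attribute head on real images vs. original labels y'.
    Gradients flow into the discriminator parameters only."""
    return F.binary_cross_entropy_with_logits(d_cls_logits, y_true)

def cls_loss_fake(d_cls_logits_fake, y_target):
    """L_cls_f: the same loss on generated images vs. the target attributes y.
    Gradients flow back into the generator (encoder-decoder) parameters."""
    return F.binary_cross_entropy_with_logits(d_cls_logits_fake, y_target)
```

With zero logits the head predicts probability $0.5$ for every attribute, so either loss equals $\log 2 \approx 0.693$ per attribute, a useful sanity check during training.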
#### Identity Loss

With the identity loss, we aim to preserve attribute-excluding facial image details, such as facial identity, before and after image translation. To this end, we use a pixel-wise $l_{1}$ loss to enforce consistency with the original-domain face and to suppress blurriness: $$\label{eq4} \begin{split} \mathcal{L}_{id} = \mathbb{E}_{x,s,{y}'}\left [\left \| G_{dec}\left ( G_{enc}\left ( x,s \right ),y \right )-x\right \|_{1} \right ]. \end{split}$$

#### Face Parsing Loss

Important facial components (e.g., lips and eyes) are typically small and cannot be reconstructed well by minimizing the identity loss on the whole face image alone. Therefore, we use a face parsing loss to further improve the harmony of the synthetic faces. As our face parsing network, we use a U-Net [@ronneberger2015u] trained on the Helen dataset [@le2012interactive], which provides ground-truth face semantic labels. Instead of utilizing all semantic labels, we use three key face components (lips, eyes and face skin). Once trained, the parsing network remains fixed in our framework. The parsing loss is back-propagated to further regularize the generator. Fig. \[fig:4\] shows some parsing results on the RaFD dataset [@langner2010presentation]. $$\label{eq5} \begin{split} \mathcal{L}_{p} =\mathbb{E}_{x,s,{y}'}\left [ A_{p}\left ( P\left ( x \right )-P\left ( {x}' \right ) \right ) \right ], \end{split}$$ where $A_{p}\left ( \cdot \right )$ denotes a function that computes a pixel-wise softmax loss and $P\left ( \cdot \right )$ is the face parsing network. ![Face parsing maps on the RaFD dataset. **Left to right**: input *neutral* face and parsing maps for its constituent facial parts containing lips (second column), face skin (third column), eyes (fourth column) and color visualization generated by all three category parsing maps (last column), respectively.
[]{data-label="fig:4"}](Fig4.pdf){height="6.5cm" width="9cm"}

#### Bidirectional Loss

Using the GAN loss alone usually leads to mode collapse, generating nearly identical outputs regardless of the input face photo. This problem has been observed in various applications of conditional GANs [@isola2017image; @dosovitskiy2016generating] and, to our knowledge, there is still no definitive way to deal with it. To address this problem, we exploit the fact that, with the trained generator, images of different domains can be translated bidirectionally. We decompose this objective into two terms: a bidirectional loss on the image latent representation, and a bidirectional loss between the synthesized images and the original input images. This objective is formulated using the $l_{1}$ loss as follows: $$\label{eq6} \begin{split} \mathcal{L} _{bi} =\mathbb{E}_{x,s,{y}'}\left [\left \| x-\hat{x} \right \|_{1}+\left \| s-\hat{s} \right \|_{1} \right ]+\\ \mathbb{E}_{x,s,y}\left [\left \| G_{enc}\left ( x,s \right )-G_{enc}\left ( {x}',{s}' \right ) \right \|_{1} \right ], \\ {x}',{s}'=G_{dec}\left ( G_{enc}\left ( x,s \right ),y \right ), \\ \hat{x},\hat{s} =G_{dec}\left ( G_{enc}\left ( {x}',{s}' \right ),{y}' \right ). \end{split}$$ In the above equation, $\hat{x}$ and $\hat{s}$ denote the reconstructed original image and side conditional image, respectively. Unlike [@zhu2017unpaired], where cycle consistency losses are applied only at the image level, we additionally minimize a reconstruction loss on the latent representation.

#### Overall Objective

Finally, the generator $G$ is trained with a linear combination of five loss terms: the adversarial loss, the attribute classification loss for fake images, the bidirectional loss, the identity loss and the face parsing loss.
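A minimal sketch wiring these five generator terms together is shown below. The network calls are placeholders for the modules defined earlier; for simplicity the parsing term uses a mean-squared error between parsing maps rather than the pixel-wise softmax loss of Eq. \[eq5\], and the attribute head is assumed to return logits:

```python
import torch
import torch.nn.functional as F

def generator_loss(D_src, D_cls, P, G_enc, G_dec, x, s, y, y_orig,
                   lam_bi=10.0, lam_cls=1.0, lam_id=10.0, lam_p=10.0):
    """Illustrative combination of the five generator terms (cf. Eq. (7))."""
    xs = torch.cat([x, s], dim=1)
    z = G_enc(xs)
    out = G_dec(z, y)                       # (x', s') stacked on channels
    x_fake = out[:, :3]

    l_gan = -D_src(out).mean()              # fool the WGAN critic
    l_cls = F.binary_cross_entropy_with_logits(D_cls(out), y)
    l_id = F.l1_loss(x_fake, x)             # identity (pixel-wise l1) term
    l_p = F.mse_loss(P(x_fake), P(x))       # parsing-map consistency (sketch)

    # bidirectional terms: reconstruct the inputs with the original
    # attributes y', and match the latent codes of inputs and outputs
    rec = G_dec(G_enc(out), y_orig)
    l_bi = F.l1_loss(rec, xs) + F.l1_loss(G_enc(out), z)

    return l_gan + lam_cls * l_cls + lam_id * l_id + lam_p * l_p + lam_bi * l_bi
```

The hyper-parameter defaults mirror the values given later in the Implementation Details.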
Meanwhile, the discriminator $D$ is optimized using the adversarial loss and the attribute classification loss for real images: $$\label{eq7} \begin{split} \mathcal{L}_{G}&=\mathcal{L}_{GAN}+\lambda _{bi}\mathcal{L}_{bi}+\lambda _{cls}\mathcal{L}_{cls_{f}}+\lambda _{id}\mathcal{L}_{id}+\lambda_{p}\mathcal{L}_{p},\\ \mathcal{L}_{D}&=-\mathcal{L}_{GAN}+\lambda _{cls}\mathcal{L}_{cls_{r}}, \end{split}$$ where $\lambda _{bi}$, $\lambda _{p}$, $\lambda _{id}$ and $\lambda _{cls}$ are hyper-parameters that tune the importance of the bidirectional, face parsing, identity and attribute classification losses, respectively.

Synthesize to Learn {#subsec:posenormalization}
-------------------

In unconstrained facial expression recognition, accuracy drops significantly under large pose variations. A key remedy is to use simulated faces rendered in frontal view. However, learning from synthetic face images can be problematic due to the distribution discrepancy between real and synthetic images. Here, our proposed model generates realistic face images given a real profile face with arbitrary pose and a simulated face image as input (see Fig. \[fig:3\_2\]). We utilize a 3D Morphable Model with a bilinear face model [@vlasic2005face] to construct a simulated frontal face image from multiple camera views. The discriminator's role here is to judge the realism of synthetic face images, using unlabeled real profile face images as conditional side information. In addition, using the same discriminator, we can generate face images exhibiting different expressions. We compare the results of LSSL with the SimGAN method [@shrivastava2017learning] on the BU-3DFE dataset [@yin20063d] to evaluate the realism of the generated faces. SimGAN [@shrivastava2017learning] learns from simulated and unsupervised images through adversarial training; however, it was devised for much simpler scenarios, e.g., eye image refinement.
In addition, categorical information is ignored in SimGAN, which limits the model's generalization. In contrast, LSSL overcomes this issue by introducing the attribute classification loss into the objective function. For a fair comparison with SimGAN, we add the attribute classification loss by modifying SimGAN's discriminator, while keeping the rest of the network unchanged. We achieve more visually pleasing results on test data than the SimGAN method (see Fig. \[fig:7\]).

Implementation Details {#implementation}
======================

All networks are trained using the Adam optimizer [@kingma2014adam] $\left ( \beta _{1}=0.5,\beta _{2}=0.999 \right )$ with a base learning rate of $0.0001$. We linearly decay the learning rate after the first 100 epochs. We use simple data augmentation, only flipping the images horizontally. The input image size and the batch size are set to $128\times 128$ and 8 for all experiments, respectively. We update the discriminator five times for each generator (encoder-decoder) update. The hyper-parameters in Eq. \[eq7\] and Eq. \[eq1\] are set as $\lambda _{bi}=10$, $\lambda _{id}=10$, $\lambda _{p}=10$, $\lambda_{gp}=10$ and $\lambda _{cls}=1$. The whole model is implemented in PyTorch on a single NVIDIA GeForce GTX 1080.

Networks Architectures {#networkarchitechure}
----------------------

For the discriminator, we use PatchGAN [@isola2017image], which penalizes structure at the scale of image patches. The generator network of LSSL is composed of five convolutional layers with stride two for downsampling, six residual blocks, and four upsampling stages, each doubling the spatial resolution; for upsampling we use sub-pixel convolution instead of transposed convolution, followed by instance normalization [@ba2016layer].
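One such sub-pixel upsampling stage can be sketched as follows: a convolution expands the channels by a factor of four, a PixelShuffle trades them for a $2\times$ spatial resolution increase, and instance normalization follows. The channel sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

def subpixel_block(in_ch, out_ch):
    """One sub-pixel upsampling stage: conv -> PixelShuffle(2) -> InstanceNorm."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch * 4, kernel_size=3, padding=1),
        nn.PixelShuffle(2),            # (B, 4C, H, W) -> (B, C, 2H, 2W)
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```

Compared with transposed convolution, this rearrangement avoids the characteristic checkerboard artifacts and matches the sub-pixel design cited above.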
For the face parsing network, we use the same architecture as the U-Net proposed in [@ronneberger2015u], but our face parsing network is built from the depthwise separable convolutional blocks proposed in MobileNets [@sandlerv2]. The network architecture of LSSL is shown in Fig. \[fig:5\]. ![image](Fig5.pdf){height="10cm" width="17cm"}

Experimental Results {#subsec:experimentalresults}
====================

In this section, we first compare our LSSL method with recent image-to-image translation methods from a qualitative perspective, and then demonstrate the generality of our method (quantitative analysis) using different techniques for facial expression recognition.

Datasets
--------

**Oulu-CASIA VIS [@zhao2011facial]**: This dataset contains 480 sequences (from 80 subjects) of six basic facial expressions under visible-light (VIS) normal illumination conditions. The sequences start from a neutral face and end with the peak facial expression. This dataset is chosen due to the high intra-class variations caused by personal attributes. We conducted our experiments using a subject-independent 10-fold cross-validation strategy. **MUG [@aifanti2010mug]**: The MUG dataset contains image sequences of seven different facial expressions from 86 subjects, comprising 51 men and 35 women. The image sequences were captured at a resolution of $896\times 896$. We used the image sequences of 52 subjects with the corresponding annotations, which are publicly available via the internet. **BU-3DFE [@yin20063d]**: The Binghamton University 3D Facial Expression Database (BU-3DFE) [@yin20063d] contains 3D models of 100 subjects, 56 females and 44 males. The subjects show a neutral face as well as six basic facial expressions, each at four different intensity levels.
Following the settings in [@tariq2013maximum] and [@zhang2018joint], we used an OpenGL-based tool from the database creators to render multiple views from the 3D models at seven pan angles $\left ( 0^{\circ},\pm 15^{\circ},\pm 30^{\circ},\pm 45^{\circ} \right )$. **RaFD [@langner2010presentation]**: The Radboud Faces Database (RaFD) contains 4,824 images of 67 participants. Each subject makes eight facial expressions.

#### Qualitative evaluation

As shown in Fig. \[fig:6\], our facial attribute transfer results on test data (images unseen during training) are more visually pleasing than those of recent baselines, including IcGAN [@perarnau2016invertible] and CycleGAN [@zhu2017unpaired]. We believe that our proposed losses (parsing and identity losses) help to preserve face image details and identity. IcGAN even fails to generate subjects with the desired attributes, while our proposed method learns attribute-invariant features applicable to synthesizing multiple images with desired attributes. In addition, to evaluate the proposed pose normalization method, the face attribute transfer results of our method are compared with the SimGAN method [@shrivastava2017learning] on the BU-3DFE dataset [@yin20063d] (see Fig. \[fig:7\]). ![Facial attribute transfer results of LSSL compared with IcGAN [@perarnau2016invertible] and CycleGAN [@zhu2017unpaired], respectively.[]{data-label="fig:6"}](Fig6.pdf){height="7cm" width="8.6cm"} [0.46]{} ![Pose-normalized face attribute transfer results of (a) LSSL method compared with (b) SimGAN method [@shrivastava2017learning] on the BU-3DFE dataset [@yin20063d]. The input synthetic frontal face and real profile face are fed into our model to exhibit specified attribute.
**Left to right**: input synthetic face and seven different attributes including *angry*, *disgusted*, *fearful*, *happiness*, *neutral*, *sadness* and *surprised*, respectively.[]{data-label="fig:7"}](Fig7_1.png "fig:"){width="\linewidth"} [0.46]{} ![Pose-normalized face attribute transfer results of (a) LSSL method compared with (b) SimGAN method [@shrivastava2017learning] on the BU-3DFE dataset [@yin20063d]. The input synthetic frontal face and real profile face are fed into our model to exhibit specified attribute. **Left to right**: input synthetic face and seven different attributes including *angry*, *disgusted*, *fearful*, *happiness*, *neutral*, *sadness* and *surprised*, respectively.[]{data-label="fig:7"}](Fig7_2.png "fig:"){width="\linewidth"}

#### Quantitative Evaluation

For the quantitative analysis, we apply LSSL to data augmentation for facial expression recognition. We augment real images from the Oulu-CASIA VIS dataset with synthetic expression images generated by LSSL and its variants, and compare with other methods by training an expression classifier. The purpose of this experiment is to introduce more variability and further enrich the dataset, in order to improve expression recognition performance. In particular, from each of the six expression categories, we generate 0.5K, 1K, 2K, 5K and 10K images, respectively. As shown in Fig. \[fig:8\], when the number of synthetic images is increased to 30K, the accuracy improves drastically, reaching 87.40%. The performance starts to saturate when more images (60K) are used. We achieved higher recognition accuracy using the images generated by LSSL than with other CNN-based methods, including the popular generative model StarGAN [@choi2018stargan] (see Table \[aug\_synthesis\]). This suggests that our model has learned to generate more realistic facial images controlled by the expression category.
In addition, we evaluate the sensitivity of the results to the different components of the LSSL method (face parsing loss, bidirectional loss and side conditional image, respectively). We observe that each of the proposed loss terms yields a notable performance gain in facial expression recognition. ![Impact of the amount of training synthetic images on performance in terms of expression recognition accuracy.[]{data-label="fig:8"}](Fig8.png){height="6.5cm" width="8.6cm"}

  Method                            Accuracy
  --------------------------------- ------------ --
  HOG 3D [@klaser2008spatio]        70.63%
  AdaLBP [@zhao2011facial]          73.54%
  Atlases [@guo2012dynamic]         75.52%
  STM-ExpLet [@liu2014learning]     74.59%
  DTAGN [@jung2015deep]             81.46%
  StarGAN [@choi2018stargan]        83.90%
  **LSSL W/O Side Input**           **84.70%**
  **LSSL W/O Bidirectional Loss**   **84.30%**
  **LSSL W/O Face Parsing Loss**    **86.95%**
  **LSSL**                          **87.40%**

  : Performance comparison of expression recognition accuracy between the proposed method and other state-of-the-art methods.[]{data-label="aug_synthesis"}

Moreover, we evaluate the performance of LSSL on the MUG facial expression dataset [@aifanti2010mug] using the video frames of the peak expressions. Fig. \[fig:9\] shows sample facial attribute transfer results on the MUG dataset [@aifanti2010mug]. It should be noted that the MUG facial expression dataset is only available to authorized users; we have permission from only a few subjects, including subjects 1 and 20, to use their photos in this paper. In Table \[MUG1\], we report the average accuracy of facial expression classification on synthesized images. We trained a facial expression classifier with a $\left ( 90\%/10\% \right )$ training/test split using a ResNet-50 [@he2016deep], resulting in an accuracy of $90.42\%$ on the real test set. We then trained each of the baseline models, including CycleGAN, IcGAN and StarGAN, on the same training set and performed image-to-image translation on the same test set.
Finally, we classified the expressions of these generated images using the above-mentioned classifier. As shown in Table \[MUG1\], our model achieves the highest classification accuracy (close to that on real images), demonstrating that it generates the most realistic expressions among all compared methods. [0.46]{} ![Facial attribute transfer results from our proposed method for (a) subject 1 and (b) subject 20, respectively. The input face images are manipulated to exhibit desired attribute. **Left to right**: input *neutral* face and seven different attributes including *anger*, *disgust*, *fear*, *happiness*, *neutral*, *sadness* and *surprise*, respectively.[]{data-label="fig:9"}](Fig9_1.png "fig:"){width="\linewidth"} [0.46]{} ![Facial attribute transfer results from our proposed method for (a) subject 1 and (b) subject 20, respectively. The input face images are manipulated to exhibit desired attribute. **Left to right**: input *neutral* face and seven different attributes including *anger*, *disgust*, *fear*, *happiness*, *neutral*, *sadness* and *surprise*, respectively.[]{data-label="fig:9"}](Fig9_2.png "fig:"){width="\linewidth"}

  Method                            Accuracy
  --------------------------------- ------------ --
  Real Test Set                     90.42%
  CycleGAN [@zhu2017unpaired]       84.40%
  IcGAN [@perarnau2016invertible]   80.32%
  StarGAN [@choi2018stargan]        85.15%
  **LSSL W/O Face Parsing Loss**    **89.91%**
  **LSSL**                          **90.35%**

  : Performance comparison on the MUG dataset in terms of average classification accuracy. []{data-label="MUG1"}

#### Pose Normalization Analysis

Using the BU-3DFE dataset [@yin20063d], we designed a subject-independent experimental setup and performed 5-fold cross-validation over the 100 subjects. The training data includes frontal-face images of 80 subjects, while the test data includes images of the remaining 20 subjects with varying poses.
We use the VGG-Face model [@parkhi2015deep], pretrained on RaFD [@langner2010presentation], and further fine-tune it on the frontal face images of the BU-3DFE dataset. As can be observed from Table \[frontal\_cnn\], pose normalization helps to improve the expression recognition performance on non-frontal faces (ranging from 15 to 45 degrees in 15-degree steps). Moreover, adding realism to the simulated face images brings additional gains in expression recognition accuracy. In particular, our method outperforms two recent works [@lai2018emotion; @zhang2018joint] that addressed the pose normalization task. Our proposed losses (parsing and identity losses) help the synthesized frontal face images preserve facial detail (e.g., expression and identity). \[frontal\_cnn\]

Visualizing Representation
--------------------------

Fig. \[fig:10\] visualizes some activations of hidden units in the fifth layer of the encoder (the first component of the generator). Although not all units are semantic, these visualizations indicate that the network learns to identify the most informative visual cues in the face regions. ![Visualization of some hidden units in the encoder of LSSL trained on the BU-3DFE dataset [@yin20063d]. We highlight regions of face images that a particular convolutional hidden unit maximally activates on.[]{data-label="fig:10"}](Fig10_1.jpg "fig:"){height="1.3cm" width="8.9cm"} \[fig:doc1\] ![Visualization of some hidden units in the encoder of LSSL trained on the BU-3DFE dataset [@yin20063d]. We highlight regions of face images that a particular convolutional hidden unit maximally activates on.[]{data-label="fig:10"}](Fig10_2.jpg "fig:"){height="1.3cm" width="8.9cm"} \[fig:doc2\] ![Visualization of some hidden units in the encoder of LSSL trained on the BU-3DFE dataset [@yin20063d].
We highlight regions of face images that a particular convolutional hidden unit maximally activates on.[]{data-label="fig:10"}](Fig10_3.jpg "fig:"){height="1.3cm" width="8.9cm"} \[fig:doc3\] ![Visualization of some hidden units in the encoder of LSSL trained on the BU-3DFE dataset [@yin20063d]. We highlight regions of face images that a particular convolutional hidden unit maximally activates on.[]{data-label="fig:10"}](Fig10_4.jpg "fig:"){height="1.3cm" width="8.9cm"} \[fig:doc3\]

Training Losses and Additional Qualitative Results
--------------------------------------------------

Fig. \[fig:11\] shows the training losses of the discriminator in the proposed attribute-guided face image synthesis model. Here, we use the face landmark heatmap as the side conditional image. The heatmap contains 2D Gaussians centered at the landmark locations and is concatenated with the input image to synthesize different facial expressions on the RaFD dataset [@langner2010presentation]. In addition, the target attribute label is spatially replicated and concatenated with the latent feature. The results in Fig. \[fig:11\] correspond to 100 epochs (50,000 iterations) of training on the RaFD dataset. Moreover, Fig. \[fig:12\] shows additional images generated by LSSL.

Conclusion
==========

In this work, we introduced LSSL, a model for multi-domain image-to-image translation applied to the task of face image synthesis. We presented attribute-guided face image generation that transforms a given image to various target domains controlled by the desired attributes. We argued that learning image-to-image translation between image domains requires proper modeling of the shared latent representation across domains. Additionally, we proposed a face parsing loss and an identity loss to preserve facial detail (e.g., identity). Moreover, we sought to add realism to the synthetic images while preserving the face pose angle.
We also demonstrated that the synthetic images generated by our method can be used for data augmentation to enhance a facial expression classifier's performance. We reported promising results on the task of domain adaptation by adding realism to the simulated faces. We showed that by leveraging synthetic face images as a form of data augmentation, we achieve significantly higher average accuracy than the state of the art. [.5]{} ![image](Fig11_1.png){width="9cm" height="6cm"} [.5]{} ![image](Fig11_2.png){width="9cm" height="6cm"} [.5]{} ![image](Fig11_3.png){width="9cm" height="6cm"} [.5]{} ![image](Fig11_4.png){width="9cm" height="6cm"} ![image](Fig12.png){height="22cm" width="16cm"} [^1]: We use *domain* to denote a set of images sharing the same attribute value.
--- abstract: 'We study theoretically and by means of molecular dynamics (MD) simulations the generation of mechanical force by grafted polyelectrolytes in an external electric field, which favors its adsorption on the grafting plane. The force arises in deformable bodies linked to the free end of the chain. Varying the field, one controls the length of the non-adsorbed part of the chain and hence the deformation of the target body, i.e., the arising force too. We consider target bodies with a linear force-deformation relation and with a Hertzian one. While the first relation models a coiled Gaussian chain, the second one describes the force response of a squeezed colloidal particle. The theoretical dependencies of generated force and compression of the target body on applied field agree very well with the results of MD simulations. The analyzed phenomenon may play an important role in a future nano-machinery, e.g. it may be used to design nano-vices to fix nano-sized objects.' author: - 'N. V. Brilliantov' - 'Yu. A. Budkov' - 'C. Seidel' title: Generation of mechanical force by grafted polyelectrolytes in an electric field --- Introduction ============ Due to its obvious importance for applications, the response of polyelectrolytes to external electric fields has been of high scientific interest for the last few decades, e.g.,  [@Muthu1987; @Bajpai1997; @Borisov1994; @Boru98; @Joanny98; @Muthu2004; @Dobry2000; @Dobry2001; @Borisov2001; @Netz2003; @Netz2003a; @Borisov2003; @FriedsamGaubNetz2005; @BrilliantovSeidel2012; @SeidBudBrill2013]. Moreover, novel experimental techniques that allow exploration of a single polymer chain aided developments in this area [@FriedsamGaubNetz2005]. In fact, the ability of polyelectrolyte chains to adapt their conformation in external electric fields, i.e., to change between expanded and contracted states when the applied field varies, is an important property. 
It may be used in future nano-machinery: possible examples of such nano-devices may be nano-vices or nano-nippers manipulated by an electric field. Suppose one end of a chain is fixed on a plane (i.e., the polyelectrolyte is grafted), while the other end is linked to a nano-sized (target) body that can suffer deformation. If the polyelectrolyte is exposed to an external electric field that favors adsorption at the grafting plane, its conformation will be determined by both the field and the restoring force exerted by the deformed target body on the chain, see Fig. \[fig:spring\_up\]. Increased adsorption of the polyelectrolyte in response to a changing electric field will cause a deformation of the target body and give rise to a force acting between the chain and target. More precisely, the force will depend both on the magnitude of the deformation and the specific force-deformation relation of the target body. Hence, by applying an electric field, one can manipulate the conformation of polyelectrolyte chains as well as the force affecting the target body. The nature of target bodies may be rather different, however, the most important ones with respect to possible applications seem to be either polymer chains or nano-particles, e.g., colloidal particles, see Figs. \[fig:spring\_up\] and \[fig:spring\_down\]. In the latter case the force-deformation relation is given by the Hertzian law, which accurately describes the elastic response of squeezed nano-particles [@Hisao:2009; @Hisao:2010]. On the other hand, polymer chains can exhibit coiled states with a linear force-deformation relation or stretched conformations with a non-linear relation, e.g. [@GrossKhokh]. To describe the phenomenon it is necessary to express the size of the polyelectrolyte chain as well as the force acting on the target body as a function of the applied electric field. In the present study we address the problem theoretically and numerically by means of molecular dynamics (MD) simulations. 
We analyze a model of a polyelectrolyte chain grafted to a plane, linked by its free end to a deformable target body and exposed to an external electric field. The target body is modeled by linear or non-linear springs with corresponding force-deformation relations. A time-independent electric field is applied perpendicular to the grafting plane so that it favors complete polyelectrolyte adsorption on the plane. For simplicity we consider a salt-free solution, i.e., there are only counterions that compensate the charge of the chain. For intermediate and strong electric fields (the definition is given below), additional salt leads to a renormalization of the surface charge. This happens because the salt co-ions simply screen the plane, leaving the qualitative nature of the phenomenon unchanged. Hence the salt-free case addressed here is the basic one, which allows a simpler analytical treatment. The general case of a solution with additional salt ions will be studied elsewhere [@Budkov_salt]. Counterions having the same charge sign as the grafting plane are repelled, leaving the chain unscreened (see Fig. \[fig:spring\_up\]). This feature dominates if the specific volume per chain is not small and the electric field is not weak. In weak fields a noticeable fraction of counterions is located close to the chain, which leads to a partial screening of the external field and of the Coulomb interactions between monomers. Here we consider systems with a large specific volume and with fields that are not very weak. The screening of the chain in this case may be treated as a small perturbation. We study the static case when the current across the system is zero. It is noteworthy that, for the specific volumes and magnitudes of the electric field addressed here, MD simulations demonstrate a lack of counterion screening even at finite electric current [@SeidBudBrill2013]. ![ (Color online) Illustration of the generation of mechanical force by electric field.
The electric field causes chain contraction indicated by down arrows. The restoring force $f$ of the deformed target body (up arrows) can be both linear and nonlinear, depending on the nature of the target body. The right panel shows that the target body is modeled by a spring. []{data-label="fig:spring_up"}](Fig1.jpg){width="0.98\columnwidth"} Here we present a first-principles theory of the phenomenon and compare theoretical predictions with MD simulation results. We observe quantitative agreement between theory and MD data for all magnitudes of the electric field, except for very weak fields, where screening of the polyelectrolyte becomes significant. The simpler problem of the conformation of a grafted polyelectrolyte exposed to a constant force in an electric field has been explored theoretically and numerically in a previous study [@BrilliantovSeidel2012]. In Refs. [@BrilliantovSeidel2012; @SeidBudBrill2013] we also reported some simulation results for a chain linked to a deformable target body, along with a simpler theory for the restoring force. In the present study we develop a first-principles theory, based on a unified approach that describes both the adsorbed part of the chain and the bulk part under the action of the force from the target body. ![ (Color online) Illustration of the working principle of a nano-vice: the target colloidal particle is fixed at sufficiently strong fields due to the compression by the polyelectrolyte chain; it is released at zero field. The restoring force $f$ corresponds in this case to the Hertzian response of a compressed sphere. To illustrate a possible device, two polyelectrolyte chains are sketched, although only one chain, linked to the Hertzian spring, was used in the simulations reported, see the right panel.
[]{data-label="fig:spring_down"}](Fig2.jpg){width="0.98\columnwidth"} The paper is organized as follows: In Section II, we present our analytical theory, where we calculate the free energy of the chain and the force acting on the target body. In Section III the numerical setup is discussed and in Section IV we present the MD results and compare them with our theoretical predictions. Finally, in Section V we summarize our findings.

Theory
======

We consider a system, composed of a chain of $N_0+1$ monomers, which is anchored to a planar surface at $z=0$. The anchoring end-monomer is uncharged, while each of the remaining $N_0$ beads carries the charge $-qe$ ($e>0$ is the elementary charge); $N_0$ counterions of charge $+qe$ make the system neutral. The external electric field ${\bf E}$ acts perpendicular to the plane and favors the adsorption of the chain, Fig. \[fig:Setup\]. The free end of the polyelectrolyte is linked to a deformable body, modeled by a spring with various force-deformation relations. We study a few different cases. The reaction force $f$ and the energy of deformation $U_{\rm sp}$ for a linear spring read: $$\label{eq:1} f=-\kappa(h- h_0), \qquad U_{\rm sp}= \frac{\kappa}{2}(h- h_0)^2.$$ Here $\kappa$ is the elastic constant of the spring and $h$ and $h_0$ are the lengths of the deformed and undeformed spring, respectively. A linear force-deformation relation corresponds, for instance, to a target body given by a polymer chain in a coiled Gaussian state, e.g. [@GrossKhokh]. The corresponding relation for a non-linear spring has the form $$\label{eq:2} f=\kappa\,|h- h_0|^{\gamma}\, {\rm sign}(h_0-h), \qquad U_{\rm sp}= \frac{\kappa}{\gamma+1}\,|h- h_0|^{\gamma+1},$$ where $\gamma>1$ characterizes the stiffness of the body, which may be e.g. a polymer chain in a semi-stretched conformational state, i.e., in a state intermediate between a coiled and stretched one. It is known that stretched polymer chains demonstrate much larger stiffness than Gaussian ones [@GrossKhokh].
Hence, varying the exponent $\gamma$ one can mimic different states of a chain. From the point of view of applications it is worthwhile studying the special case of a Hertzian spring with $\gamma=3/2$, which corresponds to the elastic response of a squeezed nano-particle [@Hisao:2009; @Hisao:2010], e.g., a colloidal particle: $$\label{eq:3} f=\kappa\, (h_0-h)^{3/2}\,\theta(h_0-h), \qquad U_{\rm sp}= \frac{2\kappa}{5}\, (h_0-h)^{5/2}\,\theta(h_0-h).$$ Here $h_0=d_c$ is the diameter of an unloaded colloidal particle and $h$ that of the deformed one. The unit Heaviside step function $\theta(x)$ reflects the fact that the Hertzian elastic response arises for compressive deformations only. Although we performed MD simulations only for the above models of the elastic response, the theoretical analysis is given for the general case: $$\label{eq:Uspgen} U_{\rm sp}=U_{\rm sp}(h- h_0), \qquad f=- U_{\rm sp}'(h- h_0),$$ where again $h_0$ and $h$ are the sizes of the undeformed and deformed target bodies, respectively. To find the polyelectrolyte conformation in an electric field and the force acting on the target body we evaluate the conditional free energy of the system and minimize it with respect to relevant variables. Let the number of (charged) monomers adsorbed at the (oppositely charged) plane be $N_s$, so that $N=N_0-N_s$ is the number of monomers in the bulk. Let $z_{\rm top}$ be the distance of the “free” chain end, linked to the target body, from the charged plane and ${\bf R}$ – the end-to-end distance of the adsorbed polymer part of $N_s$ monomers.
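For concreteness, the three force-deformation relations above can be sketched numerically as follows. This is a minimal sketch: all parameter values below are arbitrary, and the prefactor $2\kappa/5$ in the Hertzian energy is fixed by requiring $f=-\partial U_{\rm sp}/\partial h$.

```python
import math

def linear_spring(h, h0, kappa):
    """Eq. (eq:1): f = -kappa*(h - h0), U_sp = (kappa/2)*(h - h0)^2."""
    return -kappa * (h - h0), 0.5 * kappa * (h - h0) ** 2

def power_spring(h, h0, kappa, gamma):
    """Eq. (eq:2): nonlinear spring with stiffness exponent gamma > 1."""
    f = kappa * abs(h - h0) ** gamma * math.copysign(1.0, h0 - h)
    U = kappa * abs(h - h0) ** (gamma + 1.0) / (gamma + 1.0)
    return f, U

def hertzian_spring(h, h0, kappa):
    """Eq. (eq:3): Hertzian response, active for compression (h < h0) only."""
    d = h0 - h  # compressive deformation
    if d <= 0.0:
        return 0.0, 0.0  # the Heaviside factor: no response under extension
    return kappa * d ** 1.5, 0.4 * kappa * d ** 2.5
```

In each case the force is positive (pushing back) under compression, which is the restoring-force convention used in the figures.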
In what follows we compute the conditional free energy $F(N, z_{\rm top}, R)$ which may be written as $$\label{eq:F_tot} F(N, z_{\rm top}, R) \approx F_{\rm b}+F_{\rm s} + F_{\rm bs},$$ where $F_{\rm b}= F_{\rm b}(N, z_{\rm top})$ is the free energy of the system associated with the bulk part of the chain and the target body, $F_{\rm s}=F_{\rm s}(N_s, R)$ is the free energy of the adsorbed part of the chain and $F_{\rm bs}=F_{\rm bs}(N, z_{\rm top},R)$ accounts for the interactions between the bulk and adsorbed parts. Then, minimizing $F(N, z_{\rm top}, R)$ with respect to $N$, $z_{\rm top}$ and $R$, one can find the conformation of the chain and the force acting on the target body (see the detailed discussion below). In the present study we focus on the range of parameters where the polyelectrolyte chain is weakly screened. This allows us to treat the interaction of the chain with counterions as a small perturbation and estimate it separately; this significantly simplifies calculations. In what follows we compute separately different parts of the free energy.

Free energy of the bulk part of the chain
-----------------------------------------

For simplicity, we use the freely jointed chain model with $b$ being the length of the inter-monomer links, that is, the size of the monomer beads. The MD simulations discussed below provide a justification for this model. The location of all monomers of the chain is determined by $N_0$ vectors ${\bf b}_i={\bf r}_{i}-{\bf r}_{i+1}$, which join the centers of the $i+1$-st and $i$-th monomers ($i=1,2, \ldots N_0$). It is convenient to enumerate the monomers, starting from the “free” end linked to the target body. Then the beads with numbers $1, 2, \ldots N$ refer to the bulk part of the chain and with numbers $N+1, N+2, \ldots N_0$ to the adsorbed part. The $N_0+1$-st neutral bead is anchored to the surface.
Let the centers of the adsorbed beads lie at the plane $z=0$, [^1] and for simplicity the anchored bead is located at the origin, ${\bf r}_{N_0+1} =0$. Then the location of the $k$-th bead of the bulk part ($k=1,2, \ldots, N)$ may be written as $$\begin{aligned} % \nonumber to remove numbering (before each equation) {\bf r}_{k} \!\!&=& \!\!{\bf r}_{k}- {\bf r}_{k+1} + {\bf r}_{k+1}- {\bf r}_{k+2} \ldots +{\bf r}_{N_0}- {\bf r}_{N_0+1} +{\bf r}_{N_0+1}\\ \!\!&=&\!\! \sum_{s=k}^{N_0} {\bf b}_{s}=\sum_{s=k}^N {\bf b}_{s}+ \sum_{s=N+1}^{N_0} {\bf b}_{s}= {\bf r}_{N+1}+\sum_{s=k}^N {\bf b}_{s},\end{aligned}$$ where ${\bf r}_{N+1}$ is the radius vector of the $N+1$-st bead, which is a surface bead; it is linked to the $N$-th bead, located in the bulk. The inter-center distance of $i$-th and $j$-th beads reads, $$\label{eq:rkl} {\bf r}_{ij}= \sum_{s=i}^j {\bf b}_{s},$$ where each of the vectors ${\bf b}_s$ has the same length $b$. Its orientation may be characterized by the polar $\theta_s$ and azimuthal $\psi_s$ angles, where the axis $OZ$ is directed perpendicularly to the grafting plane, Fig. \[fig:Setup\]. Hence, the distances between the reference plane $z=0$ and the $k$-th bead, as well as between the plane and the top bead are $$\label{eq:zk} z_k=b\sum_{s=k}^N \cos \theta_s, \qquad z_{\rm top}=z_1=b\sum_{s=1}^N \cos \theta_s$$ The location of the top bead, linked to the target body, $z_{\rm top}$ determines its deformation and the elastic energy due to the body deformation, $$\label{eq:Us} U_{\rm sp}(z_{\rm top})= U_{\rm sp} ( z_{\rm top}- z_{\rm top,0}) , \quad \qquad f = -\frac{\partial U_{\rm sp}}{\partial z_{\rm top}}.$$ Here $z_{\rm top,0}$ is the location of the top bead of the chain when the target body is not deformed. Because the chain is assumed to be weakly screened, here we ignore screening effects, which we estimate later as a perturbation. 
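The geometry above is easy to verify numerically. The sketch below draws random bond orientations with $\cos\theta_s \ge 0$ (unit bond length $b=1$ assumed) and checks that the bead heights of Eq. (eq:zk) stay between $0$ and the contour length $bN$:

```python
import math
import random

random.seed(0)
N, b = 50, 1.0
# random polar angles in [0, pi/2], so that cos(theta_s) >= 0
theta = [random.uniform(0.0, math.pi / 2.0) for _ in range(N)]

# bead heights z_k = b * sum_{s=k}^{N} cos(theta_s), Eq. (eq:zk)
z = {k: b * sum(math.cos(theta[s - 1]) for s in range(k, N + 1))
     for k in range(1, N + 1)}
z_top = z[1]

assert 0.0 < z_top <= b * N                        # bounded by the contour length
assert all(z[k] >= z[k + 1] for k in range(1, N))  # heights decrease along the chain
```

The monotonic decrease of $z_k$ with $k$ is a direct consequence of the no-downward-bond condition used below.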
Then the potential of the external field $\varphi_{\rm ext}$ depends on $z$ simply as $\varphi_{\rm ext}(z) =-Ez$, so that the electrostatic energy of the $k$-th bead, associated with this field, reads $-qe\varphi_{\rm ext}(z_k)=bqeE \sum_{s=k}^N \cos \theta_s$. Hence the interaction energy of the bulk part of the chain with the external field has the form $$\begin{aligned} % \nonumber to remove numbering (before each equation) \label{eq:Ext} H_{\rm ext} &=& \sum_{k=1}^N -qe\varphi_{\rm ext}(z_k)= bqeE \sum_{k=1}^N\sum_{s=k}^N \cos \theta_s \\ &=& bqeE\sum_{s=1}^N s\cos \theta_s. \nonumber\end{aligned}$$ Now we need to take into account the electrostatic interactions between chain monomers. Because of vanishing screening we have, $$\label{eq:Eself} H_{\rm self,b} = \frac12 \sum_{i=1}^N\sum_{j=1\,j\neq i}^N V( {\bf r}_i - {\bf r}_j) = \frac12 \sum_{i=1}^N\sum_{j=1\,j\neq i}^N \frac{q^2 e^2}{\varepsilon r_{ij}},$$ where $\varepsilon$ is the dielectric permittivity of the solution. Using the Fourier transform of the Coulomb potential $V(r)=e^2q^2/\varepsilon r$, and the expression (\[eq:rkl\]) for the inter-monomer distances, the last equation may be recast into the form (see the Appendix A): $$\label{eq:Eself1} H_{\rm self,b} = \frac{q^2 e^2 }{2 \varepsilon } \sum_{s_{1}\neq s_{2}} \int \frac{d {\bf k }}{(2\pi )^3} \left( \frac{4 \pi}{k^2} \right) e^{ i \sum_{s=s_{1}}^{s_{2}} \left( {\bf k}_{\perp } \cdot {\bf b}_s^{\perp } + k_{z}\, b_s^z \right)},$$ where ${\bf k}_{\perp }$, ${\bf b}_s^{\perp }$ and $k_{z}$, $b_s^z = b \cos \theta_s$ are respectively the transverse and longitudinal (parallel to the axis $OZ$) components of the vectors ${\bf k}$ and ${\bf b}_s$. In the following we first compute the partition function associated with the bulk part of the chain. We impose the condition that the distance between the surface and the top bead, attached to the target body, is $z_{\rm top}$.
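The reordering of the double sum in Eq. (eq:Ext), $\sum_{k=1}^N\sum_{s=k}^N \cos\theta_s = \sum_{s=1}^N s\cos\theta_s$, can be checked directly; a sketch with random angle cosines:

```python
import random

random.seed(1)
N = 40
cos_t = [random.uniform(0.0, 1.0) for _ in range(N)]  # stands for cos(theta_s)

# left-hand side: sum over beads k of the tail sums s = k..N
double_sum = sum(sum(cos_t[s - 1] for s in range(k, N + 1))
                 for k in range(1, N + 1))
# right-hand side: each cos(theta_s) appears in exactly s of the tail sums
single_sum = sum(s * cos_t[s - 1] for s in range(1, N + 1))
assert abs(double_sum - single_sum) < 1e-9
```

The factor $s$ simply counts how many beads ($k = 1, \ldots, s$) sit above the bond $s$.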
Then the bulk part of the partition function reads: $$\begin{aligned} \label{eq:Zb} % \nonumber to remove numbering (before each equation) {\cal Z}_b(z_{\rm top}) \!\!\! &=& \!\!\! \int_{0}^{2\pi}\!\!d\psi_{1} \ldots \int_{0}^{2\pi}\!\!d\psi_{N}\! \! \int_{0}^{1} \!\!d\!\cos{\theta_{1}} \ldots \int_{0}^{1}\! \! d\!\cos{\theta_{N}} \nonumber \\ \!\!\! &\times& \!\!\! e^{-\beta U_{\rm sp} -\beta H_{\rm self,b} - \beta H_{\rm ext}} \delta\!\left(\!\!z_{\rm top} \!- \! b \sum_{s=1}^{N}\cos{\theta_{s}}\!\!\right)b, \nonumber \\\end{aligned}$$ where $\beta =1/k_BT$, with $T$ being the temperature of the system and $k_B$ being the Boltzmann constant; the energies $U_{\rm sp}$, $H_{\rm ext}$ and $H_{\rm self,b}$ are defined by Eqs. (\[eq:Us\])–(\[eq:Eself1\]) and the factor $b$ keeps ${\cal Z}_b$ dimensionless. In Eq. (\[eq:Zb\]) we also assume that the vectors ${\bf b}_s$ cannot be directed downwards ($\cos \theta_s \ge 0$), which guarantees that the constraint $z_s>0$, $s=1, \ldots N$ holds true; this has been confirmed in our MD simulations. To proceed we assume that the value of $H_{\rm self,b}$ may be approximated by its average over the angles $\psi_1, \ldots \psi_N$, that is, $H_{\rm self,b}\approx \left<H_{\rm self,b}\right>_{\psi}$, hence we assume that the transversal fluctuations of the polyelectrolyte chain are small. Then, with the use of (\[eq:Ext\]), we rewrite Eq. (\[eq:Zb\]) as $$\begin{aligned} \label{eq:Zb1} \mathcal{Z}_b(z_{\rm top}) \!\!&=& \!\!(2\pi)^{N}\int_{0}^{1}\!\! \! d\eta_{1} \! \ldots \!\!\! \int_{0}^{1}\!\! \!d\eta_{N} \delta \!\left(\sum_{s=1}^{N}\eta_{s}-\tilde{z}_{\rm top}\right) \\ \!\!&\times&\!\! \exp\!\left[\!-\beta U_{\rm sp}(z_{\rm top })\!-\! \tilde{E} \sum_{s=1}^{N}s\eta_{s}\!-\!\beta\!
\left<H_{\rm self,b}\right>_{\psi} \right], \nonumber\end{aligned}$$ where $\eta_s=\cos \theta_s$, $\tilde{z}_{\rm top}={z}_{\rm top}/b$, $\tilde{E}=\beta q e E b$ and $$\begin{aligned} \label{eq:22} \left<H_{\rm self,b}\right>_{\psi}&=&\frac{ q^2e^2}{2\varepsilon}\sum_{s_{1}\neq s_{2}}\int\frac{d{\bf k} }{(2\pi )^3} \left( \frac{4 \pi}{k^2 } \right) \\ & \times & \left<e^{i{\bf k}_{\perp } \cdot \sum_{s=s_{1}}^{s_{2}}{\bf b}_s^{\perp }}\right >_{\psi}e^{ik_{z}b\sum_{s=s_{1}}^{s_{2}}\eta_{s}}. \nonumber\end{aligned}$$ To evaluate the latter expression we exploit the following approximation, $$\label{eq:etas} \eta_s \approx \left< \eta_s \right> =\left< \cos \theta_s \right> =\frac{{z}_{\rm top }}{bN},$$ which implies that $z_{\rm top} \leq bN$ (recall that we consider a freely jointed chain with constant links $b$) and that $\sum_{s=s_{1}}^{s_{2}}\eta_{s}\approx z_{\rm top}\left|s_{2}-s_{1}\right|/ (b\,N)$. Referring for details to Appendix A we present here the result for $H_{\rm self,b}$, averaged over transverse fluctuations: $$\label{eq:Hselfin} \beta \left<H_{\rm self,b}\right>_{\psi} =\frac{l_Bq^2N^2}{z_{\rm top}}\left( \log N -1 \right),$$ where $l_B=e^2/\varepsilon k_B T$ is the Bjerrum length. Using the integral representation of the $\delta$-function, $$\delta(x)=(2 \pi)^{-1} \int_{-\infty}^{+\infty} \!\!\!d \xi e^{i\xi x},$$ we recast $\mathcal{Z}_b(z_{\rm top})$ in Eq. (\[eq:Zb1\]) into the form $$\label{eq:Zbp} \mathcal{Z}_b(z_{\rm top})= (2\pi)^{N-1} e^{-\beta U_{\rm sp}-\beta \left<H_{\rm self,b}\right>_{\psi } } \! \! \int_{-\infty}^{+\infty} \!\!\! \! d\xi e^{-i\xi \tilde{z}_{\rm top } +W(\xi)}, %\nonumber$$ where $W(\xi)$ contains the integration over $\eta_{1}, \ldots \eta_{N}$. Its explicit expression is given in Appendix B. For large $N \gg 1$, one can use the steepest descent method to estimate the above integral over $\xi$.
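A quick numerical sanity check of the scaling in Eq. (eq:Hselfin) — inverse in $z_{\rm top}$, roughly $N^2\log N$ in chain length — can be sketched as follows (reduced units $l_B=q=1$ assumed):

```python
import math

def self_energy(N, z_top, l_B=1.0, q=1.0):
    """beta*<H_self,b>_psi of Eq. (eq:Hselfin); valid for a strongly
    stretched bulk chain with N >> 1."""
    return l_B * q ** 2 * N ** 2 * (math.log(N) - 1.0) / z_top

# stretching the chain (larger z_top) relaxes the Coulomb self-energy
assert self_energy(100, 50.0) == 2.0 * self_energy(100, 100.0)
# doubling N more than quadruples the self-energy: the extra log N growth
assert self_energy(200, 100.0) > 4.0 * self_energy(100, 100.0)
```

This competition between the field term, which pulls the chain down, and the self-repulsion, which resists confinement, is what the minimization below balances.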
Neglecting small terms we finally obtain: $$\begin{aligned} \mathcal{Z}_b(z_{\rm top})\approx (2\pi)^{N-1} e^{-\beta U_{\rm sp}-\beta \left<H_{\rm self,b}\right>_{\psi } -\xi_{0}\tilde{z}_{\rm top}+W(\xi_{0})} %-\frac{1}{2}\log{\frac{|W^{\prime \prime}(\xi_{0})|}{2\pi}}}, \nonumber\end{aligned}$$ where $\xi_0$ is the root of the saddle point equation, $iz_{\rm top} -\partial W /\partial \xi=0$, $$\label{eq:xi0sol1} \xi_0 \simeq \beta qe Ez_{\rm top}.$$ and $$\begin{aligned} % \nonumber to remove numbering (before each equation) W(\xi_{0}) = \frac{1}{\tilde{E}} \left[ {\rm Ei}(\zeta_0) - {\rm Ei}(\zeta_N) + \log\left| {\zeta_0}/{\zeta_N} \right| \right], %\\ %W^{\prime \prime}(\xi_{0}) &=& \frac{1}{\tilde{E}} %\left[\frac{e^{\zeta_{N}}}{e^{\zeta_{N}}-1} -\frac{e^{\zeta_{0}}}{e^{\zeta_{0}}-1} %-\frac{1}{\zeta_{N}}+\frac{1}{\zeta_{0}} \right]. \nonumber\end{aligned}$$ with ${\rm Ei}(x)$ being the exponential integral function, $\zeta_0=\xi_0-\tilde{E}$, and $\zeta_N=\xi_0-\tilde{E}N$. (The complete expression for $\mathcal{Z}_b$ and the derivation details are given in the Appendix B). This yields the free energy, $\overline{F}_b(z_{\rm top},N)=-k_BT \log \mathcal{Z}_b(z_{\rm top})$, associated with the bulk part of the chain without the account of counterions: $$\begin{aligned} \beta \overline{F}_b(z_{\rm top},N)& \approx & \beta U_{\rm sp}(z_{\rm top} ) + \beta \left<H_{\rm self,b}\right>_{\psi } \\ &+&\xi_{0} \tilde{z}_{\rm top}- W(\xi_{0}) - N \log 2\pi. 
\nonumber\end{aligned}$$ The impact of counterions on the conformation of the bulk part of the chain may be estimated as a weak perturbation, so that the bulk component of the free energy reads, $$\label{eq:Fbtot} F_b(z_{\rm top},N) =\overline{F}_b(z_{\rm top},N) + F_{\rm c.ch.}(z_{\rm top},N)$$ with $$\label{eq:Fc_ch} F_{\rm c.ch.}=\frac{4 \pi \sigma_c qe^2 b}{\varepsilon\tilde{E}} \frac{e^{\tilde{E}(\tilde{z}_{\rm top}-\tilde{L})}}{e^{\tilde{E} \tilde{z}_{\rm top}/N} -1} - \frac{\pi \sigma_c qe^2b}{\varepsilon} \tilde{z}_{\rm top} N.$$ Here $L$ ($\tilde{L} =L/b$) is the size of the system in the $OZ$-direction, $S$ is its lateral area and $e\sigma_c = eqN_0/S$ is the apparent surface charge density, associated with the counterions. The derivation of $F_{\rm c.ch.}$ is given in Appendix C. As may be seen from the above equation, the impact of the counterions on the chain conformation is small, provided $e\sigma_c/E \ll 1$ and $\tilde{E} \tilde{L} \gg 1$. Assuming that these conditions are fulfilled in the case of interest, the above equation simplifies to $$\label{eq:Fc_ch1} \beta F_{\rm c.ch.} \simeq -\frac{z_{\rm top}}{2 \mu} N,$$ where $\mu =1/(2 \pi \sigma_c l_B q)$ is the Gouy-Chapman length based on the apparent surface charge density $\sigma_c = qN_0/S$.

Free energy of the adsorbed part of the chain
---------------------------------------------

Using the notation of the previous section one can write the radius vector of the $l$-th bead of the adsorbed part of the chain as ${\bf r}_l= \sum_{i=N_0}^l {\bf b}_i$. Then the radius vector that joins the two ends of the adsorbed part reads $$\label{eq:R} {\bf R} = \sum_{i=N_0}^{N+1} {\bf b}_i = \sum_{s=1}^{N_s} {\bf d}_s,$$ where we introduce ${\bf d}_s = {\bf b}_{N_0+1-s}$ for the sake of notation simplicity. Obviously, for the adsorbed beads we have ${\bf r}_{kl}= \sum_{s=k}^l {\bf d}_s$.
Thus, the free energy of the adsorbed part may be written as $$\label{eq:FZR} \beta F_{\rm s} =- \log \mathcal{Z}_{\rm s} (N_s, {\bf R}) ,$$ where $\mathcal{Z}_{\rm s} (N_s,{\bf R})$ is the conditional partition function, $$\begin{aligned} \label{eq:Z(R)} \mathcal{Z}_{\rm s}(N_s,{\bf R} )&=& \int_{0}^{2\pi}d\phi_{1} \ldots \int_{0}^{2\pi}d\phi_{N_{s}}e^{-\beta H_{\rm self, s} } \nonumber \\ &\times & \delta\left(\sum_{s=1}^{N_{s}}{\bf d}_{s}-{\bf R}\right) b^2\, ,\end{aligned}$$ where $H_{\rm self, s} = (1/2) \sum_{s_{1}\neq s_{2}}V({\bf r}_{s_{1}}-{\bf r}_{s_{2}})$ describes the self-interaction of the adsorbed monomers with the potential $V( {\bf r}_i - {\bf r}_j)$ defined in Eq. (\[eq:Eself\]). The factor $b^2 $ in the above equation keeps $\mathcal{Z}_{\rm s}$ dimensionless. Since we assume that the adsorbed part of the chain forms a flat two-dimensional structure, the integration in Eq. (\[eq:Z(R)\]) is performed over $N_s$ azimuthal angles $\phi_1, \ldots \phi_{N_s}$, which define the directions of $N_s$ vectors ${\bf d}_{1}, \ldots {\bf d}_{N_s}$ on the plane. Note that the evaluation of the conditional partition sum $\mathcal{Z}_{\rm s}({\bf R} )$ also allows one to estimate the equilibrium configuration of the adsorbed part of the chain. Using as previously the integral representation of the $\delta$-function we recast the above equation into the form $$\begin{aligned} \label{eq:Z(R)_1} &&\mathcal{Z}_{\rm s}(N_s, {\bf R} ) \!= \!\!
\int \frac{d{\bf p}}{(2\pi)^2} b^2 e^{-i {\bf p} \cdot {\bf R}} \int_{0}^{2\pi}d\phi_{1} \ldots \int_{0}^{2\pi}d\phi_{N_{s}} \nonumber \\ &&~~~\times \exp \left\{ {-\frac{\beta}{2}\sum_{s_{1}\neq s_{2}}V({\bf r}_{s_{1}}-{\bf r}_{s_{2}}) + i {\bf p} \cdot \sum_{s=1}^{N_s} {\bf d}_s }\right\} \\ &&~~~ = \int \frac{d{\bf p}b^2}{(2\pi)^2} e^{-i {\bf p} \cdot {\bf R}} \mathcal{Z}_{\rm sp}({\bf p}) \left< e^{ -\frac{\beta}{2}\sum_{s_{1}\neq s_{2}}V({\bf r}_{s_{1}}-{\bf r}_{s_{2}})} \right>_{ {\bf p}} , \nonumber\end{aligned}$$ where we define $$\label{eq:Z0q} \mathcal{Z}_{\rm sp}({\bf p}) \!=\!\!\int_{0}^{2\pi} \!\!\!\!d\phi_{1} \!\ldots \!\!\! \int_{0}^{2\pi}\!\!\!d\phi_{N_{s}}e^{i {\bf p} \cdot\sum_{s=1}^{N_{s}} {\bf d}_s}\!\!=\! (2\pi)^{N_{s}}\!\left[J_{0}(pb)\right]^{N_{s}}.$$ Here $J_{0}(x)=(2\pi)^{-1}\int_{0}^{2\pi}\cos(x\cos{\phi })d\phi $ is the zero-order Bessel function; we also take into account that $({\bf p} \cdot {\bf d}_s) = p\,b \cos \phi_s$. In Eq. (\[eq:Z(R)\_1\]) the average over the angles $\phi_{1}, \ldots \phi_{N_s}$ is denoted as $$\left< (\ldots) \right>_{\bf p }=\frac{1}{\mathcal{Z}_{\rm sp}({\bf p})}\int_{0}^{2\pi}d\phi_{1} \ldots \int_{0}^{2\pi}d\phi_{N_{s}}e^{i{\bf p} \cdot\sum_{s=1}^{N_{s}}{\bf d}_{s}} (\ldots). \nonumber$$ Referring for computational details to Appendix D, below we give the final result for the conditional partition function: $$\label{eq:ZRfin} \mathcal{Z}_{\rm s}(N_s,{\bf R} )= \frac{(2\pi)^{N_{s}}}{\pi N_{s}} \, e^{ -\frac{R^2}{N_{s}b^2}-\frac{\pi \sqrt{2}q^2 l_B N_s^2}{R} },$$ where $R= \left| {\bf R}\right|$. From Eq. (\[eq:FZR\]) then follows, $$\begin{aligned} \label{eq:Fs} \beta F_s (N_s, R) &=& \frac{R^2}{N_s b^2} +\frac{\pi \sqrt{2} q^2 l_B N_s^2}{R} \nonumber \\ &-&N_s \log 2 \pi - \log \pi N_s .\end{aligned}$$ Note that $N_s= N_0 -N$. If we neglect the interaction of the adsorbed part of the chain with the bulk part we can estimate the equilibrium end-to-end distance of the adsorbed part $R$. 
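This estimate can be checked directly: scanning the $R$-dependent part of $\beta F_s$ over $R$ reproduces the stretched scaling $R \propto N_s$. A minimal sketch in reduced units $b=l_B=q=1$ (the scan bounds and step are arbitrary):

```python
import math

def F_s_R(R, N_s, b=1.0, l_B=1.0, q=1.0):
    """R-dependent part of beta*F_s from Eq. (eq:Fs): elastic stretching
    term plus the Coulomb self-repulsion of the flat adsorbed part."""
    return (R ** 2 / (N_s * b ** 2)
            + math.pi * math.sqrt(2.0) * q ** 2 * l_B * N_s ** 2 / R)

N_s = 100
# brute-force scan for the minimizing R (step 0.01, range chosen by hand)
R_num = min((F_s_R(0.01 * i, N_s), 0.01 * i) for i in range(1000, 30000))[1]
# analytic minimum, Eq. (eq:extR): R = (pi q^2 b^2 l_B / sqrt(2))^(1/3) N_s
R_th = (math.pi / math.sqrt(2.0)) ** (1.0 / 3.0) * N_s
assert abs(R_num - R_th) < 0.05
```

The numerical argmin agrees with the analytic prefactor $(\pi/\sqrt{2})^{1/3} \approx 1.30$ per adsorbed monomer.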
Minimizing $F_s(N_s, R)$ with respect to $R$ and keeping $N_s$ fixed, $\left( \partial F_s /\partial R \right)_{N_s}=0$, we obtain the equilibrium value of $R$, $$\label{eq:extR} R= \left(q^2 b^2 l_B \pi /\sqrt{2} \right)^{1/3} N_{s} .$$ The above equation (\[eq:extR\]) implies that the adsorbed part is stretched, $R \sim N_s$. Note that the condition of a stretched conformation does not necessarily imply a linearly stretched chain. Loose configurations of chaotic surface loops or circular conformations are also possible.

Interaction between bulk and adsorbed parts of the chain
--------------------------------------------------------

The part of the free energy which accounts for interactions between the bulk part of the chain and the adsorbed part may be estimated as (see the Appendix E for more detail) $$\label{eq:Fbs1} F_{\rm bs} (N,z_{\rm top},R) \approx \left< H_{\rm sb} \right>_{N,z_{\rm top},R}\,.$$ Here $H_{\rm sb}$ is the interaction energy between the $N$ charged monomers of the bulk part of the chain and the $N_s=N_0-N$ monomers of the adsorbed part, $$\label{eq:Hsb} \beta H_{\rm sb} = \sum_{l=1}^N \sum_{m=1}^{N_s} \frac{l_B}{|{\bf r}_l - {\bf r}_m |},$$ where ${\bf r}_l$ is the radius vector of the $l$-th monomer of the bulk part and ${\bf r}_m$ of the $m$-th monomer of the adsorbed part and $\left< (\ldots ) \right>_{N,z_{\rm top},R}$ denotes the averaging at fixed $N$, $z_{\rm top}$ and $R$. Using the definition of the vectors ${\bf b}_i$ and ${\bf d}_j$, given in the previous sections, we can write $$\label{eq:rlm} {\bf r}_l - {\bf r}_m = \sum_{s=l}^N {\bf b}_{s}+ \sum_{s=1}^m {\bf d}_{s}$$ and recast $H_{\rm sb}$ into the form, $$\label{eq:Hsb1} \beta H_{\rm sb} \!=\! \sum_{l=1}^N \sum_{m=1}^{N_s} \!\int \!\!\frac{d {\bf k}}{(2 \pi)^3} \!\! \left(\frac{4 \pi l_B}{ k^2} \right)e^{i {\bf k} \cdot \sum_{s=l}^N {\bf b}_{s} +i {\bf k} \cdot \sum_{s=1}^m {\bf d}_{s}} \,.$$ In Eq.
(\[eq:Hsb1\]) we again use the Fourier representation of the interaction potential $1/r$ given in Appendix A. Since the averaging is to be performed at fixed $N$, $z_{\rm top}$ and $R$ we can approximate the exponential factor in (\[eq:Hsb1\]) as $$\begin{aligned} \label{eq:esp_bs} &&\left< e^{i {\bf k} \cdot \sum_{s=l}^N {\bf b}_{s} +i {\bf k} \cdot \sum_{s=1}^m {\bf d}_{s}} \right>_{\!\!N,z_{\rm top},R} \\ && ~~~~~~~ \approx e^{ -\frac{k_{\perp}^2 b^2 (N-l)}{4} \left( 1-\frac{\tilde{z}^2_{\rm top}}{N^2} \right)+ik_z\frac{z_{\rm top}}{N}(N-l) +i ({\bf k}_{\perp} \cdot {\bf R}) \frac{m}{N_s} }, \nonumber\end{aligned}$$ where we apply the same approximations as in Eqs. (\[eq:Zb1\]), (\[eq:etas\]) and (\[eq:Angav\]) for the bulk part of the chain and a similar one for the adsorbed part, $$\label{eq:dsR} \sum_{s=1}^m {\bf d}_{s} \approx {\bf R} (m/ N_s) .$$ Substituting (\[eq:esp\_bs\]) into (\[eq:Hsb1\]) and performing integration over $d{\bf k}$ (see the Appendix E for detail) we finally obtain, $$\begin{aligned} \label{eq:fbsfin} \beta F_{\rm bs} &=& \frac{l_B N N_s}{R}\left[ \log \left(1+\sqrt{1+z^{*\,2}_{\rm top}} \right) \right. \\ &+& \left. \frac{1}{z^*_{\rm top}} \log \left(z^*_{\rm top} +\sqrt{1+z^{*\,2}_{\rm top}}\right) -\log z^*_{\rm top}\right] \nonumber \\ &-&\frac{l_B N}{R z^*_{\rm top}} \log (2 N_s z^*_{\rm top}) \nonumber\end{aligned}$$ where $z^*_{\rm top} = z_{\rm top}/R$ characterizes the relative dimensions of the bulk and adsorbed parts of the chain. Dependence of the force and deformation on the external field ------------------------------------------------------------- Now we can determine the dependence on the electric field of the polyelectrolyte dimensions as well as the deformation of target body. Simultaneously one obtains the dependence on applied field of the force that arises between chain and target. 
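In its simplest setting, the minimization over $\tilde z_{\rm top}$ described here reduces to a one-dimensional root find: for a free chain end ($\tilde f = 0$), balancing the electric driving term $\tilde E \tilde z_{\rm top}$ against the Coulomb self-energy term $\tilde l_B q^2 N^2(\log N - 1)/\tilde z_{\rm top}^2$. The sketch below drops the $F_{\rm bs}$ and counterion terms purely for compactness (our simplification for illustration; the full theory keeps all terms):

```python
import math

def residual(z, E_t, N, l_B=1.0, q=1.0):
    # field term minus Coulomb self-energy term (b = 1 units);
    # F_bs and counterion contributions are dropped in this sketch
    return E_t * z - l_B * q ** 2 * N ** 2 * (math.log(N) - 1.0) / z ** 2

def solve_z_top(E_t, N):
    lo, hi = 1e-6, float(N)  # the constraint z_top <= b*N bounds the search
    for _ in range(200):     # bisection: the residual is increasing in z
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid, E_t, N) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

z_eq = solve_z_top(E_t=1.0, N=100)  # reduced field E_t = beta*q*e*E*b
```

For $\tilde E = 1$ and $N = 100$ this gives $\tilde z_{\rm top} \approx 33$, i.e., a chain stretched to about a third of its contour length by the competition of field and self-repulsion alone.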
This may be done by minimizing the total free energy of the system $$F(N,z_{\rm top}, R)= F_{\rm b}(N,z_{\rm top})+F_{\rm s}(N_s, R)+F_{\rm bs}(N,z_{\rm top}, R)$$ with respect to $N$, $z_{\rm top}$ and $R$, using $N_s=N_0-N$ and the constraint $z_{\rm top} \leq bN$ (see the discussion above). The above three components of the free energy are given respectively by Eqs. (\[eq:Fbtot\]), (\[eq:Fs\]) and (\[eq:fbsfin\]). This allows us to find $N$, $z_{\rm top}$ and $R$ as functions of the applied electric field, that is, to obtain $N=N(E)$, $z_{\rm top}=z_{\rm top}(E)$ and $R=R(E)$. Then one can compute the force acting on the target body. It reads, $$\label{eq:minztop} \tilde{f}(\tilde{z}_{\rm top})= \tilde{E}\tilde{z}_{\rm top} -\frac{\tilde{l}_B q^2 N^2 (\log N -1)}{\tilde{z}_{\rm top}^2} -\frac{N}{2 \tilde{\mu}} +\frac{\partial \beta F_{\rm bs}}{\partial \tilde{z}_{\rm top}} ,$$ where $\tilde{f}=\beta b f(z_{\rm top})$ is the reduced force, with $f(z_{\rm top})= - \partial U_{\rm sp} /\partial z_{\rm top}$ for a particular force-deformation relation, Eq. (\[eq:Us\]), and $\tilde{\mu} = \mu/b$ is the reduced Gouy-Chapman length. In the above equation we exploit Eq. (\[eq:xi0sol1\]) for $\xi_0$ and the saddle point equation, $iz_{\rm top} -\partial W /\partial \xi=0$, valid for $\xi=\xi_0$ (see the Appendix B).

MD simulations
==============

We report MD simulations of a polyelectrolyte modeled by a freely jointed bead-spring chain of length $N_{0}+1$. The $(N_0+1)$-th end-bead is uncharged and anchored to a planar surface at $z=0$. All the remaining $N_{0}$ beads carry one (negative) elementary charge. Electroneutrality of the system is fulfilled by the presence of $N_{0}$ monovalent free counterions of opposite charge, i.e., in our simulations $q=1$. For simplicity, we consider the counterions to have the same size as the monomers.
We also assume that the implicit solvent is a good one, which implies short-ranged, purely repulsive interactions between all particles, described by a shifted Lennard-Jones potential. Neighboring beads along the chain are connected by a finitely extensible, nonlinear elastic FENE potential. For the set of parameters used in our simulations, the bond length at zero force is $b \simeq \sigma_{LJ}$, with $\sigma_{LJ}$ being the Lennard-Jones parameter. All particles except the anchor bead are exposed to a short-ranged repulsive interaction with the grafting plane at $z=0$ and with the upper boundary at $z=L_{z}$. The charged particles interact with the bare Coulomb potential. Its strength is quantified by the Bjerrum length $l_B=e^2/\varepsilon k_B T$. In the simulations we set $l_B=\sigma_{LJ}$ and use a Langevin thermostat to hold the temperature at $k_{B}T=\epsilon_{LJ}$, with $\epsilon_{LJ}$ being the Lennard-Jones energy parameter. For more details of the simulation model and method see Refs. [@CSA00; @KUM05]. The free end of the chain is linked to a deformable target body, which is modeled by springs with various force-deformation relations. In this study we considered the two cases which seem to be the most important ones in terms of possible applications: linear and Hertzian springs described by Eqs. (\[eq:1\]) and (\[eq:3\]), respectively. In the simulations we use two different setups: one where the spring is anchored at the top plane, Fig. \[fig:spring\_up\], right panel, while in the second setup the spring is attached to the grafting plane, Fig. \[fig:spring\_down\], right panel. For simplicity, we assume that the spring anchors are fixed and that the springs are aligned in the direction of the applied field, i.e., perpendicular to the grafting plane. Under these assumptions, the instantaneous length of the spring is $L-z_{\rm top}$ in the first case and $z_{\rm top}$ in the second one, see Figs. \[fig:spring\_up\] and \[fig:spring\_down\].
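For readers who want to reproduce the setup, the standard bead-spring ingredients look as follows. This is only a sketch with the common Kremer-Grest values $k = 30\,\epsilon_{LJ}/\sigma_{LJ}^2$ and $R_0 = 1.5\,\sigma_{LJ}$, which we assume here for illustration; the actual parameters are those of Refs. [@CSA00; @KUM05].

```python
import math

def wca(r, eps=1.0, sigma=1.0):
    """Shifted, purely repulsive Lennard-Jones (WCA) potential."""
    rc = 2.0 ** (1.0 / 6.0) * sigma  # cut at the LJ minimum
    if r >= rc:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6) + eps  # shifted so U(rc) = 0

def fene(r, k=30.0, R0=1.5):
    """Finitely extensible nonlinear elastic bond potential."""
    if r >= R0:
        return float("inf")  # the bond cannot stretch beyond R0
    return -0.5 * k * R0 ** 2 * math.log(1.0 - (r / R0) ** 2)

# the combined FENE + WCA bond has its minimum slightly below sigma
r_min = min((fene(0.01 * i) + wca(0.01 * i), 0.01 * i)
            for i in range(50, 150))[1]
assert 0.9 < r_min < 1.0
```

The near-unit equilibrium bond length found here is consistent with the statement $b \simeq \sigma_{LJ}$ above.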
Here we report simulation results obtained at total chain length $N_{0}=$ 320. The footprint of the simulation box is $L_{x}\times L_{y}$ = 424 $\times$ 424 (in units of $\sigma_{\rm LJ}$) and the box height is $L_{z}=L$ = 160. ![ (Color online) Typical simulation snapshot of a grafted polyelectrolyte exposed to electrical field $\tilde {E}=1 $, perpendicular to the grafting plane and coupled to a deformable colloidal particle of diameter $h_0= 80\,b$. The action of the particle is modeled by a Hertzian spring with spring constant $\tilde{\kappa}= \kappa b^{5/2}/k_BT=1$. The total length of the chain is $N_0=320$. As may be seen from the figure, for the addressed system parameters, counterions are practically decoupled from the polyelectrolyte. []{data-label="fig:Setup"}](Fig3.jpg){width="0.90\columnwidth"} A typical simulation snapshot is shown in Fig. \[fig:Setup\]. We found that, starting from relatively weak fields of $Eqeb/k_BT \geq 0.1$ (recall that $qe$ is the monomer charge), the adsorbed part of the chain forms an almost flat, two-dimensional structure. Small loops of the chain rise out of the plane up to a height of one monomer radius. The bulk part of the polyelectrolyte is strongly stretched in the direction perpendicular to the grafting plane, with the inter-bead bonds being strongly aligned along the applied field. In sharp contrast to the field-free case [@Winkler98; @Brill98; @Gole99; @Diehl96; @Pincus1998; @MickaHolm1999; @Naji:2005], the counterion subsystem is practically decoupled from the polyelectrolyte, which drastically simplifies the analysis.

Results and discussion
======================

In Figs. \[fig4\] - \[fig8\] we show the results of MD simulations compared to the predictions of our theory. ![ (Color online) End-point height, $\tilde{z}_{\rm top} = z_{\rm top}/b$, of a chain linked to a *linear* spring, as a function of reduced applied field $\tilde{E}=qeEb/k_BT$. Line – results of the theory, symbols – MD data.
The dashed black line demonstrates $\tilde{z}_{\rm top} = N$ of the previous simplified theory [@SeidBudBrill2013] with $N$ taken from the MD data. Inset: reduced force generated by the applied field $\tilde{f}=fb/k_BT$ as a function of reduced field. The bare equilibrium length of the spring is $h_0=10\, b$ and its force constant is $\tilde{\kappa}= \kappa b^2/k_BT=1$. The length of the deformed spring reads, $L_z - z_{\rm top} = 160\,b -z_{\rm top}$. The total length of the chain is $N_0=320$. The arrow indicates $z_{\rm top}$ for the undeformed spring. []{data-label="fig4"}](Fig4.jpg){width="0.99\columnwidth"} In particular, the spring length and the magnitude of the induced force are shown as functions of the applied electric field. The spring length characterizes the deformation of the target body caused by the force acting from the polyelectrolyte chain. ![ (Color online) Reduced length of a Hertzian spring $\tilde{z}_{\rm top} = z_{\rm top}/b$ as a function of reduced field $\tilde{E}=qeEb/k_BT$. Line – results of the theory, symbols – MD data. Inset: reduced force generated by the applied field $\tilde{f}=fb/k_BT$ as a function of field. The bare equilibrium length of the Hertzian spring (undeformed colloidal particle) is $z_{\rm top,0}=d_c=20\,b$ and the force constant is $\tilde{\kappa}= \kappa b^{5/2}/k_BT=1$. The total length of the chain is $N_0=320$. The arrow indicates $z_{\rm top}$ for the undeformed spring. []{data-label="fig5"}](Fig5.jpg){width="0.99\columnwidth"} Fig. \[fig4\] refers to a linear spring anchored to the upper wall. Figs. \[fig5\] – \[fig9\] show the behavior of Hertzian springs of different bare equilibrium lengths (i.e. of colloidal particles of different size); these springs are anchored to the lower wall. The figures clearly demonstrate the very good agreement between theory and the MD data obtained in our study. ![ (Color online) The same as Fig.\[fig5\], but for $z_{\rm top,0}=d_c=40\,b$.
The dashed black line demonstrates $\tilde{z}_{\rm top} = N$ of the previous simplified theory [@SeidBudBrill2013] with $N$ taken from the MD data.[]{data-label="fig6"}](Fig6.jpg){width="0.99\columnwidth"} We wish to stress that no fitting parameters were used in these plots. Note, however, that the theory has been developed for a highly charged chain with a relatively strong self-interaction and interaction with the charged plane. This results in an almost flat 2D structure of the adsorbed part of the chain and small transversal fluctuations of the bulk part; the bond vectors of the bulk part cannot be directed down. Although the theory is rather accurate, some systematic deviations are observed for very small fields and for the shortest Hertzian springs with $z_{\rm top,0}=20\,b$. ![ (Color online) The same as Fig.\[fig5\], but for $z_{\rm top,0}=d_c=60\,b$. []{data-label="fig7"}](Fig7.jpg){width="0.99\columnwidth"} In the latter case the deformation of the spring and the force acting on the target body are slightly underestimated. This possibly happens because the condition $N \gg 1$ is not as well satisfied for short springs as for long ones. ![ (Color online) The same as Fig.\[fig5\], but for $z_{\rm top,0}=d_c=80\,b$. []{data-label="fig8"}](Fig8.jpg){width="0.99\columnwidth"} The theory also underestimates the number of monomer beads $N$ in the bulk for small fields. While the theory is rather accurate when $\tilde{E} >1$, there occur noticeable deviations from the MD data at small fields $\tilde{E} <1$, see Fig. \[fig9\]. ![ (Color online) The number of chain monomers in the bulk $N$ as a function of reduced applied field, $\tilde{E}=qeEb/k_BT$. Line – results of the theory, symbols – MD data. The length of the undeformed Hertzian spring is $z_{\rm top,0}=40\,b$ and the force constant is $\tilde{\kappa}= \kappa b^{5/2}/k_BT=1$. The total length of the chain is $N_0=320$.
[]{data-label="fig9"}](Fig9.jpg){width="0.99\columnwidth"} Fortunately, this deficiency of the new theory with respect to $N$ does not degrade the accuracy of the theoretical dependencies $z_{\rm top} (E)$ and $f(E)$, which seem to be the most important quantities in terms of possible applications. It is noteworthy that for aqueous solutions at ambient conditions, the characteristic units of force and field are $k_BT/b \approx k_BT/l_B \approx 6\, {\rm pN}$ and $k_BT/be \approx k_BT/l_Be \approx 35\, {\rm V/\mu m}$, respectively. The latter value is about one order of magnitude smaller than the critical breakdown field for water [@DielBreakdown]. Another feature is worth noting: while the electric field varies within a relatively narrow range, the magnitude of the resulting force varies over a rather wide range, which is clearly of great interest for applications. It is also instructive to compare the theoretical results of the present study with the corresponding results of the previous simplified theory, see Ref. [@SeidBudBrill2013]. Some representative examples are shown in Figs. \[fig4\] and \[fig6\]. Obviously the simplified theory is accurate for linear springs, except at small fields, $\tilde{E} < 1$. At the same time it fails to satisfactorily describe the behavior of Hertzian springs. The simplified theory drastically underestimates the deformation of a target body at small fields ($\tilde{E} < 0.5$), noticeably underestimates it in the intermediate range ($1<\tilde{E} < 3$) and overestimates it in strong fields ($\tilde{E} >4$). The simplified theory has an acceptable accuracy only in a rather narrow field interval. The phenomenon addressed in the present study may be used in future nano-machinery: A prototype of a possible nano-device, which may be called a “nano-vice” or “nano-nippers”, is illustrated in Fig. \[fig:spring\_down\].
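These characteristic scales, together with the effective Hertzian spring constant quoted in the worked example below, follow from elementary unit conversions. A minimal numerical cross-check (assuming $T \approx 300\,$K and $b \approx l_B \approx 0.7\,$nm; the small residual deviation from the quoted $\tilde{\kappa}=2.86$ presumably reflects rounding of the input parameters):

```python
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
e = 1.602176634e-19    # elementary charge, C
T = 300.0              # ambient temperature, K (assumed)
b = 0.7e-9             # monomer size ~ Bjerrum length in water, m

kBT = kB * T
force_unit = kBT / b         # characteristic force k_B T / b, in N
field_unit = kBT / (b * e)   # characteristic field k_B T / (b e), in V/m
print(f"force unit: {force_unit / 1e-12:.1f} pN")
print(f"field unit: {field_unit / 1e6:.1f} V/um")

# Hertzian spring constant of the worked example below (Y = 0.01 GPa,
# nu = 0.1, particle diameter d0 = 50 b), via kappa = (2/3) Y sqrt(R)/(1 - nu^2)
Y, nu, R = 0.01e9, 0.1, 25.0 * b
kappa = (2.0 / 3.0) * Y * math.sqrt(R) / (1.0 - nu * nu)
kappa_tilde = kappa * b**2.5 / kBT  # reduced force constant kappa b^{5/2}/k_BT
print(f"kappa_tilde ~ {kappa_tilde:.2f}")
```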
Here the contraction of two polyelectrolyte chains in an external electric field allows one to firmly fix a colloidal particle, which would otherwise perform Brownian motion. At zero or weak fields the particle will be released. Using our theory one can compute the magnitude of the field needed to keep the particle fixed, although additional knowledge about the intensity of the Brownian motion and friction forces is required. Naturally, one can think about other nano-size objects, e.g. viruses, cellular organelles or small bacteria. These objects would be characterized by other force-deformation relations. Consider, for example, nano-vices in aqueous solutions at ambient conditions with $l_B=0.7 \,{\rm nm}$. For simplicity we analyze the case of only one chain (see the right panel of Fig. \[fig:spring\_down\]); the generalization to a few chains is straightforward. Let the polyelectrolyte chain be flexible and consist of $N_0 =180$ monomers of size $b\approx l_B$, each carrying a charge $1\,e$. Let the colloidal particle be of diameter $d_0=50b=35\,{\rm nm}$. If we use the Young modulus $Y=0.01\, {\rm GPa}$, as for rubber [@rubber], for the particle material and $\nu =0.1$ for the Poisson ratio, we obtain $\tilde{\kappa}= 2.86$ [^2]. In this case the field $\tilde{E} =1$ of about $35\, {\rm V/\mu m}$ generates a force of about $240 \, {\rm pN}$ and a relative deformation of $\Delta d/d_0=0.123$; $44$ monomers remain in the bulk and $136$ are adsorbed. If the field increases up to $\tilde{E} =2$, that is, up to $70\, {\rm V/\mu m}$, the force increases to about $440 \, {\rm pN}$, with a deformation of $\Delta d/d_0=0.186$ and $41$ monomers in the bulk. Naturally, there exist plenty of other possible applications of the mechanism studied here, which we plan to address in future research. Conclusion ========== We analyze the generation of a mechanical force by an external electric field applied to a grafted polyelectrolyte that is linked to a deformable target body.
We develop a theory of this phenomenon and perform MD simulations. The case of strong electrostatic self-interaction of the chain and its interaction with the charged plane is addressed. We consider target bodies with two different force-deformation relations, which seem to be the most important for possible applications: (i) a linear relation and (ii) that of a Hertzian spring. The first relation models the behavior of a coiled Gaussian chain, while the second one represents that of a squeezed colloidal particle. The theoretical dependencies of the generated force and of the compression of the target body are in very good agreement with the simulation data. The theory, however, underestimates the number of beads $N$ of the bulk part of the chain for weak fields and small sizes of colloidal particles. Interestingly, the generated force strongly depends on the applied electric field. While the magnitude of the force varies over a wide interval, the field itself varies within a rather narrow range only. The phenomenon addressed here may play an important role in future nano-machinery. For instance, it could be utilized to design vice-like devices (nano-vices, nano-nippers) that keep nano-sized objects fixed. Other applications of this phenomenon, which require manipulations with nano-objects, such as, e.g., fusing them together by applied pressure, are also possible. Appendix ======== Here we present some calculation details of the quantities derived in the main text. Computation of $\left<H_{\rm self,b}\right>_{\psi}$ --------------------------------------------------- First we show that $H_{\rm self,b} $ given in Eq. (\[eq:Eself\]) may be written in the form (\[eq:Eself1\]).
Using the integral representation of the $\delta$-function, $$\delta({\bf r})= (2 \pi)^{-d} \int e^{i {\bf k} \cdot{\bf r}} d{\bf k},$$ where $d$ is the dimension of the vector ${\bf r}$, we write, $$\begin{aligned} \label{eq:rlm} \frac{1}{|{\bf r}_{lm}|} &=& \frac{1}{(2\pi)^3} \int d {\bf x} \int d {\bf k} e^{i {\bf k} \cdot ({\bf r}_{lm} - {\bf x}) }\, \frac{1}{|{\bf x}|} \nonumber \\ &=& \frac{1}{(2\pi)^3} \int d {\bf k} \left(\frac{4 \pi}{k^2} \right) e^{i {\bf k} \cdot \sum_{s=l}^m{\bf b}_s } \\ &=& \int \frac{d {\bf k}}{(2\pi)^3} \left(\frac{4 \pi}{k^2} \right) e^{i \sum_{s=l}^m \left({\bf k}_{\perp} \cdot {\bf b}_s^{\perp} + k_z b_s^z\right)}, \nonumber\end{aligned}$$ where $(4 \pi /k^2)$ is the Fourier transform of $1/{|{\bf x}|}$. Summation of $|{\bf r}_{lm}|^{-1}$ with the prefactor $q^2e^2/2 \varepsilon$ over all $l, m =1,\ldots, N$ yields Eq. (\[eq:Eself1\]). To find $\left<H_{\rm self,b}\right>_{\psi}$ in Eq. (\[eq:22\]) we first compute the following average $$\left<e^{i\sum_{s=s_{1}}^{s_{2}} {\bf k}_{\perp } \cdot {\bf b}^{\perp }_{s}}\right >_{\psi}\!=\! \frac{1}{(2\pi)^N}\!\!\int_{0}^{2\pi}\!\! \!\!d\psi_{1} \!\ldots \int_{0}^{2\pi}\!\!\!\!\!d\psi_{\!N}e^{i\sum_{s=s_{1}}^{s_{2}}\!\!
{\bf k}_{\perp} \cdot {\bf b}^{\perp}_{s}}.$$ Due to the lateral symmetry we choose the direction of the vector ${\bf k}_{\perp}$ along the $OX$ axis to obtain, $$\begin{aligned} \label{eq:Angav} &&\left <e^{i{\bf k}_{\perp } \cdot \sum_{s=s_{1}}^{s_{2}}{\bf b}^{\perp }_{s}}\right >_{\psi} =\\ &&\quad=\frac{1}{(2\pi)^N}\int_{0}^{2\pi}d\psi_{1}\ldots\int_{0}^{2\pi}d\psi_{N}e^{ik_{\perp}b \sum_{s=s_{1}}^{s_{2}}\cos\psi_{s}\sin\theta_{s}} \nonumber \\ &&\quad=\prod_{s=s_{1}}^{s_{2}}\int_{0}^{2\pi}\frac{d\psi_{s}}{2\pi}e^{ik_{\perp}b\cos\psi _{s}\sin\theta_{s}} \!= \!\!\prod_{s=s_{1}}^{s_{2}}J_{0}(k_{\perp}b\sin\theta_{s}) \nonumber \\ &&\quad=\exp \left[{\sum_{s=s_{1}}^{s_{2}}\log J_{0}(k_{\perp}b\sin\theta_{s})} \right] \nonumber \\ &&\quad \simeq \exp \left[{\sum_{s=s_{1}}^{s_{2}}\log \left(1- k_{\perp}^2b^2\sin ^2\theta_{s}/4\right)}\right] \nonumber \\ &&\quad \approx \exp \left[ -\sum_{s=s_{1}}^{s_{2}} \frac{k_{\perp}^2b^2\sin ^2\theta_{s}}{4}\right] \nonumber \\ &&\quad \approx \exp \left[ -\frac{k_{\perp}^2b^2|s_{1}-s_{2}|}{4}\left(1-\frac{\tilde{z}_{\rm top}^2}{N^2}\right) \right], \nonumber\end{aligned}$$ where we use the approximation $\cos ^{2}\theta_{s}\approx \tilde{z}_{\rm top}^2/N^2$ and keep only the leading terms of the Bessel function expansion $J_0(x) =1-x^2/4 +\ldots$, where $x \sim k$. The latter approximation is justified since the main contribution of the integrand in (\[eq:22\]) comes from the vicinity of $k=0$. Using now the approximation $$\label{eq:app2} \sum_{s=s_{1}}^{s_{2}}\eta_{s}\approx \frac{\tilde{z}_{\rm top}(s_{2}-s_{1})}{N},$$ and substituting it together with (\[eq:Angav\]) into Eq.
(\[eq:22\]) we obtain, $$\begin{aligned} \beta \left<H_{\rm self,b}\right>_{\psi } &=& \\ \quad &=& \frac{\beta q^2e^2}{2\varepsilon}\sum_{s_{1}\neq s_{2}}\int\frac{d{\bf k }}{(2\pi )^3}\left(\frac{4 \pi} {k_{\perp}^2+k_{z}^2} \right) \nonumber \\ \quad &\times & e^{-\frac{k_{\perp}^2b^2|s_{1}-s_{2}|}{4}\left(1-\frac{\tilde{z}_{\rm top}^2}{N^2}\right)} e^{i\frac{k_{z}\tilde{z}_{\rm top }|s_{1}-s_{2}|}{N}}. \nonumber\end{aligned}$$ In the above expression, one can integrate over ${\bf k}$ (first over $k_z$, using residues) to get the result $$\label{eq:Hselfres} \beta \left<H_{\rm self,b}\right>_{\psi } = \frac{l_B q^2}{2}\sum_{s_{1}\neq s_{2}}\frac{\sqrt{\pi}}{2h}\,\,e^{g^2/4h^2} {\rm Erfc} \left(\frac{g}{2h} \right),$$ where $h^2= b^2 |s_{1}-s_{2}|(1-\tilde{z}_{\rm top}^2/N^2)/4$ and $g=|s_{1}-s_{2}|\tilde{z}_{\rm top } /N$. Since $\tilde{z}_{\rm top } \sim N$ and $|s_{1}-s_{2}| \sim N \gg 1$, it is easy to show that $g/2h \gg 1$. With $e^{x^2}{\rm Erfc}(x) \simeq (\sqrt{\pi} x)^{-1}$ for $x \gg 1$ we obtain, $$\begin{aligned} \label{eq:Hselfres1} \beta \left<H_{\rm self,b}\right>_{\psi } &=& \frac{l_B q^2 N}{2\, z_{\rm top}}\sum_{s_{1}\neq s_{2}} \frac{1}{|s_{1}-s_{2}|} \\ &\simeq & \frac{l_B q^2 N}{z_{\rm top}} \int_1^{N-1}ds_1 \int_{s_1+1}^N \frac{ds_2}{s_2-s_1} \nonumber\\ &\simeq & \frac{l_B q^2 N^2}{z_{\rm top}}(\log N -1 ),\end{aligned}$$ that is, Eq. (\[eq:Hselfin\]) of the main text. Computation of ${\cal Z}_b(z_{\rm top})$ ---------------------------------------- From Eqs. (\[eq:Zbp\]) and (\[eq:Zb1\]) it follows that $W(\xi)$ is defined as $$\begin{aligned} \label{eq:Wxi} W(\xi) \!\!&=&\!\! \log \int_{0}^{1}d\eta_{1} \ldots \int_{0}^{1}d\eta_{N}\exp \left\{ \sum_{s=1}^{N}(i\xi-\tilde{E}s)\eta_{s} \right\} \nonumber \\ \!\!&=&\!\! \log \prod_{s=1}^{N}\int_{0}^{1}d\eta_{s}e^{(i\xi-\tilde{E}s)\eta_{s}} \\ \!\!&\simeq&\!\! \int_{1}^{N}\!\!ds\left[\log(e^{i\xi-\tilde{E}s}-1)\!-\!\log(i\xi-\tilde{E}s)\right].
\nonumber \end{aligned}$$ The integral $\int_{-\infty}^{+\infty} d\xi \exp[{-i\xi \tilde{z}_{\rm top } +W(\xi)}]$ in Eq. (\[eq:Zbp\]) may be estimated with the use of the steepest descent method, that is, using the fact that for large $N$ the value of $\tilde{z}_{\rm top}$ is also large, $z_{\rm top }/b \gg 1$. Then the saddle point equation reads, $$\begin{aligned} \label{eq:xiztop} && \quad \frac{d}{d\xi}\left(-i\xi \tilde{z}_{\rm top}+W(\xi)\right) =\\ && \qquad =-i\tilde{z}_{\rm top}+i\int_{1}^{N}ds\left[\frac{e^{i\xi-\tilde{E}s}}{e^{i\xi-\tilde{E}s}-1} -\frac{1}{i\xi-\tilde{E}s}\right]=0. \nonumber\end{aligned}$$ With the new variable $\xi_{0}=i\xi$, we obtain the equation that defines the implicit dependence of $\xi_{0}$ on $\tilde{z}_{\rm top}$ and $N$: $$\label{eq:xiztop1} \tilde{z}_{\rm top} =\frac{1}{\tilde{E}}\left[\log{\frac{e^{\xi_{0}-\tilde{E}}-1}{\xi_{0}-\tilde{E}}}- \log{\frac{e^{\xi_{0}-\tilde{E}N}-1}{\xi_{0}-\tilde{E}N}}\right].$$ For $N \gg 1$, one can find the solution of the above equation rather accurately. Indeed, the assumption that $\xi_0 \sim 1 \ll N$ leads to the conclusion that $z_{\rm top} \sim \log N$, which holds true neither for the coiled chain nor for the chain stretched by the force. On the other hand, the assumption $\xi_0 \sim N$, which yields $\xi_0- \tilde{E} \sim N$, implies that one can apply the approximation $\log[(e^x-1)/x ] \simeq x-\log x $ at $x \gg 1$.
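The quality of this large-argument approximation is easy to check numerically; a minimal sketch (the sample points are illustrative):

```python
import math

def log_expm1_ratio(x):
    # log[(e^x - 1)/x] for x > 0, evaluated in a numerically stable form
    return x + math.log1p(-math.exp(-x)) - math.log(x)

for x in (5.0, 10.0, 50.0):
    exact = log_expm1_ratio(x)
    approx = x - math.log(x)  # the approximation used in the text
    print(f"x = {x}: exact = {exact:.6f}, x - log x = {approx:.6f}")
```

Already at $x \approx 10$ the two expressions agree to better than $10^{-4}$.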
Using the evident condition $\xi_0- \tilde{E} \gg \xi_0- \tilde{E}N$ one obtains $$\tilde{E}\tilde{z}_{\rm top} \simeq \xi_0 - \tilde{E} -\log (\xi_0 - \tilde{E})$$ or $$\xi_0 \simeq (\tilde{z}_{\rm top}+1)\tilde{E} + \log \tilde{z}_{\rm top}\tilde{E}.$$ If we again take into account that $\tilde{z}_{\rm top} \sim N \gg 1$ and $\tilde{E} \sim 1 \ll N$ we arrive at an even simpler solution for $\xi_0$ $$\xi_0 \simeq\tilde{E}\tilde{z}_{\rm top}.$$ Hence we obtain the following approximate expression for the partition sum, $$\begin{aligned} \mathcal{Z}_b(z_{\rm top})&\approx& (2\pi)^{N-1} e^{-\beta U_{\rm s}-\beta \left<H_{\rm self,b}\right>_{\psi }} \\ &\times& e^{-\xi_{0}\tilde{z}_{\rm top}+W(\xi_{0})-\frac{1}{2}\log{\frac{|W^{\prime \prime}(\xi_{0})|}{2\pi}}}, \nonumber\end{aligned}$$ with $\xi_0$ given in the above equation and with $W(\xi_{0})$ defined by Eq. (\[eq:Wxi\]). It may be written as $$\label{eq:Wxi1} W(\xi_{0}) = (1/\tilde{E}) \left[ {\rm Ei}(\zeta_0) + \log\left| \zeta_0/\zeta_N \right| - {\rm Ei}(\zeta_N) \right],$$ where ${\rm Ei}(x)$ is the exponential integral function and we abbreviate $\zeta_0=\xi_0-\tilde{E}$ and $\zeta_N=\xi_0-\tilde{E}N$.
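The accuracy of the leading-order result $\xi_0 \simeq \tilde{E}\tilde{z}_{\rm top}$ can be gauged by solving Eq. (\[eq:xiztop1\]) numerically, e.g. by bisection. A sketch with illustrative parameters ($\tilde{E}=1$, $N=200$, $\tilde{z}_{\rm top}=150$; these values are chosen purely for demonstration):

```python
import math

def g(x):
    # log[(e^x - 1)/x], evaluated stably; the limit at x -> 0 is 0
    if x > 1e-8:
        return x + math.log1p(-math.exp(-x)) - math.log(x)
    if x < -1e-8:
        return math.log1p(-math.exp(x)) - math.log(-x)
    return 0.5 * x  # leading term of the Taylor series near x = 0

def rhs(xi0, E, N):
    # right-hand side of the saddle-point relation for z_top
    return (g(xi0 - E) - g(xi0 - E * N)) / E

def solve_xi0(ztop, E, N):
    # rhs is monotonically increasing in xi0, so bisection applies
    lo, hi = 0.0, 2.0 * E * ztop + 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rhs(mid, E, N) < ztop:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E, N, ztop = 1.0, 200, 150.0
xi0 = solve_xi0(ztop, E, N)
print(xi0, E * ztop)  # numerical root vs leading-order estimate
```

For these values the numerical root exceeds $\tilde{E}\tilde{z}_{\rm top}$ by the subleading logarithmic term only, i.e. by a few percent, consistent with the $N \gg 1$ argument above.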
Similarly, we write $W^{\prime \prime}$ as $$\label{eq:Wprpr} W^{\prime \prime}(\xi_{0}) = \tilde{E}^{-1} \left[\frac{e^{\zeta_{N}}}{e^{\zeta_{N}}-1} -\frac{e^{\zeta_{0}}}{e^{\zeta_{0}}-1} - \frac{1}{\zeta_{N}}+\frac{1}{\zeta_{0}} \right].$$ Finally we obtain the free energy $\overline{F}_b(z_{\rm top},N)$ associated with the bulk part of the chain (without taking into account counterions): $$\begin{aligned} &&\beta \overline{F}_b(z_{\rm top},N) \approx \beta U_{\rm sp}(z_{\rm top} ) \!+ \!\beta \left<H_{\rm self,b}\right>_{\psi } - N \log 2\pi \nonumber\\ &&~~~~~~~~~~~~~~~+\xi_{0} \tilde{z}_{\rm top}- W(\xi_{0})+ \log{ |W^{\prime \prime} (\xi_{0})|^{1/2}}.\end{aligned}$$ Note that for $N \gg 1$ the term containing $W^{\prime \prime} (\xi_{0})$ is logarithmically small compared to the other terms and may be neglected. Free energy of counterions -------------------------- The results of the MD simulations show that the counterions are well separated from the chain if the field and the volume of the system are not very small. Therefore the impact of the counterions on the chain conformation may be treated as a small perturbation. Here we perform simple estimates of the free energy of the counterions. We can approximate it as, $$F_{\rm count} \simeq F_{\rm c.c.}+F_{\rm c.E.}+F_{\rm c.ch.},$$ where $F_{\rm c.c.}$ is the free energy associated with the counterion-counterion interactions, $F_{\rm c.E.}$ refers to the free energy of the counterion interactions with the external field $E$, and $F_{\rm c.ch.}$ to that with the charged chain. In the case of interest one can neglect the dependence of $F_{\rm c.c.}$ and $F_{\rm c.E.}$ on the chain conformation, so that we do not need to compute these terms.
At the same time $F_{\rm c.ch.}$ can be estimated as the electrostatic energy of the chain in the additional potential $\varphi_c(z)$ caused by the counterions, $$\label{eq:Fcch} F_{\rm c.ch.} \approx \sum_{i=1}^N -qe \varphi_c(z_i).$$ To find $\varphi_c(z)$ we start with the equilibrium Boltzmann distribution of counterions $\rho_c(z)$ in the external field $E$, neglecting their self-interaction: $$\rho_c(z) = \rho_0 e^{\frac{qe E z}{k_BT}} = \frac{N_0 eq E}{S k_BT} e^{\frac{qe E (z-L)}{k_BT}},$$ where $L$ is the size of the system in the direction along $OZ$ and $S$ is its lateral area. To obtain the constant $\rho_0$ in the above equation, we apply the normalization condition, $S\int_0^L \rho_c(z)dz =N_0$. Next we compute the electric field $E_c$ due to the counterions, performing the same derivation as for the electric field of a uniformly charged plane $$\begin{aligned} \label{eq:Ecz} E_c(z)\! \! &=& \! \!\frac{qe}{\varepsilon} \int_0^L \! \! \!dz_1 \rho_c(z_1) \int_0^{2 \pi} \! \! \! d \phi \int_0^{\infty} \! \! \! r\, dr\, \frac{\partial}{\partial z} \frac{1}{ \sqrt{(z_1-z)^2 +r^2}} \nonumber \\ \! \!&=& \! \!\frac{2 \pi eq N_0}{\varepsilon S} e^{-\frac{qe E L}{k_BT}} \left[ 2e^{\frac{qe E z}{k_BT}}- e^{\frac{qe E L}{k_BT}}-1 \right] \\ \! \!&=& \! \! (4 \pi e\sigma_c/\varepsilon) e^{\tilde{E}(\tilde{z}-\tilde{L})} -(2\pi e \sigma_c/\varepsilon), \nonumber\end{aligned}$$ where $\sigma_c = qN_0/S$ corresponds to the apparent surface charge density due to the counterions and $\tilde{L}=L/b$. The second term in the above equation, $2\pi e \sigma_c/\varepsilon$, corresponds to the renormalization of the external field $E$ due to the counterion screening of the upper plane, $E \to E - 2\pi e \sigma_c/\varepsilon$. From Eq. (\[eq:Ecz\]), finally we get the additional potential $$\label{eq:phic} \varphi_c(z) = 2\pi e\sigma_c z/\varepsilon - (4\pi e \sigma_c b /\varepsilon\tilde{E}) e^{\tilde{E}(\tilde{z}-\tilde{L})}.$$ Substituting Eq. (\[eq:phic\]) into Eq.
(\[eq:Fcch\]) we obtain, $$\label{eq:Fcch2} F_{\rm c.ch.} = -\sum_{i=1}^N \frac{2\pi \sigma_c}{\varepsilon} qe^2 z_i + \frac{4\pi e\sigma_c b}{\varepsilon \tilde{E}}e^{-\tilde{E} \tilde{L} } \sum_{i=1}^N e^{-\tilde{E} \tilde{z}_i }.$$ Using $\tilde{z}_i=\sum_{s=i}^N \cos \theta_s$ (see Eq. (\[eq:zk\])) along with the approximation, $\cos \theta_s = \overline{\cos \theta_s}=\tilde{z}_{\rm top}/N$, we find for the two terms in Eq. (\[eq:Fcch2\]): $$\frac{4\pi e \sigma_c b}{\varepsilon \tilde{E}}e^{-\tilde{E} \tilde{L} } \sum_{i=1}^N e^{-\tilde{E} \tilde{z}_i } = \frac{4 \pi e \sigma_cb}{\varepsilon\tilde{E}} \frac{e^{\tilde{E}(\tilde{z}_{\rm top}-\tilde{L})}}{e^{\tilde{E} \tilde{z}_{\rm top}/N} -1}$$ $$\sum_{i=1}^N \sum_{s=i}^N \frac{2\pi \sigma_c q e^2 b}{\varepsilon} \cos \theta_s = \frac{\pi \sigma_c q e^2 b}{\varepsilon} \, \tilde{z}_{\rm top} N\, ,$$ which yields Eq. (\[eq:Fc\_ch\]) of the main text. Computation of $ \mathcal{Z}_{\rm s}({\bf R} )$ ------------------------------------------------ We start with the computation of $\left<e^{-\beta H_{\rm self,s} }\right>_{\bf p}$. Using only the first-order term in the cumulant expansion of the exponent we write, $$\left<e^{-\frac{\beta}{2}\sum_{s_{1} \neq s_{2} }V({\bf r}_{s_{1}}-{\bf r}_{s_{2}})}\right>_{\bf p} \approx e^{-\frac{\beta}{2}\sum_{s_{1} \neq s_{2} } \left<V({\bf r}_{s_{1}}-{\bf r}_{s_{2}})\right>_{\bf p} }.$$ This is a mean-field approximation, which is usually accurate for systems with long-range interactions. Since $V(r)$ refers to the unscreened Coulomb interactions, we expect this approximation to be rather accurate. Similarly to Eq. (\[eq:rlm\]) we can write, $$V( {\bf r}_{s_{1}}-{\bf r}_{s_{2}}) =\int \frac{d {\bf k}}{(2\pi)^3} \tilde{V}(k) e^{i {\bf k} \cdot ({\bf r}_{s_{1}}-{\bf r}_{s_{2}})},$$ where $\tilde{V}(k)=(q^2e^2/\varepsilon)(4\pi/k^2)$ is the Fourier transform of the interaction potential.
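As a rough numerical sanity check (not part of the derivation), the transform $4\pi/k^2$ can be recovered as the $\mu \to 0$ limit of the screened-Coulomb result $4\pi/(k^2+\mu^2)$, computed from the radial form of the three-dimensional Fourier integral; all parameter values below are illustrative:

```python
import math

def coulomb_ft(k, mu, rmax=300.0, n=300000):
    # 3D Fourier transform of e^{-mu r}/r, reduced to a 1D radial integral:
    #   V(k) = (4*pi/k) * \int_0^rmax sin(k r) e^{-mu r} dr   (trapezoid rule)
    h = rmax / n
    s = 0.0
    for i in range(1, n):
        r = i * h
        s += math.sin(k * r) * math.exp(-mu * r)
    return 4.0 * math.pi / k * s * h

k, mu = 1.3, 0.05
print(coulomb_ft(k, mu), 4.0 * math.pi / (k * k + mu * mu))
```

The two printed numbers agree to high accuracy, and sending $\mu \to 0$ reproduces the unscreened $4\pi/k^2$.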
This yields, $$\begin{aligned} \label{eq:<Vr>} \left< V({\bf r}_{s_{1}}-{\bf r}_{s_{2}}) \right>_{\bf p}&=& \frac{1}{\mathcal{Z}_{0}(\bf p )} \int_{0}^{2\pi} d\phi_{1} \ldots \int_{0}^{2\pi} d\phi_{N_{s}}e^{i {\bf p } \cdot \sum_{s=1}^{N_{s}}{\bf d}_{s}} V({\bf r}_{s_{1}}-{\bf r}_{s_{2}}) =\int \frac{d {\bf k}}{(2\pi)^3}\tilde{V}(k) \left< e^{i {\bf k} \cdot ({\bf r}_{s_{1}}-{\bf r}_{s_{2}})} \right>_{\bf p}\nonumber \\ &=&\frac{1}{\mathcal{Z}_{0}(\bf p)} \int_{0}^{2\pi} d\phi_{1} \ldots \int_{0}^{2\pi}d\phi_{N_{s}} \int\frac{d {\bf k}}{(2\pi)^3} \tilde{V}(k) e^{i {\bf p } \cdot \sum_{s=1}^{N_{s}}{\bf d}_{s} + i{\bf k}_{\perp} \cdot \sum_{l=s_{1}}^{s_{2}}{\bf d}_{l} }\nonumber\\ \nonumber\\ &=&\int\frac{d {\bf k} }{(2\pi)^3} \tilde{V}(k) \left[\frac{J_{0}(|{\bf k }_{\perp}+{\bf p}|\,b)}{J_{0}(pb)}\right]^{|s_{2}-s_{1}|}.\end{aligned}$$ Here we take into account that ${\bf p}$ is a two-dimensional vector and use the definition (\[eq:Z0q\]) of ${\mathcal{Z}_{0}(\bf p)}$. Substituting Eq. (\[eq:<Vr>\]) into Eq.
(\[eq:Z(R)\_1\]) we arrive at $$\begin{aligned} \mathcal{Z}_{\rm s}({\bf R}) &\approx& (2\pi)^{N_{s}}\int\frac{d {\bf p} }{(2\pi)^2} e^{-i {\bf p} \cdot {\bf R}} \left[J_{0}(pb)\right]^{N_{s}}e^{-\frac{\beta}{2}\int\frac{d{\bf k}}{(2\pi)^3} \tilde{V}(k)\sum_{s_{1}\neq s_{2}}^{}\left[\frac{J_{0}(|{\bf k}_{\perp}+{\bf p}|\, b)}{J_{0}(p\,b)}\right]^{|s_{1}-s_{2}|}} \nonumber \\ &=&(2\pi)^{N_{s}}\int\frac{d{ \bf p}}{(2\pi)^2}e^{-i {\bf p} \cdot {\bf R}+N_{s}\log\left(J_{0}(pb)\right)-\frac{\beta}{2}\int\frac{d{\bf k}}{(2\pi)^3}\tilde{V}(k)\sum_{s_{1}\neq s_{2}}^{}\left[\frac{J_{0}(|{\bf k}_{\perp}+{\bf p}|\,b)}{J_{0}(p\,b)}\right]^{|s_{1}-s_{2}|}} \nonumber \\ &\simeq& (2\pi)^{N_{s}}\int\frac{d {\bf p}}{(2\pi)^2}e^{ -i {\bf p} \cdot {\bf R} -\frac14 {N_{s}p^2b^2}-\frac{\beta}{2}\int\frac{d \bf k}{(2\pi)^3}\tilde{V}(k)\sum_{s_{1}\neq s_{2}} \left[\frac{J_{0}(|{\bf k}_{\perp}+{\bf p}|b)}{J_{0}(pb)}\right]^{|s_{1}-s_{2}|}}.\end{aligned}$$ Using the new integration variable $${\bf G} ={\bf p} -\frac{2i {\bf R} }{N_{s}b^2},$$ we obtain $$\begin{aligned} \label{eq:Zs1} \mathcal{Z}_{\rm s}({\bf R} )&=&(2\pi)^{N_s}e^{-\frac{R^2}{N_{s}b^2}} \int\frac{d {\bf G} }{(2\pi)^2}e^{-\frac14{N_{s}b^2G^2}} \, \exp\left\{{-\frac{\beta}{2}\sum_{s_{1}\neq s_{2}}^{}\int\frac{d{\bf k}}{(2\pi)^3}\tilde{V}(k)e^{|s_{2}-s_{1}| \log\left[\frac{J_{0}\left(|{\bf k}_{\perp}+{\bf G}+\frac{2i{\bf R}}{N_{s}b^2}|\,b\right)} {J_{0}\left(|{\bf G}+\frac{2i{\bf R}}{N_{s}b^2}|b\right)}\right]}}\right\}\nonumber \\ &\simeq &(2\pi)^{N_{s}} e^{-\frac{R^2}{N_{s}b^2}}\int\frac{d{\bf G}}{(2\pi)^2}e^{-\frac14{N_{s}b^2G^2}}\exp\left\{{-\frac{\beta}{2}\sum_{s_{1}\neq s_{2}}^{}\int\frac{d {\bf k}}{(2\pi)^3}\tilde{V}( k)e^{|s_{2}-s_{1} |\log\left[\frac{J_{0}\left(|{\bf k}_{\perp}+\frac{2i{\bf R}}{N_{s}b^2}|b\right)}{J_{0} \left(\frac{2i|{\bf R}|}{N_{s}b}\right)}\right]}}\right\} \nonumber \\ &\approx &(2\pi)^{N_{s}}\frac{1}{\pi N_{s}b^2}e^{-\frac{R^2}{N_{s}b^2}-\beta W_1(R)}.\end{aligned}$$ To derive Eq. 
(\[eq:Zs1\]) we take into account that since $N_s \gg 1$, only values of $G \sim 1/(b \sqrt{N_s})$ contribute to the above integral. The analysis also shows that $R \sim N_sb$ (see Eq. (\[eq:extR\])), which allows us to neglect ${\bf G}$ as compared to $2{\bf R}/(N_{s}b^2)$ and to perform the Gaussian integration in the last line of (\[eq:Zs1\]). Furthermore we define $$\begin{aligned} \label{eq:W1R} \beta W_1(R)&\simeq &\frac{\beta}{2}\sum_{s_{1}\neq s_{2}}^{}\int\frac{d{\bf k}}{(2\pi)^3}\tilde{V}(k) e^{-\frac{i{\bf k}_{\perp}\cdot {\bf R}|s_{2}-s_{1}|}{N_{s}}} e^{-\frac{{\bf k}_{\perp}^2b^2|s_{1}-s_{2}|}{4}} \nonumber \\ &\approx & \frac{q^2 l_{B}}{2\pi^2}\int_{1}^{N_{s}-1}ds_{1}\int_{s_1+1}^{N_{s}}ds_{2}\int_{-\infty}^{\infty}dk_{z}\int d{\bf k}_{\perp}\frac{e^{-\frac{i{\bf k}_{\perp}\cdot{\bf R}|s_{2}-s_{1}|}{N_{s}}}e^{-\frac{{\bf k}_{\perp}^2b^2|s_{1}-s_{2}|}{4}}}{k_{z}^2+{\bf k}_{\perp}^2},\end{aligned}$$ where we use again the expansion of $J_0(x)$ and keep only the leading term. Integration over $k_z$ may be easily performed, yielding $\pi/k_{\perp}$. Hence we obtain, $$\begin{aligned} \label{eq:overkz} &&\int_{-\infty}^{\infty}dk_{z}\int d{\bf k}_{\perp}\frac{e^{-\frac{i{\bf k}_{\perp}\cdot{\bf R}|s_{2}-s_{1}|}{N_{s}}}e^{-\frac{{\bf k}_{\perp}^2b^2|s_{1}-s_{2}|}{4}}}{k_{z}^2+{\bf k}_{\perp}^2} = \pi \int_0^{\infty} dk_{\perp} e^{-\frac14 b^2|s_2-s_1| k^2_{\perp}} \int_0^{2 \pi} e^{-i\cos \phi k_{\perp}R|s_2-s_1|/N_s} d \phi \nonumber \\ &&~~~~~~~= 2 \pi^2 \int_0^{\infty} e^{-\frac14 b^2|s_2-s_1| k^2_{\perp}} J_0\left(\frac{k_{\perp}R|s_2-s_1|}{N_s} \right) dk_{\perp} =\pi^{5/2} \frac{e^{-\frac{R^2|s_2-s_1|}{2 N_s^2 b^2}}}{b \sqrt{|s_2-s_1|}} \, {\rm I}_0 \left( \frac{R^2|s_2-s_1|}{2 N_s^2 b^2}\right),\end{aligned}$$ where ${\rm I}_0(x)$ is the modified Bessel function of the first kind. Substituting the above result into Eq.
(\[eq:W1R\]) we observe that since $R\sim bN_s$, the main contribution to the integrals over $s_1$ and $s_2$ comes from the region where $|s_2-s_1|$ is small; here we can approximate ${\rm I}_0(x)\approx 1$ [^3]. Therefore we can write, $$\beta W_1(R) \approx q^2 \tilde{l}_B\sqrt{\pi}N_s^{3/2} \int_0^1 dx \int_x^1 dy \, \frac{e^{-\frac{R^2}{2N_sb^2} |y-x|}}{\sqrt{|y-x|}} =q^2 \tilde{l}_B\sqrt{\pi}N_s^{3/2} H\left(\frac{R^2}{2 N_s b^2} \right).$$ Here the function $H(x)$ reads: $$H(x)=\frac{\sqrt{\pi}\,{\rm erf}(\sqrt{x})(x-1/2)+\sqrt{x}\,e^{-x}}{x^{3/2}};$$ it behaves as $H(x) \simeq \sqrt{\frac{\pi}{x}}$ for $x \gg 1$. Hence, for $R^2 \gg N_s b^2$ we obtain, $$\beta W_1(R) = \frac{\pi \sqrt{2} q^2l_B N_s^2}{R},$$ and finally, the conditional partition function, $$\begin{aligned} \label{eq:Zs2} \mathcal{Z}_{\rm s}({\bf R} )\simeq \frac{(2\pi)^{N_{s}}}{\pi N_{s}}\, e^{-\frac{R^2}{N_{s}b^2}- \frac{\pi \sqrt{2} q^2 l_B N_s^2}{R} }.\end{aligned}$$ Calculation of $F_{bs}(N, z_{\rm top}, R)$ ------------------------------------------ The conditional free energy of the system $F(N, z_{\rm top}, R)$ may be written in the following form: $$\begin{aligned} \label{eq:FNZR1} e^{-\beta F(N,z_{\rm top}, R)} &=&\int_0^{2 \pi} d\psi_1\ldots d\psi_N \int_0^{1} d \cos \theta_1 \ldots \int_0^{1} d \cos \theta_N \delta \left(z_{\rm top} - b \sum_{s=1}^N \cos \theta_s \right) b\\ &\times& \int_0^{2 \pi} d\phi_1\ldots d\phi_{N_s} \delta \left( \sum_{s=1}^{N_s} {\bf d}_s - {\bf R} \right) b^2 e^{ -\beta U_{\rm sp} (z_{\rm top}) -\beta H_{\rm ext} -\beta H_{\rm self, b} -\beta H_{\rm self, s} -\beta H_{\rm bs}} \nonumber \\ &=& \int d\Gamma_b e^{-\beta H_1} \int d\Gamma_s e^{-\beta H_2} \, \,\frac{\int d\Gamma_b \int d\Gamma_s e^{-\beta (H_1+H_2)}e^{-\beta H_{\rm bs}}}{\int d\Gamma_b \int d\Gamma_s e^{-\beta (H_1+H_2)}} \nonumber\\ &=& e^{-\beta F_{\rm b}(N, z_{\rm top})} e^{-\beta F_{\rm s}(N_s, R)} \left<e^{-\beta H_{\rm bs}} \right>_{N,z_{\rm top}, R} \nonumber\\ &\approx & e^{-\beta F_{\rm
b}(N, z_{\rm top})} e^{-\beta F_{\rm s}(N_s, R)} e^{-\beta \left< H_{\rm bs} \right>_{N,z_{\rm top}, R}} \nonumber\end{aligned}$$ which yields Eq. (\[eq:F\_tot\]) of the main text: $$F(N,\!z_{\rm top}, \!R)\!\approx\! F_{\rm b}(N, \!z_{\rm top}) \! +\! F_{\rm s}(N_s,\! R)\!+\!F_{\rm bs}(N,\!z_{\rm top}, R).$$ Here $F_{\rm bs}(N,z_{\rm top}, R)= \left< H_{\rm bs} \right>_{N,z_{\rm top}, R}$. In Eq. (\[eq:FNZR1\]) we introduce the short-hand notations, $$\begin{aligned} &&\int d\Gamma_b \!= \!\int_0^{2 \pi} \!\!d\psi_1\ldots \!\!\int_0^{2 \pi} \!\!d\psi_N \int_0^{1} \!\! d \cos \theta_1 \ldots \!\! \int_0^{1} \!\! d \cos \theta_N \nonumber \\ && \int d\Gamma_s \! = \! \int_0^{2 \pi} d\phi_1\ldots d\phi_{N_s} \nonumber\end{aligned}$$ as well as $$\begin{aligned} && e^{-\beta H_1} \!=\! e^{-\beta U_{\rm sp} (z_{\rm top}) -\beta H_{\rm ext} -\beta H_{\rm self, b} } \delta \!\! \left(\!z_{\rm top} - b \sum_{s=1}^N \cos \theta_s \!\!\right)b \nonumber \\ &&e^{-\beta H_2} \!=\! e^{-\beta H_{\rm self, s}} \delta \left( \sum_{s=1}^{N_s} {\bf d}_s - {\bf R} \right)b^2. \nonumber\end{aligned}$$ To compute $\left< H_{\rm bs} \right>_{N,z_{\rm top}, R}$ we use as previously the approximation of small transverse fluctuations for the bulk part of the chain, $H_{\rm bs} \approx \left< H_{\rm bs} \right>_{\psi}$. With this approximation one can write, $$\begin{aligned} \label{eq:esp_bs1} &&\left< \! e^{i {\bf k} \cdot \sum_{s=l}^N {\bf b}_{s} +i {\bf k} \cdot \sum_{s=1}^m {\bf d}_{s}} \!\right>_{\!\!N,z_{\rm top},R} \\ &&~~~\approx \!\! \left< \!e^{i {\bf k}_{\perp} \cdot \sum_{s=l}^N {\bf b}_{s}^{\perp} }\right>_{\!\!\psi} \!\! \left< \!e^{i k_z b \cdot \sum_{s=l}^N \eta_{s} +i {\bf k}_{\perp} \cdot \sum_{s=1}^m {\bf d}_{s}} \! \right>_{\!\!N,z_{\rm top},R} , \nonumber\end{aligned}$$ with the same notations as above. The first factor in the right-hand side of Eq. (\[eq:esp\_bs1\]) may be computed as in Eq. 
(\[eq:Angav\]), yielding $$\left< e^{i {\bf k}_{\perp} \cdot \sum_{s=l}^N {\bf b}_{s}^{\perp} }\right>_{\psi} = e^{-\frac{k_{\perp}^2b^2(N-l)}{4}\left(1-\frac{\tilde{z}_{\rm top}^2}{N^2}\right) } =e^{-k_{\perp}^2 h_1^2}.$$ Using the same approximation as in Eqs. (\[eq:app2\]) and (\[eq:dsR\]), $$b\sum_{s=l}^{N}\eta_{s}\approx \frac{z_{\rm top}}{N}(N-l) = g_1; \qquad \sum_{s=1}^m {\bf d}_{s} = \frac{m}{N_s} {\bf R} ={\bf R}^{\prime}$$ we arrive at Eq. (\[eq:esp\_bs\]), which we write as $$\begin{aligned} \label{eq:esp_bs2} &&\left< \! e^{i {\bf k} \cdot \sum_{s=l}^N {\bf b}_{s} +i {\bf k} \cdot \sum_{s=1}^m {\bf d}_{s}} \!\right>_{\!\!N,z_{\rm top},R} = e^{-k_{\perp}^2 h_1^2 +ik_z g_1 +i{\bf k}_{\perp} \cdot {\bf R}^{\prime}} , \nonumber\end{aligned}$$ where $h_1$, $g_1$ and ${\bf R}^{\prime}$ have been defined in the above equations. Below we give the calculation details of Eq. (\[eq:fbsfin\]), where we need to compute the integral in Eq. (\[eq:Hsb1\]) with the substitution from (\[eq:esp\_bs\]).
With the above notations for $h_1$, $g_1$ and ${\bf R}^{\prime}$ it may be written as $$\frac{1}{(2 \pi)^3} \int \frac{4 \pi }{k_{\perp}^2 +k_z^2} e^{-k_{\perp}^2 h_1^2 +ik_z g_1 +i{\bf k}_{\perp} \cdot {\bf R}^{\prime}} d {\bf k}.$$ First we compute the integral over $k_z$ using the residue at $k_z =i k_{\perp}$: $$\label{eq:overkz} \int_{-\infty}^{\infty} \frac{4 \pi}{k_{\perp}^2 +k_z^2} e^{i k_z g_1 }\, dk_z =\frac{4 \pi^2}{k_{\perp}}e^{-k_{\perp} g_1 } .$$ Next the integration over ${\bf k}_{\perp}$ may be performed to yield: $$\begin{aligned} \label{eq:overkper} && \frac{4 \pi^2}{8\pi^3} \int_0^{\infty} k_{\perp} dk_{\perp} \frac{e^{-k_{\perp}^2 h_1^2 -k_{\perp}g_1}}{k_{\perp}} \int_0^{2 \pi} e^{ik_{\perp} R^{\prime} \cos \phi} d\phi \nonumber \\ &&~~~~~=\int_0^{\infty} e^{-k_{\perp}^2h_1^2- k_{\perp}g_1} J_0(k_{\perp} R^{\prime}) dk_{\perp} \\ &&~~~~~=\frac{1}{R^{\prime}} \int_0^{\infty} e^{-z^2 (h_1/R^{\prime})^2- z (g_1/R^{\prime})} J_0(z) dz \nonumber \\ &&~~~~~ \simeq \frac{1}{\sqrt{R^{\prime\, 2} +g_1^2}} \nonumber\end{aligned}$$ where we take into account that $g_1/R^{\prime} \gg h_1/R^{\prime}$ for $N\sim N_s \gg 1$. Using the above result for the integral over ${\bf k}$ we can find $\left< H_{\rm bs} \right>_{N,z_{\rm top}, R}$: $$\begin{aligned} \label{eq:Hbsfin1} &&\beta \left< H_{\rm bs} \right>_{N,z_{\rm top}, R} \simeq l_B\int_1^{N} \!\!\!dl \int_1^{N_s} \!\!\! \frac{dm}{ \sqrt{ \frac{z_{\rm top}^2(N-l)^2}{N^2} +R^2\frac{m^2}{N_s^2}}} \nonumber \\ &&~~ \!= \frac{l_B NN_s}{z_{\rm top} } \!\log Z_1 \!+\! \frac{l_BNN_s}{R}\! \log Z_2 \!+\! \frac{l_BN}{z_{\rm top}} \!\log Z_3\end{aligned}$$ where $$\begin{aligned} \label{eq:Z1Z3} Z_1&=& (z_{\rm top}/R) + \sqrt{ 1 + (z_{\rm top}/R)^2} \\ Z_2&=& (R/z_{\rm top}) \left( 1+ \sqrt{ 1 + (z_{\rm top}/R)^2} \right) \\ Z_3&=& \frac{R}{2 z_{\rm top} N_s}\end{aligned}$$ and we use definitions of $g_1$ and $R^{\prime}$ and approximate the summation over $l$ and $m$ by the integration. 
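The key step in Eq. (\[eq:overkper\]), namely neglecting the Gaussian factor for $g_1 \gg h_1$ and using the Laplace transform $\int_0^{\infty} e^{-g_1 k} J_0(k R^{\prime})\, dk = (g_1^2 + R^{\prime\,2})^{-1/2}$, can be checked numerically. A sketch with illustrative values ($J_0$ is evaluated here via its integral representation):

```python
import math

def J0(x, m=200):
    # Bessel function J_0 via (1/pi) * \int_0^pi cos(x sin t) dt (trapezoid rule)
    h = math.pi / m
    s = 0.5 * (math.cos(0.0) + math.cos(x * math.sin(math.pi)))
    for i in range(1, m):
        s += math.cos(x * math.sin(i * h))
    return s * h / math.pi

def integral(g1, h1, R, kmax=30.0, n=4000):
    # \int_0^kmax exp(-k^2 h1^2 - k g1) J_0(k R) dk, trapezoid rule
    dk = kmax / n
    s = 0.5  # the integrand equals 1 at k = 0
    for i in range(1, n):
        k = i * dk
        s += math.exp(-k * k * h1 * h1 - k * g1) * J0(k * R)
    return s * dk

g1, h1, R = 2.0, 0.1, 1.5  # illustrative values with g1 >> h1
print(integral(g1, h1, R), 1.0 / math.sqrt(R * R + g1 * g1))
```

The numerical integral matches $(g_1^2+R^2)^{-1/2}$ closely, confirming that the Gaussian factor is indeed negligible in this regime.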
After some simple algebra we arrive at the expression (\[eq:fbsfin\]) for $\left< H_{\rm bs} \right>_{N,z_{\rm top}, R}$. This work was supported by a grant from the President of the RF (No MK-2823.2015.3). [00]{} M. Muthukumar, J. Chem. Phys. 86 (1987) 7239. A. K. Bajpai, Prog. Polym. Sci. 22 (1997) 523. O. V. Borisov, E. B. Zhulina, and T. M. Birshtein, J. Phys. II France 4 (1994) 913. I. Borukhov, D. Andelman, and H. Orland, Macromolecules 31 (1998) 1665. X. Chatellier and J.-F. Joanny, Phys. Rev. E 57 (1998) 6923. M. Muthukumar, J. Chem. Phys. 120 (2004) 9343. A. V. Dobrynin, A. Deshkovski, and M. Rubinstein, Phys. Rev. Lett. 84 (2000) 3101. A. V. Dobrynin, A. Deshkovski, and M. Rubinstein, Macromolecules 34 (2001) 3421. O. V. Borisov, F. A. M. Leermakers, G. J. Fleer, and E. B. Zhulina, J. Chem. Phys. 114 (2001) 7700. R. R. Netz, Phys. Rev. Lett. 90 (2003) 128104. C. Friedsam, H. E. Gaub, and R. R. Netz, Europhys. Lett. 72 (2005) 844. N. V. Brilliantov and C. Seidel, Europhys. Lett. 97 (2012) 28006. C. Seidel, Yu. A. Budkov, and N. Brilliantov, Nanoengineering and Nanosystems 227 (2013) 142-149. R. R. Netz, J. Phys. Chem. B 107 (2003) 8208. O. V. Borisov, A. B. Boulakh, and E. B. Zhulina, Eur. Phys. J. E 12 (2003) 543. P. Podgornik and B. Jonsson, Europhys. Lett. 24 (1993) 501. P. Podgornik, T. Akesson, and B. Jonsson, J. Chem. Phys. 102 (1995) 9423. P. Podgornik and M. Licer, Curr. Op. Coll. Interf. Sci. 11 (2006) 273. H. Kuninaka and H. Hayakawa, Phys. Rev. E 79 (2009) 031309. K. Saitoh, A. Bodrova, H. Hayakawa, and N. V. Brilliantov, Phys. Rev. Lett. 105 (2010) 238001. A. Yu. Grosberg and A. R. Khokhlov, Statistical Physics of Macromolecules (AIP Press, Woodbury, NY, 1994). Yu. A. Budkov, C. Seidel, and N. Brilliantov, (2016) in preparation. F. S. Csajka and C. Seidel, Macromolecules 33 (2000) 2728. N. A. Kumar and C. Seidel, Macromolecules 38 (2005) 9341. R.
G. Winkler, and M. Gold, and P. Reineker [**]{}, Phys. Rev. Lett. 80 (1998) 3731. N. V. Brilliantov, and D. V. Kuznetsov and R. Klein [**]{}, Phys. Rev. Lett. 81 (1998) 1433. R. Golestanian, and M. Kardar and T. B. Liverpool [**]{}, Phys. Rev. Lett. 82 (1999) 4456. H. Schiessel, and P. Pincus [**]{}, Macromolecules 31 (1998) 7953. U. Micka, and C. Holm, and K. Kremer [**]{}, Langmuir 15 (1999) 4033. A. Diehl, and M. C. Barbosa, and Y. Levin [**]{}, Phys. Rev. E 54 (1996) 6516. A. Naji, and R. R. Netz [**]{}, Phys. Rev. Lett. 95 (2005) 185703. M.Zahn, Y. Ohki, D. B. Fenneman, R. J. Gripshover, and V. H. Gehman, Jr.[**]{}, Proceedings of the IEEE 74 (1986) 1182. *Elastic Properties and Young Modulus for some Materials.* The Engineering ToolBox. Retrieved 2012-01-06. [^1]: Here we ignore the off-surface loops of the adsorbed part of the chain. These may be taken into account [@BrilliantovSeidel2012], but do not give an important contribution to the total free energy for the range of parameters addressed here. [^2]: The Hertzian force $F_H$ depends on the Young modulus $Y$, Poisson ratio $\nu$, radius of particle $R$ and deformation $\xi$ as $F_H= \kappa \xi^{3/2}=\frac23 \frac{Y\sqrt{R}}{(1-\nu^2)} \xi^{3/2}$, see e.g. [@Hisao:2009; @Hisao:2010]. [^3]: More precisely, the function $e^{-x^2} {\rm I}_0(x)$ is rather close to $e^{-x^2}$, when $x = R^2 |s_2-s_1|/N_s^2b^2$ is of the order of unity; this guarantees that the discussed approximation has an acceptable accuracy.
--- abstract: 'We assess the relationship between model size and complexity in the time-varying parameter VAR framework via thorough predictive exercises for the Euro Area, the United Kingdom and the United States. It turns out that sophisticated dynamics through drifting coefficients are important in small data sets while simpler models tend to perform better in sizeable data sets. To combine the best of both worlds, novel shrinkage priors help to mitigate the curse of dimensionality, resulting in competitive forecasts for all scenarios considered. Furthermore, we discuss dynamic model selection to improve upon the best performing individual model for each point in time.' author: - Martin Feldkircher - 'Florian Huber[^1]' - Gregor Kastner bibliography: - './bibtex/favar.bib' - './bibtex/mpShocks.bib' title: - 'Complexity versus simplicity: When does it pay off to introduce drifting coefficients?' - 'Sophisticated and small versus simple and sizeable: When does it pay off to introduce drifting coefficients in Bayesian VARs?' --- --------------- --------------------------------------------------------------------------------------------------------------------------- **Keywords:** Global-local shrinkage priors, density predictions, hierarchical modeling, stochastic volatility, dynamic model selection --------------- --------------------------------------------------------------------------------------------------------------------------- ---------------- --------------------- **JEL Codes:** C11, C30, C53, E52. ---------------- --------------------- Introduction ============ In contemporary econometrics, two broad trends can be observed. First, simple models are increasingly replaced by more sophisticated versions in order to avoid functional misspecification. Second, due to increased data availability, small information sets become more sizeable and models thus become higher dimensional, which in turn decreases the likelihood of omitted variable bias.
The goal of this paper is a systematic assessment of the relationship between model size and complexity in the popular time-varying parameter vector autoregressive framework with stochastic volatility (TVP-VAR-SV). Our conjecture is that the introduction of drifting coefficients can control for an omitted variable bias in small-scale models or that, conversely, larger information sets can substitute for non-linear model dynamics. Since recent research increasingly focuses on combining large models with non-linear model dynamics, appropriate solutions to combine the best of both worlds are needed to avoid overfitting and decreased predictive power. Within a Bayesian framework, it is thus necessary to develop suitable shrinkage priors for the TVP-VAR-SV case that overcome issues related to overfitting. In this paper we exploit the non-centered parameterization of the state space model [see @fruhwirth2010stochastic] to disentangle the time-invariant component of the model from the dynamic part.[^2] Shrinkage is achieved by modifying two global-local shrinkage priors to accommodate features of the Minnesota prior [@Doan1984; @Sims1998]. The first specification proposed is a modified version of the Normal-Gamma (NG) shrinkage prior [@griffin2010inference; @griffin2016hierarchical; @bitto2015achieving] while the second version modifies the recent Dirichlet-Laplace (DL) shrinkage prior [@bhattacharya2015dirichlet] to cater for lag-wise shrinkage [@Huber2017]. Both priors proposed combine recent advances on Bayesian VARs [@korobilispettenuzzo] with the literature on infinite dimensional factor models [@bhattacharya2011sparse]. Our prior controls for model uncertainty by pushing higher lag orders dynamically towards zero and at the same time applies shrinkage to the time-variation of the autoregressive coefficients and covariance parameters.
Loosely speaking, we introduce a lag-specific shrinkage parameter that controls how many lags to include and to what extent the corresponding coefficients drift over time. This lag-specific shrinkage parameter is expected to grow at an undetermined rate, increasingly placing more mass around zero for coefficients associated with higher lags of endogenous variables. By contrast, the standard implementations of the NG and the DL priors rely on a single global shrinkage parameter that pushes all coefficients to zero. To render computation feasible, we apply the algorithm put forward in [@carriero2016large] and estimate the TVP-VAR-SV on an equation-by-equation basis. This, in combination with the two proposed shrinkage priors, permits fast and reliable estimation of large-dimensional models. In an empirical exercise, we examine the forecasting properties of the TVP-VAR-SV equipped with our proposed shrinkage priors using three well-known data sets for the Euro area (EA), the United Kingdom (UK) and the United States (US). We evaluate the merits of our model approach relative to a set of other forecasting models, most notably a constant parameter Bayesian VAR with SV and a TVP-VAR with a weakly informative shrinkage prior. Since the size of the information set could play a crucial role in assessing whether time-variation is necessary, we investigate for each data set a small model that features 3 variables, a moderately sized one with 7 variables and a large model with 15 variables. Our results are three-fold: First, we show that the proposed TVP-VAR-SV shrinkage models improve one-step ahead forecasts. Allowing for time variation and using shrinkage priors leads to smaller drops in forecast performance during the global financial crisis – a finding that is also corroborated by looking at model weights in a dynamic model selection exercise.
Second, comparing the proposed priors we find that the DL prior shows a strong performance in small-scale applications, while the NG prior outperforms when larger information sets are used. This is driven by the higher degree of shrinkage the NG prior provides, which is especially important for large-scale applications. Third, we demonstrate that the larger the information set, the stronger the forecast performance of a simple, constant parameter VAR with SV. However, even here the NG-VAR-SV model turns out to be a valuable alternative, providing forecasts that are not far off those of the constant parameter competitor. To allow for different models at different points in time, we also discuss the possibility of dynamic model selection. The remainder of the paper is structured as follows. The second section sets the stage, introduces a standard TVP-VAR-SV model and highlights typical estimation issues involved. Section 3 describes in detail the prior setup adopted. Section 4 presents the necessary details to estimate the model, including an overview of the Markov chain Monte Carlo (MCMC) algorithm and the relevant conditional posterior distributions. Section 5 provides empirical results alongside the main findings of our forecasting comparison. Furthermore, it contains a discussion of dynamic model selection. Finally, the last section summarizes and concludes the paper. Econometric framework ===================== In this paper, the model of interest is a TVP-VAR with stochastic volatility (SV) in the spirit of [@primiceri2005time]. The model summarizes the joint dynamics of an $M$-dimensional zero-mean vector of macroeconomic time series $\{\boldsymbol{y}_t\}_{t=1}^T$ as follows:[^3] $$\boldsymbol{y}_t = \boldsymbol{A}_{1t} \boldsymbol{y}_{t-1}+\dots+\boldsymbol{A}_{pt} \boldsymbol{y}_{t-p} + \boldsymbol{\varepsilon}_t,~\boldsymbol{\varepsilon}_t \sim \mathcal{N}(\boldsymbol{0}_M, \boldsymbol{\Sigma}_t).
\label{eq: obs1}$$ The $M \times M$ matrix $\boldsymbol{A}_{jt}$ ($j=1,\dots,p$) contains time-varying autoregressive coefficients, $\boldsymbol{\varepsilon}_t$ is a vector white noise error with zero mean and a time-varying variance-covariance matrix $\boldsymbol{\Sigma}_t= \boldsymbol{H}_t \boldsymbol{V}_t \boldsymbol{H}_t'$. $\boldsymbol{H}_t$ is a lower unitriangular matrix and $\boldsymbol{V}_t=\text{diag}(e^{v_{1t}},\dots, e^{v_{Mt}})$ denotes a diagonal matrix with time-varying shock variances. The model in (\[eq: obs1\]) can be cast in a standard regression form as follows, $$\boldsymbol{y}_t = \boldsymbol{A}_t \boldsymbol{x}_t + \boldsymbol{\varepsilon}_t, \label{eq: obs2}$$ with $\boldsymbol{A}_t = (\boldsymbol{A}_{1t}, \dots, \boldsymbol{A}_{pt})$ being an $M\times (pM)$ matrix and $\boldsymbol{x}_t = (\boldsymbol{y}'_{t-1},\dots, \boldsymbol{y}'_{t-p})'$. Following [@cogley2005drifts] we can rewrite (\[eq: obs2\]) as $$\boldsymbol{y}_t - \boldsymbol{A}_t \boldsymbol{x}_t = \boldsymbol{H}_t \boldsymbol{\eta}_t, \text{ with } \boldsymbol{\eta}_t \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{V}_t),$$ and multiplying from the left with $\tilde{\boldsymbol{H}}_t := \boldsymbol{H}_t^{-1}$ yields $$\tilde{\boldsymbol{H}}_t \boldsymbol{\varepsilon}_t = \boldsymbol{\eta}_t.$$ For further illustration, note that the first two equations of the system are given by $$\begin{aligned} \varepsilon_{1t} &= \eta_{1t},\\ \tilde{h}_{2 1,t} \varepsilon_{1t} + \varepsilon_{2t}&= \eta_{2t}, \label{secondeq}\end{aligned}$$ with $\tilde{h}_{2 1,t}$ denoting the second element of the first column of $\tilde{\boldsymbol{H}}_t$. Equation (\[secondeq\]) can be rewritten as $$y_{2t} = \boldsymbol{A}_{2 \bullet, t} \boldsymbol{x}_t - \tilde{h}_{2 1,t} \varepsilon_{1t} + \eta_{2t}, \label{secondeqaug}$$ where $\boldsymbol{A}_{i \bullet, t}$ denotes the $i$th row of $\boldsymbol{A}_t$.
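As a quick numerical sanity check of this triangularization (a minimal sketch, not taken from the paper; the helper name is ours), one can verify for $M=2$ that premultiplying by $\tilde{\boldsymbol{H}}_t = \boldsymbol{H}_t^{-1}$ and mapping back recovers the structural errors:

```python
def check_triangularization(h21, eps1, eps2):
    """For M = 2 with H_t lower unitriangular, verify that eta = H^{-1} eps
    gives eta_2 = h~_{21} eps_1 + eps_2 with h~_{21} = -h_{21}, and that
    eps = H eta recovers the structural errors (round trip)."""
    h21_tilde = -h21          # second element of the first column of H^{-1}
    eta1 = eps1
    eta2 = h21_tilde * eps1 + eps2
    eps1_rec = eta1           # round trip: eps = H eta
    eps2_rec = h21 * eta1 + eta2
    return (eta1, eta2), (eps1_rec, eps2_rec)

(eta1, eta2), (e1, e2) = check_triangularization(0.5, 1.0, 2.0)
assert (e1, e2) == (1.0, 2.0)
```

Because $\boldsymbol{H}_t$ is unitriangular, its inverse is available in closed form, which is what makes the equation-by-equation augmentation with preceding residuals exact rather than approximate.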
More generally, the $i$th equation of the system is a standard regression model augmented with the residuals of the preceding $i-1$ equations, $$y_{it} = \boldsymbol{A}_{i \bullet, t} \boldsymbol{x}_t - \sum_{s=1}^{i-1} \tilde{h}_{i s, t} \varepsilon_{st} + \eta_{it}.$$ Thus, the $i$th equation is a standard regression model with $K_i = pM + i-1$ explanatory variables given by $\boldsymbol{z}_{it} = ( \boldsymbol{x}_t', -\varepsilon_{1t}, \dots, -\varepsilon_{i-1,t})'$ and a $K_i$-dimensional time-varying coefficient vector $\boldsymbol{B}_{it}= ( \boldsymbol{A}_{i\bullet,t}, \tilde{h}_{i1, t}, \dots, \tilde{h}_{i i-1, t})'$. For each equation $i>1$, the corresponding dynamic regression model is then given by $$y_{it} = \boldsymbol{B}'_{it} \boldsymbol{z}_{it} + \eta_{it}. \label{eq: regression_i}$$ The states in $\boldsymbol{B}_{it}$ evolve according to a random walk process, $$\boldsymbol{B}_{it} = \boldsymbol{B}_{it-1}+\boldsymbol{v}_t,~\text{ with } \boldsymbol{v}_t \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{\Omega}_i), \label{eq: state_i}$$ where $\boldsymbol{\Omega}_i = \text{diag}(\omega_1, \dots, \omega_{K_i})$ is a diagonal variance-covariance matrix. Note that if a given diagonal element of $\boldsymbol{\Omega}_i$ is zero, the corresponding regression coefficient is assumed to be constant over time. Typically, conjugate inverted Gamma priors are specified on $\omega_j$ ($j=1,\dots,K_i$). However, as [@fruhwirth2010stochastic] demonstrate, this choice is suboptimal if $\omega_j$ equals zero, since the inverted Gamma distribution artificially places prior mass away from zero and thus introduces time-variation even if the likelihood points towards a constant parameter specification. To alleviate such concerns, [@fruhwirth2010stochastic] exploit the non-centered parameterization of Eqs. 
(\[eq: regression\_i\]) and (\[eq: state\_i\]), $$y_{it} = \boldsymbol{B}'_{i0} \boldsymbol{z}_{it}+\tilde{\boldsymbol{B}}'_{it} \sqrt{\boldsymbol{\Omega}_i} \boldsymbol{z}_{it} + \eta_{it}. \label{eq: regression_NC}$$ We let $\sqrt{\boldsymbol{\Omega}_i}$ denote the matrix square root such that $\boldsymbol{\Omega}_i=\sqrt{\boldsymbol{\Omega}_i}\sqrt{\boldsymbol{\Omega}_i}$ and $\tilde{\boldsymbol{B}}_{it}$ has typical element $j$ given by $\tilde{b}_{ij,t}=\frac{ b_{ij,t}-b_{ij,0}}{\sqrt{\omega_{ij}}}$. The corresponding state equation is given by $$\tilde{\boldsymbol{B}}_{it} = \tilde{\boldsymbol{B}}_{it-1}+\boldsymbol{u}_{it},~\text{ with } \boldsymbol{u}_{it} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I}_{K_i}).$$ Moving from the centered to the non-centered parameterization allows us to treat the (signed) square root of the state innovation variances as additional regression parameters to be estimated. Moreover, this parameterization also enables us to control for model uncertainty associated with whether a given element of $\boldsymbol{z}_{it}$, i.e., both autoregressive coefficients and covariance parameters, should be included or excluded from the model. This can be achieved by noting that if $b_{ij, 0} \neq 0$ the $j$th regressor is included. The second dimension of model uncertainty stems from the empirically relevant question of whether a given regression coefficient should be constant or time-varying. Thus, if $\omega_{ij} \neq 0$, the coefficient on the $j$th regressor drifts smoothly over time. Especially for forecasting applications, appropriately selecting which subset of regression coefficients should be constant or time-varying proves to be one of the key determinants in achieving superior forecasting properties [@d2013macroeconomic; @korobilis2013hierarchical; @belmonte2014hierarchical; @bitto2015achieving]. Finally, we also have to introduce a suitable law of motion for the diagonal elements of $\boldsymbol{V}_t$.
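Before turning to the volatilities, the mapping between the non-centered and the centered states can be illustrated with a minimal stdlib-only sketch (function name and parameter values are ours, not from the paper): the centered path is $b_t = b_0 + \sqrt{\omega}\,\tilde{b}_t$, and setting $\sqrt{\omega}=0$ collapses it to the constant $b_0$, which is exactly the variance-selection mechanism just described.

```python
import random

def noncentered_states(b0, sqrt_omega, T, seed=0):
    """Simulate the non-centered states b~_t as a random walk with standard
    normal innovations and map them to the centered states
    b_t = b0 + sqrt_omega * b~_t."""
    rng = random.Random(seed)
    b_tilde = [0.0]
    for _ in range(T):
        b_tilde.append(b_tilde[-1] + rng.gauss(0.0, 1.0))
    return b_tilde, [b0 + sqrt_omega * bt for bt in b_tilde]

# sqrt_omega = 0 yields a constant coefficient path
_, path = noncentered_states(0.7, 0.0, 50)
assert all(x == 0.7 for x in path)
```

By construction the increments of the centered path are exactly $\sqrt{\omega}$ times standard normal innovations, so shrinking $\sqrt{\omega}$ towards zero shrinks the amount of time variation rather than the coefficient level itself.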
Here we assume that the $v_{it}$s evolve according to independent AR(1) processes, $$v_{it}=\mu_i + \rho_i (v_{it-1}-\mu_i) + w_{it}, \quad w_{it} \sim \mathcal{N}(0,\sigma_i^2), \label{eq: stateLOGVOLA}$$ for $i=1,\dots, M$. The parameter $\mu_i$ denotes the mean of the $i$th log variance, $\rho_i$ is the corresponding persistence parameter and $\sigma_i^2$ stands for the error variance of the relevant shocks. Prior specification =================== We opt for a fully Bayesian approach to estimation, inference, and prediction. This calls for the specification of suitable priors on the parameters of the model. Typically, inverse Gamma or inverted Wishart priors are used for the state innovation variances in (\[eq: state\_i\]). However, as discussed above, such priors bound the diagonal elements of $\boldsymbol{\Omega}_i$ artificially away from zero, always inducing at least some movement in the parameters of the model. We proceed by utilizing two flexible global-local (GL) shrinkage priors [see @polson2010shrink] on $\boldsymbol{B}_{i0}$ and $\boldsymbol{\omega}_i = (\omega_{i1}, \dots, \omega_{i K_i})'$. A GL shrinkage prior comprises a global scaling parameter that pushes all elements of the coefficient vector towards zero and a set of local scaling parameters that enable coefficient-specific deviations from this general pattern. The Normal-Gamma shrinkage prior -------------------------------- The first prior we consider is a modified variant of the Normal-Gamma (NG) shrinkage prior proposed in [@griffin2010inference] and adopted within the general class of state space models in [@bitto2015achieving]. In what follows we let $\boldsymbol{a}_{0}=\text{vec}(\boldsymbol{A}_{0})$ denote the time-invariant part of the VAR coefficients with typical element $a_{0j}$ for $j=1,\dots,K=pM^2$. The corresponding signed square root of the state innovation variance is consequently denoted by $\pm \sqrt{\omega}_{j}$ or simply $\sqrt{\omega}_{j}$.
Thus, $\sqrt{\omega}_{j}$ crucially determines the amount of time variation in the $j$th element of $\boldsymbol{a}_t$. With this in mind, our prior specification is a scale mixture of Gaussians, $$\begin{aligned} a_{0j}| \tau_{a j}^2, \lambda_l &\sim \mathcal{N}(0, 2/\lambda_l ~ \tau_{a j}^2), \quad \tau_{a j}^2 \sim \mathcal{G}(\vartheta_{l}, \vartheta_{l})\\ \sqrt{\omega}_{j}| \tau_{\omega j}^2, \lambda_l &\sim \mathcal{N}(0, 2/\lambda_l ~ \tau_{\omega j}^2), \quad \tau_{\omega j}^2 \sim \mathcal{G}(\vartheta_{ l},\vartheta_{ l})\\ \lambda_l &= \prod_{s=1}^l \nu_s,~\nu_s \sim \mathcal{G}(c_\lambda, d_\lambda),\end{aligned}$$ where $\tau_{a j}^2$ and $\tau_{\omega j}^2$ denote a set of local scaling parameters that follow a Gamma distribution and $\lambda_l$ is a lag-specific shrinkage parameter. Thus, if the $j$th element of $\boldsymbol{a}_0$ is related to the $l$th lag of the endogenous variables, $\lambda_{l}$ applies a lag-specific degree of shrinkage to all coefficients associated with $\boldsymbol{y}_{t-l}$ as well as the corresponding standard deviations $\sqrt{\omega}_j$. The hyperparameter $\vartheta_{l}= \vartheta/l^2$ also depends on the lag length of the system and controls the excess kurtosis of the marginal prior, $$p(a_{0j}|\lambda_l) = \int p(a_{0j}|\tau_{a j}^2 ,\lambda_l) d \tau_{a j}^2,$$ obtained after integrating out the local scaling parameters. For the marginal prior, $\lambda_l$ controls the overall degree of shrinkage. Lower values of $\vartheta_{l}$ place increasing prior mass on zero while at the same time leading to heavy tails of $p(a_{0j}|\lambda_l)$. Thus, our specification implies that with increasing lag length we increasingly place more mass on zero while maintaining heavy tails. In our case, we specify $\lambda_l$ to be a lag-wise shrinkage parameter that follows the multiplicative Gamma process proposed in [@bhattacharya2011sparse],[^4] with $c_\lambda$ and $d_\lambda$ denoting hyperparameters.
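The multiplicative construction $\lambda_l = \prod_{s\le l}\nu_s$ is easy to simulate directly. The sketch below (illustrative hyperparameters $c_\lambda=2$, $d_\lambda=1$; not values from the paper) confirms that $\lambda_l$, and hence the shrinkage exerted through the prior variance $2/\lambda_l$, increases on average with the lag order:

```python
import random

def lagwise_shrinkage(p, c_lam, d_lam, rng):
    """One draw of the lag-specific shrinkage parameters
    lambda_l = prod_{s<=l} nu_s with nu_s ~ Gamma(c_lam, rate=d_lam)."""
    lam, out = 1.0, []
    for _ in range(p):
        # random.gammavariate takes a *scale* parameter, hence 1/d_lam
        lam *= rng.gammavariate(c_lam, 1.0 / d_lam)
        out.append(lam)
    return out

rng = random.Random(42)
lams = lagwise_shrinkage(4, 2.0, 1.0, rng)
assert all(l > 0 for l in lams)
```

With these values $E[\nu_s] = c_\lambda/d_\lambda = 2$, so $E[\lambda_l] = 2^l$: the implied prior variance $2/\lambda_l$ shrinks geometrically in the lag order on average, while individual draws can still deviate.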
As long as $\nu_s$ exceeds unity, this prior stochastically introduces more shrinkage for higher lag orders. Note that $\lambda_l$ simultaneously pulls all elements in $\boldsymbol{a}_{0}$ associated with the $l$th lag and the corresponding $\sqrt{\omega}_j$s to zero. This implies that if a given lag of the endogenous variables is not included in the model, time-variation is also less likely. However, it could be the case that a given element in $\boldsymbol{a}_{0}$ associated with a higher lag order might be important to explain $\boldsymbol{y}_t$. In that case, the local scaling parameters introduce sufficient flexibility to pull posterior mass away from zero, enabling non-zero regression signals if necessary. On the covariance parameters $\tilde{h}_{is,0}~(i=2,\dots, M; s=pM+1,\dots, K_i)$ and the associated innovation standard deviations $\gamma_{is} = \sqrt{\omega}_{i s}$ we impose the standard implementation of the NG prior. To simplify prior implementation we collect the $v=M (M-1)/2$ free covariance parameters in a vector $\tilde{\boldsymbol{h}}_0$ and the corresponding elements of $\boldsymbol{\Omega}=\text{diag}(\boldsymbol{\Omega}_1, \dots, \boldsymbol{\Omega}_M)$ in a $v$-dimensional vector $\boldsymbol{\gamma}$ with typical elements $\tilde{h}_{i0}$ and $\gamma_{i}$, $$\begin{aligned} \tilde{h}_{i0}|\tau_{h i}^2, \varpi &\sim \mathcal{N}(0, 2/\varpi ~ \tau_{h i}^2), \quad \tau_{h i}^2 \sim \mathcal{G}(\vartheta_{h},\vartheta_{h}),\\ \gamma_{i}| \tau^2_{\gamma i}, \varpi &\sim \mathcal{N}(0, 2/\varpi ~ \tau^2_{\gamma i}), \quad \tau^2_{\gamma i} \sim \mathcal{G}(\vartheta_{h},\vartheta_{h}),\\ \varpi &\sim \mathcal{G}(c_{\varpi}, d_{\varpi}).\end{aligned}$$ Here, $\tau_{h i}^2$ and $\tau^2_{\gamma i}$ are local scaling parameters and $\varpi$ is a global shrinkage parameter that pushes all covariance parameters and the corresponding state innovation standard deviations across equations to zero.
The hyperparameter $\vartheta_h$ again controls the excess kurtosis of the marginal prior. Note that this prior also captures several features of the Minnesota prior [@Doan1984; @Sims1998] since it encodes the notion that more distant lags appear to be less relevant to predict the current value of $\boldsymbol{y}_t$. However, as opposed to the deterministic penalty function on higher lag orders introduced in a standard Minnesota prior, our specification entails an increasing degree of shrinkage in a stochastic manner, effectively allowing for deviations if the data suggest it. The Dirichlet-Laplace shrinkage prior ------------------------------------- The NG prior possesses good empirical properties. However, from a theoretical point of view its properties are still not well understood. In principle, GL shrinkage priors aim to approximate a standard spike and slab prior [@george1993variable; @george2008bayesian] by introducing suitable mixing distributions on the local and global scaling parameters of the model. [@bhattacharya2015dirichlet] introduce a prior specification and analyze its properties within the stylized normal means problem. Their prior, the Dirichlet-Laplace (DL) shrinkage prior, excels both in theory and in empirical applications, especially in very high dimensions. Thus, it seems well suited for the TVP-VAR-SV, given the large-dimensional parameter and state space.
Similarly to the NG prior, the DL prior also depends on a set of global and local shrinkage parameters, $$\begin{aligned} a_{0j}| \psi_{a j}, \xi^2_{a j}, \tilde{\lambda}_l &\sim \mathcal{N}(0, \psi_{a j} \xi^2_{a j} / \tilde{\lambda}_l^2), \quad \psi_{a j} \sim {Exp}(1/2), \quad \xi_{a j} \sim Dir(n_a,\dots,n_a),\\ \sqrt{\omega}_{j}| \psi_{\omega j}, \xi^2_{\omega j}, \tilde{\lambda}_l &\sim \mathcal{N}(0, \psi_{\omega j} \xi^2_{\omega j} / \tilde{\lambda}_l^{2}), \quad \psi_{\omega j} \sim {Exp}(1/2), \quad \xi_{\omega j} \sim Dir(n_a,\dots,n_a),\\ \tilde{\lambda}_l &= \prod_{s=1}^l \tilde{\nu}_s,~\tilde{\nu}_s \sim \mathcal{G}(c_\lambda, d_\lambda). \label{eq: globalpriorDL}\end{aligned}$$ Here, for $s\in\{a, \omega\}$, the $\psi_{sj}$ are again local scaling parameters and $\xi_{s j}$ constitutes an auxiliary scaling parameter defined on the $(K-1)$-dimensional unit simplex $\mathcal{S}^{K-1}=\{\boldsymbol{x}=(x_1,\dots,x_K)': {x}_{j} \ge 0, \sum_{j=1}^{K} {x}_{j}=1 \}$ with $\boldsymbol{\xi}_s=(\xi_{s1},\dots, \xi_{sK})'$. The lag-specific shrinkage parameter $\tilde{\lambda}_l$ is defined analogously to the NG prior. Our specification of the global shrinkage parameter differs from the original implementation by assuming that $\tilde{\lambda}_l$ is applied to a subset of the regression coefficients only; the original variant of the prior features one single global shrinkage coefficient. The parameter $n_a$ controls the overall tightness of the prior. [@bhattacharya2015dirichlet] show that if $n_a=K^{-(1+\epsilon)}$ for $\epsilon$ close to zero, the corresponding prior displays excellent theoretical shrinkage properties.
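A single joint draw from such a prior can be sketched with the standard library, using the usual normalized-Gamma representation of the Dirichlet distribution. The function and all parameter values below are illustrative only, and the lag-specific structure is suppressed (one common $\tilde{\lambda}$):

```python
import random

def draw_dl_prior(K, n_a, lam_tilde, seed=0):
    """One joint draw from a (lag-blind) DL prior:
    psi_j ~ Exp(1/2), xi ~ Dir(n_a, ..., n_a) via normalized Gammas,
    a_j | psi, xi ~ N(0, psi_j * xi_j**2 / lam_tilde**2)."""
    rng = random.Random(seed)
    psi = [rng.expovariate(0.5) for _ in range(K)]      # Exp(1/2), mean 2
    g = [rng.gammavariate(n_a, 1.0) for _ in range(K)]
    s = sum(g)
    xi = [x / s for x in g]                             # Dirichlet(n_a,...,n_a)
    a = [rng.gauss(0.0, (psi[j] ** 0.5) * xi[j] / lam_tilde)
         for j in range(K)]
    return a, xi

a, xi = draw_dl_prior(K=10, n_a=0.1, lam_tilde=1.0)
assert abs(sum(xi) - 1.0) < 1e-9   # xi lies on the unit simplex
```

Because small values of $n_a$ concentrate the Dirichlet weights on a few coordinates, most components of a draw are pulled very close to zero while a handful retain non-negligible prior variance, which is the near-sparsity the DL construction is designed to deliver.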
For the variance-covariance matrix we also impose the DL prior, $$\begin{aligned} \tilde{h}_{i 0}|\psi_{h i}, \xi^2_{h i}, \tilde{\varpi} &\sim \mathcal{N}(0, \psi_{h i} \xi^2_{h i} / \tilde{\varpi}^{2}), \quad \psi_{h i} \sim {Exp}(1/2), \quad \xi_{h i} \sim Dir(n_h, \dots, n_h),\\ \gamma_{i}| \psi_{\gamma i}, \xi^2_{\gamma i}, \tilde{\varpi} &\sim \mathcal{N}(0, \psi_{\gamma i} \xi^2_{\gamma i} / \tilde{\varpi}^{2}), \quad \psi_{\gamma i} \sim {Exp}(1/2), \quad \xi_{\gamma i} \sim Dir(n_h, \dots, n_h),\\ \tilde{\varpi} &\sim \mathcal{G}^{-1}(2 v n_h, 1/2). \end{aligned}$$ The local shrinkage parameters $\psi_{si}$ and $\xi^2_{s i}$ for $s \in \{h, \gamma\}$ are defined analogously to the case of the regression coefficients described above. We let $\tilde{\varpi}$ denote a global shrinkage parameter with large values implying heavy shrinkage on the covariance parameters of the model. The main difference between the NG and the DL prior is the presence of the Dirichlet components, which introduce even more flexibility. [@bhattacharya2015dirichlet] show that in the framework of the stylized normal means problem this specification yields excellent posterior contraction rates in light of a sparse data generating process. Within an extensive simulation exercise they moreover provide some evidence that this prior also works well in practice. Finally, the prior setup on the coefficients in the state equation of the log-volatilities closely follows [@kastner2016dealing]. Specifically, we place a weakly informative Gaussian prior on $\mu_i$, $\mu_i \sim \mathcal{N}(0,10^2)$ and a Beta prior on $\frac{\rho_i +1}{2} \sim \mathcal{B}(25, 1.5)$. Additionally, $\sigma_i^2 \sim \mathcal{G}(1/2, 1/2)$ introduces some shrinkage on the process innovation variances of the log-volatilities. This setup is used for all equations. Bayesian inference ================== The joint posterior distribution of our model is analytically intractable.
Fortunately, however, the full conditional posterior distributions mostly belong to well-known families of distributions, implying that we can set up a conceptually straightforward Gibbs sampling algorithm to estimate the model. A brief sketch of the Markov chain Monte Carlo algorithm -------------------------------------------------------- Our algorithm is related to the MCMC scheme put forward in [@carriero2016large] and estimates the latent states on an equation-by-equation basis. Specifically, conditional on a suitable set of initial conditions, the algorithm cycles through the following steps: 1. Draw $(\boldsymbol{B}_{i0}', \sqrt{\omega}_{i1},\dots, \sqrt{\omega}_{i K_i})'$ for $i=1,\dots,M$ from $\mathcal{N}(\boldsymbol{\mu}_{B i}, \boldsymbol{V}_i)$ with $\boldsymbol{V}_i= (\boldsymbol{Z}'_i\boldsymbol{Z}_i+ \underline{\boldsymbol{V}}_i^{-1})^{-1}$ and $\boldsymbol{\mu}_{B i}=\boldsymbol{V}_i \boldsymbol{Z}'_i \boldsymbol{Y}_i$. We let $\boldsymbol{Z}_i$ be a $T \times (2 K_i)$ matrix with typical $t$th row $[\boldsymbol{z}'_{it}, (\tilde{\boldsymbol{B}}_{it} \odot\boldsymbol{z}_{it})']~e^{-(v_{it}/2)}$, $\boldsymbol{Y}_i$ is a $T$-dimensional vector with element $y_{it} ~e^{-(v_{it}/2)}$, and $\underline{\boldsymbol{V}}_i$ is a prior covariance matrix that depends on the prior specification adopted. Note that in contrast to [@carriero2016large] who sample the VAR parameters in ${\boldsymbol{A}}_0$ and the elements of $\tilde{\boldsymbol{H}}_0$ conditionally on each other, we propose to draw these jointly which speeds up the mixing of the sampler. 2. Simulate the full history of $\{\tilde{\bm{B}}_{it}\}_{t=1}^T$ by means of a forward filtering backward sampling algorithm [see @carter1994gibbs; @fruhwirth1994data] per equation. 3. The log-volatilities and the corresponding parameters of the state equation in (\[eq: stateLOGVOLA\]) are simulated using the algorithm put forward in [@kastner2014ancillarity] via the R package stochvol [@kastner2016dealing]. 4.
Depending on the prior specification adopted, draw the parameters used to construct $\underline{\boldsymbol{V}}_i$ using the conditional posterior distributions detailed in Section \[sec:condpostNG\] (NG prior) or Section \[sec:condpostDL\] (DL prior). This algorithm produces draws from the joint posterior distribution of the states and the model parameters. In the empirical application that follows we use 30,000 iterations and discard the first 15,000 as burn-in. Full conditional posterior distributions associated with the NG prior {#sec:condpostNG} --------------------------------------------------------------------- Conditional on the full history of all latent states in our model as well as the lag-specific and global shrinkage parameters it is straightforward to show that the conditional posterior distributions of $\tau_{sj}^2$ for $s \in \{a, \omega\}$ and $j=1,\dots,K$ are given by $$\begin{aligned} \tau_{aj}^2|\bullet \sim \mathcal{GIG}(\vartheta_{l}-1/2, a_{0j}^2, \vartheta_{l} \lambda_l), \quad \tau_{\omega j}^2|\bullet \sim \mathcal{GIG}(\vartheta_{ l}-1/2, \omega_{j}^2, \vartheta_{l} \lambda_l),\end{aligned}$$ where $\bullet$ indicates conditioning on the remaining parameters and states of the model. Moreover, $\mathcal{GIG}(\zeta, \chi, \varrho)$ denotes the Generalized Inverse Gaussian distribution with density proportional to $x^{\zeta-1} \exp\{-(\chi/x+\varrho x)/2\}$. To draw from this distribution, we use the algorithm of [@hoe-ley:gen] implemented in the R package GIGrvg [@r:gig].
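For readers without access to GIGrvg, the $\mathcal{GIG}(\zeta, \chi, \varrho)$ density as parameterized above can also be sampled with a crude grid-based inverse-CDF scheme. The sketch below is purely illustrative and far less efficient and accurate than the exact generator of [@hoe-ley:gen]; the function name, grid bounds, and resolution are our own choices:

```python
import bisect
import math
import random

def gig_sample(zeta, chi, rho, n, seed=0, grid=4000):
    """Illustrative grid-based sampler for the GIG density proportional to
    x**(zeta - 1) * exp(-(chi / x + rho * x) / 2): tabulate the kernel on a
    log-spaced grid (weighting by the local bin width, proportional to x)
    and invert the resulting discrete CDF."""
    lo, hi = 1e-6, 50.0
    xs = [lo * (hi / lo) ** (i / (grid - 1)) for i in range(grid)]
    # kernel times bin width: x**(zeta - 1) * exp(...) * x
    w = [x ** zeta * math.exp(-(chi / x + rho * x) / 2) for x in xs]
    total = sum(w)
    cdf, acc = [], 0.0
    for v in w:
        acc += v / total
        cdf.append(acc)
    rng = random.Random(seed)
    return [xs[min(bisect.bisect_left(cdf, rng.random()), grid - 1)]
            for _ in range(n)]

draws = gig_sample(0.5, 1.0, 1.0, 1000)
assert all(d > 0 for d in draws)
```

As a sanity check, for $\zeta=1/2$, $\chi=\varrho=1$ the GIG mean is $K_{3/2}(1)/K_{1/2}(1)=2$, and the empirical mean of the draws lands close to that value despite the crude discretization.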
The conditional posteriors of the local scalings for the covariance parameters and their corresponding innovation standard deviations also follow GIG distributions, $$\begin{aligned} \tau_{h i}^2|\bullet \sim \mathcal{GIG}(\vartheta_h-1/2, \tilde{h}_{i 0}^2, \vartheta_h \varpi), \quad \tau_{\gamma i}^2|\bullet \sim \mathcal{GIG}(\vartheta_h-1/2, \gamma_{i}^2, \vartheta_h \varpi).\end{aligned}$$ Concerning the sampling of $\nu_l$, note that combining each component of the Gamma likelihood given by $p(\tau^2_{aj}, \tau^2_{\omega j}|\nu_l, \lambda_{l-1})=p(\tau^2_{aj}|\nu_l, \lambda_{l-1}) \times p(\tau^2_{\omega j}|\nu_l, \lambda_{l-1})$ with the Gamma prior $p(\nu_l)$ yields a conditional posterior that itself follows a Gamma distribution, $$\nu_1 |\bullet \sim \mathcal{G}\left\lbrace c_\lambda+ 2 \vartheta_{1} M^2, d_{\lambda}+\frac{\vartheta_{1}}{2} \sum_{j \in \mathcal{A}_1} (\tau^2_{aj}+\tau^2_{\omega j})\right\rbrace~\text{ for } l=1,$$ where $\mathcal{A}_1$ denotes an index set that allows selecting all elements in $\boldsymbol{A}_0$ and $\sqrt{\boldsymbol{\Omega}}$ associated with the first lag of the endogenous variables. For lags $l>1$, the conditional posterior is also Gamma distributed, $$\nu_l | \lambda_{l-1}, \bullet \sim \mathcal{G}\left\lbrace c_\lambda+ 2 \vartheta_{l} M^2, d_{\lambda}+\lambda_{l-1} \frac{\vartheta_{l}}{2}\sum_{j \in \mathcal{A}_l} (\tau^2_{aj}+\tau^2_{\omega j}) \right\rbrace~\text{ for } l>1.$$ Likewise, the conditional posterior of $\varpi$ is given by $$\varpi|\bullet \sim \mathcal{G}\left\{c_\varpi+ 2 \vartheta_h v, d_\varpi + \frac{\vartheta_h}{2} \sum_{i=1}^v (\tau^2_{hj}+\tau^2_{\gamma j}) \right\}.$$ Full conditional posterior distributions associated with the DL prior {#sec:condpostDL} --------------------------------------------------------------------- We start by outlining the conditional posterior distribution of $\psi_{aj}$.
Similarly to the NG case, [@bhattacharya2015dirichlet] show that $\psi_{aj}$ and $\psi_{\omega j}$ follow a GIG distribution, $$\begin{aligned} \psi_{aj}|\bullet &\sim \mathcal{GIG}(1/2, |a_{j0}| \tilde{\lambda}_l/\xi_{aj}, 1), \quad \psi_{\omega j}|\bullet \sim \mathcal{GIG}(1/2, |\sqrt{\omega}_{j}| \tilde{\lambda}_l/\xi_{\omega j} ,1).\end{aligned}$$ For the Dirichlet components, the conditional posterior distribution is obtained by sampling a set of $K$ auxiliary variables $N_{aj}, N_{\omega j}~(j=1,\dots,K)$, $$\begin{aligned} N_{aj}|\bullet &\sim \mathcal{GIG}(n_a-1, 2 |a_{j0}|,1),\quad N_{\omega j}|\bullet \sim \mathcal{GIG}(n_a-1, 2 |\sqrt{\omega}_{j}|,1).\end{aligned}$$ After obtaining the $K$ scaling parameters we set $\xi_{aj}= N_{aj}/ N_a$ and $\xi_{\omega j}= N_{\omega j}/N_\omega$ with $N_a=\sum_{j=1}^K N_{aj}$ and $ N_\omega=\sum_{j=1}^K N_{\omega j}$. The lag-specific shrinkage parameters under the DL prior are obtained by stating the DL prior in its hierarchical form, $$\begin{aligned} a_{0j}|\tilde{\lambda}_l \sim DE(\xi_{aj} / \tilde{\lambda}_l), \quad \xi_{aj} \sim Dir(n_a,\dots,n_a),\\ \sqrt{\omega}_{j}|\tilde{\lambda}_l \sim DE(\xi_{\omega j} / \tilde{\lambda}_l), \quad \xi_{\omega j} \sim Dir(n_a,\dots,n_a),\end{aligned}$$ with $DE(\lambda)$ denoting the double exponential distribution whose density is proportional to $\lambda^{-1} e^{-|x|/\lambda}$. Using the same prior representation for $\sqrt{\omega}_j$ and noting that $p(a_{0j}, \sqrt{\omega}_{j}| \tilde{\lambda}_l, \xi_{aj}, \xi_{\omega j})= p(a_{0j}| \tilde{\lambda}_l, \xi_{aj}, \xi_{\omega j}) \times p(\sqrt{\omega}_{j}| \tilde{\lambda}_l, \xi_{aj}, \xi_{\omega j})$ yields\ $$\prod_{j \in \mathcal{A}_l} p(a_{0j}, \sqrt{\omega_j}| \tilde{\lambda}_l, \xi_{\omega j}, \xi_{a j}) = {\tilde{\lambda}_l}^{2 M^2} \exp\left\{ -{\tilde{\lambda}_l}\sum_{j \in \mathcal{A}_l}\left(\frac{|a_{0j}|}{\xi_{aj}}+\frac{|\sqrt{\omega}_j|}{\xi_{\omega j}}\right)\right\}.
\label{eq:likelihoodLAMBDA}$$\ Combining this likelihood with the Gamma prior for $l=1$ leads to $$p(\tilde{\nu}_1|\bullet) \propto \tilde\nu_1^{(c_\lambda+2 M^2)-1} \exp \left\lbrace -\left[d_\lambda + \sum_{j \in \mathcal{A}_1} \left( \frac{|a_{0j}|}{\xi_{aj}}+\frac{|\sqrt{\omega}_{j}|}{\xi_{\omega j}}\right)\right]\tilde\nu_1 \right\rbrace,$$ which is the kernel of a Gamma density $\mathcal{G}\left\{c_\lambda+2M^2, d_\lambda + \sum_{j \in \mathcal{A}_1} \left(\frac{|a_{0j}|}{\xi_{aj}}+\frac{|\sqrt{\omega}_{j}|}{\xi_{\omega j}} \right)\right\}$. For higher lag orders $l>1$ we obtain $$p(\tilde{\nu}_l| \tilde{\lambda}_{l-1}, \bullet) \propto \tilde\nu_l^{(c_\lambda+2 M^2)-1} \exp \left\lbrace -\left[d_\lambda + \tilde{\lambda}_{l-1} \sum_{j \in \mathcal{A}_l}\left( \frac{|a_{0j}|}{\xi_{aj}}+\frac{|\sqrt{\omega}_{j}|}{\xi_{\omega j}}\right)\right]\tilde\nu_l \right\rbrace, \label{eq: kernellambda>1}$$ i.e., a $\mathcal{G}\left\{c_\lambda+2M^2, d_\lambda + \tilde{\lambda}_{l-1} \sum_{j \in \mathcal{A}_l} \left( \frac{|a_{0j}|}{\xi_{aj}}+\frac{|\sqrt{\omega}_{j}|}{\xi_{\omega j}}\right)\right\}$ distribution. The conditional posterior distributions of $\psi_{hi}$ and $\psi_{\gamma i}$ for $i=1,\dots,v$ are given by $$\begin{aligned} \psi_{h i}|\bullet &\sim \mathcal{GIG}\left(1/2, |\tilde{h}_{i0}|/ (\tilde{\varpi} \zeta_{h i}),1 \right),\quad \psi_{\gamma i}|\bullet \sim \mathcal{GIG}\left(1/2, |\gamma_{i}|/(\tilde{\varpi} \zeta_{\gamma i}),1\right).\end{aligned}$$ Again, we introduce a set of auxiliary variables $N_{hi}, N_{\gamma i}$, $$\begin{aligned} N_{hi}|\bullet \sim \mathcal{GIG}(n_h-1, 2 |\tilde{h}_{i0}|,1),\quad N_{\gamma i}|\bullet \sim \mathcal{GIG}(n_h-1, 2 |\gamma_{i}|,1),\end{aligned}$$ and obtain draws from $\xi_{hi}$ and $\xi_{\gamma i}$ by setting $\xi_{hi}= N_{hi}/\sum_{i=1}^v N_{hi}$ and $\xi_{\gamma i}= N_{\gamma i}/\sum_{i=1}^v N_{\gamma i}$.
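The auxiliary-variable step for the Dirichlet components can be sketched as follows. This is a schematic illustration under our own naming and toy coefficient values (not the paper's code); it uses SciPy's `geninvgauss` for GIG$(p, a, b)$ draws with density proportional to $x^{p-1}e^{-(a/x + bx)/2}$, and assumes all current coefficient draws are nonzero:

```python
import numpy as np
from scipy.stats import geninvgauss

def dirichlet_scalings(coefs, n_a, rng):
    """Sample auxiliary variables N_j ~ GIG(n_a - 1, 2|coef_j|, 1) and
    normalise them to obtain the Dirichlet components xi_j of the DL prior."""
    a = 2.0 * np.abs(coefs)   # chi-part of the GIG; requires coefs != 0
    # map GIG(p, a, 1) onto geninvgauss: c = sqrt(a*1), scale = sqrt(a/1)
    n = geninvgauss.rvs(n_a - 1.0, np.sqrt(a), scale=np.sqrt(a),
                        random_state=rng)
    return n / n.sum()        # xi_j = N_j / sum_k N_k, lies on the simplex

rng = np.random.default_rng(0)
coefs = np.array([0.8, -0.1, 0.02, 0.5])   # toy draws of the coefficients
xi = dirichlet_scalings(coefs, n_a=1.0 / coefs.size, rng=rng)
```

The normalisation step guarantees that the $\xi_j$ sum to one, as required for Dirichlet weights.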
The final component is the global shrinkage parameter on the covariance parameters and the process innovation variances, which again follows a GIG distribution, $$\tilde{\varpi} | \bullet \sim \mathcal{GIG}\left\{2 v (n_h-1), 2 \sum_{i=1}^v \left(\frac{|h_{i0}|}{\xi_{hi}}+\frac{|\gamma_{i}|}{\xi_{\gamma i}} \right),1\right\}.$$

Forecasting macroeconomic quantities for three major economies
==============================================================

In what follows we systematically assess the relationship between model size and model complexity by forecasting several macroeconomic indicators for three large economies, namely the EA, the UK and the US. In Section \[data\], we briefly describe the different data sets and discuss model specification issues. Section \[visualize\] deals with simple visual summaries of posterior sparsity in terms of the VAR coefficients and their time variation for the two shrinkage priors proposed. The main forecasting results are discussed in Section \[forecast\]. Finally, Section \[pool\] discusses the possibility of dynamically selecting among different specifications in an automatic fashion.

Data and model specification {#data}
----------------------------

We use prominent macroeconomic data sets for the EA, the UK and the US. All three data sets are on a quarterly frequency but span different periods of time. For the euro area we take data from the area wide model [@awm] and additionally include equity prices, available from 1987Q1 to 2015Q4. UK data stem from the Bank of England’s “A millennium of macroeconomic data” [@ukdata] and cover the period from 1982Q2 to 2016Q4. For the US, we use a subset of the FRED QD data base [@McCracken2016] which covers the period from 1959Q1 to 2015Q1. For each of the three countries we use three subsets: a small (3 variables), a medium (7 variables) and a large (15 variables) one. The small subset covers only real activity, prices and short-term interest rates.
The medium models additionally cover investment and consumption, the unemployment rate and either nominal or effective exchange rates. For the large models we add wages, money (measured as M2 or M3), government consumption, exports, equity prices and 10-year government bond yields. To complete the data set for the large models, we include additional variables depending on data availability for each country. For example, the UK data set offers a wide range of financial data, so we complement the large model by also including data on mortgage rates and bond spreads. For the EA data set we also include a commodity price indicator and labor market productivity, while for the US we add consumer sentiment and hours worked. In what follows we are interested not only in the relative performance of the different priors, but also in the forecasting performance using different information sets. Thus we have opted, first, to strike a good balance between different types of data (e.g., real, labor market and financial market data) and, second, to alter the variables for the large data sets only slightly. This is done to rule out that performance differences between information sets depend crucially on the type of information that is added (e.g., labor market data versus financial market data). For data that are non-stationary we take first differences; see the appendix for more details. Consistent with the literature [@cogley2005drifts; @primiceri2005time; @d2013macroeconomic] we include $p = 2$ lags of the endogenous variables in all models. Before proceeding to the empirical results, a brief word on the specific choice of the hyperparameters is in order. For the NG prior we set $\vartheta=\vartheta_h=0.1$ and $c_\lambda=1.5, d_\lambda=1$. The first choice is motivated by recent empirical evidence provided in [@Huber2017] who integrate $\vartheta$ out of the joint posterior in a Bayesian fashion.
The second choice is not critical empirically but serves to place sufficient prior mass on values of $\nu_s$ above unity. Moreover, we set $c_\varpi=d_\varpi=0.01$ to induce heavy shrinkage on the covariance parameters. For the DL prior, $c_\lambda$ and $d_\lambda$ are specified analogously to the NG case and $n_a = 1/K, n_h=1/v$. Note that if $n_a$ is set to larger values the degree of shrinkage is too small and the empirical performance of the DL prior becomes much worse.

Inspecting posterior sparsity {#visualize}
-----------------------------

![Posterior means in the large model – Euro area.[]{data-label="fig:heat_ea"}](Heatmaps/Densities_diff_DL_largeEA/heatmap_t_1_1_.pdf "fig:"){width="\textwidth"}

![Posterior means in the large model – UK.[]{data-label="fig:heat_uk"}](Heatmaps/Densities_diff_DL_largeUK/heatmap_t_1_1_.pdf "fig:"){width="\textwidth"}

![Posterior means in the large model – USA.[]{data-label="fig:heat_us"}](Heatmaps/Densities_diff_DL_largeUS/heatmap_t_1_1_.pdf "fig:"){width="\textwidth"}

Before we turn to the forecasting exercise we assess the amount of sparsity induced by our two proposed global-local shrinkage specifications, labeled TVP-SV NG and TVP-SV DL. This analysis is based on inspecting heatmaps that show the posterior mean of the coefficients as well as the posterior mean of the standard deviations that determine the amount of time variation in the dynamic regression coefficients. Figs. \[fig:heat\_ea\] to \[fig:heat\_us\] show the corresponding heatmaps. Red and blue squares indicate positive and negative values, respectively. To permit comparability we use the same scaling across priors within a given country. We start by inspecting posterior sparsity attached to the time-invariant part of the models, provided in the upper panels of Figs. \[fig:heat\_ea\] to \[fig:heat\_us\]. We generally find that the first own lag of a given variable appears to be important while the second lag is slightly less important in most equations. This can be seen by dense (i.e., colored) main-diagonal elements. Turning to variables along the off-diagonal elements, i.e.
the coefficients associated with variables $j \neq i$ in equation $i$, we find considerable evidence that the (un)employment rate as well as long-term interest rates appear to load heavily on the other quantities in most country models, as indicated by relatively dense columns associated with the first lag of unemployment and interest rates. Equations that are characterized by a large number of non-zero coefficients (i.e., dense rows) are mostly related to financial variables, namely exchange rates, equity and commodity prices. These observations are general in nature and relate to all three countries considered. In the next step we investigate sparsity in terms of the degree of time variation of the VAR coefficients (see the lower panels of Figs. \[fig:heat\_ea\] to \[fig:heat\_us\]). Here, we observe that, consistent with the dense pattern in $\boldsymbol{a}_0$, equations associated with financial variables display the largest amount of time variation. Interestingly, the results suggest that coefficients in the euro area tend to display a greater propensity to drift than the coefficients of the UK country model. Comparing the degree of shrinkage between the DL and the NG priors reveals that the latter specification induces much more sparsity in large-dimensional systems. While both priors yield rather sparse models, the findings point towards a much stronger degree of shrinkage under the NG prior. Notice that the NG prior also favors constant parameter specifications. This suggests that in large-scale applications the NG prior might be particularly useful when issues of overparameterization are more of a concern, while in smaller models the flexibility of the DL prior might be beneficial.

Forecasting results {#forecast}
-------------------

In this section we examine the forecasting performance of the proposed prior specifications.
The forecasting set-up largely follows [@Huber2017] and focuses on the one-quarter and one-year-ahead forecast horizons and three different information sets: small (3 variables), medium (7 variables) and large (15 variables). We use an expanding window and a hold-out sample of 80 quarters, which results in the following hold-out samples: 1995Q4-2015Q3 for the EA, 1997Q1-2016Q4 for the UK and 1995Q4-2015Q3 for the USA. Forecasts are evaluated using log predictive scores (LPSs), a widely used metric to measure density forecast accuracy [see e.g., @geweke2010comparing]. We compare the NG and DL specifications with a simpler constant parameter Bayesian VAR (BVAR-SV) and a time-varying parameter VAR with a loose prior setting (TVP-SV) as a general benchmark. Specifically, this benchmark model assumes that the prior on $\sqrt{\omega}_j$ is given by $${\omega}_j \sim \mathcal{G}(1/2, 1/2) \Leftrightarrow \pm \sqrt{\omega}_j \sim \mathcal{N}(0, 1).$$ On $\boldsymbol{a}_0$ and for the BVAR-SV we use the NG shrinkage prior described in Section 3. For the evaluation, we focus on the joint predictive distribution of three focal variables, namely GDP growth, inflation and short-term interest rates. This allows us to assess the predictive differences obtained by switching from small to large information sets. Fig. \[fig:lps1\] summarizes the results for the one-step-ahead forecast horizon. All panels display log predictive scores for the three focus variables relative to the TVP-SV specification. To assess the overall forecast performance over the hold-out sample, consider in particular the rightmost point in the respective figures.

![One-quarter-ahead cumulative log predictive Bayes factors over time relative to the TVP-SV-VAR without shrinkage. Top row: Small model (3 variables). Middle row: Medium model (7 variables). Bottom row: Large model (15 variables).[]{data-label="fig:lps1"}](outputinflationIR.pdf "fig:"){width="\textwidth"}

Doing so reveals that the time-varying parameter specifications, TVP-SV NG and TVP-SV DL, outperform the benchmark for all three countries and information sets, as indicated by positive log predictive Bayes factors. With the exception of the euro area and the small information set, this finding also holds true for the constant parameter VAR-SV specification. Zooming in and looking at performance differences among the priors reveals that the TVP-SV DL specification dominates in the case of small models. The TVP-SV NG prior ranks second and the constant parameter VAR-SV model performs worst. The dominance of the DL prior stems from the performance during the period of the global financial crisis 2008/09. While predictions from all model specifications worsen, they deteriorate the least for the DL specification. In particular for the EA and the UK, the dominance of the DL prior stems mainly from improved forecasts for short-term interest rates; see Figs. \[fig:marg\_lps\_ea1\] and \[fig:marg\_lps\_uk1\] in Appendix \[univLPS\]. It is worth noting that in small-dimensional models the TVP-SV specification also performs quite well and proves to be a competitive alternative to the BVAR-SV model.
This is due to the fact that parameters are allowed to move significantly with only little punishment introduced through the prior, effectively controlling for structural breaks and sharp movements in the underlying structural parameters. This result corroborates findings in [@d2013macroeconomic] and appears to support our conjecture that for small information sets, allowing for time variation dominates the detrimental effect of the large number of additional parameters to be estimated. In the next step we enlarge the information set and turn our focus to the seven-variable VAR specifications. Here, the picture changes slightly and the NG prior outperforms its competitors. Depending on the country, either the DL specification or the constant parameter VAR-SV model ranks second. For US data it pays off to use a time-varying parameter specification since – as with the small information set – the BVAR-SV model performs worst. Finally, we turn to the large VAR specifications featuring 15 variables. Here we see a picture very similar to that of the seven-variable specification. The TVP-SV NG prior yields the best forecasts, with the constant parameter model turning out to be a strong competitor. Only for US data do both time-varying parameter specifications clearly outperform the constant parameter competitor.

![Four-quarter-ahead cumulative log predictive Bayes factors over time relative to the TVP-SV-VAR with loose shrinkage. Top row: Small model (3 variables). Middle row: Medium model (7 variables). Bottom row: Large model (15 variables).[]{data-label="fig:lps4"}](outputinflationIR4step.pdf "fig:"){width="\textwidth"}

We now briefly examine forecasts for the four-quarter horizon, displayed in Fig. \[fig:lps4\]. For the small and medium-sized models, all competitors yield forecasts that are close to or worse than those of the loose shrinkage benchmark prior model. The high degree of shrinkage induced by the NG prior yields particularly poor forecasts, especially for observations that fall in the period of the global financial crisis. The picture reverses slightly when considering the large-scale models. Here, all competitors easily outperform forecasts of the loose benchmark model, implying that shrinkage pays off. Viewed over all settings, the DL prior does a fine job in balancing the degree of shrinkage across model sizes.

Improving predictions through dynamic model selection {#pool}
-----------------------------------------------------

The discussion in the previous subsection highlighted the marked heterogeneity of model performance over time. In terms of achieving superior forecasting results one could ask whether there are gains from dynamically selecting models. Following [@raftery2010online; @koop2012forecasting; @onorante2016dynamic] we perform dynamic model selection by computing a set of weights for each model within a given model size. These weights are based on the predictive likelihood for the three focus variables at $t-1$. Intuitively speaking, this combination scheme implies that if a given model performed well in predicting last quarter’s output, inflation and interest rates, it receives a higher weight in the next period.
By contrast, models that performed badly receive less weight in the model pool. We further employ a so-called forgetting factor that induces persistence in the model weights over time. This implies that the weights are shaped not only by the most recent forecast performance of the underlying models but also by their historical forecasting performance. Finally, to select a given model we simply pick the one with the highest weight. The predicted weight associated with model $i$ is computed as follows: $$\mathfrak{w}_{t|t-1, i} := \frac{\mathfrak{w}^\alpha_{t-1|t-1, i}}{\sum_{i \in \mathcal{M}} \mathfrak{w}^\alpha_{t-1|t-1, {i}}},$$ with $\alpha=0.99$ denoting a forgetting factor close to unity, while $\mathfrak{w}_{t-1|t-1, i}$ is given by $$\mathfrak{w}_{t-1|t-1, i} = \frac{\mathfrak{w}_{t-1|t-2,i}p_{t-1|t-2,i}}{\sum_{i \in \mathcal{M}} \mathfrak{w}_{t-1|t-2,i}p_{t-1|t-2,i}}.$$ Here, $p_{t-1|t-2,i}$ denotes the one-step-ahead predictive likelihood for the three focus variables in $t-1$ for model $i$ within the model space $\mathcal{M}$. Letting $t_0$ stand for the final quarter of the training sample, the initial weights $\mathfrak{w}_{t_0+1|t_0,i}$ are assumed to be equal for each model. Before proceeding to the forecasting results, Fig. \[fig:dynMod2\] shows the model weights over time. One interesting regularity for small-scale models is that especially during the crisis period, the algorithm selects the benchmark weak-shrinkage TVP-SV model. This choice, however, proves to be of a transient nature and the algorithm quickly adapts and switches back to either the TVP-SV NG or the TVP-SV DL model. We interpret this finding as reflecting the need to adjust quickly to changes in the underlying macroeconomic conditions in light of the small information set adopted.
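The two-equation weight recursion above amounts to a Bayesian update step followed by a power-tempering (forgetting) step. A minimal sketch (the function name `dms_weights` is ours; in practice the predictive likelihoods would come from the estimated models):

```python
import numpy as np

def dms_weights(w_prev, pred_lik, alpha=0.99):
    """One iteration of dynamic model selection.

    w_prev:   w_{t-1|t-2, i}, last period's predicted weights
    pred_lik: p_{t-1|t-2, i}, one-step-ahead predictive likelihoods
    Returns w_{t|t-1, i}. The forgetting factor alpha < 1 flattens the
    weights towards equality, so past performance is discounted over time.
    """
    w_post = w_prev * pred_lik
    w_post /= w_post.sum()        # update step: w_{t-1|t-1, i}
    w_pred = w_post ** alpha      # forgetting step
    return w_pred / w_pred.sum()  # prediction step: w_{t|t-1, i}

w = np.full(3, 1.0 / 3.0)                    # equal initial weights
w = dms_weights(w, np.array([0.2, 0.9, 0.4]))
selected = int(np.argmax(w))                 # pick the highest-weight model
```

With $\alpha = 1$ there is no forgetting and the weights are simply proportional to the product of prior weight and predictive likelihood.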
The TVP-SV model allows for large shifts in the underlying regression coefficients, whereas the specifications based on hierarchical shrinkage priors introduce shrinkage, which excels over the full hold-out period but proves to be detrimental during crisis episodes. For the medium and large information sets, the model weights corroborate the results reported in the previous subsection. Specifically, we see that TVP-SV NG and TVP-SV DL receive high weights during the global financial crisis while the BVAR-SV receives large shares of posterior probability during the remaining periods. This implies that during periods with overall heightened uncertainty, gains from using a time-varying parameter framework are sizable.

![Model weights over time. Top row: Small model (3 variables). Middle row: Medium model (7 variables). Bottom row: Large model (15 variables).[]{data-label="fig:dynMod2"}](Jointweights.pdf "fig:"){width="\textwidth"}

We now turn to the forecasting results using DMS, provided in Fig. \[fig:dynMod1\]. The figure shows the log predictive Bayes factors relative to the best-performing models over the whole sample period, i.e. those achieving the highest cumulative log predictive Bayes factors reported above.

![Log predictive Bayes factor of dynamic model selection relative to the best performing model over time. Top row: Small model (3 variables). Middle row: Medium model (7 variables). Bottom row: Large model (15 variables).[]{data-label="fig:dynMod1"}](Jointcomb.pdf "fig:"){width="\textwidth"}

The results indicate that dynamic model selection tends to improve forecasts throughout all model sizes and for all three country data sets. In particular, during the period of the global financial crisis, selecting from a pool of models pays off. Forecast gains during the crisis are more pronounced for the EA and the UK, whereas with US data forecasts improve more gradually over the sample period. Forecasts for the EA that are based on the large information set are less precise during the period from 2000 to 2012 compared to the benchmark models. This might be related to the creation of the euro, which in turn has triggered a fundamental shift in the joint dynamics of the euro area’s macro model. Due to the persistence in the models’ weights, the model selection algorithm takes some time to adapt to the new regime. This can be seen by investigating the latest period in the sample, in which dynamic model selection again outperforms forecasts of the benchmark model. In other words, for EA data either restricting the sample period to post-2000 or reducing the persistence via the forgetting factor might improve forecasting results.

Conclusive remarks
==================

In this paper we have adapted two recent global-local shrinkage priors and used them to efficiently estimate time-varying parameter VARs of differing sizes for three large economies. The priors capture convenient features of the traditional Minnesota prior, effectively pushing coefficients associated with higher lag orders, as well as their propensity to drift, towards zero. Applying the proposed priors to three different data sets, we find improvements in one-step-ahead forecasts from the time-varying parameter specifications against various competitors.
Allowing for time variation and using shrinkage priors leads to smaller drops in forecast performance during the global financial crisis, while their forecasts remain competitive during the rest of the sample period. This finding is further corroborated by a dynamic model selection exercise which attaches sizable model weights to time-varying parameter models during the period of the global financial crisis. In that sense, using flexible time-varying parameter models leads to large forecast gains during times of heightened uncertainty. Finally, comparing the two proposed priors, we find that the DL prior outperforms in small-scale VARs. By contrast, the TVP-VAR equipped with an NG prior shows the strongest performance in medium- to large-scale applications, along with the constant parameter NG-VAR with SV. This is driven by the fact that the NG prior induces more shrinkage on the coefficients and pushes more strongly towards a constant parameter model; the payoffs of stronger shrinkage in larger-scale models are well documented. The same holds true for the four-step-ahead forecast horizon. The DL prior does a fine job in small- to medium-scale models, while the merits of the NG prior play out most strongly in large models. That said, our results also point to a trade-off between complexity (i.e., allowing for time-varying parameters) and model size (i.e., data information). The larger the information set, the stronger the performance of the constant parameter model. In other words, within the VAR framework for macroeconomic time series, it is advisable to use sophisticated models for small data and simple models for sizeable data. For consistently good performance independently of the size of the data, we recommend using sophisticated models with strong shrinkage priors such as the proposed NG shrinkage prior. This alleviates the problem of overfitting and provides a plethora of additional inferential opportunities.
Data overview
=============

Additional empirical results {#univLPS}
============================

![Euro Area: Univariate cumulative log predictive one-quarter-ahead Bayes factors over time relative to the TVP-SV-VAR with loose shrinkage. Top row: Small model (3 variables). Middle row: Medium model (7 variables). Bottom row: Large model (15 variables).[]{data-label="fig:marg_lps_ea1"}](EA_univ.pdf){width="\textwidth"}

![United Kingdom: Univariate cumulative log predictive one-quarter-ahead Bayes factors over time relative to the TVP-SV-VAR with loose shrinkage. Top row: Small model (3 variables). Middle row: Medium model (7 variables). Bottom row: Large model (15 variables).[]{data-label="fig:marg_lps_uk1"}](UK_univ.pdf){width="\textwidth"}

![United States: Univariate cumulative log predictive one-quarter-ahead Bayes factors over time relative to the TVP-SV-VAR with loose shrinkage. Top row: Small model (3 variables). Middle row: Medium model (7 variables). Bottom row: Large model (15 variables).[]{data-label="fig:marg_lps_us1"}](US_univ.pdf){width="\textwidth"}

[^1]: Corresponding author: Florian Huber, WU Vienna University of Economics and Business. E-mail: <fhuber@wu.ac.at>.
[^2]: For recent applications of this general modeling strategy within state space models, see [@belmonte2014hierarchical; @bitto2015achieving; @eisenstat2016stochastic].

[^3]: To simplify the model exposition, we omit an intercept term in this section. Irrespective of this, we allow for non-zero intercepts in the empirical applications that follow.

[^4]: See [@korobilis2014data] for a recent application of a similar idea to the TVP-VAR-SV case.
---
abstract: 'Worm-like filaments that are propelled homogeneously along their tangent vector are studied by Brownian dynamics simulations. Systems in two dimensions are investigated, corresponding to filaments adsorbed to interfaces or surfaces. A large parameter space covering weak and strong propulsion, as well as flexible and stiff filaments, is explored. For strongly propelled and flexible filaments, the free-swimming filaments spontaneously form stable spirals. The propulsion force has a strong impact on dynamic properties, such as the rotational and translational mean square displacement and the rate of conformational sampling. In particular, when the active self-propulsion dominates thermal diffusion, but is too weak for spiral formation, the rotational diffusion coefficient has an activity-induced contribution given by $v_c/\xi_P$, where $v_c$ is the contour velocity and $\xi_P$ the persistence length. In contrast, structural properties are hardly affected by the activity of the system, as long as no spirals form. The model mimics common features of biological systems, such as microtubules and actin filaments on motility assays or slender bacteria, and artificially designed microswimmers.'
author:
- 'Rolf E. Isele-Holder'
- Jens Elgeti
- Gerhard Gompper
bibliography:
- 'bib.bib'
title: 'Self-propelled Worm-like Filaments: Spontaneous Spiral Formation, Structure, and Dynamics'
---

Introduction {#s:introduction}
============

Its importance in biology and its enormous potential impact in technical applications make active soft matter a field of rapidly growing interest and progress.[@Elgeti.2015; @Marchetti.2013; @Cates.2011] Flexible slender bodies are of particular importance.
The majority of natural swimmers propel themselves using flexible, hair-like structures like cilia and flagella.[@Elgeti.2015] Other important examples are actin filaments and microtubules, major constituents of the cytoskeleton, whose capability to buckle decisively controls the mechanical properties of the cell body.[@Rodriguez.2003] Flexibility is the crucial ingredient for the formation of small-scale spirals[@RashedulKabir.2012] and possibly also for large-scale swirls[@Sumino.2012] of microtubules on motility assays. Even the structure of slender bacteria can be dominated by their flexibility.[@Lin.2014] Electrohydrodynamic convection can propel colloid chains because they are flexible,[@Sasaki.2014] just as the swimming mechanism of assembled magnetic beads in an oscillating external magnetic field is possible because of the swimmer’s flexibility.[@Vach.2015] Flexibility is of course also the feature that allows for the instabilities leading to cilia-like beating in artificially bundled microtubules.[@Sanchez.2011; @Sanchez.2012] Despite its importance, the number of theoretical studies of active agents that incorporate flexibility is still relatively small, and can roughly be subdivided into works that focus on buckling phenomena and on free-swimming agents.
Symmetry breaking instabilities leading to rotation and beating motion of active filaments on motility assays can be described with a phenomenological ordinary differential equation for the filaments.[@Sekimoto.1995] The propulsion force of motor proteins has been predicted based on a Langevin model for buckled, rotating actin filaments and microtubules.[@Bourdieu.1995] Numerical studies with Lattice-Boltzmann simulations and Brownian or multi-particle collision dynamics have demonstrated that clamped or pinned filaments composed of stresslets or propelled beads can show cilia-like beating or rotation.[@Laskar.2013; @Chelakkot.2014] The behaviour of free-swimming actin filaments on motility assays was reproduced in early numerical studies using the Langevin equation.[@Farkas.2002] However, it was only recently that the theoretical study of flexible, active filaments that can move freely has received significant attention. Lattice-Boltzmann simulations reveal that spontaneous symmetry breaking in chains of stresslets can lead to rotational or translational filament motion.[@Jayaraman.2012] Brownian dynamics simulations of short self-propelled filaments suggest that different types of motion occur for single filaments[@Jiang.2014_1] and that spontaneous rotational motion can arise for pairs of filaments.[@Jiang.2014_2] A combination of Brownian dynamics simulations and analytic theory shows that shot noise in worm-like filaments leads to temporal superdiffusive filament movement and faster-decaying tangent-tangent correlation functions.[@Ghosh.2014] Finally, chains of active colloids connected by springs have the same Flory exponent but a different prefactor of the scaling law compared to chains of passive colloids, as shown recently both analytically for beads without volume exclusion and numerically with Brownian dynamics simulations for beads with volume exclusion.[@Kaiser.2015] The free-swimming behaviour of a worm-like filament that is tangentially propelled with a
homogeneous force is still unexplored and is the subject of this work. The model is introduced in Section \[s:methods\]. Results for the structural and dynamic properties over a wide range of propulsion forces and filament flexibilities are presented in Section \[s:results\]. We find that the filament can spontaneously form spirals, which is the mechanism that dominates the behaviour for large propulsion forces. The relevance of our observations for natural and artificial active agents is discussed in Section \[s:discussion\]. We present our conclusions in Section \[s:conclusions\].

Model and Methods {#s:methods}
=================

We study a single, active, worm-like filament, which is modelled as a sequence of $N+1$ beads connected via stiff springs. The overdamped equation of motion is given by $$\gamma \dot{\mathbf{r}}_i = -\nabla_i U + \mathbf{F}_{k_BT}^{(i)} + \mathbf{F}_p^{(i)},$$ where $\mathbf{r}_i$ are the coordinates of bead $i$, $\gamma$ is the friction coefficient, $U$ is the configurational energy, $\mathbf{F}_{k_BT}^{(i)}$ is the thermal noise force, and $\mathbf{F}_p^{(i)}$ is the active force that drives the system out of equilibrium.
The configurational potential energy $$U = U_\mathrm{bond} + U_\mathrm{angle} + U_\mathrm{EV}$$ is composed of a bond contribution between neighbouring beads $$U_\mathrm{bond} = \frac{k_S}{2} \sum_{i=1}^N (|\mathbf{r}_{i,i+1}| - r_0)^2,$$ a bending energy $$U_\mathrm{angle} = \frac{\kappa}{4} \sum_{i=1}^{N-1} (\mathbf{r}_{i,i+1} - \mathbf{r}_{i+1,i+2})^2,$$ and an excluded volume term modelled with repulsive Lennard-Jones interactions $$\begin{aligned} U_\mathrm{EV} & = & \sum_{i=1}^N\sum_{j > i}^{N+1} u_\mathrm{EV}(r_{i,j}), \\ u_\mathrm{EV}(r) & = & \left\{ \begin{array}{lr} 4\epsilon \left[ \left(\frac{\sigma}{r} \right)^{12} - \left(\frac{\sigma}{r} \right)^{6} \right] + \epsilon, & r < 2^{1/6}\sigma \\ 0, & r \geq 2^{1/6} \sigma, \end{array} \right.\end{aligned}$$ where $\mathbf{r}_{i,j} = \mathbf{r}_i - \mathbf{r}_j$ is the vector between the position of the beads $i$ and $j$, $k_S$ is the spring constant for the bond potential, $r_0$ is the equilibrium bond length, $\kappa$ is the bending rigidity, and $\epsilon$ and $\sigma$ are the characteristic volume-exclusion energy and effective filament diameter (bead size). The drag force $\gamma \dot{\mathbf{r}_i}$ is the velocity of each bead times the friction coefficient $\gamma$. The thermal force $\mathbf{F}_{k_BT}^{(i)}$ is modelled as white noise with zero mean and variance $2k_BT \gamma / \Delta t$ as described in Ref. . [[ Note that hydrodynamic interactions (HI) are not included in our model. The model is thus in particular valid for (i) neutral swimmers, for which HI are known to be of minor importance, [@Downton.2009; @Goetze.2010; @Zoettl.2014] (ii) swimmers near a wall, where HI is of less importance,[@Elgeti.2009; @Drescher.2011] and (iii) microorganisms that glide on a surface, such as nematodes like *[C. 
elegans]{}.[@Gray.1964; @Korta.2007]*]{}]{} Without propulsion force, $\mathbf{F}_p^{(i)} = 0$, the model matches the well-known worm-like chain model for semi-flexible polymers.[@Kratky.1949; @Saito.1967] For active filaments, we use a force per unit length $f_p$ that acts tangentially along all bonds, i.e., $$\mathbf{F}_p = \sum_i^N f_p\mathbf{r}_{i,i+1},$$ as illustrated in Fig. \[f:polymer\_model\]. The force along each bond is distributed equally onto both adjacent beads. ![Filament model: Beads are connected via stiff springs. The active force acts tangentially along all bonds. Colour gradient indicates the force direction.[]{data-label="f:polymer_model"}](figs/polymer_fp.pdf){width="2in"} We consider systems with parameters chosen such that (i) $k_S$ is sufficiently large that the bond length is approximately constant $r_0$, that (ii) the local filament curvature is low such that the bead discretization does not violate the worm-like polymer description, and that (iii) the thickness of the chain has negligible impact on the results. When these requirements are met, the system is fully characterized by two dimensionless numbers, $$\begin{aligned} \xi_P/L & = & \frac{\kappa}{k_BTL}, \\ Pe & = & \frac{v_cL}{D_t} = \frac{f_pL^2}{k_BT},\end{aligned}$$ where $L=Nr_0$ and $\xi_P$ are the length and persistence length of the chain, respectively. $\xi_P/L$ is a measure for the bending rigidity of the filament. The Péclet number $Pe$ is the ratio of convective to diffusive transport and measures the degree of activity. For its definition, we use that the filament has a contour velocity $v_c = f_p / \gamma_l $, and that the translational diffusion coefficient $D_t = k_BT/\gamma_l L$, where we have introduced the friction per unit length $\gamma_l = \gamma (N+1) /L$. The ratio of these numbers $$\mathfrak{F} = PeL/\xi_P = \frac{f_pL^3}{\kappa},$$ which we call the flexure number, provides a ratio of activity to bending rigidity. 
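As a quick numerical check of these definitions, the reduced parameters can be computed directly from the model inputs. The sketch below is our own illustrative helper (function and variable names are not from the paper); it also makes the relation $\mathfrak{F} = Pe\,L/\xi_P$ explicit.

```python
def reduced_parameters(N, r0, kappa, f_p, kT):
    """Reduced parameters characterising the filament, following the
    definitions in the text: xi_P/L, Peclet number Pe, flexure number F.
    (Illustrative helper; not code from the original study.)"""
    L = N * r0                      # contour length L = N r0
    xi_over_L = kappa / (kT * L)    # xi_P / L = kappa / (k_B T L)
    Pe = f_p * L**2 / kT            # Pe = f_p L^2 / k_B T
    flexure = f_p * L**3 / kappa    # F = Pe L / xi_P = f_p L^3 / kappa
    return xi_over_L, Pe, flexure

# hypothetical parameter values, chosen so that L = 1
x, pe, F = reduced_parameters(N=100, r0=0.01, kappa=0.3, f_p=200.0, kT=1.0)
```

Increasing $N$ at fixed $r_0$, $f_p$, and $\kappa$ raises both $Pe$ and $\mathfrak{F}$ while lowering $\xi_P/L$, which is the "moving to the lower right in the phase diagram" argument used later in the text.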
Previous studies showed that this number is decisive for buckling instabilities of active filaments.[@Sekimoto.1995; @Chelakkot.2014] It will be shown below that this is also a determining quantity for spiral stability and rotational diffusion. Simulations were performed in two dimensions, where volume exclusion interactions have major importance. Equations of motion were integrated using an Euler scheme. Simulation parameters and results are reported in dimensionless form, where lengths are measured in units of the filament length $L$, energies in units of the thermal energy $k_BT$, and time in units of the characteristic time for the filament to diffuse its own body length $$\tau = L^3 \gamma_l /4k_BT.$$ In our simulations we used $k_S = 4000\,k_BT/r_0^2$, $r_0=\sigma=L/N$, and $\epsilon=k_BT$ [[ if not stated otherwise.]{}]{} A large parameter space for $Pe$ and $\xi_P/L$ was explored by varying $f_p$, $N$, and $\kappa$. [[ $N$ was varied in the range from 25 to 200 from the highest to the lowest $\xi_P/L$. Almost all simulations were run for more than $5\,\tau$. An initial period of the simulation output is discarded in the analysis.]{}]{} The timestep $\Delta t$ was adjusted to the remaining settings to ensure stable simulations. Unless explicitly mentioned, results refer to simulations that were started with a perfectly straight conformation. All simulations were performed using the LAMMPS molecular simulation package[@Plimpton.1995] with in-house modifications to describe the angle potential, the propulsion forces, and to solve the overdamped equations of motion.
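The Euler integration of the overdamped equation of motion can be sketched as follows for a single bead in two dimensions. This is a minimal illustration of the scheme described above, not the in-house LAMMPS implementation; the noise variance $2k_BT\gamma/\Delta t$ per force component follows the text, which translates into a Gaussian displacement of standard deviation $\sqrt{2k_BT\Delta t/\gamma}$ per step.

```python
import math
import random

def euler_step(pos, force, gamma=1.0, kT=1.0, dt=1e-4, rng=random):
    """One explicit Euler step of gamma * dr/dt = F_det + F_thermal for a
    single bead. The thermal force is white noise with zero mean and
    variance 2 k_B T gamma / dt per component."""
    s = math.sqrt(2.0 * kT * dt / gamma)  # noise displacement scale
    return tuple(x + f * dt / gamma + s * rng.gauss(0.0, 1.0)
                 for x, f in zip(pos, force))

# with kT = 0 the step reduces to pure deterministic drift F dt / gamma
x, y = euler_step((0.0, 0.0), (1.0, 0.0), kT=0.0, dt=0.1)
```

In a filament simulation this update would be applied to every bead, with `force` assembled from the bond, bending, excluded-volume, and propulsion contributions defined above.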
Results {#s:results}
=======

![image](figs/measure_coilicity.pdf){width="5.7cm"} ![image](figs/coil_hist.pdf){width="5.7cm"} ![image](figs/coil_example_smooth.pdf){width="5.7cm"} ![image](figs/kurtosis_smooth.pdf){width="5.7cm"} ![image](figs/coil_phase2.pdf){width="5.7cm"}

The characteristic filament behaviour depends on its bending rigidity and activity and can be divided into three regimes (see Fig. \[f:trajectories\]). At low $Pe$ or high $\xi_P/L$, the “polymer regime”, the active filament structurally resembles the passive filament with $Pe = 0$. The main difference compared to the passive filament is that the active force drives the filament along its contour, leading to a directed translational motion; we name this characteristic movement “railway motion”. At high $Pe$ and low $\xi_P/L$, the filament spontaneously winds up into a spiral. The “spiral state” is characterized by ballistic rotation but only diffusive translation. Spirals can spontaneously break up. Their lifetime determines whether spiral formation has a minor impact on the overall filament behaviour, the “weak spiral regime” at intermediate $Pe$, or whether spirals are dominating, the “strong spiral regime” at large $Pe$. Because spiral formation has a major impact on both the structure and the dynamics, features related to spiral formation are addressed first. Structural and dynamic properties of the elongated and spiral states are presented afterwards.

Spiral Formation {#s:coil_formation}
----------------

The processes that lead to the formation and break-up of spirals are depicted in Fig. \[f:coil\_uncoil\]. Spontaneous spiral formation (cf. Fig. \[f:coil\_uncoil\]a) results from the leading tip colliding with a subsequent part of the chain. Volume exclusion then forces the tip to bend. By further forward movement, the chain winds into a spiral. Two spiral break-up mechanisms occurred in our simulations. The first is the thermally activated mechanism in Fig. \[f:coil\_uncoil\]b.
The leading tip of the wound-up chain spontaneously changes direction and the spiral deforms. This break-up mechanism requires strong local bending and is therefore predominant for small $\xi_P$. The second mechanism is spiral break-up by widening and is depicted in Fig. \[f:coil\_uncoil\]c. The bending potential widens the spiral until the leading tip loses contact with the filament end. This break-up mechanism is predominant when $\xi_P$ is too large for spontaneous spiral break-up. Because high stiffness is also unfavourable for spiral formation, spiral break-up by widening was almost exclusively observed in simulations that started with a spiral configuration. To understand spiral formation more quantitatively, we introduce the spiral number $$\mathfrak{s} = (\phi(L) - \phi(0))/2\pi,$$ where $\phi(s)$ is the bond orientation at position $s$ along the contour of the filament, as a measure of the instantaneous chain configuration. The definition is illustrated for three sample structures in Fig. \[f:coil\_phases\]a. It effectively measures how often the filament wraps around itself. The time evolution of $\mathfrak{s}$ is depicted in Fig. \[f:coil\_phases\]b for the same $Pe$ and $\xi_P/L$ as in Fig. \[f:trajectories\]. At $Pe = 200$, $\mathfrak{s}$ is always close to zero. At $Pe = 1000$, $\mathfrak{s}$ behaves similarly, except that peaks with larger values for $|\mathfrak{s}|$ occur occasionally, i.e., when spirals with a short lifetime form. At $Pe = 5000$, extended plateaus develop at large $|\mathfrak{s}|$. The spirals are more strongly wound up and have a much longer lifetime. Probability distributions $p(|\mathfrak{s}|)$ are depicted in Fig. \[f:coil\_phases\]c. For the simulation without spirals ($Pe=200$), the histogram resembles the right half of a Gaussian distribution. For the simulation in the weak spiral regime, $p(\mathfrak{s})$ is similar for low $|\mathfrak{s}|$, but also has a small peak at $|\mathfrak{s}| \approx 2 - 3$.
For strong spiral formation at $Pe = 5000$, $p(|\mathfrak{s}|)$ has only a small peak at low $|\mathfrak{s}|$, which corresponds to the elongated state, and a large peak at large $|\mathfrak{s}|$, the predominating spiral state. It turns out that the different regimes can be well distinguished by the kurtosis $$\beta_2 = \left\langle \left( \frac{\mathfrak{s} - \left\langle \mathfrak{s} \right\rangle}{\sigma_\mathfrak{s}} \right)^4 \right\rangle, \label{e:kurtosis}$$ where $\langle \dots \rangle$ denotes the ensemble average and $\sigma_\mathfrak{s}$ is the standard deviation of $\mathfrak{s}$. Results for the kurtosis are shown in Fig. \[f:coil\_phases\]d for selected $\xi_P/L$. $\beta_2 \approx 3$ in the polymer regime, as expected for Gaussian distributions. In the weak spiral regime, the small peak at larger values increases the numerator in Eq. (\[e:kurtosis\]) and has only a weak impact on $\sigma_\mathfrak{s}$, leading to an increase of $\beta_2$. When the spiral state is dominating, $\sigma_\mathfrak{s}$ grows drastically, resulting in a much smaller kurtosis $\beta_2$. Note that to reduce statistical uncertainties we symmetrized the $\mathfrak{s}$-distribution in the computation of $\beta_2$ by counting each measured $|\mathfrak{s}|$ as $+\mathfrak{s}$ and $-\mathfrak{s}$ [[ and only used data from the spiral state for simulations that do not show spiral break-up.]{}]{} With the kurtosis as a measure to characterize spiral formation, a phase diagram can be constructed as depicted in Fig. \[f:coil\_phases\]e. Low filament rigidity $\xi_P$ and high propulsion $Pe$ are beneficial for spiral formation. In particular, for a fixed propulsion strength per unit length, any chain will form spirals if it is sufficiently long, because increasing the chain length without modifying any other parameter corresponds to moving to the lower right in the phase diagram.
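The spiral number and the symmetrised kurtosis can be computed from bead coordinates as sketched below. This is our own minimal implementation of the stated definitions (the bond orientation is unwrapped along the contour so that full turns accumulate); it is not code from the original study.

```python
import math

def spiral_number(xs, ys):
    """Spiral number s = (phi(L) - phi(0)) / 2 pi: accumulated (unwrapped)
    change of the bond orientation phi along the filament contour."""
    phi = [math.atan2(ys[i + 1] - ys[i], xs[i + 1] - xs[i])
           for i in range(len(xs) - 1)]
    total = 0.0
    for a, b in zip(phi, phi[1:]):
        d = b - a
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))  # unwrap jump into [-pi, pi]
        total += d
    return total / (2.0 * math.pi)

def symmetrised_kurtosis(spiral_numbers):
    """Kurtosis beta_2 of the spiral-number distribution, symmetrised by
    counting each measured |s| as both +s and -s (the mean is then zero)."""
    data = [abs(s) for s in spiral_numbers]
    data += [-d for d in data]
    var = sum(d * d for d in data) / len(data)
    m4 = sum(d ** 4 for d in data) / len(data)
    return m4 / var ** 2
```

For a straight filament `spiral_number` returns zero, while beads placed along a circle traversed twice give a value close to $\pm 2$, matching the "how often the filament wraps around itself" interpretation.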
[[ The dimensionless numbers $\xi_P/L$ and $Pe$ completely characterize the system if the filament diameter (or the filament aspect ratio) is of minor importance. This is true in the entire polymer regime, where volume-exclusion interactions hardly come into play because of the elongated chain structure. For the spiral regimes, the aspect ratio has an impact on the structure of the spiral and in this way influences the results. Which features of the spiral regime can be approximated well by the dimensionless numbers can be understood from the spiral formation and break-up mechanisms. The aspect ratio is hardly relevant for]{}]{} spiral formation and spiral break-up by widening, where the decisive moments are when the filament tip collides with subsequent parts of the chain, or when it loses contact with the chain end, respectively. That break-up by widening is characterized well by the dimensionless numbers is also confirmed by a series of simulations, started from a spiral configuration, in which we vary $N$, $f_p$, $\kappa$, and $k_BT$. It turns out that spirals will break up by widening if $$\mathfrak{F} \lesssim 1000 - 1500. \label{e:widening}$$ In contrast, spontaneous spiral break-up by a change of orientation of the leading tip depends on a strong local curvature close to the tip and on the structure of the spiral, which in turn depends on the filament diameter. The dimensionless description therefore does not provide a full characterization of the strong spiral regime, where spontaneous spiral break-up is the only mechanism to escape the spiral state. This is also confirmed by results for spirals that never broke up (cf. dark red squares in Fig. \[f:coil\_phases\]e): results for $\xi_P/L=0.2$ and $\xi_P/L=0.14$ show non-monotonic behaviour in the direction of $\xi_P/L$.
This results from a combination of two factors: the dimensionless description is only partially valid in this regime, and we chose $N=200$ for $\xi_P/L < 0.2$ but $N=100$ for $0.2 \leq \xi_P/L \leq 2.0$ in our simulations, [[ i.e., the aspect ratio $L/\sigma$ is halved in our simulations for filaments with $\xi_P/L < 0.2$.]{}]{} [[ Finally, the bead discretization with the chosen parameters favours a staggered arrangement of beads of contacting parts of the filament,[@Yang.2010; @Abkenar.2013] which implies an effective sliding friction between these parts. To study the importance of this effect, we increase the diameter of the beads at fixed bond length so that neighboring beads are heavily overlapping, which leads to a strongly smoothened interaction potential. Results for an increased diameter $\sigma = 2L/N$ are shown in Fig. \[f:coil\_phases\]b and d. We find that the spiral formation frequency is hardly affected by smoothening the filament surface. In contrast, spontaneous spiral break-up is strongly facilitated for the smoother filament, leading to a decreased spiral lifetime, as can be seen from the evolution of $\mathfrak{s}$ for $Pe=5000$ in Fig. \[f:coil\_phases\]b. Smoother filaments thus show a qualitatively similar phase behaviour with slightly moved phase boundaries.]{}]{}

Structural Properties {#s:statics}
---------------------

The structural properties of the filament conformations can best be understood from the end-to-end vector $r_{e}$, as depicted in Fig. \[f:re2e\]. As long as no spirals form, simulation results are in good agreement with the Kratky-Porod model (valid for worm-like, non-active polymers without volume-exclusion interactions) that predicts[@Kratky.1949; @Saito.1967] $$\frac{\langle r_{e}^2 \rangle}{L^2} = 2\frac{\xi_P}{L} - 2\left(\frac{\xi_P}{L}\right)^2 \left(1 - e^{-L/\xi_P} \right)$$ in two dimensions.
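The Kratky-Porod prediction can be evaluated directly. The helper below is ours (with $x = \xi_P/L$ as the single argument); it reproduces the expected limits $\langle r_{e}^2 \rangle/L^2 \to 1$ for stiff, rod-like chains and $\langle r_{e}^2 \rangle/L^2 \approx 2\xi_P/L$ for flexible ones.

```python
import math

def kratky_porod_re2(xi_over_L):
    """<r_e^2>/L^2 for a passive worm-like chain in 2D (Kratky-Porod),
    with x = xi_P / L; illustrative helper for the comparison above."""
    x = xi_over_L
    return 2.0 * x - 2.0 * x ** 2 * (1.0 - math.exp(-1.0 / x))
```

These are the solid reference lines against which the simulated end-to-end distances are compared; only in the strong spiral regime do the data depart from them.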
At low $\xi_P/L$, volume-exclusion interactions lead to slight deviations between the Kratky-Porod model and the simulation results. Strong deviations between the Kratky-Porod model and the simulation results only occur in the strong spiral region of the phase diagram. The same trend was observed for the tangent-tangent correlation function, the radius of gyration, and the static structure factor, but these results are not reported here to avoid unnecessary repetition. ![Symbols: Mean end-to-end distance $\sqrt{\langle r_{e}^2 \rangle}$ over $Pe$ for different values of $\xi_P/L$. Solid lines: $\sqrt{\langle r_{e}^2 \rangle}$ as predicted by the Kratky-Porod model. The symbol shape indicates the region in the phase diagram: circle: polymer regime; triangle: weak spiral regime; square: strong spiral regime. $N$ varies from 25 to 200 from large to small $\xi_P/L$. []{data-label="f:re2e"}](figs/re2e.pdf){width="8.3cm"}

Dynamic Properties {#s:dynamics}
------------------

The characteristic filament motion can be understood from the mean square displacement (MSD) of a bead $j+i$ relative to bead $j$, as shown in Fig. \[f:rail\]. For comparison, the MSD of the reference bead $j$ of a passive filament is also shown. Note that the displacement of this bead is subdiffusive at the short lag times shown here.[@Grest.1986] The displacement functions of the propelled beads are always larger than in the passive case. The curves show three distinct regimes. At small lag times, the MSDs of active filaments display plateaus set by the average distance between the two beads $j+i$ and $j$ along the filament. At large lag times, the increased motion caused by activity makes the MSDs of the propelled beads grow more rapidly than that of the passive bead.
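The bead-pair MSD used here can be sketched as follows, assuming a trajectory array `traj` of shape `(n_frames, n_beads, 2)` (the names and array layout are our illustrative assumptions, not the authors' analysis code):

```python
import numpy as np

def relative_msd(traj, j, i, lag):
    """<|r_{j+i}(t + lag) - r_j(t)|^2>, averaged over time origins t.
    traj: bead positions, shape (n_frames, n_beads, 2); lag in frames."""
    n = traj.shape[0]
    # displacement of bead j+i at time t+lag relative to bead j at time t
    d = traj[lag:, j + i, :] - traj[: n - lag, j, :]
    return float(np.mean(np.sum(d**2, axis=-1)))
```

At zero lag this returns the plateau value set by the spatial separation of the two beads; for railway-like motion it passes through a minimum near the lag at which bead $j+i$ reaches the old position of bead $j$.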
The part of the MSD that reveals the characteristic filament motion, namely movement along the filament contour, lies at intermediate lag times, where the MSDs of the propelled beads pass through minima that touch the reference MSD for thermal motion. At that lag time, the bead $j+i$ has moved approximately to the position of bead $j$ at zero lag time. The beads have moved along the chain contour, similar to the movement of a train on a railway. The deviation from the exact starting position of bead $j$ matches the thermal motion, which results in the MSDs of the propelled beads touching the MSD of the passive filament. Thus, the characteristic movement of the filament is motion along its contour superimposed with thermal noise, as depicted in Fig. \[f:railway\_diff\]a. Note that $\xi_P/L=0.3$ was selected in Fig. \[f:rail\] because this corresponds to a rather flexible filament, for which stronger deviations from the characteristic railway motion might be expected. This type of motion was observed in all simulations in the polymer regime and the weak spiral regime. In the strong spiral regime, the MSDs of the propelled beads even fall below the reference line for purely thermal motion. ![Mean square displacement of bead $25+i$ with respect to bead $j=25$ ($N=100$). $\xi_P/L=0.3$ for all lines. Black line: $Pe = 0$, describing the purely diffusive motion of the reference bead $j$; coloured lines: $Pe = 1000$ (weak spiral regime). []{data-label="f:rail"}](figs/railway.pdf){width="8.3cm"} The rotational diffusion can be accessed from the orientation $\theta$ of the end-to-end vector. Its mean square rotation (MSR) is given in Fig. \[f:diff\_rot\]a. Note that complete rotations around the axis are accounted for in our computations; $\theta(t)$ can therefore be much larger than $2\pi$. In both the spiral and the elongated state, there is a regime at short lag times in which the MSR is dominated by the internal filament flexibility.
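Because complete turns are counted, the MSR must be computed from an unwrapped angle; a minimal sketch of such a computation (our own illustration, not the authors' code):

```python
import numpy as np

def mean_square_rotation(theta, lag):
    """MSR of the end-to-end orientation at a given lag (in frames, lag >= 1).
    theta: raw orientation angles; np.unwrap removes the 2*pi jumps so
    that complete rotations accumulate and theta can exceed 2*pi."""
    th = np.unwrap(np.asarray(theta))
    d = th[lag:] - th[:-lag]
    return float(np.mean(d**2))
```

For purely ballistic rotation at angular velocity $\omega$ this yields $(\omega\,\Delta t)^2$, i.e. MSR $\propto t^2$, while diffusive rotation gives MSR $=2D_r t$.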
For the spiral state, this regime is followed by a ballistic regime with MSR$\propto t^2$. For simulations in which the spirals break up spontaneously, a subsequent regime at long lag times with MSR$\propto t$ is expected, but it could not be detected in our simulations because of the finite simulation time and the strong noise at large lag times in the MSR. For the elongated state, the regime dominated by internal flexibility is followed by a diffusive regime with MSR$\propto t$. The rotational diffusion coefficient $D_r$ can be extracted by fitting MSR$=2D_rt$ to the regime of the MSR with gradient unity on a double-log scale. The measured values of $D_r$ are given in Fig. \[f:diff\_rot\]b as a function of the flexure number $\mathfrak{F}$. The diffusion coefficients collapse to a single curve, which has a plateau at low $\mathfrak{F}$ and then grows linearly. Strong deviations from this trend are only observed at high $\mathfrak{F}$, when the filament is in the weak spiral regime, and at $\mathfrak{F} = Pe = 0$ for flexible filaments, where strong deviations from a rod-like shape increase $D_r$. The rotational diffusion coefficient $D_r$ can be predicted from the characteristic railway motion in Fig. \[f:railway\_diff\] and the relation between the rotational diffusion coefficient and the autocorrelation function of the end-to-end tangent vector $\mathbf{t}_e$, $$\left\langle \mathbf{t}_e(t) \cdot \mathbf{t}_e(0) \right\rangle = e^{-D_rt}, \label{e:tangent_corr_diff_rot}$$ which is valid for lag times $t$ large enough that the variation of $\mathbf{t}_e$ is not dominated by the non-diffusive behaviour at early lag times caused by the filament flexibility (cf. Fig. \[f:diff\_rot\]a). With $$\mathbf{t}_e(t) = \frac{1}{L} \int_0^L\mathbf{t}(s,t)ds,$$ where $\mathbf{t}(s,t)$ is the tangent vector at position $s$ of the filament at time $t$, the left-hand side of Eq.
(\[e:tangent\_corr\_diff\_rot\]) becomes $$\left\langle \mathbf{t}_e(t) \cdot \mathbf{t}_e(0) \right\rangle = \frac{1}{L^2} \int_0^L ds^\prime \int_0^L ds \left\langle \mathbf{t}(s,t) \cdot \mathbf{t}(s^\prime, 0)\right\rangle, \label{e:tang_integ}$$ where the order of integration and ensemble averaging has been exchanged to arrive at the right-hand side of Eq. (\[e:tang\_integ\]). As a representation of the characteristic railway motion (cf. Fig. \[f:railway\_diff\]b), we write $$\mathbf{t}(s,t) = \mathbf{t}(s+v_c t, 0). \label{e:contour_movement}$$ Note that this equation disregards the passive equilibrium rotation $D_{r,p}$. With Eq. (\[e:contour\_movement\]) and the expression for the tangent-tangent correlation function of worm-like polymers,[@Kratky.1949; @Saito.1967] the integrand in Eq. (\[e:tang\_integ\]) becomes $$\begin{aligned} \left\langle \mathbf{t}(s,t) \cdot \mathbf{t}(s^\prime,0) \right\rangle &=& \left\langle \mathbf{t}(s+v_ct,0) \cdot \mathbf{t}(s^\prime,0)\right\rangle \nonumber \\ &=& \exp[-(s+v_ct - s^\prime)/\xi_P].\end{aligned}$$ Carrying out the integrals in Eq. (\[e:tang\_integ\]) yields $$\left\langle \mathbf{t}_e(t) \cdot \mathbf{t}_e(0) \right\rangle = -\xi_P^2/L^2\left( e^{-v_ct/\xi_P} \left(2 - e^{L/\xi_P} - e^{-L/\xi_P} \right) \right).$$ A second-order Taylor expansion in (small) $L/\xi_P$ then gives $$\left\langle \mathbf{t}_e(t) \cdot \mathbf{t}_e(0) \right\rangle = \exp[-v_ct/\xi_P],$$ so that a comparison with Eq. (\[e:tangent\_corr\_diff\_rot\]) finally yields the activity-induced rotational diffusion $$D_{r,a} = v_c/\xi_P.$$ Note that $v_c/\xi_P = \mathfrak{F}/4\tau$. Assuming that the activity-induced and thermal rotations, with coefficients $D_{r,a}$ and $D_{r,p}$, contribute independently to the overall rotation, we write $$D_r = D_{r,p} + D_{r,a}, \label{e:Dr}$$ where $D_{r,p}$ depends on $\xi_P/L$ and has the lower bound $D_{r,p} = (9/4)\,\tau^{-1}$ for rod-like filaments.[@Teraoka.2002] As can be seen from Fig. \[f:diff\_rot\]b, Eq.
(\[e:Dr\]) matches the simulated rotational diffusion coefficients accurately. The characteristics of the center-of-mass MSD are shown in Fig. \[f:MSD\]. For the polymer regime, the typical S-shape of successive short-time diffusive, intermediate-time ballistic, and long-time effective diffusive behaviour develops;[@Zheng.2013] stronger propulsion increases the MSD. An important difference compared to rigid bodies is that the transition time $\tau_r = 1/D_r$ to long-time diffusive behaviour depends on the propulsion strength. When spiral formation becomes important, the general trend of the MSD changes, as shown in Fig. \[f:MSD\]b for a flexible filament. In the polymer regime or the weak spiral regime, increasing $Pe$ leads to a larger displacement. In the strong spiral regime, however, the MSD decreases. For very stable spirals, the MSD is only weakly affected by the propulsion and almost matches the case of purely diffusive motion. The MSD for active point particles, spheres, or stiff rods is given by[@Elgeti.2015] $$\begin{aligned} \langle (r_c(t) - r_c(0))^2 \rangle &=& 4D_tt + \nonumber \\ & & (2v_0^2/D_r^2) [D_rt + \exp (-D_rt) - 1], \label{e:MSD}\end{aligned}$$ where $D_t$ is the translational diffusion coefficient and $v_0$ is a ballistic velocity. It turns out that Eq. (\[e:MSD\]) can be used to describe the MSD of active filaments when the three coefficients $D_t$, $v_0$, and $D_r$ are chosen properly. The translational diffusion coefficient is $D_t = L^2/(4\tau) = k_BT/(\gamma_l L)$. We predict the rotational diffusion coefficient $D_r$ with Eq. (\[e:Dr\]). Finally, the effective velocity can be expressed via $$v_0 = \frac{|\mathbf{F}_p|}{\gamma_l L}$$ as a balance of the net external force $|\mathbf{F}_p|$ with the total friction force $\gamma_l L v_0$. $|\mathbf{F}_p|$ can conveniently be expressed as the propulsive force per bond $f_p$ times the end-to-end distance, thus leading to $$v_0 = \frac{f_p \sqrt{\langle r_e^2 \rangle}}{\gamma_l L}.
\label{e:v0}$$ As shown in Fig. \[f:MSD\], using these expressions for the coefficients provides an accurate prediction of the MSD. The last item we address is the effect of propulsion on conformational sampling. Figure \[f:SQT\]a shows results for the dynamic structure factor $$S(q,t) = \left\langle \frac{1}{N+1}\sum_{i=1}^{N+1}\sum_{j=1}^{N+1}\exp\{ i\mathbf{q} \cdot [\mathbf{r}_i(0)-\mathbf{r}_j(t)] \} \right\rangle$$ averaged over different directions of $\mathbf{q}$. In the phase without spirals, $S(q,t)/S(q,0)$ decays more rapidly with increasing $Pe$, indicating a faster change of conformations with increasing propulsion. When spirals form, $S(q,t)/S(q,0)$ is independent of $Pe$ and larger than its $Pe=0$ counterpart, indicating a slow change of conformations, which agrees with the observation of hardly any internal motion of the chain in this regime in our simulation output. Note that for the strong spiral regime, the data are from simulations in which spirals formed spontaneously and did not break up. The depicted data result from averaging over the spiral states only. To better quantify the behaviour of $S(q,t)$, we compute the characteristic decay time of the dynamic structure factor $$\tau_S(q) = \frac{\int_{t} tS(q,t) dt}{\int_{t} S(q,t) dt}. \label{e:ts}$$ Results for $\tau_S(q)$ at $q\approx 5\pi/L$, a $q$-vector large enough to capture mainly the internal degrees of freedom, are given in Fig. \[f:SQT\]b. $\tau_S(q)$ decays slowly at low $Pe$. At high $Pe$, $\tau_S$ decays inversely proportionally to $Pe$ when no spirals form. This is consistent with the picture that instantaneous conformations are essentially identical to those of passive filaments, but that they are traversed with velocity $v_c$, corresponding to $\tau_S \propto Pe^{-1}$. In the strong spiral regime, $\tau_S$ is large and independent of $Pe$, which indicates that conformational changes are irrelevant and that $\tau_S$ is determined by the quasi-diffusive center-of-mass movement. Note that the measured $\tau_S$ at different $\xi_P/L$ collapse to a single line for both the polymer and the strong spiral regime.

Discussion {#s:discussion}
==========

The spontaneous formation of spirals is the feature dominating the overall behaviour of self-propelled filaments, both for dynamic and structural properties. Formation of spirals was previously observed for long, slender bacteria surrounded by short bacteria.[@Lin.2014] It was concluded there that interaction with other active particles is a prerequisite for spiral formation. In contrast, the study at hand shows that spirals can form even for isolated filaments, as long as (i) the filament is sufficiently flexible, (ii) the propulsion is sufficiently strong, and (iii) excluded-volume interactions force the tip of the filament to wind up. The first two conditions will be met automatically for any real system by choosing $L$ sufficiently large and leaving all other parameters constant (leading to increased $Pe$ and decreased $\xi_P/L$, i.e., favouring spirals). The third condition can in general not be met so easily. A free-swimming filament in three dimensions, or a filament in two dimensions with a low resistance to crossing its own body, will not form spirals. This is also one reason why spiral formation has not yet been observed in more experimental studies. Agents that are similar to our model are actin filaments or microtubules on a motility assay. The former have a high crossing probability,[@Schaller.2010] so spiral formation is not expected. The area enclosing the actin-filament parameter space in Fig.
\[f:coil\_phases\] must thus be understood as indicating that the regime where the flexibilities and propulsion strengths permit spiral formation can in principle be reached in real systems, and not as implying that sufficiently long actin filaments will form spirals. Microtubules on dynein carpets, which have a much lower crossing probability,[@Sumino.2012; @Abkenar.2013] will possibly form spirals if they are grown to sufficient size. Overall, except for slender bacteria,[@Lin.2014] we are unaware of a microscopic example in which spiral formation was observed. Yet, the formation of spirals is a feature that deserves more attention in the future. First, the formation of spirals is an extremely simple non-equilibrium phenomenon that, in contrast to many other phenomena of active matter, arises for a single self-propelled particle and cannot easily be mapped qualitatively to passive systems in which activity is replaced by attractive forces. It can thus be used as a model phenomenon for the study of non-equilibrium thermodynamics. Second, our model is very simple; an experimental realization seems possible in the near future. Finally, the formation of spirals leads to a sudden, strong change in structural and dynamic properties. The effect can thus potentially be used as a switch on the microscopic scale.

Summary and Outlook {#s:conclusions}
===================

We report an extensive study of the behaviour of dilute, self-propelled, worm-like filaments in two dimensions. The spontaneous formation and break-up of spirals is the feature that dominates the filament behaviour. Spiral formation is favoured by strong propulsion and low bending rigidity. Propulsion has a noticeable impact on structural properties only when spirals dominate. The Kratky-Porod model[@Kratky.1949] is therefore valid for filaments that are weakly propelled or have a high bending rigidity. When spiral formation becomes significant, structural properties change drastically.
The characteristic filament motion is what we call railway behaviour: the chain moves along its own contour, superimposed with noise. With the understanding of the structural properties and the characteristic motion, the rotational diffusion and the center-of-mass mean square displacement can be predicted to high accuracy when no spirals form. In contrast to rigid bodies, propulsion has an impact on the rotational diffusion coefficient. Finally, propulsion enhances conformational sampling in the regime without spirals. An obvious next step is understanding the collective motion of such active filaments. We expect that our single-filament results will help to understand the collective behaviour, which is nonetheless strongly influenced by the additional interactions. In particular, collisions with other constituents might enhance spiral formation and lead to swirl-like patterns.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank Thorsten Auth for helpful discussions. Financial support by the Deutsche Forschungsgemeinschaft via SPP 1726 “Microswimmers” is gratefully acknowledged.
---
abstract: 'Most of the theoretical models describing the translocation of a polymer chain through a nanopore use the hypothesis that the polymer is always relaxed during the complete process. In other words, models generally assume that the characteristic relaxation time of the chain is small enough compared to the translocation time that non-equilibrium molecular conformations can be ignored. In this paper, we use Molecular Dynamics simulations to directly test this hypothesis by looking at the escape time of unbiased polymer chains starting with different initial conditions. We find that the translocation process is not quite in equilibrium for the systems studied, even though the translocation time $\tau$ is about $10$ times larger than the relaxation time $\tau_{\text{r}}$. Our most striking result is the observation that the last half of the chain escapes in less than $\sim12\%$ of the total escape time, which implies that there is a large acceleration of the chain at the end of its escape from the channel.'
author:
- 'Michel G. Gauthier[^1]'
- 'Gary W. Slater[^2]'
title: |
  Non-driven polymer translocation through a nanopore:\
  computational evidence that the escape and relaxation processes are coupled
---

Introduction {#s:intro}
============

The translocation of polymers is the process during which a flexible chain moves through a narrow channel to go from one side of a membrane to the other. Many theoretical and numerical models of this fundamental problem have been developed during the past decade. These efforts are motivated in part by the fact that one of the most fundamental mechanisms of life, the transfer of RNA or DNA molecules through nanoscopic biological channels, can be described in terms of polymer translocation models.
Moreover, recent advances in manipulating and analyzing DNA moving through natural [@Kasianowicz1996; @Meller2001] or synthetic nanopores [@Chen2004] strongly suggest that such mechanical systems could eventually lead to the development of new ultrafast sequencing techniques [@Kasianowicz1996; @Astier-Braha-Bayley; @Deamer2002; @Howorka-Cheley-Bayley; @Kasianowicz-NatureMat; @Lagerqvist-Zwolak-Ventra; @Muthukumar2007; @Vercoutere-Winters-Hilt-Olsen-Deamer-Haussler-Akeson; @Wang-Branton]. However, even though a great number of theoretical [@Sung1996; @Muthukumar1999; @Berezhkovskii2003; @Flomenbom2003; @Kumar2000; @Lubensky1999; @Ambjornsson2004; @DiMarzio1997; @Matsuyama2004; @Metzler2003; @Slonkina2003; @Storm2005] and computational [@Ali2005; @Baumgartner1995; @Chern2001; @Chuang2001; @Dubbeldam2007; @Dubbeldam2007a; @Farkas2003; @Huopaniemi2006; @Kantor2004; @Kong2002; @Loebl2003; @Luo2006a; @Luo2006; @luo-2007-126; @Matysiak2006; @Milchev2004; @Tian2003; @Wei2007; @Wolterink2006] studies have been published on the subject, there are still many unanswered questions concerning the fundamental physics behind such a process. The best-known theoretical approaches used to tackle this problem are the ones derived by Sung and Park [@Sung1996] and by Muthukumar [@Muthukumar1999]. Both of these methods study the diffusion of the translocation coordinate $s$, which is defined as the fractional number of monomers on a given side of the channel (see Fig. \[f:schema\]). Sung and Park use a mean first passage time (MFPT) approach to study the diffusion of the translocation coordinate. Their method consists in representing the translocation process as the diffusion of the variable $s$ over a potential barrier that represents the entropic cost of bringing the chain halfway through the pore. The second approach, derived by Muthukumar, uses nucleation theory to describe the diffusion of the translocation coordinate. Several other groups have worked on these issues (see Refs.
[@Berezhkovskii2003; @Kumar2000; @Lubensky1999; @Slonkina2003] for example), and many were inspired by Sung and Park’s and/or by Muthukumar’s work. However, such models assume that the subchains on both sides of the membrane remain in equilibrium at all times; this is what we call the *quasi-equilibrium hypothesis*. This assumption effectively allows one to study polymer translocation by representing the transport of the chain as a simple biased random-walk process [@Flomenbom2003; @GauthierMC2007a; @GauthierMC2007b]. In the case of driven translocation, simulations monitoring the radius of gyration of the subchains on both sides of the membrane have shown that the chains are not necessarily at equilibrium during the complete translocation process [@Luo2006a; @Tian2003]. However, as far as we know, no direct investigation of the quasi-equilibrium hypothesis has been carried out for *unbiased* translocations, although it is commonly used in theoretical studies. For example, the fundamental hypothesis behind the one-dimensional model of Chuang *et al.* [@Chuang2001] is that the translocation time is much larger than the relaxation time, so that the polymer has time to equilibrate for each new value of $s$. Chuang *et al.* found that the translocation time should scale like $N^{9/5}$ and $N^{11/5}$ with and without hydrodynamic interactions, respectively. Their assumption is indirectly supported by the observation made by Guillouzic and Slater [@Guillouzic2006], using Molecular Dynamics simulations with explicit solvent, that the scaling exponent of the translocation time $\tau$ with respect to the polymer length $N$ ($\tau \sim N^{2.27}$) is larger than the one measured for the relaxation time ($\tau{_\mathrm{\scriptstyle r}} \sim N^{1.71}$). We recently made similar observations for larger nanopore diameters [@GauthierMD2007].
The main goal of the current paper is to carry out a *direct test* of the fundamental assumption behind most theoretical models of translocation: that the chain can be assumed to be relaxed at all times during the translocation process (the quasi-equilibrium hypothesis). We will be using two sets of simulations to compare the translocation dynamics of chains that start with the same initial value of $s$ but that differ in the way they reached this initial state. ![Schematic representation of our simulation system. The wall consists of a single layer of beads on a triangular lattice, while the pore itself is formed by simply removing one wall bead (some wall beads and all of the solvent beads have been removed for clarity). This simulation system is described in detail in Refs. [@Guillouzic2006; @GauthierMD2007]. The trans-side of the membrane is defined as the side where the chain terminates its translocation process (its final destination). The translocation coordinate $s$ is defined as the ratio of the number of monomers on the cis-side of the membrane, $n$, to the total number of monomers in the chain, $N$ ($0\leq s=n/N \leq 1$). []{data-label="f:schema"}](schema.pdf){width="\figwidth"}

Simulation Method {#s:method}
=================

We use the same simulation setup as in our previous publications [@Guillouzic2006; @GauthierMD2007]. In short, we use coarse-grained Molecular Dynamics (MD) simulations of unbiased polymer chains initially placed in the middle of a pore perforated in a one-bead-thick membrane (see Fig. \[f:schema\]). The simulation includes an explicit solvent. All particles interact via a truncated (repulsive part only) Lennard-Jones potential, and all connected monomers interact via a FENE (Finitely Extensible Nonlinear Elastic) potential. The membrane beads are held in place on a triangular lattice using a harmonic potential, and the pore consists of a single-bead hole.
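The two interaction potentials named above can be sketched as follows; the specific parameter values (e.g. the common Kremer-Grest choices $k=30\,\epsilon/\sigma^2$ and $r_0=1.5\,\sigma$) are illustrative assumptions, as the text does not list them:

```python
import math

def wca(r, eps=1.0, sigma=1.0):
    """Truncated, purely repulsive Lennard-Jones (WCA) potential:
    LJ shifted up by eps and cut off at its minimum r_c = 2^(1/6) sigma."""
    rc = 2.0 ** (1.0 / 6.0) * sigma
    if r >= rc:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6) + eps

def fene(r, k=30.0, r0=1.5):
    """FENE bond potential between connected monomers; it diverges
    as the bond length r approaches the maximum extension r0."""
    return -0.5 * k * r0**2 * math.log(1.0 - (r / r0) ** 2)
```

The sum of the two potentials gives the usual anharmonic bond with an equilibrium length slightly below $\sigma$, while non-bonded beads feel only the repulsive WCA part.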
All quantities presented in this paper are in standard MD units, i.e., lengths and energies are in units of the characteristic parameters of the Lennard-Jones potential, $\sigma$ and $\epsilon$, while time scales are measured in units of $\sqrt{m\sigma^2/\epsilon}$, where $m$ represents the mass of the fluid particles. The simulation box size is $\sim 28.1\sigma \times 29.2\sigma \times 27.5\sigma$, where the third dimension is the one perpendicular to the wall, and periodic boundary conditions are used in all directions during the simulation. We refer the reader to Refs. [@Guillouzic2006; @GauthierMD2007] for more details. Note that this simulation setup was shown to correctly reproduce Zimm relaxation time scalings [@GauthierMD2007]. The simulation itself is divided into two steps: (1) the warm-up period, during which the $i^{\text{th}}$ bead of the polymer is kept fixed in the middle of the pore while its two subchains relax on opposite sides of the wall, and (2) the translocation (or escape) period itself, during which the polymer is completely free to move until all monomers are on the same side of the membrane (note that the final location of the chain defines the *trans*-side of the membrane in this study, since we have no external driving force that would define a direction for the translocation process). The duration of the first period was determined from previous simulations [@Guillouzic2006; @GauthierMD2007] using the characteristic decay time of the autocorrelation function of the chain end-to-end vector. The time elapsed during the second period is what we refer to as the translocation time $\tau$. In previous papers [@Guillouzic2006; @GauthierMD2007], we calculated both the relaxation time $\tau{_\mathrm{\scriptstyle r}}(N)$ and the translocation time $\tau (N)$ for polymers of lengths $N$ between 15 and 31 monomers in the presence of the same membrane-pore system.
Our simulation results, $\tau \approx 1.38N^{2.3}$ and $\tau{_\mathrm{\scriptstyle r}} \approx 0.43N^{1.8}$ in MD units, indicate that the escape time is at least 10 times longer than the relaxation time for this range of polymer sizes. These translocation times correspond to polymers starting halfway through the channel, and the relaxation times were calculated with the center monomer (i.e., monomer $i= (N+1)/2$, where $N$ is an odd number) kept fixed in the middle of the pore. ![Schematic representation of our two sets of simulations, called R (for **R**elaxed) and NR (for **N**ot **R**elaxed). For the NR case, the middle monomer is kept fixed inside the pore during the initial warm-up relaxation phase. The polymer then moves freely until it completely escapes from the pore. However, the translocation clock starts only when the polymer reaches state $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ for the *first* time. In the R case, the polymer is initially prepared in the $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ state and allowed to relax with its $(N{\mathit{s}_\mathrm{\scriptscriptstyle 0}}+1)^{\textrm{th}}$ monomer fixed inside the pore. The translocation clock then starts immediately after the chain is released. The two sets of simulations thus differ only in the way the initial chain is prepared. []{data-label="f:2sims"}](2sims.pdf){width="\figwidth"} As we mentioned in the Introduction, the goal of this paper is to run two different sets of simulations in order to *directly* test the quasi-equilibrium hypothesis (see Fig. \[f:2sims\]). In the first type of simulations (which we call NR, for *Not Relaxed*), we start with the same configuration as in our previous papers: the polymer chain is initially placed halfway through the pore, then allowed to relax with its middle monomer fixed, and is finally released.
However, we do not start to calculate the translocation time from that moment; instead, we wait until the translocation coordinate has reached a particular value $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ for the first time (see the top part of Fig. \[f:2sims\]). The translocation time $\tau^{\mathrm{NR}}({\mathit{s}_\mathrm{\scriptscriptstyle 0}})$ thus corresponds to a chain that starts in state $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ with a conformation that is affected by the translocation process that took place between states $s=1/2$ and $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$. In the second series of simulations (called R for *Relaxed*), we allow the chain to relax in state $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ before it is released. In other words, the $(N{\mathit{s}_\mathrm{\scriptscriptstyle 0}}+1)^{\textrm{th}}$ monomer is fixed during the warm-up period (see the bottom part of Fig. \[f:2sims\]); the corresponding translocation time $\tau^{\mathrm{R}}({\mathit{s}_\mathrm{\scriptscriptstyle 0}})$ now corresponds to a chain that is fully relaxed in its initial state $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$. Obviously, the quasi-equilibrium hypothesis implies the equality $\tau^{\mathrm{NR}}({\mathit{s}_\mathrm{\scriptscriptstyle 0}})=\tau^{\mathrm{R}}({\mathit{s}_\mathrm{\scriptscriptstyle 0}})$, a relationship that we will be testing using extensive Molecular Dynamics simulations. In both cases, we include all translocation events in the calculations, including those that correspond to backward translocations (i.e., translocations towards the side where the smallest subchain was originally found).

NR vs R: The escape times
=========================

![ Translocation times $\tau$ for relaxed (R) and not relaxed (NR) polymers. The initial condition is ${\mathit{s}_\mathrm{\scriptscriptstyle 0}}=6/N$ for all molecular sizes.
[]{data-label="f:trans_centered6"}](tau_NR_vs_R2.pdf){width="\figwidth"} Figure \[f:trans\_centered6\] shows the translocation times obtained from these two sets of simulations when we choose the starting point ${\mathit{s}_\mathrm{\scriptscriptstyle 0}}=6/N$ (six monomers on one side of the wall, and all the others on the other side). We clearly see that the translocation process is faster when the polymer is initially relaxed (R). The difference between the two escape times is around $25\%$ for all polymer lengths $N$. Since the relaxation state of the chain at $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ is the only difference between the two sets of results, this indicates that the NR polymers are not fully relaxed at $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$. Thus, contrary to the commonly used assumption, even an unbiased polymer is not in quasi-equilibrium during its translocation process. Also interesting is the probability to escape on the side where the longest subchain was at the beginning of the simulation. We observed (data not shown) that this probability was always $\sim 10$-$20\%$ larger in the R simulations. This observation also confirms that the chain is out of equilibrium during translocation, since its previous trajectory affects even the final outcome of the escape process. Note that we did verify that this difference is not the reason why the escape times are different.

NR vs R: The radii of gyration
==============================

As we will now show, the slower NR translocation process is due to a non-equilibrium compression of the subchain located on the trans-side of the wall. By compression, we mean that the radius of gyration ${\mathit{R}_\mathrm{\scriptstyle g}}$ of that part of the polymer is smaller than the one it would have if it were in a fully relaxed state.
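The radius of gyration used in this comparison can be computed directly from the monomer coordinates; a minimal sketch (the array layout is our own assumption):

```python
import numpy as np

def radius_of_gyration(pos):
    """R_g of a (sub)chain; pos has shape (n_monomers, 3).
    R_g^2 is the mean squared distance to the center of mass."""
    com = pos.mean(axis=0)
    return float(np.sqrt(((pos - com) ** 2).sum(axis=1).mean()))
```

Applied to the trans-side subchain at $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ for each run, averaging over runs gives the curves compared below.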
Figure \[f:rg\] compares the mean radius of gyration of the subchain on the trans-side at $s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ for both the relaxed (R) and the non-relaxed (NR) states (the two lowest curves). The radius of gyration is larger for the relaxed state when the number of monomers is greater than about 19, i.e., ${\mathit{R}_\mathrm{\scriptstyle g}}^{\mathrm{R}}(s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}) > {\mathit{R}_\mathrm{\scriptstyle g}}^{\mathrm{NR}}(s={\mathit{s}_\mathrm{\scriptscriptstyle 0}})$ if $N>19$. This is the second result suggesting that the translocation process is not close to equilibrium. Moreover, this discrepancy between the two states increases with $N$ (the two curves diverge) over the range of polymer lengths studied here. Figure \[f:rg\] also shows that this difference is negligible by the time the escape is completed (${\mathit{R}_\mathrm{\scriptstyle g}}^{\mathrm{R}}(s=0) \approx {\mathit{R}_\mathrm{\scriptstyle g}}^{\mathrm{NR}}(s=0)$). However, it is important to note that the final radius of gyration is always smaller than the value we would obtain for a completely relaxed chain of size $N$ (the top line, ${\mathit{R}_\mathrm{\scriptstyle g}}\approx 0.357\,N^{0.631}$). Of course, this means that the R simulations, which start with equilibrium conformations, also finish with non-equilibrium states. ![ The radius of gyration of the longest subchain vs. polymer size $N$. We show values corresponding to the beginning ($s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}=6/N$) and the end ($s=0$) of the process, both for chains that were initially relaxed (R) and non-relaxed (NR). The fifth data set is the radius of gyration of a relaxed chain of length $N$ [@Guillouzic2006; @GauthierMD2007].
[]{data-label="f:rg"}](radius_trans_side_starts_and_ends_loglog.pdf){width="\figwidth"} The curve ========= Why do we observe such a large amount of compression when the translocation time is more than ten times larger than the relaxation time? A factor of ten would normally suggest that a quasi-equilibrium hypothesis would be adequate. The answer to this question is clearly illustrated in Fig.[ \[f:s\_vs\_t\]]{} where we look at the normalized translocation coordinate $s^\prime=s(t^\prime)/{\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ as a function of the scaled time $t^\prime$. These NR simulations used the initial condition ${\mathit{s}_\mathrm{\scriptscriptstyle 0}}=1/2$ (thus starting with symmetric conformations and maximizing the escape times). For a given polymer length, each $s(t)$ curve (we have typically used $\sim500$ runs per polymer length) was rescaled using its own escape time $t{_\mathrm{\scriptstyle max}}$, such that $t^\prime=t/t{_\mathrm{\scriptstyle max}}$ and $0 \leq t^\prime \leq 1$. These curves were then averaged to obtain eight rescaled data sets (one for each molecular size in the range $13\leq N \leq 31$; note that $N$ must be an odd number). Remarkably, the eight rescaled curves were essentially indistinguishable (data not shown). This result thus suggests that the time evolution of the translocation coordinate $s(t)$ follows a *universal* curve; the latter, defined as an average over all molecular sizes, is shown (circles) in Fig.[ \[f:s\_vs\_t\]]{}. Please note that the translocation coordinate is defined with respect to the final destination of the chain ($s=N{_\mathrm{\scriptstyle cis}}/N$), and not the side with the shortest subchain at a given time.
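The rescaling-and-averaging procedure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the function name, the linear interpolation onto a common grid, and the grid size are our own choices.

```python
def rescale_and_average(runs, n_bins=100):
    """Rescale each s(t) trajectory by its own escape time and average.

    `runs` is a list of (t, s) pairs of equal-length sequences with
    strictly increasing t; each trajectory ends at its own t_max where
    s first reaches 0.  Returns (t_prime_grid, mean_s) on a common
    discretized time axis, as described for Fig. [f:s_vs_t].
    """
    grid = [i / (n_bins - 1) for i in range(n_bins)]
    curves = []
    for t, s in runs:
        t_max = t[-1]
        tp = [ti / t_max for ti in t]          # t' = t / t_max
        # linearly interpolate s onto the common t' grid
        curve = []
        j = 0
        for g in grid:
            while j < len(tp) - 2 and tp[j + 1] < g:
                j += 1
            frac = (g - tp[j]) / (tp[j + 1] - tp[j])
            curve.append(s[j] + frac * (s[j + 1] - s[j]))
        curves.append(curve)
    mean_s = [sum(c[i] for c in curves) / len(curves) for i in range(n_bins)]
    return grid, mean_s
```

Two synthetic linear runs with different escape times collapse onto the same rescaled curve, which is the point of the construction.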
This unexpected universal curve has two well-defined asymptotic behaviors: (1) for short times, we observe an apparently linear relation $$s^\prime (t^\prime)= 1-0.318 t^\prime \,,$$ which we obtain using only the first 10% of the data, and (2) as $t^\prime \rightarrow 1$, the average curve decays rapidly towards zero following the power law relation $$s^\prime (t^\prime)= 1.31 \times (1-t^\prime)^{0.448} \,,$$ this time using the last 10% of the data. The whole data set can then be fitted using the interpolation formula $$s^\prime (t^\prime)= (1+0.130\,t^\prime+0.216\,{t^\prime}^2)\times(1-t^\prime)^{0.448} \,, \label{e:s_vs_t} $$ where the coefficient of the ${t^\prime}^2$ term is the only remaining fitting parameter. Equation[ \[e:s\_vs\_t\]]{} is the solid line that fits the complete data set in Fig.[ \[f:s\_vs\_t\]]{}. As we can see, this empirical fitting formula provides an excellent fit. ![ Scaled translocation coordinate $s^\prime=s(t^\prime)/{\mathit{s}_\mathrm{\scriptscriptstyle 0}}$ as a function of scaled time $t^\prime=t/t{_\mathrm{\scriptstyle max}}$, where $t{_\mathrm{\scriptstyle max}}$ is the individual translocation time for each translocation event that was simulated. Eight curves (not shown) were obtained for $N=13,\,15,\,17,\,19,\,21,\,23,\,27,\,\mathrm{and}\; 31$ in the following way: for a given chain length initially placed halfway through the pore, (1) each of the translocation events gives a $s(t)$ curve that goes from $s(0)=1/2$ to $s(t{_\mathrm{\scriptstyle max}})=0$, (2) then each of these curves is rescaled in time using $t^\prime=t/t{_\mathrm{\scriptstyle max}}$, (3) and finally, the time-axis is discretized and all the curves for that given $N$ are averaged along the $y$-axis. Data points (circles) are the average of these eight curves, which are not shown since their scatter was of the order of the data point sizes.
The solid line that fits the universal curve represented by the complete data set is given by Eq.[ \[e:s\_vs\_t\]]{}. The inset presents the acceleration of the scaled translocation coordinate $\mathrm{d}^2 s^{\prime}/\mathrm{d} {t^\prime}^2$ obtained from Eq.[ \[e:s\_vs\_t\]]{}. []{data-label="f:s_vs_t"}](s_vs_t_4.pdf){width="\figwidth"} Figure[ \[f:s\_vs\_t\]]{} can be viewed as the percentage of the translocation process (in terms of the number of monomers that have yet to cross the membrane in the direction of the trans side) as a function of the percentage of the (final translocation) time elapsed since the beginning. The small shaded region in Fig.[ \[f:s\_vs\_t\]]{} represents the second *material* half (as opposed to *temporal* half) of the escape process ($s={\mathit{s}_\mathrm{\scriptscriptstyle 0}}/2$). However, this region covers only the last $\sim 12\%$ of the rescaled time axis; this clearly implies a strong acceleration of the chain at the end of its exit. The first $50\%$ of the monomer translocations take the first $\sim 88\%$ of the total translocation time. The inset in Fig.[ \[f:s\_vs\_t\]]{} emphasizes the fact that the translocation coordinate undergoes a strong acceleration at the late stage of the translocation process. This large acceleration of the translocation process is entropy-driven. At short times, the difference in size between the two subchains is small, and entropy is but a minor player. At the end of the process, however, this difference is very large and the corresponding gradient in conformational entropy drives the process, thus leading to a positive feedback mechanism. Translocation is then so fast that the subchains cannot relax fast enough and the quasi-equilibrium hypothesis fails. The trans-subchain is compressed because the monomers arrive faster than the rate at which this coil can expand.
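The quoted asymptotics and the "last ~12%" figure can be recovered directly from the interpolation formula. The short snippet below (our own check, not part of the original analysis) verifies that Eq. [e:s\_vs\_t] has slope $0.130-0.448=-0.318$ at $t^\prime=0$, that its late-time prefactor $1+0.130+0.216=1.346$ agrees with the separately fitted $1.31$ to within a few percent, and that $s^\prime=1/2$ is reached at $t^\prime\approx0.88$.

```python
# Empirical fit to the universal curve, Eq. (e:s_vs_t)
def s_fit(tp):
    return (1 + 0.130 * tp + 0.216 * tp ** 2) * (1 - tp) ** 0.448

# Short-time slope: d s'/d t' at 0 is 0.130 - 0.448 = -0.318 exactly,
# i.e. the formula reproduces the linear fit s' = 1 - 0.318 t'.
slope0 = (s_fit(1e-6) - s_fit(0.0)) / 1e-6

# Late-time prefactor: s'/(1 - t')^0.448 -> 1 + 0.130 + 0.216 = 1.346,
# within ~3% of the separately fitted 1.31.
prefactor = 1 + 0.130 + 0.216

# Locate the "material half" s' = 1/2 by bisection on the decreasing fit.
lo, hi = 0.0, 1.0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if s_fit(mid) > 0.5:
        lo = mid
    else:
        hi = mid
t_half = 0.5 * (lo + hi)   # ~0.88, leaving the last ~12% of the time axis
```

The bisection result confirms that half of the monomers cross during roughly the final eighth of the rescaled time.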
The ratio of ten between the translocation time and the relaxation time (for the polymer lengths and initial conditions that we have used) is too small because half of the translocation takes place in the last tenth of the event. Finally, the existence of a universal curve is a most interesting result. Clearly, our choice of rescaled variables has allowed us to find the fundamental mechanisms common to all translocation events. This universal curve is expected to be valid as long as the radius of gyration of the polymer chain is much larger than the pore size, and it demonstrates that our results are not due to finite size effects. Finally, we present in the Appendix an asymptotic derivation to explain the apparent short time linear scaling of the translocation coordinate. This demonstration is based on the fact that, in this particular limit, the motion is purely described by unbiased diffusion, a case for which we can carry out analytical calculations. The *R*$_\textrm{\textbf{g}}$(*t*) curve ======================================== Still more evidence that un-driven (no external field) translocation is not a quasi-equilibrium process is presented in Fig.[ \[f:Rg\_vs\_t\]]{}a where we show how the mean radius of gyration of the subchain located on the trans-side of the wall changes with (rescaled) time during the NR translocation process (as in the previous section, we have chosen the initial condition ${\mathit{s}_\mathrm{\scriptscriptstyle 0}}=1/2$ here). All the curves have approximately the same shape, i.e. an initial period during which the radius of gyration increases rather slowly, followed by an acceleration period that becomes very steep at the end. When these curves are rescaled by a three-dimensional Flory factor of $N^{3/5}$ (${\mathit{R}_\mathrm{\scriptstyle g}}^\prime(t^\prime) \equiv {\mathit{R}_\mathrm{\scriptstyle g}}(t^\prime)/N^{3/5}$, see Fig.[ \[f:Rg\_vs\_t\]]{}b), they all seem to fall approximately onto each other.
As we observed for the translocation coordinate $(s^\prime)$, the radius of gyration ${\mathit{R}_\mathrm{\scriptstyle g}}(t^\prime)$ experiences a noticeable acceleration at the end of the translocation process. Again, the shaded zone in Fig.[ \[f:Rg\_vs\_t\]]{} shows that the second half of the process occurs in the last $\sim 11\%$ of the translocation time. If we assume that Flory’s argument (${\mathit{R}_\mathrm{\scriptstyle g}}\sim N^{3/5}$) is valid during the complete translocation process, we must be able to *translate* the expression given by Eq.[ \[e:s\_vs\_t\]]{} in order to fit the increase of the radius of gyration presented in Fig.[ \[f:Rg\_vs\_t\]]{}b, i.e. we should have $$\label{e:Rg_vs_s} {\mathit{R}_\mathrm{\scriptstyle g}}^\prime (t^\prime) = b \times \left( 1 - \frac{s^\prime (t^\prime)}{2} \right)^{3/5} \,,$$ where $1 - s^\prime (t^\prime)/2$ represents the fraction of the chain that is on the trans-side at the time $t^\prime$ and $b$ is a length scale proportional to the Kuhn length of the chain. We used Eqs.[ \[e:s\_vs\_t\]]{} and [ \[e:Rg\_vs\_s\]]{} to fit the average of the eight ${\mathit{R}_\mathrm{\scriptstyle g}}^\prime (t^\prime)$ curves presented in Fig.[ \[f:Rg\_vs\_t\]]{}b and obtained $b=0.315$ (see the smooth curve). This one-parameter fit does a decent job until we reach about $80\%$ of the maximum time. However, it clearly underpredicts ${\mathit{R}_\mathrm{\scriptstyle g}}$ in the last stage of the translocation process, i.e. during the phase of strong acceleration discussed previously. This observation further confirms that the translocation process is out of equilibrium during that period. In fact, the failure of the three-dimensional Flory argument is also highlighted by the scaling of the radius of gyration at the end of the translocation process.
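The endpoint behavior of the Flory-based fit follows directly from Eqs. [e:s\_vs\_t] and [e:Rg\_vs\_s]: at $t^\prime=0$ half the chain is on the trans side, at $t^\prime=1$ all of it is, so the fitted curve predicts a growth factor of exactly $2^{3/5}\approx1.52$ in $R_g^\prime$. A quick check of this consequence of the two equations (our own snippet, using the fitted $b=0.315$):

```python
b = 0.315  # fitted prefactor, proportional to the Kuhn length

def s_fit(tp):                     # Eq. (e:s_vs_t)
    return (1 + 0.130 * tp + 0.216 * tp ** 2) * (1 - tp) ** 0.448

def rg_fit(tp):                    # Eq. (e:Rg_vs_s), Flory exponent 3/5
    return b * (1 - s_fit(tp) / 2) ** 0.6

# Endpoints: half the chain on the trans side at t'=0, the whole chain
# at t'=1, hence a predicted growth factor of exactly 2**(3/5).
growth = rg_fit(1.0) / rg_fit(0.0)
```

Since the measured late-time $R_g$ exceeds this fit, the actual growth factor during translocation is larger than the equilibrium-Flory prediction.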
Indeed, the third and fourth data sets presented in Fig.[ \[f:rg\]]{} have a slope that is around $0.73$, which is closer to the two-dimensional Flory scaling of ${\mathit{R}_\mathrm{\scriptstyle g}}\sim N^{3/4}$. ![ (a) Radius of gyration on the trans-side of the wall as a function of the scaled translocation time. Each simulation event is rescaled using the time $t{_\mathrm{\scriptstyle max}}$ it took to exit the channel ($t^\prime=t/t{_\mathrm{\scriptstyle max}}$). The scaled time is then always bounded between $0 \leq t^\prime \leq 1$. From bottom to top, the eight curves were obtained for $N=13,\,15,\,17,\,19,\,21,\,23,\,27,\,\mathrm{and}\;31$ by averaging ${\mathit{R}_\mathrm{\scriptstyle g}}(t^\prime)$ over hundreds of simulations (typically $\sim500$ runs). (b) Rescaling of the curves presented in part (a). Each radius of gyration curve was divided by $N^{3/5}$ to obtain the gray curves (${\mathit{R}_\mathrm{\scriptstyle g}}^\prime = {\mathit{R}_\mathrm{\scriptstyle g}}/N^{3/5}$). The smooth curve is given by Eq.[ \[e:Rg\_vs\_s\]]{} with a proportionality constant of $0.315$. The shaded region covers the last $11\%$ of the translocation time and begins at the mid-point of the average radius-of-gyration increase, i.e. at ${{\mathit{R}_\mathrm{\scriptstyle g}}^\prime}(t^\prime)\approx({{\mathit{R}_\mathrm{\scriptstyle g}}^\prime}(1)+{{\mathit{R}_\mathrm{\scriptstyle g}}^\prime}(0))/2$. []{data-label="f:Rg_vs_t"}](Rg_vs_t_v3.pdf){width="\figwidth"} Conclusion ========== In summary, we presented three different numerical results that contradict the hypothesis that polymer translocation is a quasi-equilibrium process in the case of unbiased polymer chains in the presence of hydrodynamic interactions. First, we reported a difference in translocation times that depends on the way the chain conformation is prepared, with relaxed chains translocating faster than chains that were in the process of translocating in the recent past.
Second, we saw that the lack of relaxation also leads to conformational differences (as measured by the radius-of-gyration ${\mathit{R}_\mathrm{\scriptstyle g}}$) between our two sets of simulations; in fact, translocating chains are highly compressed. Third, perhaps the strongest evidence is the presence of a large acceleration of both the translocation process (as measured by the translocation parameter $s$) and the growth of the radius of gyration: roughly half of the escape actually occurs during a time duration comparable to the relaxation time! The large difference between the mean relaxation and translocation times is not enough to ensure the validity of the quasi-equilibrium hypothesis under such an extreme situation. It is important to note, however, that a longer channel would increase the frictional effects (and hence the translocation times) while reducing the entropic forces on both sides of the wall; we thus expect the quasi-equilibrium hypothesis to be a better approximation in such cases. The curve presented in Fig.[ \[f:s\_vs\_t\]]{} is quite interesting. It demonstrates that the translocation coordinate is a highly nonlinear function of time. We proposed an empirical formula (Eq.[ \[e:s\_vs\_t\]]{}) to express the evolution of the translocation coordinate as a function of time (both in rescaled units) that provides an excellent fit to our simulation data. Based on Flory’s argument for a three-dimensional chain, we presented a second expression (Eq.[ \[e:Rg\_vs\_s\]]{}) of a similar form for the increase of the radius of gyration during the translocation process. However, this relationship is not valid for the complete translocation process, which is yet more evidence of the lack of equilibrium at the late stage of the chain escape.
Finally, going back to the question in the title of this article, we conclude that the chain shows some clear signs of not being in a quasi-equilibrium state during unforced translocation (especially at the end of the escape process). However, although the difference is as large as $25\%$ when we start with only 6 monomers on one side, we previously demonstrated [@Guillouzic2006; @GauthierMD2007] that this simulation setup gives the expected scaling laws. The latter observation is quite surprising and leads to a non-trivial question: why do scaling laws that were derived using a quasi-equilibrium hypothesis predict the proper dynamical exponents for chains that are clearly out of equilibrium during a non-negligible portion of their escape? Perhaps the impact of these non-equilibrium conformations during translocation would be larger for thicker walls or stiffer chains; this remains to be explored. Obviously, the presence of an external driving force, such as an electric field, would lead the system further away from equilibrium; we thus speculate that there is a critical field below which the quasi-equilibrium hypothesis remains approximately valid, but beyond which the current theoretical exponents may have to be revisited. This work was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (*NSERC*) to GWS and by scholarships from the *NSERC* and the University of Ottawa to MGG. The results presented in this paper were obtained using the computational resources of the SHARCNET and HPCVL networks. Derivation of the scaled translocation coordinate in the short time limit {#s:a1} ======================================================================== In the short time limit, the translocation variable diffuses normally from its initial position (the potential landscape is very flat, see Fig. 2 in Ref. [@Muthukumar1999]).
In such a case, entropic pulling can be neglected and the translocation problem is equivalent to an unbiased first-passage problem in which the displacement $x(t)$ from the initial position as a function of time grows following a Gaussian distribution. Consequently, if diffusion is normal, the scaled translocation coordinate is given by $$s^{\prime}(t) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2Dt\pi}} \exp \left ( \frac{-x^2}{2Dt} \right ) \; s^{\prime}(x) \; dx \,,$$ where $D$ is the diffusion coefficient and $s^\prime(x)$ is the scaled translocation coordinate of the chain when it has moved over a curvilinear distance $x(t)$. According to our definition of the scaled translocation coordinate $s^\prime$, the latter value is given by $$\begin{aligned} s^\prime(x) &=& \frac{1}{s_0} \left \{ \overbrace{\left ( \frac{1}{2} + \frac{\left | x \right | }{L} \right )}^{\mathrm{prob. \, exit \, same \, side}} \underbrace{\left ( \frac{1}{2} - \frac{\left | x \right | }{L} \right )}_{s(t)} \right . \;\;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ && \left . \;\;\;\;\;\;\; +\overbrace{\left ( \frac{1}{2} - \frac{\left | x \right | }{L} \right )}^{\mathrm{prob. \, exit \, other \, side}} \underbrace{\left ( \frac{1}{2} + \frac{\left | x \right | }{L} \right )}_{s(t)} \right \} \,,\end{aligned}$$ where $L$ is the total length of the chain. The first term is the probability to exit on the side where the chain is, times the corresponding translocation coordinate ($s<0.5$). The second term refers to chains that will eventually exit on the other side of the channel ($s>0.5$). Remember that we defined the translocation coordinate with respect to the side where the chain eventually exits the channel. Consequently, the probabilities used in the last equation are obtained from the solution of the one-dimensional first-passage problem of an unbiased random walker diffusing between two absorbing boundaries. The solution to this problem is explained in great detail in Ref.
[@Redner2001]; the only result of interest for us is that the probabilities to be absorbed by the two boundaries are given by $0.5 \pm x^\prime$, where $x^\prime$ is the fractional distance between the particle position and the midpoint between the two boundaries. Combining the last two equations gives (using $s_0=1/2$) $$\begin{aligned} s^{\prime}(t) &=& \int_{-\infty}^{\infty} \frac{1}{\sqrt{2Dt\pi}}\exp \left ( \frac{-x^2}{2Dt} \right ) \; \frac{1-4x^2/L^2}{2s_0} \; dx \nonumber \\ &=& \frac{1}{2s_0} \; \int_{-\infty}^{\infty} \frac{1}{\sqrt{2Dt\pi}}\exp \left (\frac{-x^2}{2Dt} \right ) \; dx \nonumber \\ & & - \frac{2}{s_0} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2Dt\pi}}\exp \left ( \frac{-x^2}{2Dt} \right ) \; \frac{x^2}{L^2} \; dx \nonumber \\ &=& 1 - \frac{4Dt}{L^2} \,,\end{aligned}$$ which predicts a linear decrease of the scaled translocation coordinate in the limit of very short time. One should note that the latter derivation is strictly for short times since entropic pulling will eventually bias the chain translocation process. Finally, we can test our derivation by comparing our result with the linear regression presented in Fig. 5. This gives $$\frac{0.318}{t_{\mathrm{max}}} = \frac{4D}{L^2} \,,$$ which is equivalent to $$2Dt_{\mathrm{max}} = (0.399L)^2\,.$$ This means that, if entropic effects are neglected, a chain would travel a distance approximately equal to 0.4 of its total length during a time equal to the observed translocation time. The fact that this result is smaller than $0.5L$ (the value corresponding to complete translocation) is an indication that entropy accelerates the escape of the chain. The slope of $-0.318$ indicates that translocation would be approximately 3 times slower if entropic effects were cancelled. Finally, one should bear in mind that our linear decrease prediction is for normal diffusion only. However, Chuang [[*[ et al.]{}*]{}]{} predicted an anomalous diffusion exponent of 0.92 [@Chuang2001].
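The Gaussian average performed in the derivation above can also be checked numerically: averaging $s^\prime(x)=1-4x^2/L^2$ (the $s_0=1/2$ case) over a Gaussian of variance $Dt$ must reproduce the closed form $1-4Dt/L^2$. The snippet below (our own verification, using a simple midpoint rule) does exactly that.

```python
import math

def avg_s_short_time(D, t, L, n=40000):
    """Numerically average s'(x) = 1 - 4 x^2 / L^2 over the Gaussian
    displacement distribution of variance D*t (midpoint rule over
    +/- 10 standard deviations)."""
    sigma = math.sqrt(D * t)
    a = -10 * sigma
    dx = 20 * sigma / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        # Gaussian weight of the displacement x at time t
        w = math.exp(-x * x / (2 * D * t)) / math.sqrt(2 * math.pi * D * t)
        total += w * (1 - 4 * x * x / L ** 2) * dx
    return total   # should equal 1 - 4*D*t/L**2
```

The only moments involved are the zeroth and second moments of the Gaussian, which is why the closed form is exact.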
It would not be possible to observe the effect of such a slightly subdiffusive regime with the precision of our data here. [^1]: E-mail: gauthier.michel@uOttawa.ca [^2]: E-mail: gary.slater@uOttawa.ca
--- abstract: | We cross-matched Wide-field Infrared Survey Explorer (WISE) sources brighter than 1 mJy at 12$\mu$m with the Sloan Digital Sky Survey (SDSS) galaxy spectroscopic catalog to produce a sample of $\sim10^5$ galaxies at $\langle z \rangle$=0.08, the largest of its kind. This sample is dominated (70%) by star-forming (SF) galaxies from the blue sequence, with total IR luminosities in the range $\sim10^{8}-10^{12}\,L_\odot$. We identify which stellar populations are responsible for most of the 12$\mu$m emission. We find that most ($\sim$80%) of the 12$\mu$m emission in SF galaxies is produced by stellar populations younger than 0.6 Gyr. In contrast, the 12$\mu$m emission in weak AGN ($L_{\rm[OIII]}<10^7\, L_\odot$) is produced by older stars, with ages of $\sim 1-3$ Gyr. We find that $L_{\rm 12\mu m}$ linearly correlates with stellar mass for SF galaxies. At fixed 12$\mu$m luminosity, weak AGN deviate toward higher masses since they tend to be hosted by massive, early-type galaxies with older stellar populations. Star-forming galaxies and weak AGN follow different $L_{\rm 12\mu m}$–SFR (star formation rate) relations, with weak AGN showing excess 12$\mu$m emission at low SFR ($0.02-1\,M_\odot$yr$^{-1}$). This is likely due to dust grains heated by older stars. While the specific star formation rate (SSFR) of SF galaxies is nearly constant, the SSFR of weak AGN decreases by $\sim$3 orders of magnitude, reflecting the very different star formation efficiencies between SF galaxies and massive, early-type galaxies. Stronger type II AGN in our sample ($L_{\rm[OIII]}>10^7\,L_\odot$) act as an extension of massive SF galaxies, connecting the SF and weak AGN sequences. This suggests a picture where galaxies form stars normally until an AGN (possibly after a starburst episode) starts to gradually quench the SF activity. We also find that 4.6–12$\mu$m color is a useful first-order indicator of SF activity in a galaxy when no other data are available. author: - 'E.
Donoso$^1$, Lin Yan$^1$, C. Tsai$^1$, P. Eisenhardt$^2$, D. Stern$^2$, R. J. Assef$^{2,6}$, D. Leisawitz$^3$, T. H. Jarrett$^5$, S. A. Stanford$^4$' title: 'Origin of 12$\mu$m Emission Across Galaxy Populations from WISE and SDSS Surveys' --- Introduction ============ A detailed picture of the present day galaxy populations, their evolution and emission properties across different wavelengths is still far from complete. Surveys like the Sloan Digital Sky Survey (SDSS; @york) have collected large amounts of information in the optical regime, while 2MASS (@skrutskie) and the IRAS mission (@neugebauer) have provided valuable, albeit relatively shallow, data sets from $J$ band up to 100 $\mu$m. More recently, the Wide-field Infrared Survey Explorer (WISE; @wright) has completed mapping the whole sky in the mid and far infrared, at sensitivities much deeper than any other large-scale infrared survey. For example, WISE is about 100 times more sensitive at 12$\mu$m than IRAS. While our understanding of the optical and far-IR properties of galaxies (long-ward of 24$\mu$m) has grown steadily, thanks mainly to Spitzer and Herschel, the spectral region between 10-15 $\mu$m remains comparatively unexplored. In normal galaxies, the light at the redder optical bands and in the $J$, $H$ and $K$ near-IR bands is closely tied to the total mass of the galaxy, as it is dominated by the red population of older stars. At wavelengths longer than $\sim$8 $\mu$m, emission from dust heated by younger stars becomes increasingly relevant and begins to trace the star formation rate (SFR). The rate at which galaxies transform gas into stars is one of the most fundamental diagnostics that describes the evolution of galaxies. It is of major importance to find which physical parameter(s) drive changes in the SFR. As dust-reprocessed light from young stars is re-emitted mainly in the far-infrared (FIR) regime, the FIR luminosity is one of the best tracers of star formation activity (@kennicutt98).
It is well known that commonly used SFR indicators, such as the UV continuum and nebular emission line fluxes, sometimes require substantial corrections for dust extinction. Furthermore, these corrections are highly uncertain and difficult to derive. For this reason, integrated estimators based on the total infrared (IR) luminosity, either alone or in combination with the ultraviolet luminosity (e.g. @heckman), and monochromatic estimators based mainly on 24 $\mu$m fluxes, alone or in combination with H$\alpha$ luminosity (e.g. @wu; @alonsoh; @calzetti07; @zhu; @rieke; @kennicutt09), are increasingly being considered as reliable star formation indicators for normal and dusty star-forming galaxies. The use of any IR flux as a SFR indicator relies on the assumption that the IR continuum emission is due to warm dust grains heated by young stars. However there is also a contribution to dust reprocessed emission by old stellar populations, associated more with the interstellar medium around evolved stars than with recently born stars. In addition, some fraction of the IR luminosity may be attributed to active galactic nuclei (AGN), if present (in Section \[sec:agn\_effect\] we find AGN emission is of minor importance in most of our sample). The exact contribution of each component is difficult to estimate without detailed spectral analysis. @charyelbaz and @rieke found correlations between the 12$\mu$m luminosity and the total IR luminosity for small samples of nearby galaxies. @duc found that sources in Abell 1689 with high ISO mid-IR color index \[15 $\mu$m\]/\[6.75 $\mu$m\] are mostly blue, actively star forming galaxies, while low mid-IR flux ratios correspond to passive early-type systems. They suggest that 15 $\mu$m emission is a reliable indicator of obscured star formation. Similarly, shorter wavelength mid-IR emission such as WISE 12$\mu$m is expected to be a practical tracer of star formation activity.
An important caveat is that far-infrared and/or radio measurements are only available for a small fraction of known galaxies. Early work by @spinoglio used IRAS all-sky data at 12$\mu$m to select unbiased samples of active galaxies with fluxes above 220 mJy and to study their luminosity function. Deep pencil-beam surveys have complemented these samples with high redshift data. @seymour conducted a 12$\mu$m survey of the ESO-Sculptor field (700 arcmin$^2$) with the ISO satellite down to 0.24 mJy. @roccav used it to model mid-IR galaxy counts, revealing a population of dusty, massive ellipticals in ultra luminous infrared galaxies (ULIRGS). @teplitz performed imaging of the GOODS fields (150 arcmin$^2$) at 16 $\mu$m with the Spitzer spectrometer, finding that $\sim$15% of objects are potentially AGN at their depth of 40-65 $\mu$Jy. These surveys illustrate the necessary tradeoff between depth and area covered, potentially limiting the statistical significance of results due to cosmic variance. WISE provides the data to significantly improve the situation. Our sample of $\sim$10$^5$ galaxies (see below) is over 200 times more sensitive than IRAS-based surveys and covers an area $\sim$370 times larger than the GOODS samples. In this work we explore the physical properties of 12$\mu$m-selected galaxies in the local Universe, using a large sample of star forming galaxies and AGN with available redshifts and emission line measurements from the SDSS. This is by far the largest 12$\mu$m sample to date, and we use it to trace the origin of IR emission across different galaxy populations and to investigate how IR emission relates to stellar mass. We also explore using 12$\mu$m luminosity as a proxy for SFR to distinguish intense starburst activity from quiescent star formation. Since we employ the SDSS spectroscopic catalog, our results apply to relatively bright galaxies at low redshift.
WISE certainly recovers other populations of galaxies, ranging from low metallicity blue compact dwarf galaxies at very low redshift (@griffith) to highly obscured sources at high redshift (@eisenhardt). @lake shows that WISE detects $L^*$ galaxies out to $z\sim0.8$ in the 3.4$\mu$m band, while @stern2011 shows that WISE is a very capable AGN finder, sensitive to both obscured and unobscured QSOs. A companion paper by @yan analyzes more diverse galaxy populations observed by WISE and SDSS (including deeper photometric SDSS objects), while here we focus on 12$\mu$m-selected sources with available spectra. This paper is organized as follows. In Section 2 we describe the surveys used in this work as well as explain the construction of our joint WISE-SDSS sample. In Section 3 we characterize the galaxy populations and present the results on the analysis of the mid-IR emission and SFR. Finally, Section 4 summarizes our results and discusses the implications of this work. Throughout the paper we assume a flat $\Lambda$CDM cosmology, with $\Omega_{m}=0.3$ and $\Omega_{\Lambda}=0.7$. We adopt $H_{0}$=70 km s$^{-1}$ Mpc$^{-1}$. Data ==== The Wide-field Infrared Survey Explorer Catalog ----------------------------------------------- WISE has mapped the full sky in four bands centered at 3.4, 4.6, 12 and 22 $\mu$m, achieving 5$\sigma$ point source sensitivities better than 0.08, 0.11, 1 and 6 mJy, respectively. Every part of the sky has been observed typically around 10 times, except near the ecliptic poles where the coverage is much higher. Astrometric precision is better than 0.15$^{\prime\prime}$ for high S/N sources (@jarrett) and the angular resolution is 6.1, 6.4, 6.5 and 12$^{\prime\prime}$ for bands ranging from 3 $\mu$m to 22 $\mu$m. This paper is based on data from the WISE Preliminary Release 1 (April 2011), which comprises an image atlas and a catalog of over 257 million sources from 57% of the sky. 
An object is included in this catalog if it: (1) is detected with SNR$\geq$7 in at least one of the four bands, (2) is detected on at least five independent single-exposure images in at least one band, (3) has SNR$\geq$3 in at least 40% of its single-exposure images in one or more bands, (4) is not flagged as a spurious artifact in at least one band. We refer the reader to the WISE Preliminary Release Explanatory Supplement for further details[^1] (@cutri). The MPA-JHU Sloan Digital Sky Survey Catalog -------------------------------------------- The Sloan Digital Sky Survey (@york; @stoughton) is a five-band photometric ($ugriz$ bands) and spectroscopic survey that has mapped a quarter of the sky, providing photometry, spectra and redshifts for about a million galaxies and quasars. The MPA-JHU catalog (@brinchman, hereafter B04) is a value-added catalog based on data from the Seventh Data Release (DR7, @abazajian) of the SDSS[^2]. It consists of almost $10^6$ galaxies with spectra reprocessed by the MPA-JHU team, for which physical properties based on detailed emission line analysis are readily available. Here we give a brief description of the catalog and the methodology employed to estimate SFRs. We refer the reader to the original papers for an in-depth discussion. The MPA-JHU catalog classifies galaxies according to their emission lines, given the position they occupy in the BPT (@bpt) diagram that plots the \[OIII\] $\lambda$5007Å/H$\beta$ versus \[NII\] $\lambda$6584Å/H$\alpha$ flux ratios. This separates galaxies with different ionizing sources as they populate separate sequences on the BPT diagram. In most galaxies, normal star formation can account for the flux ratios. However, in some cases an extra source such as an AGN is required. 
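A BPT-style classification of the kind described above is commonly implemented with the demarcation curves of Kauffmann et al. (2003) and Kewley et al. (2001). The sketch below uses those standard curves for illustration; the actual B04 classification also folds in S/N cuts and additional subclasses (e.g. Low S/N SF, LINER), which are omitted here.

```python
import math

def bpt_class(nii_ha, oiii_hb):
    """Classify a galaxy from its BPT line ratios.

    Illustrative sketch using the Kauffmann et al. (2003) and Kewley et
    al. (2001) demarcation curves; B04's procedure differs in detail.
    """
    x = math.log10(nii_ha)   # log10([NII]6584 / H-alpha)
    y = math.log10(oiii_hb)  # log10([OIII]5007 / H-beta)
    if x < 0.05 and y < 0.61 / (x - 0.05) + 1.3:
        return "SF"           # below the Kauffmann empirical SF curve
    if x < 0.47 and y < 0.61 / (x - 0.47) + 1.19:
        return "composite"    # between the Kauffmann and Kewley curves
    return "AGN"              # above the Kewley maximum-starburst line
```

For example, a galaxy with $\log(\mathrm{[NII]/H\alpha})=-0.5$ and $\log(\mathrm{[OIII]/H\beta})=-0.3$ falls in the star-forming branch.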
In this paper we follow this BPT classification to distinguish between: **(i) star-forming galaxies** (class SF and LOW S/N SF from B04), **(ii) active galactic nuclei** (class AGN and Low S/N LINER from B04), and **(iii) composite systems** that present signatures of the two previous types (class C from B04). Note that broad-lined AGN like quasars and Seyfert 1 galaxies are not included in the sample, as they were targeted by different criteria by the SDSS. Star formation rates are derived by different prescriptions depending on the galaxy type. The methodology adopted by B04 is based on modeling emission lines using @bruzual93 models along with the CLOUDY photoionization model (@ferland) and spectral evolution models from @charlot08 to subtract the stellar continuum. To correct lines for dust attenuation, MPA-JHU adopts the multicomponent dust model of @charlot_fall, where discrete dust clouds are assumed to have finite lifetime and a realistic spatial distribution. This approach produces SFRs that take full advantage of all modeled emission lines. For AGN and composite galaxies in the sample, SFRs were estimated by the relationship between the D$_{4000}$ spectral index and the specific SFR (SFR/$M_{*}$, or SSFR), as calibrated for star forming galaxies (see Fig. 11 in B04). These estimates have been corrected in the latest MPA-JHU DR7 release by using improved dust attenuations and improved aperture corrections for non-SF galaxies following the work by @salim07. Gas-phase oxygen abundances, 12+log(O/H), are available for star forming galaxies as calculated by @tremonti. Throughout this paper we adopt the spectral line measurements as well as estimates of SFR, metallicity and dust extinction given by the MPA-JHU catalog. The SDSS pipeline calculates several kinds of magnitudes. In this work we have adopted modified Petrosian magnitudes for flux measurements, which capture a constant fraction of the total light independent of position and distance.
When appropriate, we have also used model magnitudes (*modelMag*) as they provide the most reliable estimates of galaxy colors. Magnitudes are corrected for galactic reddening using the dust maps of @schlegel.

The Joint WISE-SDSS Galaxy Sample
---------------------------------

We have crossmatched data from WISE and the MPA-JHU catalog to construct a galaxy sample covering an effective area of 2344 deg$^2$, or 29% of the DR7 (legacy) spectroscopic footprint. WISE sources were selected to have 12$\mu$m fluxes above 1 mJy, also requiring objects to have clean photometry at 3.4, 4.6 and 12$\mu$m. For MPA-JHU sources, we selected objects with de-reddened Petrosian magnitude $r_{\rm petro}<17.7$ and $r$-band surface brightness $\mu_{r}<23$ mag arcsec$^{-2}$. This selects a conservative version of the SDSS main galaxy sample (see @strauss). Using a matching radius of 3$^{\prime\prime}$ we find 96,217 WISE objects with single optical matches (40% of the SDSS sample in the intersection area), and 73 sources with two or more counterparts. The latter are mostly large extended sources or close interacting systems of two members. For the rest of this paper we will focus on the single IR-optical matches, which constitute the vast majority ($>$99.9%) of the galaxy population. By using random catalogs generated over the effective area, we estimate the expected false detection fraction at 3$^{\prime\prime}$ to be 0.05%. Each region of the sky has been observed at least 10 times by WISE, with the number of observations increasing substantially toward the ecliptic poles. Within our effective area, the median coverage depth at 12$\mu$m is about 13, typically varying between 10 and 20. At these levels, the average completeness of the sample is expected to be over 90%, as shown in the WISE Preliminary Release Explanatory Supplement (Sec. 6.6).
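The positional matching step can be sketched as a nearest-neighbour search within the 3$^{\prime\prime}$ radius. The following is an illustrative flat-sky implementation (valid at arcsecond separations); production work would typically use astropy's `match_to_catalog_sky` instead.

```python
import numpy as np

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=3.0):
    """Match each source in catalog 1 (ra1, dec1, in degrees) to its
    nearest neighbour in catalog 2, returning the catalog-2 index or
    -1 when no counterpart lies within radius_arcsec."""
    ra1, dec1 = np.atleast_1d(ra1), np.atleast_1d(dec1)
    ra2, dec2 = np.atleast_1d(ra2), np.atleast_1d(dec2)
    matches = np.full(len(ra1), -1, dtype=int)
    for i in range(len(ra1)):
        # small-angle separation in arcsec, with cos(dec) correction on RA
        dra = (ra2 - ra1[i]) * np.cos(np.radians(dec1[i]))
        ddec = dec2 - dec1[i]
        sep = np.hypot(dra, ddec) * 3600.0
        j = int(np.argmin(sep))
        if sep[j] <= radius_arcsec:
            matches[i] = j
    return matches
```

The false-match rate quoted in the text can be estimated by running the same routine against catalogs of randomly placed positions over the effective area.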
Analysis and Results
====================

Derived Properties
------------------

We derived stellar masses for all galaxies using the *kcorrect* algorithm (@blanton), which fits a linear combination of spectral templates to the flux measurements for each galaxy. These templates are based on a set of @bruzual03 models that span a wide range of star formation histories, metallicities and dust extinction. This algorithm yields stellar masses that differ by less than 0.1 dex on average from estimates using other methods (for example, the method based on fitting the 4000[Å]{} break strength and $H\delta$ absorption index proposed by @kauff03). To derive restframe colors and monochromatic luminosities in the infrared we used the fitting code and templates[^3] of @assef, applied to our combined *ugriz* photometry plus WISE 3.4 $\mu$m, 4.6 $\mu$m and 12$\mu$m fluxes. @assef present a set of low-resolution empirical spectral energy distribution (SED) templates for galaxies and AGN spanning the wavelength range from 0.03 $\mu$m to 30 $\mu$m, constructed with data from the NOAO Deep Wide-Field Survey Boötes field (NDWFS, @jannuzi) and the AGN and Galaxy Evolution Survey (AGES, @kochanek). The code fits three galaxy SED templates that represent an old stellar population (elliptical), a continuously star-forming population (spiral) and a starburst component (irregular), plus an AGN SED template with variable reddening and IGM absorption. These templates have been successfully used to test the reliability of IRAC AGN selection, and to predict the color-color distribution of WISE sources (@assef). We also use these templates to assess the relative contribution of AGN to the energy budget. Instead of trying to derive a new calibration of the SFR in the IR, we take the approach of comparing IR luminosities directly to optical dust-corrected SFRs. This makes our results largely independent of any particular SFR calibration.
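At their core, both fitting steps above (the *kcorrect* stellar masses and the @assef SED decomposition) reduce to solving a weighted linear system for the template coefficients. The following is a simplified sketch of that step only, ignoring the non-negativity constraints, template redshifting and AGN reddening that the real codes handle:

```python
import numpy as np

def fit_templates(fluxes, errors, templates):
    """Weighted linear least-squares fit of observed broadband fluxes
    as a linear combination of template fluxes.

    fluxes, errors : (n_bands,) observed fluxes and 1-sigma errors
    templates      : (n_templates, n_bands) template fluxes, same bands
    Returns the best-fit coefficients and the resulting chi^2."""
    A = (templates / errors).T          # (n_bands, n_templates), weighted
    b = fluxes / errors
    coeffs, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    chi2 = float(np.sum((A @ coeffs - b) ** 2))
    return coeffs, chi2
```

Given the fitted coefficients, the fractional contribution of any one template (e.g. the AGN component) to the model flux in a given band follows by multiplying each coefficient by that template's flux in the band and normalizing.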
Note that all optical SFRs used in this paper have been corrected for dust extinction.

General Properties of 12$\mu$m Galaxies
---------------------------------------

We begin our analysis by exploring the general properties of the WISE 12$\mu$m-selected galaxy sample. The sample is composed of a mixture of 70% SF galaxies, 15% AGN, 12% composite galaxies and 3% galaxies lacking BPT classification due to the absence of detectable lines in the spectra (most lack H$\alpha$ emission). The composition of the MPA-JHU optical sample is 44% SF, 12% AGN, 6% composite, 37% unclassifiable, which means that the 12$\mu$m selection is highly efficient in recovering SF systems and avoids objects with weak or no emission lines. In terms of the total optical galaxy populations, 61% (SF), 53% (AGN) and 76% (composite) of the SDSS galaxies have 12$\mu$m flux densities above 1 mJy. In Figure \[fig:dist\_lumw3\] (top row) we plot the distribution of redshift, stellar mass, D$_{4000}$ index and restframe $u-r$ color for SF galaxies, AGN and composite systems, as well as for all three classes together. The majority of WISE-SDSS 12$\mu$m sources are SF galaxies at $\langle z\rangle=0.08$ with stellar masses of $\sim 10^{10.2}$M$_{\odot}$; these are typical values for the SF class. They clearly populate the blue peak of the galaxy bimodal distribution around $(u-r)_{0}=1.6$ and have inherently young stellar populations (D$_{4000}\sim$1.3, or ages of $\sim$0.5 Gyr). AGN are, as expected, comparatively more massive (M$^{*}\sim10^{10.7}$M$_{\odot}$), redder ($(u-r)_{0}\sim 2.1$) and older ($\sim$1-6 Gyr), dominating the massive end of the 12$\mu$m galaxy distribution. As a population, AGN do not differ significantly (in terms of these properties) from the corresponding purely optical sample. Composite galaxies present intermediate properties between the SF and AGN samples.
Note that the bulk of galaxies lacking BPT classification (due to weak or absent emission lines) is missed in our IR-optical sample. These objects primarily populate the red sequence of the galaxy distribution (e.g. @baldry) and hence are not expected to be prominent at 12$\mu$m. We then divide the sample into three subsamples of monochromatic infrared luminosity: faint ($L_{\rm 12\mu m}<10^{9.2}L_{\odot}$), intermediate ($10^{9.2}L_{\odot}<L_{\rm 12\mu m}<10^{10}L_{\odot}$), and bright ($L_{\rm 12\mu m}>10^{10}L_{\odot}$) sources. There are no large biases with redshift, i.e. the different classes are sampled roughly without preference at all luminosities. Both SF galaxies and AGN become more massive at higher IR luminosities, and we have checked that this also holds in narrow redshift slices. As we will show below, this is due to the coupling between the IR and the optical emission. Interestingly, the 12$\mu$m SF galaxies change by 0.8 dex in mass toward high 12$\mu$m luminosities while keeping the same color and stellar content. The AGN population, while getting slightly more massive, becomes bluer and dominated by younger stars as IR luminosity increases. Figure \[fig:zlumw3\] shows the redshift distribution of the restframe 12$\mu$m luminosity for our sample. It can be seen that while high luminosity sources naturally lie at higher redshifts, the redshift distribution of the different classes is very similar, except at the lowest redshifts ($z<0.02$), where very few AGN/composite galaxies are observed. The absence of red sequence galaxies in our sample is not surprising, but there are also a number of SF galaxies without IR emission due to our flux limit. Figure \[fig:dist\_sf\] shows the mass, redshift and colors of SF galaxies with and without 12$\mu$m emission, as well as for the entire SF optical sample.
On average, SF galaxies not present in our sample have bluer colors, lie at lower redshifts, and have stellar masses around $10^{9.3}$M$_{\odot}$, roughly an order of magnitude below the 12$\mu$m SF galaxy sample. At this level, galaxies have very little dust mass, and hence cannot re-radiate much in the IR. The main result here is that WISE 12$\mu$m-selected galaxies are primarily typical blue sequence (SF) galaxies. It is safe to assume that the majority of blue sequence galaxies correspond to late morphological types (e.g. @strateva; @shimasaku; @baldry). AGN and composite objects are also represented, belonging either to the red sequence or to a transitional regime between these two classes. It is interesting that the detection efficiency is largest for composite systems, which was also found by @salim09 for 24 $\mu$m sources that lie in the so-called *green valley* (e.g. @martin). This is a region located between the red and blue cloud sequences, best identified in the NUV-r color-magnitude diagram, where SF activity is being actively quenched and galaxies are thought to be in a transitional stage in their migration from the blue to the red sequence. Most galaxies in our sample are either normal luminosity IR galaxies ($L_{\rm 12\mu m}\sim10^{9.2-10}$L$_{\odot}$; 60%), low luminosity IR galaxies ($L_{\rm 12\mu m}<10^{9.2}$L$_{\odot}$; 31%) or luminous infrared galaxies (LIRGs; $L_{\rm 12\mu m}\sim10^{10-10.8}$L$_{\odot}$; 9%). However, ULIRGs are also present. There are 114 objects with $L_{\rm 12\mu m}>10^{10.8}$L$_{\odot}$ (roughly equivalent to $L_{\rm TIR}>10^{12}$L$_{\odot}$ using the conversion of @charyelbaz), corresponding to a surface density of 0.049 deg$^{-2}$. This is comparable to the 0.041 deg$^{-2}$ surface density found by @hou. These ULIRGs naturally lie at higher redshift ($\langle z \rangle=0.2$) and populate the massive end of the SF sequence above $\sim 10^{11}$M$_{\odot}$.
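The quoted ULIRG surface density follows directly from the counts and the effective survey area:

```python
# Arithmetic behind the ULIRG surface density quoted above.
n_ulirg = 114        # sources with L_12um > 10^10.8 L_sun
area_deg2 = 2344.0   # effective WISE-SDSS overlap area in deg^2
density = n_ulirg / area_deg2   # ~0.049 deg^-2, as stated in the text
```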
We reiterate that these results come from matching WISE to the relatively bright SDSS spectroscopic sample. WISE galaxy populations down to $r\sim$22.6 are analyzed in @yan. 12$\mu$m Luminosity and Stellar Mass {#sec:lumass} ------------------------------------ We now have a large sample of 12$\mu$m-selected galaxies that range from low-IR to ULIRG luminosities, for which high quality optical spectra and dust-corrected optical SFRs are available. First we examine the relation between $L_{\rm 12\mu m}$ and stellar mass. B04 have shown that, at least for star forming systems, SFR and stellar mass are strongly correlated in the local universe. There is also evidence that this relationship evolves with redshift (@noeske, @daddi). Although it is expected that more massive galaxies are naturally more luminous, it is unclear whether more massive systems would have more dust emission in the mid-IR. Figure  \[fig:mass\_lumw3\_sptype\] shows the correlation between monochromatic 12$\mu$m luminosity and stellar mass for our sample. The correlation is tight for SF systems over nearly three orders of magnitude in stellar mass (gray points, top panels). Several studies have found that the distributions of \[OIII\] emission line flux to the AGN continuum flux at X-ray, mid/far-IR and radio wavelengths (i.e. where stellar emission and absorption by the torus are least significant) are very similar for both type I and type II AGN (@mulchaey; @keel; @alonsoh97). Based on this, @kauff03agn and @heckman04 have shown that \[OIII\] flux is a reliable estimator of AGN activity. Following these works, we split the AGN sample by \[OIII\] luminosity. We see that strong AGN ($L_{\rm [OIII]}>10^{7}$L$_{\odot}$) have IR luminosities considerably larger than weak AGN ($L_{\rm [OIII]}<10^{7}$L$_{\odot}$), following approximately the same relationship with mass as SF systems. Weak AGN, instead, lie well below that correlation and show a larger scatter. 
We note that the contribution by star forming regions to the \[OIII\] flux is $<$7% (@kauff03agn). In the bottom right panel of Figure \[fig:mass\_lumw3\_sptype\] we compare the ages of stars in all subsamples as derived from the D$_{4000}$ spectral index. SF galaxies have the youngest stellar populations, peaking at $\sim$0.5 Gyr, followed by considerably older composite galaxies ($\sim$1.5-2 Gyr). Strong AGN, which dominate the massive end, have intermediate stellar populations ($\sim$1.5 Gyr) that are closer to SF/Composite systems than to weak AGN (see Fig. \[fig:dist\_lumw3\]). In contrast, weak AGN tend to be hosted by early-type galaxies with significantly older stars ($\sim$3 Gyr), as also found by @kauff03agn after comparing AGN host sizes, surface densities and concentration ratios with those of normal early-type galaxies. This highlights the importance that young/old stars have in powering the 12$\mu$m emission. For AGN of roughly similar stellar mass, only when younger stars begin to dominate (and the active nucleus becomes more powerful) is the IR emission comparable to actively star-forming systems. Qualitatively, the same result holds if we use the dust-corrected $r$-band absolute magnitude instead of stellar mass. Since galaxies of different ages have very different IR output, an interesting question to address is how the 12$\mu$m luminosity budget depends upon the age of the stellar populations. Figure \[fig:lumage\] shows the cumulative fraction of the *integrated* 12$\mu$m luminosity produced in galaxies of a given age. In SF galaxies, $\sim$80% of the total IR luminosity is produced by galaxies younger than 0.6 Gyr. Composite galaxies and strong AGN reach the same fraction at ages of 1.5 Gyr and 2 Gyr, respectively. In weak AGN, instead, most of the mid-IR emission is produced within a range of $\sim$1-3 Gyr.
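The cumulative luminosity fraction plotted in this figure can be computed as follows (a sketch with illustrative inputs; `ages` would come from the D$_{4000}$-based age estimates and `luminosities` from the 12$\mu$m photometry):

```python
import numpy as np

def cumulative_luminosity_fraction(ages, luminosities):
    """Cumulative fraction of the integrated luminosity contributed by
    galaxies up to a given age. Returns the ages sorted ascending and
    the running luminosity fraction at each age."""
    order = np.argsort(ages)
    lum_sorted = np.asarray(luminosities, dtype=float)[order]
    frac = np.cumsum(lum_sorted) / lum_sorted.sum()
    return np.asarray(ages, dtype=float)[order], frac
```

The age at which a subsample reaches, e.g., 80% of its integrated luminosity is then simply the first sorted age where the running fraction crosses 0.8.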
This inventory of 12$\mu$m luminosity in the local universe shows where the bulk of the IR emission resides, shifting from stellar populations of a few hundred Myr in actively SF galaxies to a few Gyr in galaxies hosting weak AGN. Thus, it underlines again the important role that young/old stars have in powering 12$\mu$m emission. As we will see later in Section \[sec:ssfr\], this also supports the idea that transition galaxies (i.e. composite/strong AGN) form a smooth sequence that joins highly active galaxies with quiescent galaxies. 12$\mu$m Luminosity and Star Formation Rate ------------------------------------------- We now explore the relationship between infrared luminosity and optical, dust-corrected SFR. Figure \[fig:ltir\_sfr\_d4000\] shows $L_{\rm 12\mu m}$ versus $SFR_{\rm opt}$, color-coded by the D$_{4000}$ spectral index. As discussed by @kauff03, the D$_{4000}$ index is a good indicator of the mean age of the stellar population in a galaxy. The dashed line indicates the reference relation of @kennicutt98, as given by @charyelbaz in terms of 12$\mu$m luminosity, to convert IR luminosity into “instantaneous” SFR. It was derived from simple theoretical models of stellar populations with ages 10-100 Myr without considering factors like metallicity or more complex star formation histories. While this makes it strictly valid only for young starbursts, the Kennicutt relation is quite often applied to the more general population of star forming galaxies. Figure \[fig:ltir\_sfr\_d4000\] shows that the IR emission from SF galaxies (left panel) correlates fairly well with $SFR_{\rm opt}$. The correlation is tighter for high SFRs becoming broader and slightly asymmetric for low SFRs. 
A least-squares fit to the SF sample is given by $$\log L^{\rm SF}_{\rm 12\mu m}=(0.987\pm0.002)\log SFR_{\rm opt}+(8.962\pm0.003).$$ This expression is close to the @charyelbaz relation at high SFRs, which is not surprising given that the relation was calibrated using the IRAS Bright Galaxy Sample (@soifer), i.e. luminous galaxies with $L_{\rm 12\mu m}>10^{9}$L$_{\odot}$. The small differences are likely attributable to luminosity/redshift selection effects and the slight differences between the ISO and WISE $12\mu$m filters. More importantly, the agreement is quite good considering the different origin (FIR vs optical emission lines) of the SFRs. Relative to the @charyelbaz conversion, $L_{\rm 12\mu m}$ is comparatively suppressed by a factor $>1.6$ for $SFR_{\rm opt}$ below $\sim$0.1M$_{\odot}$yr$^{-1}$. This is likely because low SFR systems have very low stellar masses ($<10^9$M$_{\odot}$), and therefore become more transparent due to the increasing fraction of stellar light that escapes unabsorbed by dust. We note, however, that the spatial distribution of dust in HII regions and/or molecular clouds could also have an influence (e.g. @leisawitz). In addition, we have repeated the test for the bluest galaxies in the SF galaxy class, obtaining no significant differences. This suggests that other effects like metallicity could also be relevant. For AGN, the coupling between optical SFR and IR luminosity follows a different relationship (right panel of Figure \[fig:ltir\_sfr\_d4000\]). Most AGN lie in a broader distribution *above* the instantaneous conversion of Kennicutt, particularly those with $SFR_{\rm opt}$ below $\sim$1M$_{\odot}$yr$^{-1}$. For a fixed SFR, the IR emission is higher by a factor of several relative to SF galaxies, suggesting that $L_{\rm 12\mu m}$ is not driven by the current SF for most AGN. Weak AGN are predominantly associated with massive, early-type galaxies increasingly dominated by old stars at low SFRs ($\sim$0.1M$_{\odot}$yr$^{-1}$).
Given their red optical colors, it is unlikely that recent SF could be responsible for their IR emission. More likely, dust grains heated by older stars or an AGN are driving this emission. Note, however, that in Section \[sec:agn\_effect\] we will show that the contribution of AGN at 12$\mu$m is of minor importance for most AGN, except perhaps for the most powerful ones. Only strong AGN, which are dominated by intermediate-to-young stellar populations, tend to occupy a region similar to SF galaxies in Figure \[fig:ltir\_sfr\_d4000\]. They show a clear “excess” in $L_{\rm 12\mu m}$ at $SFR_{\rm opt}\sim$0.5M$_{\odot}$yr$^{-1}$ that gradually decreases as stars get younger toward higher SFRs. This shows that the age of the stars in an AGN host is an important factor in determining the origin of the 12$\mu$m emission. An expression fitting AGN (weak and strong) is given by $$\log L^{\rm AGN}_{\rm 12\mu m}=(0.582\pm0.004)\log SFR_{\rm opt}+(9.477\pm0.002).$$ Recent work by @salim09 compared NUV/optical SFRs with $L_{\rm TIR}$ calibrated from 24 $\mu$m fluxes for red and green sequence objects at $z\sim0.7$ (corresponding to restframe 14$\mu$m, close to the WISE 12$\mu$m band). They find a large excess of IR emission for a given SFR, attributed mainly to older stellar populations, and to a lesser extent to an AGN. We find broadly consistent results, but for 12$\mu$m-selected AGN sources with optical SFRs. @kelson have suggested that thermally pulsating AGB carbon stars (TP-AGB) with ages of 0.2-1.5 Gyr (corresponding to D$_{4000}\sim$1.2-1.5) can also contribute significantly to the mid-IR flux. As seen in Section \[sec:lumass\], this does not seem to be important for our much older, weak AGN, and is perhaps marginally relevant in strong AGN, which have typical stellar ages slightly above the upper 1.5 Gyr limit.
The case of SF galaxies is interesting because most of the 12 $\mu$m luminosity seems to originate from galaxies in the $\sim$0.3-0.6 Gyr age range, and this luminosity correlates relatively well with the optical SFR. While the age ranges seem to overlap, it is difficult to prove whether TP-AGB stars dominate the emission or not. Further SED decomposition and modeling of stellar populations is required to find the fraction of mid-IR luminosity powered by TP-AGB stars, a task that is potentially complicated by metallicity effects and uncertainties in the ensemble colors of such stars. However, the general picture is consistent with previous results (@salim09; @kelson) that find evidence for the mid-IR being sensitive to star formation over relatively long ($>$1.5 Gyr) timescales. Finally, we consider composite systems, which present considerable SF activity along with spectral signatures of an AGN (middle panel of Figure \[fig:ltir\_sfr\_d4000\]). By definition, these objects have up to 40% (see B04) of their H$\alpha$ emission coming from a non-stellar origin, though the fraction is $<$15% for most galaxies. With masses, ages and optical colors intermediate between the SF and AGN sequences, composite galaxies closely follow the Kennicutt relation, except at the low SFR end where older stars once again begin to dominate. A least-squares fit for composite galaxies has a slope intermediate between those of AGN and SF galaxies, and is given by $$\log L^{\rm comp}_{\rm 12\mu m}=(0.671\pm0.003)\log SFR_{\rm opt}+(9.249\pm0.002).$$ We note that the optical SFRs utilized here, while not ideal, are probably the best estimates publicly available for such a large and diverse population of galaxies in the local universe.
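The class-by-class relations quoted in this section are simple log-linear least-squares fits of the form $\log L_{\rm 12\mu m} = a\log SFR_{\rm opt} + b$; a generic sketch (the paper's exact fitting weights are not specified, so an ordinary unweighted fit is assumed here):

```python
import numpy as np

def fit_loglinear(log_sfr, log_l12):
    """Ordinary least-squares fit of log L_12um = a*log SFR_opt + b.
    Returns the slope a and intercept b."""
    a, b = np.polyfit(np.asarray(log_sfr), np.asarray(log_l12), 1)
    return a, b
```

Inverting the SF-class relation then gives a rough 12$\mu$m-based SFR estimator, $\log SFR_{\rm opt} \approx (\log L_{\rm 12\mu m} - 8.962)/0.987$, valid only where that fit applies.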
Other methodologies that use more sophisticated dust corrections and employ H$_2$, FIR or radio data could provide more accurate values, but are difficult to apply across the entire sample and the data are not always available (see @saintonge for a calibration of SFR based on H$_2$ masses). This is particularly relevant for SFRs in AGN, which can present large ($>$0.5 dex) formal errors (see Figure 14 of @brinchman). We have verified that the SDSS SFRs in our sample are in broad agreement with the UV-based SFRs derived by @salim07 (S. Salim, private communication), with an average offset/scatter of 0.055/0.387 for the total sample (0.013/0.334 for strong AGN, 0.242/0.567 for weak AGN).

22 $\mu$m Luminosity and Star Formation Rate
--------------------------------------------

WISE is less sensitive at 22 $\mu$m than at 12$\mu$m, but a significant fraction of our sample ($\sim$30%) has measured 22 $\mu$m fluxes above 5 mJy (note that we find no 22 $\mu$m galaxies without a 12$\mu$m detection). Similar to the 12$\mu$m sample, this 22 $\mu$m subsample is a mixture of 65% SF galaxies, 14% AGN and 19% composite systems. However, the fraction of strong AGN is 38%, compared to 16% for 12$\mu$m sources. As 22 $\mu$m is closer to the dust emission peak and is not affected by PAH emission features, it is interesting to compare with the 12$\mu$m galaxies of our previous analysis. Figure \[fig:ltir\_sfr\_d4000\] shows the linear fits for the 22 $\mu$m subsample (triangle markers). These relations are very similar to the 12$\mu$m fits (solid line for SF galaxies, square markers for AGN and composite galaxies), supporting the independence of our results from the particularities of a single mid-IR band.
Specific Star Formation Rate {#sec:ssfr}
----------------------------

Given the strong correlations between optical or IR light and stellar mass, a more interesting metric for comparison is the specific star formation rate, SSFR or SFR/M$^{*}$, which measures the current SFR relative to the past star formation needed to build up the stellar mass of the galaxy. The SSFR traces the star formation efficiency, and its inverse defines the timescale for galaxy formation, i.e. the time the galaxy required to build up its current mass. Higher values of SSFR are indicative of a larger fraction of stars being formed recently. While ideally we would use gas mass or total mass instead of stellar mass for the normalization, such masses are not easily measured. Figure \[fig:ltir\_ssfr\_sptype\_metal\] shows the SSFR as a function of 12$\mu$m luminosity for the different galaxy classes. The nearly flat correlation for SF galaxies means that no matter the IR output, the amount of star formation per unit mass remains relatively constant. SF galaxies display a weak dependence on $L_{\rm 12\mu m}$ that gets narrower toward higher luminosities. As noted before, a possible origin for such a residual SSFR trend could be a metallicity gradient. @calzetti07 studied individual star forming regions of fixed aperture in nearby galaxies with known Pa$\alpha$ surface density, and found that low metallicity galaxies have a small deficit in 24 $\mu$m emission compared to high metallicity galaxies. @relano confirmed that while 24 $\mu$m luminosity is a good metallicity-independent tracer for the SFR of individual HII regions, the metallicity effect should be taken into account when analyzing SFRs integrated over the whole galaxy. We test qualitatively for a metallicity effect by calculating the SFR per unit mass per unit metallicity, where the metallicity is given by the 12+log(O/H) gas-phase oxygen abundance derived from optical nebular emission lines.
The SF population displays a nearly flat relationship over almost four orders of magnitude in $L_{\rm 12\mu m}$, such that independent of IR output, a galaxy of given mass and metal content converts gas into stars at a nearly constant rate. We note that these metallicities represent the current metal abundance rather than the luminosity-weighted average of past stellar populations. They also do not suffer from complications due to $\alpha$-element enhancement or age uncertainties, characteristic of methods relying on absorption-line indices. @bond arrive at a similar conclusion regarding a constant SSFR in nearby galaxies using Herschel 250$\mu$m and WISE 3.4$\mu$m data. Compared to SF galaxies, AGN have SSFRs lower by a factor of $\sim$10, mainly because of their higher stellar masses. However, strong AGN lie much closer to the SF sequence than weak AGN. The former are hosted by high stellar mass galaxies, but also have young stellar populations that drive up the SFR at high $L_{\rm 12\mu m}$. Weak AGN do not have this boost in SFR and hence have lower SSFRs. Once again, composite systems populate a region intermediate between SF galaxies and strong AGN. Previous studies (e.g. @brinchman; @salim07) have shown the relationship between the SFR and $M^*$, identifying two different sequences: galaxies on a star-forming sequence and galaxies with little or no detectable SF. While the general result is that the SSFR of massive, red galaxies is lower at $0<z<3$, the exact dependence of SSFR on mass is still a matter of debate, particularly in view of recent results that trace the evolution at higher redshift (e.g. @noeske; @dunne; but see the discussion by @rodighiero). In our sample of SF galaxies we find that the dependence of $L_{\rm 12\mu m}$ on $M^*$ is such that the efficiency by which gas is transformed into stars is nearly independent of the IR emission reprocessed by the dust.
In strong AGN the star formation activity is “suppressed” moderately, but considerably more so in the case of weak AGN. This suggests a sequence in which a strong-AGN phase is a continuation of the SF sequence at high stellar mass, which gradually turns AGN into a population with weakened SF activity and lower $L_{\rm 12\mu m}$, dominated by older and redder stars. Based on optical data, @kauff03agn found that strong AGN hosts are indeed populated by relatively young stars, suggesting many of them could be post-starburst systems with extended star formation. With UV data, @salim07 showed there is a close connection between massive SF galaxies and strong AGN. Using IR data, we find that the smooth sequence of galaxies from Figures \[fig:ltir\_sfr\_d4000\] and \[fig:ltir\_ssfr\_sptype\_metal\] supports the hypothesis of strong AGN being the continuation at the massive end of the normal SF sequence. An interesting question is whether the mid-IR luminosity in powerful AGN is driven by “normal” ongoing SF, or by hot dust left over *after* the last episode of SF. Further investigation is required to fully understand this matter.

SSFR Dependence on Mid-IR Color
-------------------------------

Recent work on resolved nearby galaxies has shown a definite correlation between IR color and luminosity. @shi found that for a variety of sources ranging from ULIRGs to blue compact dwarf galaxies, the flux ratio $f_{24\mu m}/f_{5.8\mu m}$ traces the SSFR and also correlates with the compactness of star forming regions. While ideally we would like to know the SFR surface density, we first explore the relation between SSFR and 4.6–12$\mu$m galaxy color, as shown in Figure \[fig:ssfr\_w2w3\_sptype\]. Galaxies from the main SF sequence (blue contours) correlate strongly with IR color, with strong AGN (black contours) continuing the trend at bluer colors.
Weak AGN (red contours) extend that relationship remarkably well toward the low star formation end, albeit with a higher dispersion and a slightly steeper slope. Hence, for the same increase in SSFR, AGN experience a smaller variation in IR color than typical SF objects. This is probably due to the combination of a metallicity effect and the different stellar populations that regulate the IR emission budget. In any case, this suggests that the higher the SSFR, the more prominent the 12$\mu$m IR emission becomes relative to 4.6 $\mu$m, where the latter is expected to strongly correlate with stellar mass. This shows that the 4.6–12$\mu$m color serves well as a rough first-order indicator of star formation activity over three orders of magnitude in SFR. A simple expression fitting all galaxies is given by $$\log SSFR=(0.775\pm0.002)(W2-W3)_{0}-(12.561\pm0.006).$$ If the galaxy is known to host an AGN, the more accurate expression is $$\log SSFR=(0.840\pm0.008)(W2-W3)_{0}-(12.991\pm0.020).$$ These results show that there is a tight link between stellar mass, current star formation rate and IR color in SF galaxies, which emphasizes the role of the dominant stellar population in regulating star formation.

Effect of AGN on the Energy Budget {#sec:agn_effect}
----------------------------------

In the previous sections we noted that the emission from the AGN could have a significant effect on the IR emission, and potentially bias the luminosities of the AGN galaxy class. This could be particularly important for low SFR galaxies. We test for this effect by estimating the contribution of the AGN to the total energy budget for sources classified from the BPT diagram as either an AGN or as an SF galaxy. To do so, we analyze the fraction of the 12$\mu$m luminosity contributed by each of the four templates used in the SED fitting process to our optical+IR photometry, paying particular attention to the AGN component.
Figure \[fig:atype\_sf\] shows the median fraction of 12$\mu$m luminosity contributed by each template, for objects classified as SF galaxies. The majority of the power is split among the irregular (Im), spiral (Sbc) and elliptical (E) templates, though the AGN contribution becomes significant for the most luminous sources, above $10^{10.8}L_{\odot}$. This implies that the AGN has a negligible influence in all but the most luminous SF galaxies. Figure \[fig:atype\_agn\] shows these fractions for the weak and strong AGN galaxy classes. The elliptical component is now more prominent in weak AGN of low luminosity, which is not unexpected. In general, the AGN component is now more important but is still far from contributing significantly below $10^{10.8}L_{\odot}$. In most weak AGN ($\sim$80%) the AGN contribution to the 12$\mu$m luminosity is below 40%. About $\sim$70% of strong AGN show similarly low AGN contributions at 12$\mu$m. We note that in Figure \[fig:atype\_agn\] we used WISE aperture photometry for extended, nearby sources and profile-fitting magnitudes for unresolved galaxies with $\chi^2<3$ (see Section 4.5 of the WISE Preliminary Release Explanatory Supplement for further details). Although the differences are small, aperture photometry improves the quality of the SED fit of low luminosity galaxies, as it captures the more extended flux of objects at low redshifts. Note that in both cases, SF galaxies and AGN, the emission is dominated by the spiral (Sbc) component, which has a relatively high mid-IR SED. This is because the template, originally constructed from @coleman and extended into the UV and IR with @bruzual03 synthesis models, also considers emission in the mid-IR from dust and polycyclic aromatic hydrocarbons (PAHs). These are added as ad hoc linear combinations of appropriate parts of the NGC 4429 and M82 SEDs obtained by @devriendt. Figure \[fig:seds\_agn\] shows the SED fit for AGN with luminosities of $L_{\rm[OIII]}\sim10^6\, L_\odot$ and $L_{\rm[OIII]}\sim10^7\, L_\odot$.
In each case we plot the object with the median $\chi^2$, i.e. the typical fit for sources of those luminosities. The 9 photometric bands are well fitted by the model in most cases. For AGN with $L_{\rm[OIII]}>10^{7.5}\, L_\odot$ we find the SED fit is reasonably good (although in general with higher $\chi^2$) except in the 22$\mu$m band. We believe this is caused by a limitation of the algorithm (not the templates themselves) in properly fitting highly reddened AGN fainter than their hosts, as it is designed to penalize the excessive use of reddening on the AGN when few relevant data points are used. Modifying the algorithm slightly to remove this behavior, we are able to obtain fits with better $\chi^2$ values for these objects, which assign much higher AGN fractions and reddening values; however, the lack of longer-wavelength IR data to determine the origin of the 22$\mu$m excess makes these numbers uncertain as well. Therefore, while these results do not definitively rule out that the central AGN could have a considerable effect in some extreme sources (e.g. the very strong AGN), they certainly show that it is not relevant for most of the galaxy populations analyzed in this paper.

Summary
=======

In this work we have taken advantage of recently released data from WISE and SDSS to construct the largest IR-optical sample of galaxies with 12$\mu$m fluxes and optical spectra available at $\langle z \rangle \sim 0.1$. This sample allowed us to investigate with high statistical significance how physical parameters such as color, stellar mass, metallicity, redshift, and SFR of 12$\mu$m-selected galaxies compare with those of purely optically selected samples. We have quantified how the SFR estimates compare for the different spectral types as a function of stellar mass, galaxy age and IR color, in order to pinpoint the underlying source of 12$\mu$m emission and therefore to what extent it can be interpreted as a useful SFR indicator.
The main results of this paper can be summarized as follows: - In general, the WISE-SDSS 12$\mu$m-selected galaxy population traces the blue, late-type, low mass sequence of the bimodal galaxy distribution in the local Universe. It also traces intermediate-type objects with active nuclei, avoiding the bulk of the red and “dead” galaxies without emission lines. Most sources have normal to LIRG luminosities, but (few) ULIRGs are also present. - The IR emission of SF galaxies and strong AGN, dominated by the blue, young stellar population component, is well correlated with the optical SFR. There is a small tendency of low SFR systems to have slightly lower IR luminosity when compared to the canonical relation of @charyelbaz. These are low SFR, low mass systems that likely become more transparent due to the increasing fraction of light that escapes unabsorbed, which suppresses $L_{\rm 12\mu m}$. However, other effects like the dust distribution or metallicity could be relevant as well. The latter is shown by the (weak) SSFR dependence on $L_{\rm 12\mu m}$. In general, the mid-IR emission at 22$\mu$m follows correlations similar to those seen for the 12$\mu$m-selected sample, suggesting that these results do not critically depend on a single IR band. - SF galaxies are forming stars at an approximately constant rate per unit mass for an IR output ranging over five orders of magnitude. There is a small tendency for more luminous objects to have enhanced SSFR, which could be interpreted as a sign of SF histories peaking toward later times. However, this residual dependence seems to be caused by a metallicity gradient. Once this is factored out, the relationship becomes nearly flat. Strong AGN behave as a continuation at the massive end of the normal SF sequence, where the AGN (possibly after a starburst episode) gradually quenches SF and weakens as it consumes the available gas, with the mid-IR emission fading in consequence. 
- The mid-IR 4.6–12$\mu$m restframe color can be used as a first-order indicator of the overall SF activity in a galaxy, as it correlates well with the specific SFR. For increasing SFR/M$^*$, the IR emission becomes more prominent at 12$\mu$m (associated with dust emission) relative to 4.6 $\mu$m (associated with stellar mass). - For SF galaxies, most of the mid-IR luminosity distribution is concentrated in systems younger than $\sim$0.5 Gyr. Redder galaxies are dominated by older stellar populations, which contribute increasingly to the 12$\mu$m emission. While many of these galaxies host an AGN (usually weak), the 12$\mu$m energy budget is generally not dominated by the central active nucleus. This might well not be the case for bright galaxies with very strong active nuclei ($L_{\rm[OIII]}>10^{7.5}\, L_\odot$), where a considerably larger fraction of the mid-IR emission could be due to the AGN. Spatially resolved, longer wavelength IR data and further modeling are necessary to fully understand these sources. The authors thank G. Kauffmann, J. Brinchmann and S. Salim for useful suggestions. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the US Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. R.J.A. was supported by an appointment to the NASA Postdoctoral Program at the Jet Propulsion Laboratory, administered by Oak Ridge Associated Universities through a contract with NASA. [99]{} Abazajian K. N. 
et al., 2009, ApJS, 182, 543
Alonso-Herrero A., Ward M. J., Kotilainen J. K., 1997, MNRAS, 288, 977
Alonso-Herrero A., Rieke G. H., Rieke M. J., Colina L., Perez-Gonzalez P. G., Ryder S. D., 2006, ApJ, 650, 835
Assef R. J. et al., 2010, ApJ, 713, 970
Baldwin J., Phillips M., Terlevich R., 1981, PASP, 93, 5
Baldry I. K., Glazebrook K., Brinkmann J., Ivezic Z., Lupton R. H., Nichol R. C., Szalay A. S., 2004, ApJ, 600, 681
Blanton M. R., Roweis S., 2007, AJ, 133, 734
Bond N. A. et al., 2011, in preparation
Brinchmann J., Charlot S., White S. D. M., Tremonti C., Kauffmann G., Heckman T., Brinkmann J., 2004, MNRAS, 351, 1151
Bruzual G., Charlot S., 1993, ApJ, 405, 538
Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
Calzetti D. et al., 2007, ApJ, 666, 870
Charlot S., Fall S. M., 2000, ApJ, 539, 718
Charlot S., Bruzual G., 2008, in preparation
Chary R., Elbaz D., 2001, ApJ, 556, 562
Coleman G. D., Wu C.-C., Weedman D. W., 1980, ApJS, 43, 393
Cutri R. et al., 2011, WISE Explanatory Supplement
Daddi E. et al., 2007, ApJ, 670, 156
Devriendt J. E. G., Guiderdoni B., Sadat R., 1999, A&A, 350, 381
Duc P.-A. et al., 2002, A&A, 382, 60
Dunne L. et al., 2009, MNRAS, 394, 3
Eisenhardt P. et al., 2011, in preparation
Ferland G., 1996, Hazy: A Brief Introduction to CLOUDY. Internal Report, Univ. Kentucky
Griffith R. L. et al., 2011, ApJ, 736, 22
Heckman T. M., Robert C., Leitherer C., Garnett D. R., van der Rydt F., 1998, ApJ, 503, 646
Heckman T. M., Kauffmann G., Brinchmann J., Charlot S., Tremonti C., White S. D. M., 2004, ApJ, 613, 109
Hou L. G., Wu X.-B., Han J. L., 2009, ApJ, 704, 794
Jannuzi B. et al., 2010, Bulletin of the American Astronomical Society Meeting 215, Vol. 42, p. 513
Jarrett T. et al., 2011, ApJ, 735, 112
Kauffmann G. et al., 2003a, MNRAS, 341, 33
Kauffmann G. et al., 2003b, MNRAS, 346, 1055
Keel W. C., De Grijp M. H. K., Miley G. K., Zheng W., 1994, A&A, 283, 791
Kelson D. D., Holden B. P., 2010, ApJL, 713, 28
Kennicutt R. C. Jr., 1998, ARA&A, 36, 189
Kennicutt R. C. Jr. et al., 2009, ApJ, 703, 1672
Kochanek C. et al., 2011, in preparation
Lake S. E. et al., 2011, in preparation
Leisawitz D., Hauser M. G., 1988, ApJ, 332, 954
Martin D. C. et al., 2007, ApJS, 173, 342
Mulchaey J. S., Koratkar A., Ward M. J., Wilson A. J., Whittle M., Antonucci R. R. J., Kinney A. L., Hurt T., 1994, ApJ, 436, 58
Neugebauer G. et al., 1984, ApJ, 278, 1
Noeske K. G. et al., 2007, ApJ, 660, L43
Rieke G. H., Alonso-Herrero A., Weiner B. J., Pérez-González P. G., Blaylock M., Donley J. L., Marcillac D., 2009, ApJ, 692, 556
Relaño M., Lisenfeld U., Pérez-González P. G., Vílchez J. M., Battaner E., 2007, ApJ, 667, 141
Rocca-Volmerange B., de Lapparent V., Seymour N., Fioc M., 2007, A&A, 475, 801
Rodighiero G. et al., 2010, A&A, 518, 25
Saintonge A. et al., 2011, MNRAS, 415, 61
Salim S. et al., 2007, ApJS, 173, 267
Salim S. et al., 2009, ApJ, 700, 161
Schlegel D. J., Finkbeiner D. P., Davis M., 1998, ApJ, 500, 525
Seymour N., Rocca-Volmerange B., de Lapparent V., 2007, A&A, 475, 791
Shi Y. et al., 2011, in preparation
Shimasaku K. et al., 2001, AJ, 122, 1238
Skrutskie M. F. et al., 2006, AJ, 131, 1163
Soifer B. T. et al., 1987, ApJ, 320, 238
Spinoglio L., Malkan M. A., 1989, ApJ, 342, 83
Stern D. et al., 2011, in preparation
Stoughton C. et al., 2002, AJ, 123, 485
Strateva I. et al., 2001, AJ, 122, 1861
Strauss M. A. et al., 2002, AJ, 124, 1810
Teplitz H. I. et al., 2011, AJ, 141, 1
Tremonti C. A. et al., 2004, ApJ, 613, 898
Wright E. L. et al., 2010, AJ, 140, 1868
Wu H. et al., 2005, ApJ, 632, 79
Yan L. et al., 2011, in preparation
York D. G. et al., 2000, AJ, 120, 1579
Zhu Y. N., Wu H., Cao C., Li H. N., 2008, ApJ, 686, 155

[^1]: WISE data products and documentation are available at\
http://irsa.ipac.caltech.edu/Missions/wise.html

[^2]: The MPA-JHU catalog is publicly available at\
http://www.mpa-garching.mpg.de/SDSS/DR7/

[^3]: Templates and code are available at\
http://www.astronomy.ohio-state.edu/$\sim$rjassef/lrt/
--- abstract: 'In this paper we study the behavior of solutions of a nonlinear Schrödinger equation in the presence of an external potential, which is allowed to be singular at one point. We show that the solution behaves like a solitary wave for long times even if we start from an unstable solitary wave, and that its dynamics coincides with that of a classical particle evolving according to a natural effective Hamiltonian.' address: 'Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo n.5, 56127 Pisa, Italy' author: - Claudio Bonanno title: Long time dynamics of highly concentrated solitary waves for the nonlinear Schrödinger equation --- Introduction and statement of the results {#sec:intro} ========================================= In this paper we study the long time dynamics of a solitary wave solution of a nonlinear Schrödinger equation (NLS) in the presence of an external potential. This problem has been studied extensively in recent years, following the tradition of the work on the stability of solitons which dates back to Weinstein [@weinstein]. The first dynamical results are given in [@bj] and improved, along the same lines, in [@keraani]. This first approach is purely variational and is based on the non-degeneracy conditions proved in [@we2] for the ground state of the elliptic equation solved by the function describing the profile of a soliton. This approach has also been used in [@sing], where the results of [@bj; @keraani] are extended to the case of a potential with a singularity. A second line of investigation on our problem was initiated in [@froh1; @froh2]. In these papers the authors strongly used the Hamiltonian nature of NLS, approximating the solution by its symplectic projection on the finite dimensional manifold of solitons (see , which is a sub-manifold of that used in [@froh1; @froh2], since we fix the profile $U$). 
This approach has been improved in [@hz2; @hz] for the Gross-Pitaevskii equation by showing that it is possible to obtain an exact dynamics for the center of the soliton approximation. In the previous papers the non-degeneracy condition for the ground state is a fundamental assumption. It has been removed in a more recent approach introduced in [@bgm1; @bgm2]. The idea of these papers is that it is possible for the solution of the NLS to remain concentrated for long times and to exhibit soliton behavior, even if the profile of the initial condition is degenerate for the energy associated to the elliptic equation. In fact the concentration of the solution follows in the semi-classical regime from the role played by the nonlinear term, which in [@bgm1; @bgm2] is assumed to depend on the Planck constant. This approach has been used in [@choq] for the NLS with a Hartree nonlinearity, in which case the non-degeneracy of the ground state is for the moment an open question. In this paper we combine the last two approaches and try to weaken as much as possible the assumptions on the solitary wave. First of all, one main difference is that we control only the $L^{2}$ norm of the difference between the solution of NLS and the approximating traveling solitary wave. This has also been done in [@colliding], and it allows us to drop the non-degeneracy condition and to consider more general nonlinearities. Moreover, we prove that the approximation of the solution of NLS by a traveling solitary wave is good even if the solitary wave is not stable, that is, it is not a soliton, and the profile is fixed. This choice partly destroys the symplectic structure used in [@froh1] and subsequent papers, but we prove that there exists a particular projection on the manifold ${\mathcal{M}}_{{\varepsilon}}$ defined in which is almost symplectic for long times. 
Actually this particular projection is natural, since it is defined in terms of the Hamiltonian functional of NLS restricted to the manifold ${\mathcal{M}}_{{\varepsilon}}$, called the *effective Hamiltonian* in [@hz]. This almost symplectic projection is then enough to prove that the approximation is good for long times. Finally, we remark that we are able to treat the cases of regular and singular external potentials at the same time, and slightly improve on the range of allowed behavior at the singularity with respect to [@sing]. In the remaining part of this section we describe the problem and the main result and discuss the assumptions. In Section \[sec:ham\] we use the Hamiltonian nature of NLS to introduce the effective Hamiltonian on the manifold ${\mathcal{M}}_{{\varepsilon}}$ and to find the “natural” projection of the solution on ${\mathcal{M}}_{{\varepsilon}}$. In Sections \[sec:approx\] and \[sec:proof\] we describe the approximation of the solution of NLS and prove the main result. Finally, in the appendix we show that our projection is almost symplectic for long times. The problem and the assumptions ------------------------------- We study the behavior of solutions $\psi(t,\cdot) \in H^{1}({\mathbb R}^{N},{\mathbb C})$, with $N\ge 3$, to the initial value problem $$\label{prob-eps} \left\{ \begin{array}{l} i {\varepsilon}\psi_{t} + {\varepsilon}^{2} \triangle \psi - f({\varepsilon}^{-2\alpha}|\psi|^{2}) \psi = V(x) \psi \\[0.2cm] \psi(0,x) = {\varepsilon}^{\gamma} U({\varepsilon}^{-\beta}(x-a_{0}))\, e^{\frac i{\varepsilon}(\frac 12 (x-a_{0}) \cdot \xi_{0} + \theta_{0} )} \end{array} \right. \tag{$\mathcal{P}_{\varepsilon}$}$$ where ${\varepsilon}>0$ represents the Planck constant, $\alpha,\beta,\gamma$ are real parameters, $(a_{0},\xi_{0}) \in {\mathbb R}^{N}\times {\mathbb R}^{N}$ are the initial conditions of the finite dimensional dynamics which the solution follows, and $\theta_{0}\in {\mathbb R}$ is the initial phase shift. 
Moreover $U\in H^{1}({\mathbb R}^{N})$ is a positive function which satisfies $$\label{ellittica} -\triangle U + f(U^{2})U - \omega U = 0$$ for some $\omega \in {\mathbb R}$, and such that - ${\left\VertU\right\Vert}_{L^{2}}^{2} = \rho>0$; - $U(x)$ is in $L^{\infty}({\mathbb R}^{N})$ and vanishes as $|x|\to \infty$ fast enough so that $${\left\Vert|x|U^{2}\right\Vert}_{L^{1}} +{\left\Vert|x|^{2}U^{2}\right\Vert}_{L^{2}}+ {\left\Vert|x| |\nabla U|\right\Vert}_{L^{2}} < \infty$$ Finally we assume that $f$ satisfies - There exists a $C^{3}$ functional $F:H^{1}\to {\mathbb R}$ such that $d(F(|\psi|^{2})) = 2 f(|\psi|^{2})\psi$; - if $\varphi \in L^{r}$, for some $r\in (2,\frac{2N}{N-2})$, and $U$ is as above, then there exists $C=C(\varphi,U)>0$ only depending on $\varphi$ and $U$ such that $$\Big| \int_{{\mathbb R}^{N}}\, \Big[f(|U+v|^{2})(U+v) - f(U^{2})U - (2f'(U^{2})U^{2}+f(U^{2})){\mathrm{Re}}(v) - i\, f(U^{2}) {\mathrm{Im}}(v) \Big]\, \overline{\varphi}\, dx \Big| \le C(\varphi,U)\, {\left\Vertv\right\Vert}_{L^{2}}$$ for all $v\in H^{1}$ with ${\left\Vertv\right\Vert}_{L^{2}}\le 1$. 
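As a purely numerical illustration (not part of the paper's argument), the linearization underlying (N2) can be checked in a one-dimensional toy model for the cubic nonlinearity $f(s)=s$, for which $f(|U+v|^{2})(U+v)$ linearizes around $U$ to $U^{3} + 3U^{2}\,{\mathrm{Re}}(v) + i\,U^{2}\,{\mathrm{Im}}(v)$. The profile $U$, test function $\varphi$ and perturbation $v$ below are arbitrary Gaussian stand-ins, not actual solutions of the profile equation.

```python
import numpy as np

# 1D toy check of (N2) for f(s) = s: the remainder after subtracting the
# linearized nonlinearity, tested against phi, is quadratically small in v,
# so the ratio remainder / ||v||_{L^2} shrinks as v is scaled down.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
integral = lambda f: f.sum() * dx              # plain Riemann sum; fine for decaying f

U = np.exp(-x**2 / 2)                          # stand-in profile (not a true ground state)
phi = np.exp(-x**2 / 4)                        # test function phi
v0 = (0.3 + 0.2j) * np.exp(-x**2 / 3)          # fixed complex perturbation direction

def remainder(v):
    full = np.abs(U + v)**2 * (U + v)
    lin = U**3 + 3 * U**2 * v.real + 1j * U**2 * v.imag
    return abs(integral((full - lin) * np.conj(phi)))

l2 = lambda v: np.sqrt(integral(np.abs(v)**2))
ratios = [remainder(t * v0) / l2(t * v0) for t in (1.0, 0.5, 0.25)]
print(ratios)   # decreasing with t: the remainder is o(||v||) in this toy example
```

The decreasing ratios reflect that the remainder is quadratic in $v$ pointwise, which is stronger than the bound $C(\varphi,U)\,{\left\Vert v\right\Vert}_{L^{2}}$ required by (N2).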
In studying the behavior of a solution $\psi(t,x)$ to , the potential $V$ is considered as an external perturbation and, when $V\equiv 0$, we ask the solution to the initial value problem to be a solitary wave traveling along the unperturbed trajectory $a(t) = a_{0} + \xi_{0} t$, $\xi(t) = \xi_{0}$, namely $$\psi_{_{V\equiv 0}}(t,x) = {\varepsilon}^{\gamma} U({\varepsilon}^{-\beta}(x-a(t)))\, e^{\frac i{\varepsilon}(\frac 12 (x-a(t)) \cdot \xi(t) + \theta(t) )}$$ Using this expression for $\psi$ in with $V\equiv 0$, we obtain an identity if $$- {\varepsilon}^{2-2\beta} \triangle U + f({\varepsilon}^{2(\gamma-\alpha)} U^{2})U + \Big(\dot \theta(t) - \frac 14 |\xi_{0}|^{2}\Big) U =0$$ Using this implies that either $$\label{par1} \alpha = \gamma\, , \quad \beta=1\quad \text{and} \quad \theta(t) = \Big(\frac 14 |\xi_{0}|^{2}- \omega \Big) t \, ,$$ or we assume that - $f$ is homogeneous of degree $p\in (0, \frac{2}{N-2})$; - $\beta = 1 + (\alpha - \gamma) p$ and $\theta(t) = \Big(\frac 14 |\xi_{0}|^{2} - {\varepsilon}^{2-2\beta} \omega \Big) t$. Notice that is a particular case of condition (N4), for which we don’t need (N3). So in the sequel we assume (N3) and (N4), with the warning that (N3) is not necessary if the particular condition holds. Finally, concerning the potential $V$, we consider two possible cases: - $V:{\mathbb R}^{N}\to {\mathbb R}$ is a $C^{2}$ function which is bounded from below and with bounded second derivatives, namely $$h_{V}:= \sup_{x} \Big( \max_{i,j} \Big| \frac{\partial^{2}V}{\partial x_{i} \partial x_{j}}(x)\Big| \Big) < \infty\, ;$$ - $V$ is singular at $x=0$ and satisfies - $V:{\mathbb R}^{N}\setminus \{0\} \to {\mathbb R}$ is of class $C^{2}$; - $|V(x)| \sim |x|^{-\zeta}$ and $|\nabla V(x)| \lesssim |x|^{-\zeta -1}$ as $|x|\to 0$ for some $\zeta \in (0,2)$; - $V\in L^{m}(\{|x|\ge 1\})$ for $m>\frac N2$ and $|\nabla V(x)|\to 0$ as $|x|\to \infty$. 
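The exponent matching behind condition (N4) can be checked symbolically: under the homogeneity assumption (N3), $f({\varepsilon}^{2(\gamma-\alpha)}U^{2}) = {\varepsilon}^{2p(\gamma-\alpha)} f(U^{2})$, so dividing the profile equation by ${\varepsilon}^{2p(\gamma-\alpha)}$ leaves the Laplacian with the prefactor ${\varepsilon}^{2-2\beta-2p(\gamma-\alpha)}$, which must reduce to ${\varepsilon}^{0}$ for $U$ to solve the ${\varepsilon}$-free equation. A minimal sketch of this bookkeeping:

```python
import sympy as sp

# Symbolic check of the exponent balance behind (N4): with f homogeneous of
# degree p, the Laplacian prefactor 2 - 2*beta - 2*p*(gamma - alpha) must vanish.
alpha, beta, gamma, p = sp.symbols('alpha beta gamma p', real=True)

beta_sol = sp.solve(sp.Eq(2 - 2*beta - 2*p*(gamma - alpha), 0), beta)[0]
# Agrees with (N4): beta = 1 + (alpha - gamma) p
print(sp.simplify(beta_sol - (1 + (alpha - gamma)*p)))  # -> 0
```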
The main result --------------- In this paper we prove that \[main-result\] Let $U$ be a positive solution of for some $\omega \in {\mathbb R}$ and satisfying (C1) and (C2). Let $f$ satisfy (N1)-(N3) and $\alpha, \beta, \gamma \in {\mathbb R}$ satisfy (N4) and assume that $\beta \ge 1$, $\alpha\ge \gamma\ge 0$ and, if $N=3$, $$\label{serve3} \beta < 2\gamma +2 ,$$ and no further assumption if $N\ge 4$. Then we have $$\delta := 1 + \gamma + \beta\left(\frac N2 -2 \right)> 0 \, .$$ Let $(a(t), \xi(t), \vartheta(t))$ be the solution of the system $$\left\{ \begin{aligned} & \dot a = \xi \\ & \dot \xi = - \frac 2\rho\, \int_{{\mathbb R}^{N}} \nabla V(a+{\varepsilon}^{\beta}x) U^{2}(x)\, dx\\ & \dot \vartheta = \frac 14 |\xi(t)|^{2} - {\varepsilon}^{2-2\beta}\, \omega - V(a(t)) \end{aligned} \right.$$ with initial condition $(a_{0},\xi_{0},\theta_{0})$, and assume that $V$ satisfies (Vr) or (Vs). If $V$ satisfies (Vs), we also assume that $(a_{0},\xi_{0},\theta_{0})$ is such that $\min_{t} |a(t)| = \bar a>0$. If the solution $\psi(t,x)$ to exists for all $t\in {\mathbb R}$, then we can write $$\label{forma} \psi(t,x) = {\varepsilon}^{\gamma}\, U({\varepsilon}^{-\beta}(x-a(t))) \, e^{\frac i{\varepsilon}( \frac 12 (x-a(t))\cdot \xi(t) + \vartheta(t))} + w(t,x)\, e^{\frac i{\varepsilon}( \vartheta(t)-\theta_{0})}$$ where for any fixed $\eta \in (0,\delta)$ $$\label{errore} {\left\Vertw(t,x)\, e^{\frac i{\varepsilon}( \vartheta(t)-\theta_{0})}\right\Vert}_{L^{2}} = O({\varepsilon}^{\eta})$$ for all $t\in (0,T)$ with $T=O({\varepsilon}^{\eta-\delta})$. We now comment on the results of the theorem, in particular with respect to the values of the parameters $\alpha,\beta,\gamma$. 
We notice that if $\psi(t,x)$ is a solution to , then $\tilde \psi(t,x) := \psi({\varepsilon}t, {\varepsilon}x)$ is a solution to $$\label{prob-uno} \left\{ \begin{array}{l} i \tilde \psi_{t} + \triangle \tilde \psi - f({\varepsilon}^{-2\alpha}|\tilde\psi|^{2}) \tilde\psi = V({\varepsilon}x) \tilde\psi \\[0.2cm] \tilde \psi(0,x) = {\varepsilon}^{\gamma} U({\varepsilon}^{-\beta+1}x)\, e^{\frac i{\varepsilon}(\frac 12 x \cdot \xi_{0} + \theta_{0} )} \end{array} \right. \tag{$\mathcal P_{1}$}$$ where we set $a_0=0$ for simplicity. Under the same assumptions on $U$, $F$ and $V$ as in Theorem \[main-result\], we obtain that $\tilde \psi(t,x)$ can still be written as in , but now for any fixed $\eta \in (0,\delta)$, the estimate of the error is of the order $$\label{forma2} {\left\Vertw(t,x)\right\Vert}_{L^{2}} = O({\varepsilon}^{\eta-\frac N2})$$ for all $t\in (0,\tilde T)$ with $\tilde T= O({\varepsilon}^{\eta-\delta-1})$. Hence the time of validity of the approximation has increased by a factor ${\varepsilon}^{-1}$, but for the estimate to make sense we need that $$\label{aiuto} \delta > \frac N2 \quad \Leftrightarrow \quad \left\{ \begin{aligned} & \beta < 2\gamma -1\, , & \text{if $N=3$;} \\ & \beta \left( \frac N2 -2 \right) + \gamma > \frac N2 -1\, , & \text{if $N\ge 4$.} \end{aligned} \right.$$ Notice in particular that if $\alpha$ is big enough, we can choose $\beta$ and $\gamma$ satisfying and . So the bigger the enhancement of the nonlinear term, the better the approximation of the solution in . Finally, if $\psi(t,x)$ is a solution to , then ${\varepsilon}^{-\alpha} \psi(t,x)$ is a solution to $$\label{prob} \left\{ \begin{array}{l} i {\varepsilon}\psi_{t} + {\varepsilon}^{2}\triangle \psi - f(|\psi|^{2}) \psi = V(x)\psi \\[0.2cm] \psi(0,x) = {\varepsilon}^{\gamma-\alpha} U({\varepsilon}^{-\beta}(x-a_{0}))\, e^{\frac i{\varepsilon}(\frac 12 (x-a_{0}) \cdot \xi_{0} + \theta_{0} )} \end{array} \right. 
\tag{$\mathcal{P}$}$$ Hence under the assumptions of Theorem \[main-result\], for any $\eta\in (0,\delta)$ solutions of can be written as in up to times $T=O({\varepsilon}^{\eta-\delta})$. Remarks on the assumptions -------------------------- We briefly discuss the assumptions. The positive function $U$ is the profile of the solitary wave which is the approximation of the solution of . Typically one assumes that $U$ is not only a solution of but also the minimizer of the energy $$\mathcal{E}(u):= \int_{{\mathbb R}^{N}}\, \Big[\frac 12 |\nabla u|^{2} + F(u^{2})\Big] \, dx$$ constrained to the manifold $\Sigma_{\rho}:= {\left\{u\in H^{1}\, :\, {\left\Vertu\right\Vert}_{L^{2}}^{2}= \rho\right\}}$. This is useful because the minimizer of $\mathcal{E}$ is orbitally stable (see [@gss87] and [@bbgm]), and is then called a soliton. Sufficient conditions for orbital stability are assumed, for example, in [@froh1], [@hz] and [@colliding]. In this paper we only assume that $U$ is a solution of , that is, just a critical point of $\mathcal{E}$ constrained to $\Sigma_{\rho}$, and not necessarily orbitally stable. Concerning assumption (C2), we only need $U\in L^{\infty}$ for the case of singular $V$. The required decay at infinity holds, for example, when $U$ is a ground state, in which case $U$ and $\nabla U$ decay exponentially ([@bl; @kwong]). We first discuss (N2). 
This assumption is also used in [@colliding], to which we refer for the proof that (N2) is satisfied if: - $f$ is a Hartree nonlinearity $$f(|\psi|^{2})\psi = (W(x) \star |\psi|^{2})\psi$$ with $W$ positive, spherically symmetric, in $L^{q}+L^{\infty}$ with $q> \max{\left\{\frac N2,2\right\}}$, and decaying at infinity; - $f$ is a local nonlinearity, that is, we can write $f:{\mathbb R}^{+}\to {\mathbb R}$ with $f(s) = F'(s)$ for a $C^{3}$ function $F:{\mathbb R}^{+}\to {\mathbb R}$, and $$\sup_{s\in {\mathbb R}^{+}}\, s^{\frac{2k-1}{2}}\, f^{(k)}(s) < \infty\, , \qquad k=1,2$$ Assumption (N3) is satisfied by the Hartree nonlinearities as above with $p=1$, and by power-type local nonlinearities $f(s) = s^{p}$. However we remark that (N3) is not needed if we assume . The assumptions (Vr) and (Vs) on the potential are needed to have local well-posedness for by results in [@caze Chapter 4]. In particular, concerning (Vs2), local well-posedness is implied by $\zeta<2$ for all $N$. It also implies the finiteness of the Hamiltonian and of the vector field in for which $\zeta<N-1$ is sufficient. Notice that in [@sing] it was assumed that $\zeta<1$. 
Hamiltonian formulation for NLS and the trajectories of the solitary waves {#sec:ham} ========================================================================== Following [@froh1], we consider the space $H^{1}({\mathbb R}^{N},{\mathbb C})$ equipped with the symplectic form $$\omega(\psi,\phi) := {\mathrm{Im}}\int_{{\mathbb R}^{N}}\, \psi\, \bar\phi \, dx$$ and the problem associated to the Hamiltonian functional $$\label{ham-nls} {\mathcal{H}}(\psi) := \frac 12 \int_{{\mathbb R}^{N}}\, \Big[ {\varepsilon}^{2} |\nabla \psi|^{2} + V(x) |\psi|^{2} + {\varepsilon}^{2\alpha} F({\varepsilon}^{-2\alpha} |\psi|^{2}) \Big] dx$$ via the law ${\varepsilon}\psi_{t} = X_{{\mathcal{H}}}$, where $X_{{\mathcal{H}}}$ is the vector field satisfying $$\omega(\phi, X_{{\mathcal{H}}}) = d{\mathcal{H}}[\phi] \qquad \forall\, \phi\, .$$ Since the Hamiltonian ${\mathcal{H}}$ does not depend on time, a solution to satisfies ${\mathcal{H}}(\psi(t,x)) = {\mathcal{H}}(\psi(0,x))$ for all $t$. Moreover, the Hamiltonian ${\mathcal{H}}$ is invariant under the global gauge transformation $\psi \mapsto e^{i\theta} \psi$ for all $\theta \in {\mathbb R}$. 
This implies that there exists another conserved quantity for the Hamiltonian flow of ${\mathcal{H}}$, and it is given by the charge $${\mathcal{C}}(\psi) := \int_{{\mathbb R}^{N}}\, |\psi|^{2}\, dx\, .$$ Hence, by assumption (C1), it follows that a solution to satisfies $$\label{carica-cost} {\left\Vert\psi(t,x)\right\Vert}_{L^{2}}^{2} = {\varepsilon}^{2\gamma+\beta N} {\left\VertU\right\Vert}_{L^{2}}^{2} = {\varepsilon}^{2\gamma+\beta N} \rho \qquad \forall\, t\, .$$ When $V\equiv 0$, we have seen that the solution belongs to the manifold $$\label{man-eps} {\mathcal{M}}_{{\varepsilon}}:= {\left\{ U_{\sigma}(x) := {\varepsilon}^{\gamma} U( {\varepsilon}^{-\beta}(x-a)) e^{\frac i{\varepsilon}(\frac 12 (x-a) \cdot \xi + \theta)}\, \Big/ \, \sigma:= (a,\xi,\theta) \in {\mathbb R}^{N}\times {\mathbb R}^{N}\times {\mathbb R}\right\}}$$ Following [@hz], we construct a Hamiltonian flow associated to ${\mathcal{H}}$ on the manifold ${\mathcal{M}}_{{\varepsilon}}$. To this aim, we first have to compute $\Omega_{\sigma}$, the restriction of the symplectic form $\omega$ to ${\mathcal{M}}_{{\varepsilon}}$. 
The tangent space $T_{_{U_{\sigma}}}{\mathcal{M}}_{{\varepsilon}}$ to ${\mathcal{M}}_{{\varepsilon}}$ in a point $U_{\sigma}$ is generated by $$\begin{aligned} & z_{j,\sigma}^{{\varepsilon}}(x):= \frac{\partial U_{\sigma}(x)}{\partial a_{j}} = - \Big({\varepsilon}^{\gamma-\beta} \partial_{j} U ( {\varepsilon}^{-\beta}(x-a)) + i\, \frac{{\varepsilon}^{\gamma-1}}{2}\, \xi_{j} U( {\varepsilon}^{-\beta}(x-a))\Big) e^{\frac i{\varepsilon}(\frac 12 (x-a) \cdot \xi + \theta)},\ \ j=1,\dots,N \label{tang-eps-a}\\ & z_{j,\sigma}^{{\varepsilon}}(x):= \frac{\partial U_{\sigma}(x)}{\partial \xi_{j}} = i\, \frac{1}{2{\varepsilon}} x_{j-N}\, U_{\sigma}(x)\, ,\qquad j=N+1,\dots, 2N \label{tang-eps-xi}\\ & z_{2N+1,\sigma}^{{\varepsilon}}(x):= \frac{\partial U_{\sigma}(x)}{\partial \theta} = i\, \frac{1}{2{\varepsilon}} \, U_{\sigma}(x) \label{tang-eps-theta}\end{aligned}$$ Hence $$\Omega_{\sigma} := \omega(z_{i,\sigma}^{{\varepsilon}},z_{j,\sigma}^{{\varepsilon}})_{_{1\le i,j \le 2N+1}} = \frac 14 {\varepsilon}^{2\gamma + \beta N -1} \rho \left( \begin{array}{ccc} 0_{_{N\times N}} & -I_{_{N\times N}} & 0_{_{N\times 1}}\\ I_{_{N\times N}} & 0_{_{N\times N}} & 0_{_{N\times 1}} \\ 0_{_{1\times N}} & 0_{_{1\times N}} & 0 \end{array} \right) = \frac 14 {\varepsilon}^{2\gamma + \beta N -1} \rho \, d\xi \wedge da$$ The form $\Omega_{\sigma}$ is degenerate and so ${\mathcal{M}}_{{\varepsilon}}$ is not a symplectic manifold. This is in contrast to [@froh1] and [@hz], where the soliton manifold was defined by also varying the parameter $\omega$ in . 
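As a sanity check (purely illustrative, and in a one-dimensional toy with ${\varepsilon}=1$, $\sigma=0$, rather than the $N\ge 3$ setting of the paper), the block structure of $\Omega_{\sigma}$ can be verified numerically: with $z_{1}=-U'$, $z_{2}=\frac i2 xU$, $z_{3}=iU$ and a Gaussian stand-in for $U$, one finds $\omega(z_{1},z_{2})=-\rho/4$ and a vanishing $\theta$-row and column.

```python
import numpy as np

# 1D toy check (N = 1, eps = 1, sigma = 0) of the block structure of Omega_sigma,
# with omega(psi, phi) = Im \int psi conj(phi) and tangent vectors
# z1 = -U', z2 = (i/2) x U, z3 = i U.  U is a Gaussian stand-in, not a true profile.
x = np.linspace(-15.0, 15.0, 4001)
dx = x[1] - x[0]
U = np.exp(-x**2 / 2)
rho = (U**2).sum() * dx                        # the L^2 mass of U

z = [-np.gradient(U, dx), 0.5j * x * U, 1j * U]
omega = np.array([[((a * np.conj(b)).sum() * dx).imag for b in z] for a in z])

print(np.round(omega / rho, 4))
# Antisymmetric, with omega(z1, z2) = -rho/4 and the theta-row/column identically
# zero: the degenerate (rho/4) d(xi) ^ d(a) form computed above.
```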
Nevertheless, we use $\Omega_{\sigma}$ to obtain a dynamical system for $\sigma = (a,\xi,\theta)$ associated to the effective Hamiltonian $${\mathcal{H}}_{{\mathcal{M}}}(a,\xi,\theta) := {\mathcal{H}}(U_{\sigma}) = \frac 18 {\varepsilon}^{2\gamma+\beta N} \rho |\xi|^{2} + \frac 12 {\varepsilon}^{2\gamma + \beta N} \int_{{\mathbb R}^{N}} V(a+{\varepsilon}^{\beta}x) U^{2}(x)\, dx + const(U)\, .$$ It follows that $${\varepsilon}\dot \sigma = \frac 4\rho {\varepsilon}^{-(2\gamma + \beta N -1)} \left( \begin{array}{c} \partial_{\xi} {\mathcal{H}}_{{\mathcal{M}}} \\ - \partial_{a} {\mathcal{H}}_{{\mathcal{M}}} \\ 0 \end{array} \right)$$ hence the system of differential equations $$\label{traj} \left\{ \begin{aligned} & \dot a = \xi \\ & \dot \xi = - \frac 2\rho\, \int_{{\mathbb R}^{N}} \nabla V(a+{\varepsilon}^{\beta}x) U^{2}(x)\, dx\\ & \dot \theta = 0 \end{aligned} \right.$$ For a solution $\sigma(t) = (a(t), \xi(t), \theta(t))$ of we introduce the following notation which is needed below: $$\label{vt} {\mathrm{v}}(t) := \frac 1 \rho\, \int_{{\mathbb R}^{N}} \Big(\nabla V(a(t)+{\varepsilon}^{\beta}x) -\nabla V(a(t))\Big)U^{2}(x)\, dx$$ Approximation of the solution {#sec:approx} ============================= In [@froh1] and related papers, the main idea was to prove the existence of a unique symplectic decomposition for the solution of up to a given time $\tau$. This was achieved by proving that the solution stays for $t\le \tau$ in a small tubular neighbourhood of the symplectic manifold ${\mathcal{M}}_{{\varepsilon}}$, and using the existence of a symplectic projection on ${\mathcal{M}}_{{\varepsilon}}$. In this paper we instead define a particular projection of the solution on the manifold ${\mathcal{M}}_{{\varepsilon}}$, a projection which turns out to be “almost” symplectic up to some time $\tau$, and show that the difference between the solution and the projection is small for $t\le \tau$. 
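The finite dimensional system above is just Newtonian motion of the soliton center in the smeared potential. As a purely illustrative numerical sketch (a one-dimensional toy with a Gaussian stand-in for $U^{2}$ and the harmonic potential $V(x)=x^{2}/2$, which satisfies (Vr)), one can integrate it directly; note that with the normalization above the smeared force for this $V$ is exactly $-2a$, so the center performs harmonic motion $a(t)=a_{0}\cos(\sqrt{2}\,t)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical sketch of the effective dynamics for (a, xi, theta) in a 1D toy:
# V(x) = x^2/2 (so gradV(x) = x), Gaussian stand-in for U^2, eps^beta = 0.1.
y = np.linspace(-10.0, 10.0, 2001)
dy = y[1] - y[0]
U2 = np.exp(-y**2)
rho = U2.sum() * dy
eps_b = 0.1                                    # stands for eps^beta

def rhs(t, s):
    a, xi, theta = s
    # xi' = -(2/rho) \int gradV(a + eps^beta y) U(y)^2 dy
    force = -2.0 / rho * (((a + eps_b * y) * U2).sum() * dy)
    return [xi, force, 0.0]                    # theta' = 0, as derived above

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-10,
                dense_output=True)
# The Gaussian average of y drops out by symmetry, so the force is -2a and
# a(t) = cos(sqrt(2) t); at t = pi/sqrt(2) the center sits near a = -1.
print(sol.sol(np.pi / np.sqrt(2))[0])
```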
Let $\psi(t,x)$ be the solution of and assume that it is defined for all $t\in {\mathbb R}$. Let $\sigma(t)= (a(t), \xi(t), \theta(t))$ be the solution of with initial conditions $\sigma_{0}= (a_{0}, \xi_{0}, \theta_{0})$, and $U_{\sigma(t)}$ the element in ${\mathcal{M}}_{{\varepsilon}}$ associated to $\sigma(t)$. Moreover, let $\omega_{{\varepsilon}}(t)$ be the solution of the Cauchy problem $$\label{cauchy-om} \left\{ \begin{aligned} & \dot \omega_{{\varepsilon}}(t) = {\varepsilon}^{2-2\beta} \omega - \frac 14 |\xi(t)|^{2}+ V(a(t)) \\ & \omega_{{\varepsilon}}(0) = 0 \end{aligned} \right.$$ Notice that in the statement of Theorem \[main-result\] we use the notation $\vartheta(t) = \theta(t) - \omega_{{\varepsilon}}(t)$. Then we define $$\label{w} w(t,x) := e^{\frac i{\varepsilon}\, \omega_{{\varepsilon}}(t)}\, \psi(t,x) - U_{\sigma(t)}(x)\, ,$$ and, using $\tilde x = {\varepsilon}^{-\beta}(x-a(t))$, $$\label{w-tilde} \begin{aligned} \tilde w(t,\tilde x) := & {\varepsilon}^{-\gamma} e^{-\frac i{\varepsilon}(\frac 12 {\varepsilon}^{\beta}\tilde x \cdot \xi(t) + \theta(t))}\, w(t,a(t) + {\varepsilon}^{\beta}\tilde x)\\ = & {\varepsilon}^{-\gamma} e^{-\frac i{\varepsilon}(\frac 12 {\varepsilon}^{\beta}\tilde x \cdot \xi(t) + \theta(t) - \omega_{{\varepsilon}}(t))}\, \psi(t,a(t) + {\varepsilon}^{\beta}\tilde x) - U(\tilde x) \end{aligned}$$ The functions $w(t,x)$ and $\tilde w(t,\tilde x)$ represent the distance between the solution and the solitary wave (the solution with $V\equiv 0$), in the moving and in the fixed space-time frames respectively. Recall from - that the tangent space to the soliton manifold ${\mathcal{M}}_{{\varepsilon}}$ with ${\varepsilon}=1$ at $\sigma=0$ is generated by $$\begin{aligned} &z_{j,0}(x) = -\partial_{j}U(x)\, , \qquad j=1,\dots,N \label{tang-a-0}\\ &z_{j,0}(x) = i \frac 12\, x_{j-N} U(x)\, , \qquad j=N+1,\dots,2N \label{tang-xi-0}\\ &z_{2N+1,0}(x) = i U(x)\, . 
\label{tang-theta-0}\end{aligned}$$ Then \[sympl-w-tilde-w\] For all $t\in {\mathbb R}$ and all $\sigma=(a,\xi,\theta) \in {\mathbb R}^{2N+1}$, we have $$\omega( w, z_{j,\sigma}^{{\varepsilon}}) = \left\{ \begin{aligned} & {\varepsilon}^{2\gamma+\beta (N-1)}\, \omega( \tilde w, z_{j,0}) - \frac 12 {\varepsilon}^{2\gamma + \beta N-1} \, \xi_{j}(t)\, \omega( \tilde w, z_{2N+1,0})\, , & j=1,\dots,N \\ & {\varepsilon}^{2\gamma+\beta (N+1)-1}\, \omega( \tilde w, z_{j,0}) + \frac 12 {\varepsilon}^{2\gamma + \beta N-1} \, a_{j-N}(t)\, \omega( \tilde w, z_{2N+1,0})\, , & j=N+1,\dots,2N\\ & {\varepsilon}^{2\gamma+\beta N-1}\, \omega( \tilde w, z_{j,0})\, , & j=2N+1 \end{aligned} \right.$$ First of all, by standard manipulations, for all $j=1,\dots,2N+1$ $$\omega( w(t,x), z_{j,\sigma}^{{\varepsilon}}(x)) = {\mathrm{Im}}\int_{{\mathbb R}^{N}}\, w(t,x)\, \overline{z_{j,\sigma}^{{\varepsilon}} (t,x)}\, dx ={\varepsilon}^{\beta N} {\mathrm{Im}}\, \int_{{\mathbb R}^{N}}\, w(t,a(t)+{\varepsilon}^{\beta}\tilde x)\, \overline{z_{j,\sigma}^{{\varepsilon}} (t,a(t)+{\varepsilon}^{\beta}\tilde x)}\, d\tilde x =$$ $$= {\varepsilon}^{\beta N} {\mathrm{Im}}\, \int_{{\mathbb R}^{N}}\, e^{-\frac i{\varepsilon}(\frac 12 {\varepsilon}^{\beta}\tilde x \cdot \xi(t) + \theta(t))} w(t,a(t)+{\varepsilon}^{\beta}\tilde x)\ \overline{e^{-\frac i{\varepsilon}(\frac 12 {\varepsilon}^{\beta}\tilde x \cdot \xi(t) + \theta(t))} z_{j,\sigma}^{{\varepsilon}} (t,a(t)+{\varepsilon}^{\beta}\tilde x)}\, d\tilde x =$$ $$= {\varepsilon}^{\gamma + \beta N}\, \omega\Big(\tilde w(t,\tilde x), e^{-\frac i{\varepsilon}(\frac 12 {\varepsilon}^{\beta}\tilde x \cdot \xi(t) + \theta(t))} z_{j,\sigma}^{{\varepsilon}} (t,a(t)+{\varepsilon}^{\beta}\tilde x)\Big)\, ,$$ where in the last equality we have used . 
Moreover, using - and -, we have $$e^{-\frac i{\varepsilon}(\frac 12 {\varepsilon}^{\beta}\tilde x \cdot \xi(t) + \theta(t))} z_{j,\sigma}^{{\varepsilon}} (t,a(t)+{\varepsilon}^{\beta}\tilde x) = \left\{ \begin{aligned} & {\varepsilon}^{\gamma-\beta} z_{j,0}(\tilde x) - \frac 12 {\varepsilon}^{\gamma-1}\, \xi_{j}(t) z_{2N+1,0}(\tilde x)\, , & j=1,\dots,N\\ & {\varepsilon}^{\gamma+\beta-1} z_{j,0}(\tilde x) + \frac 12 {\varepsilon}^{\gamma-1}\, a_{j-N}(t) z_{2N+1,0}(\tilde x)\, , & j=N+1,\dots,2N\\ & {\varepsilon}^{\gamma -1}\, z_{j,0}(\tilde x)\, , & j= 2N+1 \end{aligned} \right.$$ and the proof is finished. We first study the evolution in time of the function $\tilde w(t,\tilde x)$. \[der-w-tilde\] Let $\tilde w(t,\tilde x)$ be defined as in for all $t\in {\mathbb R}$, then $$\begin{aligned} \partial_{t} \tilde w (t,\tilde x) = & \frac i {\varepsilon}\Big( {\varepsilon}^{\beta}\tilde x \cdot {\mathrm{v}}(t) - \mathcal{R}_{V}(t,\tilde x)\Big) \, \Big(U(\tilde x) + \tilde w(t,\tilde x)\Big) +\\ & + i {\varepsilon}^{1-2\beta} \Big[ \triangle \tilde w(t,\tilde x) + \omega\, \tilde w(t,\tilde x) - \Big( 2 f'(U^{2}(\tilde x)) U^{2}(\tilde x) + f(U^{2}(\tilde x)) \Big) {\mathrm{Re}}(\tilde w(t,\tilde x)) + \\ & - i\, f(U^{2}(\tilde x)) {\mathrm{Im}}(\tilde w(t,\tilde x))- \mathcal{R}_{F}(t,\tilde x)\Big] \end{aligned}$$ where ${\mathrm{v}}(t)$ is defined in and $$\begin{aligned} & \mathcal{R}_{V}(t,\tilde x):= V(a(t)+{\varepsilon}^{\beta}\tilde x) -V(a(t)) - {\varepsilon}^{\beta}\tilde x \cdot \nabla V(a(t))\\ & \mathcal{R}_{F}(t,\tilde x):= f(|U(\tilde x) + \tilde w(t,\tilde x)|^{2})\Big( U(\tilde x) + \tilde w(t,\tilde x) \Big) - f(U^{2}(\tilde x))U(\tilde x) +\\ & - \Big( 2 f'(U^{2}(\tilde x)) U^{2}(\tilde x) + f(U^{2}(\tilde x)) \Big) {\mathrm{Re}}(\tilde w(t,\tilde x)) - i\, f(U^{2}(\tilde x)) {\mathrm{Im}}(\tilde w(t,\tilde x))\end{aligned}$$ From , we have $$\psi(t,x) = {\varepsilon}^{\gamma}\, e^{\frac i{\varepsilon}(\frac 12 (x-a(t)) \cdot \xi(t) + \theta(t) - 
\omega_{{\varepsilon}}(t))}\, \Big(U({\varepsilon}^{-\beta}(x-a(t))) + \tilde w(t, {\varepsilon}^{-\beta}(x-a(t)))\Big)$$ hence, using the notation $$g(t,x) := e^{\frac i{\varepsilon}(\frac 12 (x-a(t)) \cdot \xi(t) + \theta(t) - \omega_{{\varepsilon}}(t))}$$ we have $$\begin{aligned} \partial_{t} \psi(t,x) = & \frac i {\varepsilon}\, \Big(-\frac 12 \dot a(t)\cdot \xi(t) + \frac 12 (x-a(t)) \cdot \dot \xi(t) + \dot \theta(t) - \dot \omega_{{\varepsilon}}(t)\Big) \psi(t,x) + \\ & + {\varepsilon}^{\gamma}\, g(t,x)\, \Big[ \partial_{t} \tilde w (t, {\varepsilon}^{-\beta}(x-a(t))) - {\varepsilon}^{-\beta} \dot a(t) \cdot \Big(\nabla U ({\varepsilon}^{-\beta}(x-a(t))) + \nabla \tilde w(t, {\varepsilon}^{-\beta}(x-a(t)))\Big) \Big]\end{aligned}$$ Hence $$\begin{aligned} \partial_{t} \tilde w (t, {\varepsilon}^{-\beta}(x-a(t))) = & {\varepsilon}^{-\gamma} g^{-1}(t,x) \partial_{t} \psi(t,x) + {\varepsilon}^{-\beta} \dot a(t) \cdot \Big(\nabla U ({\varepsilon}^{-\beta}(x-a(t))) + \nabla \tilde w(t, {\varepsilon}^{-\beta}(x-a(t)))\Big)+\\ & + \frac i {\varepsilon}\, \Big(\frac 12 \dot a(t)\cdot \xi(t) - \frac 12 (x-a(t)) \cdot \dot \xi(t) - \dot \theta(t) + \dot \omega_{{\varepsilon}}(t)\Big) {\varepsilon}^{-\gamma} g^{-1}(t,x) \psi(t,x)\end{aligned}$$ At this point we use that $\psi(t,x)$ is a solution of and change variable $\tilde x = {\varepsilon}^{-\beta}(x-a(t))$, to obtain $$\begin{aligned} \partial_{t} \tilde w (t, \tilde x) = & -\frac i {\varepsilon}\Big( V(a(t) + {\varepsilon}^{\beta}\tilde x) + f({\varepsilon}^{2\gamma-2\alpha} |U(\tilde x) + \tilde w(t,\tilde x)|^{2}) \Big)\, \Big( U(\tilde x) + \tilde w(t,\tilde x) \Big) +\\ & + i {\varepsilon}^{1-2\beta} \Big( \triangle U(\tilde x) + \triangle \tilde w(t,\tilde x)\Big) - \frac i 4 {\varepsilon}^{-1} |\xi(t)|^{2} \Big( U(\tilde x) + \tilde w(t,\tilde x) \Big) +\\ & + {\varepsilon}^{-\beta} \dot a(t) \cdot \Big(\nabla U (\tilde x) + \nabla \tilde w(t, \tilde x)\Big)- {\varepsilon}^{-\beta}\xi(t) \cdot \Big( \nabla 
U(\tilde x) + \nabla \tilde w(t,\tilde x) \Big) + \\ & + \frac i {\varepsilon}\, \Big(\frac 12 \dot a(t)\cdot \xi(t) - \frac 12 {\varepsilon}^{\beta}\tilde x \cdot \dot \xi(t) - \dot \theta(t) + \dot \omega_{{\varepsilon}}(t)\Big) \Big( U(\tilde x) + \tilde w(t,\tilde x) \Big)\end{aligned}$$ We now use that $\sigma(t)=(a(t),\xi(t),\theta(t))$ is a solution of and $\omega_{{\varepsilon}}(t)$ satisfies to see that $$\frac 12 \dot a(t)\cdot \xi(t) - \dot \theta(t) + \dot \omega_{{\varepsilon}}(t) -\frac 14 |\xi(t)|^{2} = {\varepsilon}^{2-2\beta}\, \omega + V(a(t))$$ Moreover using that $U$ solves we have $$\begin{aligned} \partial_{t} \tilde w (t, \tilde x) = & \frac i {\varepsilon}\Big( \frac 1\rho {\varepsilon}^{\beta}\tilde x\cdot \int_{{\mathbb R}^{N}}\, \nabla V (a(t) + {\varepsilon}^{\beta} y) U^{2}( y)\, dy + V(a(t)) - V(a(t)+{\varepsilon}^{\beta}\tilde x)\Big) \Big( U(\tilde x) + \tilde w(t,\tilde x) \Big) +\\ & + \frac i {\varepsilon}\Big( {\varepsilon}^{2-2\beta} f(U^{2}(\tilde x))U(\tilde x) - f({\varepsilon}^{2\gamma-2\alpha} |U(\tilde x) + \tilde w(t,\tilde x)|^{2})\Big( U(\tilde x) + \tilde w(t,\tilde x) \Big)\Big)+\\ & + i{\varepsilon}^{1-2\beta} \Big( \triangle \tilde w(t,\tilde x) + \omega\, \tilde w(t,\tilde x)\Big)\end{aligned}$$ Finally, in the first line we use $$\frac 1\rho {\varepsilon}^{\beta}\tilde x\cdot \int_{{\mathbb R}^{N}}\, \nabla V (a(t) + {\varepsilon}^{\beta} y) U^{2}( y)\, dy + V(a(t)) - V(a(t)+{\varepsilon}^{\beta}\tilde x) = {\varepsilon}^{\beta}\tilde x \cdot {\mathrm{v}}(t) - \mathcal{R}_{V}(t,\tilde x)$$ Then in the second line we write $$\begin{aligned} & f({\varepsilon}^{2\gamma-2\alpha} |U(\tilde x) + \tilde w(t,\tilde x)|^{2})\Big( U(\tilde x) + \tilde w(t,\tilde x) \Big) = f({\varepsilon}^{2\gamma-2\alpha} U^{2}(\tilde x))U(\tilde x) +\\ & + \Big[ 2 {\varepsilon}^{2\gamma-2\alpha} f'({\varepsilon}^{2\gamma-2\alpha} U^{2}(\tilde x)) U^{2}(\tilde x) + f({\varepsilon}^{2\gamma-2\alpha} U^{2}(\tilde x)) \Big] 
{\mathrm{Re}}(\tilde w(t,\tilde x)) + i\, f({\varepsilon}^{2\gamma-2\alpha} U^{2}(\tilde x)) {\mathrm{Im}}(\tilde w(t,\tilde x)) + \\ & + r_{_{F}}(t,\tilde x)\end{aligned}$$ where $r_{_{F}}(t,\tilde x)$ is defined by this equality. Using now assumption (N3) we have $$\begin{aligned} & f({\varepsilon}^{2\gamma-2\alpha} U^{2}(\tilde x))U(\tilde x) = {\varepsilon}^{2(\gamma-\alpha)p}\, f(U^{2}(\tilde x))U(\tilde x)\\ & {\varepsilon}^{2\gamma-2\alpha} f'({\varepsilon}^{2\gamma-2\alpha} U^{2}(\tilde x)) = {\varepsilon}^{2(\gamma-\alpha)p}\, f'(U^{2}(\tilde x))\end{aligned}$$ and by assumption (N4) $2-2\beta = 2(\gamma-\alpha)p$. Hence $$\begin{aligned} & {\varepsilon}^{2-2\beta} f(U^{2}(\tilde x))U(\tilde x) - f({\varepsilon}^{2\gamma-2\alpha} |U(\tilde x) + \tilde w(t,\tilde x)|^{2})\Big( U(\tilde x) + \tilde w(t,\tilde x) \Big) = \\ & = - {\varepsilon}^{2-2\beta} \Big[ \Big(2 f'(U^{2}(\tilde x)) U^{2}(\tilde x) + f(U^{2}(\tilde x)) \Big) {\mathrm{Re}}(\tilde w(t,\tilde x)) + i\, f(U^{2}(\tilde x)) {\mathrm{Im}}(\tilde w(t,\tilde x)) + \mathcal{R}_{F}(t,\tilde x)\Big]\end{aligned}$$ and the proof is finished. We now use Lemma \[sympl-w-tilde-w\] and Proposition \[der-w-tilde\] to estimate the growth of the function $w(t,x)$ in $L^{2}$ norm. \[main-part-1\] Let $\psi(t,x)$ be the solution of assumed to be defined for all $t\in {\mathbb R}$. Let $\sigma(t)= (a(t), \xi(t), \theta(t))$ be the solution of with initial conditions $\sigma_{0}= (a_{0}, \xi_{0}, \theta_{0})$, and $U_{\sigma(t)}$ the element in ${\mathcal{M}}_{{\varepsilon}}$ associated to $\sigma(t)$. Finally let $\omega_{{\varepsilon}}(t)$ be the solution of the Cauchy problem . 
If the function $w(t,x)$ defined in satisfies ${\left\Vertw(t,\cdot)\right\Vert}_{L^{2}}\le 1$ for $t\in (0, \tau)$ then $$\label{stima-princ-1} \Big| \partial_{t} {\left\Vertw(t,\cdot)\right\Vert}_{L^{2}}^{2} \Big| \le 4\, {\varepsilon}^{\gamma +\beta \frac N2}\, {\left\Vertw(t,\cdot)\right\Vert}_{L^{2}} \Big( \frac 12 {\varepsilon}^{\beta-1} {\left\Vert|x| U(x)\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{-1} {\left\Vert|\mathcal{R}_{V}(t,x)|\, U(x)\right\Vert}_{L^{2}} + {\varepsilon}^{1-2\beta}\, C(z_{j,0},U)\Big)$$ for all $t\in (0, \tau)$, where ${\mathrm{v}}(t)$ is defined in , $\mathcal{R}_{V}(t,x)$ is as in Proposition \[der-w-tilde\], and $C(z_{j,0},U)$ is defined in (N2). From we write $${\left\Verte^{\frac i{\varepsilon}\, \omega_{{\varepsilon}}(t)} \psi(t,\cdot)\right\Vert}_{L^{2}}^{2} = {\left\Vertw(t,\cdot)\right\Vert}_{L^{2}}^{2} + {\left\VertU_{\sigma(t)}(\cdot)\right\Vert}_{L^{2}}^{2} + 2\, {\mathrm{Re}}\int_{{\mathbb R}^{N}}\, w(t,x)\, \overline{U_{\sigma(t)}(x)}\, dx\, .$$ Moreover using $${\mathrm{Re}}\int_{{\mathbb R}^{N}}\, w(t,x)\, \overline{U_{\sigma(t)}(x)}\, dx = - {\mathrm{Im}}\int_{{\mathbb R}^{N}}\, w(t,x)\, \overline{i\, U_{\sigma(t)}(x)}\, dx = -2{\varepsilon}\, \omega(w, z_{2N+1,\sigma}^{{\varepsilon}})$$ and using $${\left\Verte^{\frac i{\varepsilon}\, \omega_{{\varepsilon}}(t)} \psi(t,\cdot)\right\Vert}_{L^{2}}^{2} = {\left\VertU_{\sigma(t)}(\cdot)\right\Vert}_{L^{2}}^{2}= {\varepsilon}^{2\gamma +\beta N}\, \rho\, , \qquad \forall\, t\, .$$ Hence $$\partial_{t} {\left\Vertw(t,\cdot)\right\Vert}_{L^{2}}^{2} = 4{\varepsilon}\, \partial_{t}\, \Big( \omega(w, z_{2N+1,\sigma}^{{\varepsilon}})\Big)$$ We now use Lemma \[sympl-w-tilde-w\] for $j=2N+1$ to write $$\label{primo-passo} \partial_{t} {\left\Vertw(t,\cdot)\right\Vert}_{L^{2}}^{2} = 4{\varepsilon}^{2\gamma+\beta N} \, \partial_{t}\, \Big( \omega(\tilde w, z_{2N+1,0})\Big) = 4{\varepsilon}^{2\gamma+\beta N} \, \omega(\partial_{t} \tilde w, z_{2N+1,0})$$ The final step 
is to use the results in Appendix \[app:calcoli\]. In particular, the notations - and Lemmas \[i1\]-\[i4\] imply $$\omega(\partial_{t} \tilde w, z_{2N+1,0}) = \omega(I_{2}, z_{2N+1,0}) + \omega(I_{4}, z_{2N+1,0})$$ and $$\Big| \omega(\partial_{t} \tilde w, z_{2N+1,0}) \Big| \le {\left\Vert\tilde w\right\Vert}_{L^{2}} \Big( \frac 12 {\varepsilon}^{\beta-1} {\left\Vert|\tilde x| U(\tilde x)\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{-1} {\left\Vert|\mathcal{R}_{V}(t,\tilde x)|\, U(\tilde x)\right\Vert}_{L^{2}} + {\varepsilon}^{1-2\beta}\, C(z_{j,0},U)\Big)$$ This, together with and $${\left\Vert\tilde w\right\Vert}_{L^{2}} = {\varepsilon}^{-\gamma -\beta \frac N2}\, {\left\Vertw\right\Vert}_{L^{2}}$$ which follows from , imply . Proof of Theorem \[main-result\] {#sec:proof} ================================ We first study the behavior of ${\mathrm{v}}(t)$ and $\mathcal{R}_{V}(t,x)$ as defined in and Proposition \[der-w-tilde\]. Notice that they are defined only in terms of $U$ and $V$, and do not depend on the solution $\psi(t,x)$ of . Let us first consider the case of potentials $V$ satisfying assumptions (Vr). 
By (C2) it is immediate that the system can be written as $$\left\{ \begin{aligned} & \dot a = \xi \\ & \dot \xi = -2\, \nabla V(a) + O({\varepsilon}^{\beta})\\ & \dot \theta = 0 \end{aligned} \right.$$ and for all $t\in {\mathbb R}$ $$\begin{aligned} & |{\mathrm{v}}(t)| \le {\varepsilon}^{\beta}\ N\, \frac{h_{V}}{\rho}\, \int_{{\mathbb R}^{N}}\, |x|\, U^{2}(x)\, dx \label{stima-v-r} \\ & |\mathcal{R}_{V}(t,x)| \le {\varepsilon}^{2\beta}\, \frac{N^{2}}{2} \, h_{V}\, |x|^{2} \label{stima-R-r}\end{aligned}$$ Now let $V$ satisfy assumptions (Vs); then by (Vs2) and (Vs3) it follows that if the solution of satisfies $a(t)\not= 0$ for all $t\in {\mathbb R}$, then $$\begin{aligned} & \Big| \int_{{\mathbb R}^{N}}\, V(a(t)+{\varepsilon}^{\beta}x)\, U^{2}(x)\, dx \Big| < \infty \\ & |{\mathrm{v}}(t)| \le |\nabla V(a(t))| + \Big| \int_{{\mathbb R}^{N}}\, \nabla V(a(t)+{\varepsilon}^{\beta}x)\, U^{2}(x)\, dx \Big| < \infty \label{stima-v-s} \\ & {\left\Vert\, |\mathcal{R}_{V}(t,x)|\, \varphi(x)\right\Vert}_{L^{2}} \le const(\zeta,\varphi) \label{stima-R-s}\end{aligned}$$ for all $t\in {\mathbb R}$ and for $\varphi(x) = U(x)$, $|x|U(x)$, $U(x)|\nabla U(x)|$. Let $\psi(t,x)$ be the solution of . Let $\sigma(t)= (a(t), \xi(t), \theta(t))$ be the solution of with initial conditions $\sigma_{0}= (a_{0}, \xi_{0}, \theta_{0})$ such that $a(t)\not= 0$ for all $t\in {\mathbb R}$ if $V$ is singular at the origin, and $U_{\sigma(t)}$ the element in ${\mathcal{M}}_{{\varepsilon}}$ associated to $\sigma(t)$. Finally let $\omega_{{\varepsilon}}(t)$ be the solution of the Cauchy problem . Then the function $w(t,x)$ defined in satisfies $${\left\Vertw(0,x)\right\Vert}_{L^{2}} = 0$$ Hence holds for $t\in (0,\tau)$ with $\tau$ small enough. 
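The quadratic smallness in ${\varepsilon}^{\beta}$ of the Taylor remainder $\mathcal{R}_{V}$, used in the estimate above, can be checked symbolically; the sample potential below is an assumption chosen purely for illustration, and any $C^{2}$ potential behaves the same way:

```python
import sympy as sp

h, x, a = sp.symbols('h x a', real=True)

# Sample smooth potential standing in for V (an illustrative assumption).
V = sp.exp

# R_V with h playing the role of eps^beta and x the role of the rescaled variable:
# R_V = V(a + h x) - V(a) - h x V'(a)
R_V = V(a + h*x) - V(a) - h*x*sp.diff(V(a), a)

# The remainder is quadratic in h: R_V / h^2 has a finite limit as h -> 0
lead = sp.limit(R_V / h**2, h, 0)
print(sp.simplify(lead))   # x**2*exp(a)/2
```

The finite limit confirms $|\mathcal{R}_{V}| = O({\varepsilon}^{2\beta})$ for this sample, in line with the second estimate above.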
Moreover, from - and - it follows that in both cases for $V$ there exists a constant $C(U,a_{0},\xi_{0},\zeta)$, not depending on $t$, such that from we get $$\Big| \partial_{t} {\left\Vertw(t,\cdot)\right\Vert}_{L^{2}} \Big| \le C(U,a_{0},\xi_{0},\zeta)\, {\varepsilon}^{\gamma+\beta \frac N2} \Big( {\varepsilon}^{\beta -1} + {\varepsilon}^{-1} + {\varepsilon}^{1-2\beta}\Big)\, ,\qquad \forall\, t\in (0,\tau)$$ Since $\beta \ge 1$, it follows $$\label{stima-princ-2} \Big| \partial_{t} {\left\Vertw(t,\cdot)\right\Vert}_{L^{2}} \Big| \le C(U,a_{0},\xi_{0},\zeta)\, {\varepsilon}^{1+\gamma+\beta (\frac N2-2)}\, ,\qquad \forall\, t\in (0,\tau)$$ First of all, by (N4) we can write $$\delta:= 1+\gamma+\beta \left(\frac N2-2 \right) = \frac N2 - 1 + \gamma + (\alpha-\gamma)\, p \left( \frac N2 -2\right)$$ and $\delta >0$ if $N\ge 4$. If $N=3$ instead we also need to assume $$\alpha - \gamma < \frac 2p \left(\gamma + \frac 12 \right)$$ to have $\delta >0$. However, in both cases we get $\tau = O({\varepsilon}^{-\delta})$, hence our argument is consistent. Moreover, for any fixed $\eta \in (0,\delta)$, estimate immediately implies $${\left\Vertw(t,\cdot)\right\Vert}_{L^{2}} = O({\varepsilon}^{\eta})$$ for all $t \in (0,T)$ with $T= O({\varepsilon}^{\eta-\delta})$. The proof is complete. The approximation of the symplectic projection {#app:calcoli} ============================================== We have approximated the solution $\psi(t,x)$ of by a projection on the manifold ${\mathcal{M}}_{{\varepsilon}}$ of solitons. As stated above, the manifold ${\mathcal{M}}_{{\varepsilon}}$ is not symplectic and the projection $U_{\sigma(t)}$ is not obtained by a symplectic decomposition as in [@froh1] and subsequent papers. However we now show that the difference $$w(t,x) = e^{\frac i {\varepsilon}\, \omega_{{\varepsilon}}(t)}\psi(t,x) - U_{\sigma(t)}(x)$$ is almost symplectically orthogonal to ${\mathcal{M}}_{{\varepsilon}}$ for long times. 
In particular we show that the quantities $\omega(w,z_{j,\sigma}^{{\varepsilon}})$ increase slowly. By Lemma \[sympl-w-tilde-w\], we only need to compute the derivatives $$\partial_{t} \omega(\tilde w, z_{j,0}) = \omega(\partial_{t}\tilde w, z_{j,0})$$ and use . We use Proposition \[der-w-tilde\] and write $$\partial_{t}\tilde w = I_{1} + I_{2} + I_{3} + I_{4}$$ where $$\begin{aligned} & I_{1}:= \frac i {\varepsilon}\Big( {\varepsilon}^{\beta}\tilde x \cdot {\mathrm{v}}(t) - \mathcal{R}_{V}(t,\tilde x)\Big) \, U(\tilde x) \label{di1} \\ & I_{2}:= \frac i {\varepsilon}\Big( {\varepsilon}^{\beta}\tilde x \cdot {\mathrm{v}}(t) - \mathcal{R}_{V}(t,\tilde x)\Big) \, \tilde w(t,\tilde x) \label{di2}\\ & I_{3}:= \begin{aligned}& i {\varepsilon}^{1-2\beta} \Big[ \triangle \tilde w(t,\tilde x) + \omega\, \tilde w(t,\tilde x) - \Big( 2 f'(U^{2}(\tilde x)) U^{2}(\tilde x) + f(U^{2}(\tilde x)) \Big) {\mathrm{Re}}( \tilde w(t,\tilde x)) + \\ & - i\, f(U^{2}(\tilde x)) {\mathrm{Im}}( \tilde w(t,\tilde x)) \Big] \end{aligned} \label{di3}\\ & I_{4}:= - i {\varepsilon}^{1-2\beta} \mathcal{R}_{F}(t,\tilde x) \label{di4}\end{aligned}$$ \[i1\] Recalling notation , we have $$\omega(I_{1}, z_{j,0}) = \left\{ \begin{aligned} & \frac 12 {\varepsilon}^{\beta-1} \rho {\mathrm{v}}_{j}(t) + \frac{1}{2{\varepsilon}} \int_{{\mathbb R}^{N}}\, \mathcal{R}_{V}(t,\tilde x)\, \partial_{j} (U^{2}) (\tilde x)\, d\tilde x\, , & j=1,\dots,N\\ & 0\, , & j=N+1,\dots, 2N+1 \end{aligned} \right.$$ For $j=1,\dots,N$, using and (C2), $$\begin{aligned} \omega(I_{1}, z_{j,0}) = & {\mathrm{Im}}\int_{{\mathbb R}^{N}}\, \frac i {\varepsilon}\Big( {\varepsilon}^{\beta}\tilde x \cdot {\mathrm{v}}(t) - \mathcal{R}_{V}(t,\tilde x)\Big) \, U(\tilde x) \, \overline{(- \partial_{j}U(\tilde x))}\, d\tilde x =\\ = & - \frac 12 {\varepsilon}^{\beta-1} \int_{{\mathbb R}^{N}}\, \tilde x \cdot {\mathrm{v}}(t) \, \partial_{j} (U^{2}) (\tilde x)\, d\tilde x + \frac{1}{2{\varepsilon}} \int_{{\mathbb R}^{N}}\, 
\mathcal{R}_{V}(t,\tilde x)\, \partial_{j} (U^{2}) (\tilde x)\, d\tilde x =\\ = & \frac 12 {\varepsilon}^{\beta-1} \int_{{\mathbb R}^{N}}\, U^{2}(\tilde x)\, \partial_{j}(\tilde x \cdot {\mathrm{v}}(t))\, d\tilde x + \frac{1}{2{\varepsilon}} \int_{{\mathbb R}^{N}}\, \mathcal{R}_{V}(t,\tilde x)\, \partial_{j} (U^{2}) (\tilde x)\, d\tilde x = \\ = & \frac 12 {\varepsilon}^{\beta-1} \rho {\mathrm{v}}_{j}(t) + \frac{1}{2{\varepsilon}} \int_{{\mathbb R}^{N}}\, \mathcal{R}_{V}(t,\tilde x)\, \partial_{j} (U^{2}) (\tilde x)\, d\tilde x\end{aligned}$$ For $j=N+1,\dots, 2N$, using , $$\begin{aligned} \omega(I_{1}, z_{j,0}) = & {\mathrm{Im}}\int_{{\mathbb R}^{N}}\, \frac i {\varepsilon}\Big( {\varepsilon}^{\beta}\tilde x \cdot {\mathrm{v}}(t) - \mathcal{R}_{V}(t,\tilde x)\Big) \, U(\tilde x) \, \overline{\Big(i \frac 12 x_{j-N} U(\tilde x)\Big)}\, d\tilde x = 0\end{aligned}$$ For $j=2N+1$, using , $$\begin{aligned} \omega(I_{1}, z_{j,0}) = & {\mathrm{Im}}\int_{{\mathbb R}^{N}}\, \frac i {\varepsilon}\Big( {\varepsilon}^{\beta}\tilde x \cdot {\mathrm{v}}(t) - \mathcal{R}_{V}(t,\tilde x)\Big) \, U(\tilde x) \, \overline{\Big(i U(\tilde x)\Big)}\, d\tilde x = 0 \\\end{aligned}$$ and the proof is finished. 
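The integration by parts performed in the display chain above (passing the derivative from $\partial_{j}(U^{2})$ onto $\tilde x \cdot {\mathrm{v}}(t)$) can be checked symbolically in one dimension; the Gaussian profile below is an illustrative assumption standing in for $U$:

```python
import sympy as sp

x, v = sp.symbols('x v', real=True)

U = sp.exp(-x**2 / 2)   # sample decaying profile standing in for U(tilde x)

# -int x*v (U^2)' dx  =  int U^2 * d/dx (x*v) dx  =  rho * v
lhs = -sp.integrate(x*v*sp.diff(U**2, x), (x, -sp.oo, sp.oo))
rho = sp.integrate(U**2, (x, -sp.oo, sp.oo))   # here rho = sqrt(pi)

print(sp.simplify(lhs - rho*v))   # 0, matching the last step in the proof of Lemma [i1]
```

The boundary terms vanish because $U$ decays at infinity, which is why only the $\rho\,{\mathrm{v}}_{j}(t)$ term survives.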
\[i2\] Recalling notation , we have $$|\omega(I_{2}, z_{j,0})| \le \left\{ \begin{aligned} & {\left\Vert\tilde w\right\Vert}_{L^{2}}\, \Big( {\varepsilon}^{\beta-1} {\left\Vert|\tilde x| \left|\nabla U(\tilde x)\right|\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{-1} {\left\Vert\mathcal{R}_{V}(t,\cdot)\, |\nabla U|\right\Vert}_{L^{2}}\Big) \, , & j=1,\dots,N\\ & {\left\Vert\tilde w\right\Vert}_{L^{2}}\, \Big( \frac 12 {\varepsilon}^{\beta-1} {\left\Vert|\tilde x|^{2} U(\tilde x)\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{-1} {\left\Vert\mathcal{R}_{V}(t,\tilde x)\, |\tilde x| U(\tilde x)\right\Vert}_{L^{2}}\Big) \, , & j=N+1,\dots, 2N\\ & {\left\Vert\tilde w\right\Vert}_{L^{2}}\, \Big( \frac 12 {\varepsilon}^{\beta-1} {\left\Vert|\tilde x| U(\tilde x)\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{-1} {\left\Vert\mathcal{R}_{V}(t,\cdot)\, U\right\Vert}_{L^{2}}\Big) \, , & j=2N+1 \end{aligned} \right.$$ For $j=1,\dots,N$, using , $$\begin{aligned} |\omega(I_{2}, z_{j,0})| = & \Big| {\mathrm{Im}}\int_{{\mathbb R}^{N}}\, \frac i {\varepsilon}\Big( {\varepsilon}^{\beta}\tilde x \cdot {\mathrm{v}}(t) - \mathcal{R}_{V}(t,\tilde x)\Big) \, \tilde w(t,\tilde x) \, \overline{(- \partial_{j}U(\tilde x))}\, d\tilde x \Big|\le\\ \le & {\varepsilon}^{\beta-1} \int_{{\mathbb R}^{N}}\, |\tilde x \cdot {\mathrm{v}}(t)| \, |\tilde w(t,\tilde x)| \left| \partial_{j} U (\tilde x)\right|\, d\tilde x + {\varepsilon}^{-1} \int_{{\mathbb R}^{N}}\, \left|\mathcal{R}_{V}(t,\tilde x)\right| \, |\tilde w(t,\tilde x)| \left| \partial_{j} U (\tilde x)\right|\, d\tilde x \end{aligned}$$ and then use $|\tilde x \cdot {\mathrm{v}}(t)|\le |\tilde x| |{\mathrm{v}}(t)|$ and Cauchy-Schwarz inequality. The cases $j=N+1,\dots, 2N+1$ are proved in the same way. 
\[i3\] $$\omega(I_{3}, z_{j,0}) = \left\{ \begin{aligned} & 0\, , & j=1,\dots,N\\ & {\varepsilon}^{1-2\beta}\, \omega(\tilde w, z_{j-N,0}) \, , & j=N+1,\dots, 2N\\ & 0\, , & j=2N+1 \end{aligned} \right.$$ Notice that $$I_{3} = i {\varepsilon}^{1-2\beta}\, \mathcal{L}(\tilde w)$$ where $\mathcal{L}$ is the Hessian of the energy associated to . Then $$\omega(I_{3}, z_{j,0}) = - {\varepsilon}^{1-2\beta} {\mathrm{Im}}\int_{{\mathbb R}^{N}}\, \tilde w(t,\tilde x)\, \overline{i \mathcal{L}(z_{j,0})}\, d\tilde x$$ and we conclude using Lemma 2 in [@colliding]. \[i4\] If ${\left\Vert\tilde w(t,\cdot)\right\Vert}_{L^{2}}\le 1$ then $$|\omega(I_{4}, z_{j,0})| \le {\varepsilon}^{1-2\beta}\, C(z_{j,0},U) \, {\left\Vert\tilde w\right\Vert}_{L^{2}}$$ for all $j=1,\dots,2N+1$. This follows immediately from (N2) and (C2). We can now prove \[crescita\] Under the same assumptions as in Theorem \[main-result\], we have $$\max_{j=1,\dots,2N+1}\, \Big| \partial_{t}\, \omega(w,z_{j,\sigma}^{{\varepsilon}}) \Big| = O({\varepsilon}^{\delta-1})$$ for all $t\in (0,\tau)$ with $\tau= O({\varepsilon}^{-\delta})$. 
From Lemma \[sympl-w-tilde-w\], we have for $j=1,\dots,N$ $$\partial_{t}\, \omega(w,z_{j,\sigma}^{{\varepsilon}}) = {\varepsilon}^{2\gamma+\beta(N-1)}\, \omega(\partial_{t} \tilde w, z_{j,0}) - \frac 12 {\varepsilon}^{2\gamma+\beta N-1}\, \xi_{j}(t) \omega(\partial_{t} \tilde w, z_{2N+1,0}) - \frac 12 {\varepsilon}^{2\gamma+\beta N-1}\, \dot \xi_{j}(t) \omega(\tilde w, z_{2N+1,0})$$ and for $j=N+1,\dots,2N$ $$\begin{aligned} \partial_{t}\, \omega(w,z_{j,\sigma}^{{\varepsilon}}) =\, & {\varepsilon}^{2\gamma+\beta(N+1)-1}\, \omega(\partial_{t} \tilde w, z_{j,0}) + \frac 12 {\varepsilon}^{2\gamma+\beta N-1}\, a_{j-N}(t) \omega(\partial_{t} \tilde w, z_{2N+1,0}) + \\ +\, & \frac 12 {\varepsilon}^{2\gamma+\beta N-1}\, \dot a_{j-N}(t) \omega(\tilde w, z_{2N+1,0})\end{aligned}$$ Hence using Lemmas \[i1\]-\[i4\], we have the following estimates: for $j=1,\dots,N$ $$\begin{aligned} \Big| \partial_{t}\, \omega(w,z_{j,\sigma}^{{\varepsilon}}) \Big| \le\, & \frac 12 {\varepsilon}^{2\gamma + \beta N -1} \, \rho |{\mathrm{v}}(t)| + {\varepsilon}^{2\gamma + \beta (N-1) -1} \, {\left\Vert\mathcal{R}_{V}(t,\cdot) U\right\Vert}_{L^{2}}\, {\left\Vert\nabla U\right\Vert}_{L^{2}} +\\ +\, & {\left\Vert\tilde w\right\Vert}_{L^{2}}\, \Big( {\varepsilon}^{2\gamma + \beta N -1} {\left\Vert|x| \left|\nabla U(x)\right|\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{2\gamma + \beta (N-1) -1} {\left\Vert\mathcal{R}_{V}(t,\cdot)\, |\nabla U|\right\Vert}_{L^{2}}\Big) +\\ +\, & {\varepsilon}^{2\gamma+\beta(N-3)+1}\, C(z_{j,0},U) {\left\Vert\tilde w\right\Vert}_{L^{2}}+ \frac 12 {\varepsilon}^{2\gamma+\beta(N-2)}\, |\xi(t)|\, C(z_{2N+1,0},U) {\left\Vert\tilde w\right\Vert}_{L^{2}}+ \\ +\, & \frac 12 |\xi(t)|\, {\left\Vert\tilde w\right\Vert}_{L^{2}}\, \Big( \frac 12 {\varepsilon}^{2\gamma + \beta(N+1)-2} {\left\Vert|x| U(x)\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{2\gamma + \beta N -2} {\left\Vert\mathcal{R}_{V}(t,\cdot)\, U\right\Vert}_{L^{2}}\Big) + \\ +\, & \frac 12 
{\varepsilon}^{2\gamma + \beta N -1}\, \rho^{\frac 12}\, |\dot \xi(t)|\, {\left\Vert\tilde w\right\Vert}_{L^{2}}\\\end{aligned}$$ for $j=N+1,\dots,2N$ $$\begin{aligned} \Big| \partial_{t}\, \omega(w,z_{j,\sigma}^{{\varepsilon}}) \Big| \le\, & {\left\Vert\tilde w\right\Vert}_{L^{2}}\, \Big( \frac 12 {\varepsilon}^{2\gamma + \beta(N+2)-2} {\left\Vert|x|^{2} U(x)\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{2\gamma + \beta(N+1)-2} {\left\Vert\mathcal{R}_{V}(t,x)\, |x| U(x)\right\Vert}_{L^{2}}\Big) +\\ +\, & {\varepsilon}^{2\gamma + \beta (N-1)}\, {\left\Vert|x| U(x)\right\Vert}_{L^{2}}\, {\left\Vert\tilde w\right\Vert}_{L^{2}} + {\varepsilon}^{2\gamma+\beta(N-1)}\, C(z_{j,0},U) {\left\Vert\tilde w\right\Vert}_{L^{2}}+\\ +\, & \frac 12 |a(t)|\, {\left\Vert\tilde w\right\Vert}_{L^{2}}\, \Big( \frac 12 {\varepsilon}^{2\gamma + \beta(N+1)-2} {\left\Vert|x| U(x)\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{2\gamma + \beta N -2} {\left\Vert\mathcal{R}_{V}(t,\cdot)\, U\right\Vert}_{L^{2}}\Big) + \\ +\, & \frac 12 {\varepsilon}^{2\gamma+\beta(N-2)}\, |a(t)|\, C(z_{2N+1,0},U) {\left\Vert\tilde w\right\Vert}_{L^{2}}+ \frac 12 {\varepsilon}^{2\gamma + \beta N -1}\, \rho^{\frac 12}\, |\dot a(t)|\, {\left\Vert\tilde w\right\Vert}_{L^{2}}\\\end{aligned}$$ Moreover we have from the proof of Theorem \[main-part-1\] that $$\begin{aligned} \Big| \partial_{t}\, \omega(w,z_{2N+1,\sigma}^{{\varepsilon}}) \Big| \le\, & {\left\Vert\tilde w\right\Vert}_{L^{2}} \Big( \frac 12 {\varepsilon}^{2\gamma + \beta(N+1)-1} {\left\Vert|x| U(x)\right\Vert}_{L^{2}}\, |{\mathrm{v}}(t)| + {\varepsilon}^{2\gamma + \beta N-2} {\left\Vert|\mathcal{R}_{V}(t,\cdot)|\, U\right\Vert}_{L^{2}} +\\ +\, & {\varepsilon}^{2\gamma + \beta(N-2)}\, C(z_{j,0},U)\Big)\end{aligned}$$ Arguing now as in the proof of Theorem \[main-result\], and using $${\left\Vert\tilde w\right\Vert}_{L^{2}} = {\varepsilon}^{-\gamma -\beta \frac N2}\, {\left\Vertw\right\Vert}_{L^{2}}$$ we have for $\beta \ge 1$ 
$$\max_{j=1,\dots,2N+1}\, \Big| \partial_{t}\, \omega(w,z_{j,\sigma}^{{\varepsilon}}) \Big| \le C(U,a_{0},\xi_{0},V)\, \Big( {\varepsilon}^{2\gamma + \beta(N-1)-1} + {\left\Vertw\right\Vert}_{L^{2}} {\varepsilon}^{\gamma + \beta(\frac N2-2)}\Big)$$ By , this implies that $$\max_{j=1,\dots,2N+1}\, \Big| \partial_{t}\, \omega(w,z_{j,\sigma}^{{\varepsilon}}) \Big| \le C(U,a_{0},\xi_{0},V)\, {\varepsilon}^{\gamma + \beta(\frac N2-2)}$$ for all $t\in (0,\tau)$ with $\tau= O({\varepsilon}^{-\delta})$. [100]{} W. Abou Salem, J. Fröhlich, I.M. Sigal, *Colliding solitons for the nonlinear Schrödinger equation*, Comm. Math. Phys. [**291**]{} (2009), 151–176. J. Bellazzini, V. Benci, M. Ghimenti, A.M. Micheletti, *On the existence of the fundamental eigenvalue of an elliptic problem in $R^{N}$*, Adv. Nonlinear Stud. [**7**]{} (2007), 439–458. V. Benci, M. Ghimenti, A.M. Micheletti, [*The nonlinear Schrödinger equation: soliton dynamics*]{}, J. Differential Equations [**249**]{} (2010), 3312–3341. V. Benci, M. Ghimenti, A.M. Micheletti, [*On the dynamics of solitons in the nonlinear Schrödinger equation*]{}, Arch. Ration. Mech. Anal. [**205**]{} (2012), 467–492. H. Berestycki, P.L. Lions, *Nonlinear scalar field equations. I. Existence of a ground state*, Arch. Rational Mech. Anal. **82** (1982), 313–345 C. Bonanno, M. Ghimenti, M. Squassina, *Soliton dynamics of NLS with singular potentials*, Dyn. Partial Differ. Equ. **10** (2013), 177–207. C. Bonanno, P. d’Avenia, M. Ghimenti, M. Squassina, *Soliton dynamics for the generalized Choquard equation*, arXiv:1310.3067 \[math.AP\] J.C. Bronski, R.L. Jerrard, [*Soliton dynamics in a potential*]{}, Math. Res. Lett. [**7**]{} (2000), 329–342. T. Cazenave, “Semilinear Schrödinger Equations”, Courant Lect. Notes Math., vol. 10, New York University Courant Institute of Mathematical Sciences, New York, 2003. J. Fröhlich, S. Gustafson, B.L.G. Jonsson, I.M. Sigal, [*Solitary wave dynamics in an external potential*]{}, Comm. Math. Phys. 
[**250**]{} (2004), 613–642. J. Fröhlich, S. Gustafson, B.L.G. Jonsson, I.M. Sigal, *Long time motion of NLS solitary waves in a confining potential*, Ann. Henri Poincaré [**7**]{} (2006), 621–660. M. Grillakis, J. Shatah, W. Strauss, *Stability theory of solitary waves in the presence of symmetry, I*, J. Funct. Anal. **74** (1987), 160–197. J. Holmer, M. Zworski, *Slow soliton interaction with delta impurities*, J. Mod. Dyn. [**1**]{} (2007), 689–718 J. Holmer, M. Zworski, *Soliton interaction with slowly varying potentials*, Int. Math. Res. Not. IMRN **2008** (2008), Art. ID rnn026 S. Keraani, [*Semiclassical limit for nonlinear Schrödinger equations with potential II*]{}, Asymptot. Anal. [**47**]{} (2006), 171–186. M.K. Kwong, *Uniqueness of positive solutions of $\Delta u-u+u^p=0$ in ${\mathbb R}^n$*, Arch. Rational Mech. Anal. [**105**]{} (1989), 243–266. M.I. Weinstein, *Modulational stability of ground state of nonlinear Schrödinger equations*, SIAM J. Math. Anal. [**16**]{} (1985), 472–491. M.I. Weinstein, *Lyapunov stability of ground states of nonlinear dispersive evolution equations*, Comm. Pure Appl. Math. [**39**]{} (1986), 51–67.
--- author: - '[V.N.Berestovskii, I.A.Zubareva]{}' title: 'Correct observer’s event horizon in de Sitter space-time' --- [^1] Introduction and main results {#in} ============================= In this paper we continue to study the properties of the future and the past event horizons for any time-like geodesic in de Sitter space-time of the first kind $S(R)$, announced earlier in the paper [@Ber]. Below we shall use the shorter term “de Sitter space-time”. In Section \[pr\] we give the necessary definitions and known results about globally hyperbolic space-times. In Section \[osn\] we give an exact description of the past event horizon for every time-like geodesic of de Sitter space-time. We prove the theorem announced earlier as Theorem 8 in the paper [@Ber]. This theorem establishes a connection between the future and the past event horizons and relates them to the so-called Lobachevsky space of positive curvature $\frac{1}{R^2}$ in the sense of B.A.Rosenfeld (see p. 155 in [@Ros]), which is obtained by gluing antipodal events in $S(R)$. In this paper we give correct figures (see Figs. 2, 3) of an observer’s event horizon, which differ from the corresponding incorrect figure (see Fig. 1) in Hawking’s book [@Hokrus] on page 120. Preliminaries {#pr} ============= We now recall the necessary definitions from [@Beem], [@Hok]. Let $M$ be a $C^{\infty}$-manifold of dimension $n+1\geq 2$. A *Lorentzian metric* $g$ for $M$ is a smooth symmetric tensor field of type $(0,2)$ on $M$ which assigns to each point $p\in M$ a nondegenerate inner product $g\mid_{p}:T_pM\times T_pM\rightarrow\mathbb{R}$ of signature $(+,+,\dots, +,-).$ Then the pair $(M,g)$ is said to be a *Lorentzian manifold*. A nonzero tangent vector $v$ is said to be *time-like, space-like*, or *isotropic* if $g(v,v)$ is negative, positive, or zero, respectively. A tangent vector $v$ is said to be *non-space-like* if it is time-like or isotropic. 
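As a concrete numerical illustration of this classification (a minimal sketch; the sample vectors are arbitrary), one can evaluate the Lorentzian inner product with the diagonal metric of signature $(+,\dots,+,-)$:

```python
import numpy as np

# Minkowski metric on R^{n+1} with signature (+, ..., +, -), here n = 3
n = 3
g = np.diag([1.0] * n + [-1.0])

def inner(u, v):
    """Lorentzian inner product g(u, v)."""
    return u @ g @ v

timelike  = np.array([0.0, 0.0, 0.0, 1.0])   # g(v, v) = -1 < 0
spacelike = np.array([1.0, 0.0, 0.0, 0.0])   # g(v, v) =  1 > 0
isotropic = np.array([1.0, 0.0, 0.0, 1.0])   # g(v, v) =  0

print(inner(timelike, timelike), inner(spacelike, spacelike), inner(isotropic, isotropic))
# -1.0 1.0 0.0
```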
A continuous vector field $X$ on a Lorentzian manifold $M$ is called *time-like* if $g(X(p),X(p))<0$ for all events $p\in M.$ If a Lorentzian manifold $(M,g)$ admits a time-like vector field $X$, then we say that $(M,g)$ is *time oriented by the field $X$*. The time-like vector field $X$ separates all non-space-like vectors into two disjoint classes of *future directed* and *past directed* vectors. More exactly, a non-space-like vector $v\in T_pM,\,\,p\in M,$ is said to be *future directed* (respectively, *past directed*) if $g(X(p),v)<0$ (respectively, $g(X(p),v)>0$). A Lorentzian manifold is *time orientable* if it admits some time-like vector field $X$. \[spacetime\] A space-time $(M,g)$ is a connected Hausdorff manifold of dimension at least two with a countable basis, supplied with a Lorentzian $C^{\infty}$-metric $g$ and some time orientation. A continuous piecewise smooth curve (path) $c=c(t)$ with $t\in [a,b]$ or $t\in (a,b)$ on a Lorentzian manifold $(M,g)$ is said to be *non-space-like* if $g(c'_{l}(t),c'_{r}(t))\leq 0$ for every inner point $t$ of the domain of the curve $c$, where $c'_{l}(t)$ (respectively, $c'_{r}(t)$) denotes the left (respectively, right) tangent vector. If $(M,g)$ is a space-time, then every such curve $c=c(t)$ with $t\in [a,b]$ or $t\in (a,b)$ is either *future directed* or *past directed*, i.e. all (one-sided) tangent vectors of the curve $c$ are directed either to the future or to the past. The *causal future* $J^+(L)$ (respectively, the *causal past* $J^-(L)$) of a subset $L$ of the space-time $(M,g)$ is defined as the set of all events $q\in M$ for which there exists a future directed (respectively, past directed) curve $c=c(t), t\in [a,b],$ such that $c(a)\in L, c(b)=q.$ If $p\in M$, then we will use the reduced notation $J^+(p)$ and $J^-(p)$ instead of $J^+(\{p\})$ and $J^-(\{p\})$. 
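In the Minkowski space-time *Mink*$^{\,1+1}$ these definitions take a well-known explicit form: $q\in J^+(p)$ if and only if $t_q - t_p \geq |x_q - x_p|$. A minimal sketch of this criterion (the sample events are arbitrary):

```python
def in_causal_future(p, q):
    """In Mink^{1+1}: q in J^+(p) iff the time gap dominates the spatial gap."""
    (xp, tp), (xq, tq) = p, q
    return (tq - tp) >= abs(xq - xp)

p = (0.0, 0.0)
print(in_causal_future(p, (0.5, 1.0)))   # True : inside the light cone of p
print(in_causal_future(p, (2.0, 1.0)))   # False: space-like separated from p
print(in_causal_future(p, (1.0, 1.0)))   # True : on the isotropic cone of p
```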
\[stronglycausal\][@Beem] An open set $U$ in a space-time is said to be causally convex if no non-space-like curve intersects $U$ in a disconnected set. The space-time $(M,g)$ is said to be strongly causal if each event in $M$ has arbitrarily small causally convex neighborhoods. The following important statement was proved in [@Beem] as Proposition 2.7. \[alsc\] A space-time $(M,g)$ is strongly causal if and only if the sets of the form $I^+(p)\cap I^-(q)$ with arbitrary $p,q\in M$ form a basis of the original topology (i.e. the Alexandrov topology induced on $(M,g)$ agrees with the given manifold topology). Here $I^+(p)$ (respectively, $I^-(p)$) denotes the chronological future (respectively, past) of $p$, defined as $J^+(p)$ (respectively, $J^-(p)$) with time-like curves in place of non-space-like ones. \[globgiperbolic\] A space-time $(M,g)$ is called globally hyperbolic if it is strongly causal and satisfies the condition that $J^+(p)\cap J^-(q)$ is compact for all $p,q\in M$. \[horizons\] Let $S$ be a subset of a globally hyperbolic space-time $(M,g).$ Then $\Gamma^{-}(S)$ (respectively, $\Gamma^{+}(S)$) denotes the boundary of the set $J^{-}(S)$ (respectively, $J^{+}(S)$) and is called the past event horizon (respectively, the future event horizon) of the set $S$. The simplest example of a globally hyperbolic space-time is *the Minkowski space-time* *Mink*$^{\,n+1}$, $n+1\geq 2,$ i.e. the manifold $\mathbb{R}^{n+1}$ with the Lorentzian metric $g$ whose components $g_{ij}$ in the natural coordinates $(x_1,\dots,x_n,t)$ on $\mathbb{R}^{n+1}$ are constant: $$g_{ij}=0,\,\,\mbox{if}\,\,i\neq j;\quad g_{11}=\ldots =g_{nn}=1,\,\,g_{(n+1)(n+1)}=-1.$$ The time orientation is defined by the vector field $X$ with components $(0,\dots,0,1)$ relative to the canonical coordinates in $\mathbb{R}^{n+1}.$ A more interesting example is *de Sitter space-time*. It can be visualized as the one-sheeted hyperboloid $S(R)$ $$\begin{aligned} \label{m1} \sum_{k=1}^{n}x_k^2-t^2=R^2,\,\,R>0,\end{aligned}$$ in the Minkowski space-time *Mink*$^{\,n+1}$, $n+1\geq 3,$ with the Lorentzian metric induced from *Mink*$^{\,n+1}$. 
*The Lorentz group* is the group of all linear isometries of the space *Mink*$^{\,n+1}$ that map the "upper" sheet of the hyperboloid $\sum_{k=1}^{n}x_k^2-t^2=-1$ (which is isometric to the Lobachevsky space of constant sectional curvature $-1$) to itself. The Lorentz group acts transitively by isometries on $S(R).$ The time orientation on $S(R)$ is defined by the unit tangent vector field $Y$ orthogonal to all space-like sections $$S(R,c)=S(R)\cap \{(x_1,\dots,x_{n}, t)\in \mathbb{R}^{n+1}: t=c\},\,\,c\in\mathbb{R}.$$ Notice that every integral curve of the vector field $Y$ is a future directed time-like geodesic in $S(R)$. Therefore we can consider it as the world line of some observer. Main result {#osn} =========== The main result of this paper is the following \[gorizont\] Let $L$ be a time-like geodesic in de Sitter space-time and let $\Gamma^{-}(L)$ be the past event horizon for $L$ (the observer's event horizon). Then 1\. $\Gamma^{-}(L)=S(R)\cap\alpha$, where $\alpha$ is a hyperplane in $\mathbb{R}^{n+1}$ passing through the origin of the coordinate system, and $\Gamma^{-}(L)$ consists of isotropic geodesics. 2\. $J^+(L)=-J^-(-L),$ $J^-(L)=-J^+(-L).$ 3\. The sets $J^-(L)$ and $J^+(-L)$ (respectively, $J^+(L)$ and $J^-(-L)$) do not intersect and have the common boundary $\Gamma^{-}(L)$. In particular, the past event horizon for $L$ coincides with the future event horizon for $-L$ (respectively, the future event horizon for $L$ coincides with the past event horizon for $-L$). 4\. The quotient map $pr: S(R)\rightarrow S^1_n(R),$ gluing antipodal events in $S(R),$ is a diffeomorphism on each of the open submanifolds $J^+(L),$ $J^-(-L),$ $J^-(L),$ $J^+(-L)$ and identifies antipodal events of the boundaries of these submanifolds (i.e. the corresponding event horizons). 5\. The quotient manifold $(S^1_n(R),G),$ where $g=pr^{\ast}G,$ is the Lobachevsky space of positive curvature $\frac{1}{R^2}$ in the sense of B. A. Rosenfeld (see p. 155 in [@Ros]).
Notice that if $L$ and $L^{\prime}$ are time-like geodesics in $S(R)$ and $p\in L$, $p^{\prime}\in L^{\prime},$ then there exists a time-direction-preserving isometry $i$ of de Sitter space-time such that $i(p)=p^{\prime}$ and $i(L)=L^{\prime}$. Then $i$ translates the past event horizon of $L$ to the past event horizon of $L^{\prime}$. Therefore it is enough to prove points 1–4 of Theorem \[gorizont\] for a single time-like geodesic. We will suppose that $L$ is the integral curve of the vector field $Y$ which intersects $S(R,0)$ at the event $p$ with Cartesian coordinates $(R,0,\dots,0)$. Let us show that $$\begin{aligned} \label{m2} \Gamma^{-}(L)=S(R)\cap\{(x_1,\dots,x_{n},t)\in \mathbb{R}^{n+1}: x_1=t\}.\end{aligned}$$ Denote by $C_p$ the isotropic cone at the point $p$ with Cartesian coordinates $(R,0,\dots,0)$. It follows from (\[m1\]) that $$\begin{aligned} \label{m3} C_p=\{(x_1,\dots,x_n,t)\in S(R)\,\mid\,\,x_1=R\}.\end{aligned}$$ Note that the causal past $J^{-}(p)$ of the event $p$ is the region in $S(R)$ lying in the half-space $t<0$ and bounded by the isotropic cone $C_p$. For any $\psi\in\mathbb{R},$ the restriction of the Lorentz transformation $\Phi_\psi:S(R)\rightarrow S(R)$, realizing the hyperbolic rotation in the two-dimensional plane $Ox_1t$ given by the formulas $$\begin{aligned} \label{m4} x^{\prime}_1=x_1{\operatorname{ch}}\psi+t{\operatorname{sh}}\psi;\quad x^{\prime}_i=x_i,\,\,i=2,\dots,n; \quad t^{\prime}=x_1{\operatorname{sh}}\psi+t{\operatorname{ch}}\psi,\end{aligned}$$ is an isometry of the space-time $S(R)$ preserving the time direction. The set $\Phi$ of all such transformations $\Phi_\psi$, $\psi\in\mathbb{R}$, forms a one-parameter subgroup of the Lorentz group. The orbit of the event $p$ with respect to $\Phi$ coincides (up to parametrization) with the curve $L$, and the event $L(\psi):=\Phi_{\psi}(p)$ has Cartesian coordinates $(R{\operatorname{ch}}\psi,0,\ldots,0,R{\operatorname{sh}}\psi)$.
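As a quick numerical check (our own sketch, not part of the proof), one can verify that the hyperbolic rotation (\[m4\]) preserves the quadratic form $\sum_{k}x_k^2-t^2$, hence maps $S(R)$ to itself, and that it moves $p=(R,0,\dots,0)$ along the world line $L(\psi)=(R{\operatorname{ch}}\psi,0,\dots,0,R{\operatorname{sh}}\psi)$:

```python
from math import cosh, sinh, isclose

def phi(psi, event):
    """Hyperbolic rotation (m4) in the (x1, t) plane; event = (x1, ..., xn, t)."""
    x1, t = event[0], event[-1]
    return (x1 * cosh(psi) + t * sinh(psi),) + tuple(event[1:-1]) + \
           (x1 * sinh(psi) + t * cosh(psi),)

def q(event):
    """Quadratic form sum x_k^2 - t^2 defining the hyperboloid S(R)."""
    return sum(x * x for x in event[:-1]) - event[-1] ** 2

R = 2.0
p = (R, 0.0, 0.0, 0.0)          # event on S(R): q(p) = R^2
image = phi(0.7, p)
# Phi_psi preserves the quadratic form, so the image stays on S(R) ...
assert isclose(q(image), R * R)
# ... and p is carried along L(psi) = (R ch psi, 0, ..., 0, R sh psi)
assert isclose(image[0], R * cosh(0.7)) and isclose(image[-1], R * sinh(0.7))
```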
Under the action of $\Phi_\psi,$ the isotropic cone $C_p$ at the point $p$ passes to the isotropic cone $C_{L(\psi)}$ at the point $L(\psi)$, and the causal past $J^{-}(p)$ of the event $p$ passes to the causal past $J^{-}(L(\psi))$ of the event $L(\psi)$. It follows from (\[m3\]), (\[m4\]) that $$\begin{aligned} \label{m5} C_{L(\psi)}=\left\{(x_1,\dots,x_n,t)\in S(R)\,\mid\,\,x_1-t{\operatorname{th}}\psi=\frac{R}{{\operatorname{ch}}\psi}\right\}.\end{aligned}$$ Note that the set $J^{-}(L)$ of events observed by $L$ is the union of all sets $J^{-}(L(\psi))$, where $\psi\in\mathbb{R}$. If $\psi_1<\psi_2$ then $L(\psi_1)\in J^{-}(L(\psi_2))$, and therefore $J^{-}(L(\psi_1))\subset J^{-}(L(\psi_2))$. Hence the past event horizon of $L$ is the limiting position of the "lower" half (lying in the half-space $t<\frac{x_1{\operatorname{sh}}{\psi}}{{\operatorname{ch}}{\psi}}$) of the isotropic cone $C_{L(\psi)}$ as $\psi\rightarrow +\infty$. Now (\[m2\]) follows from (\[m5\]) and the fact that ${\operatorname{ch}}\psi\rightarrow +\infty$ and ${\operatorname{th}}\psi\rightarrow 1$ as $\psi\rightarrow +\infty$. Note also that the future event horizon $\Gamma^{+}(L)$ for $L$ is the limiting position of the "upper" half (lying in the half-space $t>\frac{x_1{\operatorname{sh}}{\psi}}{{\operatorname{ch}}{\psi}}$) of the isotropic cone $C_{L(\psi)}$ as $\psi\rightarrow -\infty$.
Then by (\[m5\]), $$\begin{aligned} \label{m6} \Gamma^{+}(L)=S(R)\cap\{(x_1,\dots,x_{n},t)\in \mathbb{R}^{n+1}: x_1+t=0\}.\end{aligned}$$ It follows from (\[m2\]), (\[m6\]) that $$\begin{aligned} \label{m7} J^{-}(L)=S(R)\cap\{(x_1,\dots,x_{n},t)\in \mathbb{R}^{n+1}: x_1-t>0\},\end{aligned}$$ $$\begin{aligned} \label{m8} J^{+}(L)=S(R)\cap\{(x_1,\dots,x_{n},t)\in \mathbb{R}^{n+1}: x_1+t>0\}.\end{aligned}$$ To prove point 2 of Theorem \[gorizont\], it is enough to note that the central symmetry $i_0$ of the space $\mathbb{R}^{n+1}$ with respect to the origin of the coordinate system is an isometry of de Sitter space-time reversing the time direction. Therefore $i_0(J^-(p))=J^+(-p)$, $i_0(J^+(p))=J^-(-p)$ for any event $p\in L$. The corresponding equalities in point 2 of Theorem \[gorizont\] follow from this. By point 2 of Theorem \[gorizont\] and (\[m7\]), (\[m8\]), $$\begin{aligned} \label{m9} J^{+}(-L)=S(R)\cap\{(x_1,\dots,x_{n},t)\in \mathbb{R}^{n+1}: x_1-t<0\},\end{aligned}$$ $$\begin{aligned} \label{m10} J^{-}(-L)=S(R)\cap\{(x_1,\dots,x_{n},t)\in \mathbb{R}^{n+1}: x_1+t<0\}.\end{aligned}$$ The statements of point 3 of Theorem \[gorizont\] hold in view of (\[m2\]), (\[m6\])–(\[m10\]). Let us prove point 4 of Theorem \[gorizont\]. It follows from (\[m7\]), (\[m8\]) that each of the sets $J^-(L)$, $J^+(L)$ is open and contains no pair of antipodal events. On the other hand, as a consequence of (\[m2\]), (\[m6\]), the past event horizon $\Gamma^{-}(L)$ and the future event horizon $\Gamma^{+}(L)$ for $L$ are centrally symmetric sets. Therefore the quotient map $pr: S(R)\rightarrow S^1_n(R)$, identifying antipodal events of $S(R),$ is a diffeomorphism on the open submanifold $J^-(L)$ (respectively, $J^+(L)$) and glues antipodal events of the set $\Gamma^{-}(L)$ (respectively, $\Gamma^{+}(L)$). The remaining statements of point 4 of Theorem \[gorizont\] follow from the equalities in point 2 of this theorem. Point 5 of Theorem \[gorizont\] is an immediate corollary of the statements in point 4.
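The limiting behavior used in this argument is easy to check numerically (an illustrative sketch of ours, not part of the proof): as $\psi\to+\infty$ the cone equation (\[m5\]), $x_1-t{\operatorname{th}}\psi=R/{\operatorname{ch}}\psi$, degenerates to $x_1=t$, the equation of $\Gamma^-(L)$ in (\[m2\]):

```python
from math import cosh, tanh

R = 1.0

def cone_residual(psi, x1, t):
    """Left side minus right side of the cone equation (m5) for C_{L(psi)}."""
    return (x1 - t * tanh(psi)) - R / cosh(psi)

# On the limiting horizon x1 = t, the residual of (m5) tends to 0 as psi grows:
for x in (0.5, -2.0, 10.0):
    assert abs(cone_residual(30.0, x, x)) < 1e-12
# while for an event with x1 - t > 0 (inside J^-(L) by (m7)) it stays positive:
assert cone_residual(30.0, 1.0, 0.0) > 0.0
```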
\[sl\] Let $L$ be a time-like geodesic in $S(R)$ and let $p$ be the common event of $L$ and $S(R,0)$. Then the past event horizon $\Gamma^{-}(L)$ for $L$ intersects $S(R,0)$ in the sphere $S_{S(R,0)}(p,\pi R/2)$ of radius $\pi R/2$ centered at $p$. Using the argument in the proof of Theorem \[gorizont\], we can assume without loss of generality that $L$ is the integral curve of the vector field $Y$ intersecting $S(R,0)$ at the point $p$ with Cartesian coordinates $(R,0,\ldots,0)$. Now Corollary \[sl\] follows from (\[m2\]) and the fact that $$S_{S(R,0)}(p,\pi R/2)=\{(x_1,\ldots,x_n,t)\in S(R)\mid\,x_1=0,\,t=0\}.$$ One can consider the time-like geodesic $L$ above as the history (or the world line) of an *eternal* observer. Fig. 4.18 on p. 120 of Hawking's book [@Hokrus] (Fig. 1 in this paper) admits two interpretations, namely as a picture of the history and the corresponding (past) event horizon for *a real* or *an eternal* observer in de Sitter space-time. The first interpretation corresponds to the inscription "Surface of constant time", but it then contradicts the smoothness of the bright region at its top point, since the top point must be a cone point of this region. To avoid the latter mistake in Fig. 4.18, it would be better to take the second interpretation and replace the inscription "Surface of constant time" with the inscription "$t=\infty$", assuming that the scale in the picture goes to zero as $t\rightarrow \infty.$ But for the second interpretation, the observer's event horizon is depicted incorrectly, since by Corollary \[sl\] it must intersect the "throat" of the one-sheeted hyperboloid (the sphere $S(R,0)$) in the sphere $S_{S(R,0)}(p,\pi R/2)$, where $p$ is the intersection event of the world line $L$ with $S(R,0)$; for the first interpretation, the above intersection must be $S_{S(R,0)}(p,r)$ with $r < \pi R/2$. On the other hand, this intersection in Fig. 4.18 is empty.
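The radius $\pi R/2$ in Corollary \[sl\] can be verified numerically (our own sketch, not from the paper): the throat $S(R,0)$ is a round sphere of radius $R$, and the intrinsic (great-circle) distance from $p=(R,0,\dots,0)$ to any point of the throat with $x_1=0$ equals $R\arccos(0)=\pi R/2$:

```python
from math import acos, pi, isclose, sqrt

def great_circle_distance(u, v, R):
    """Intrinsic distance on the sphere of radius R between points u, v on it."""
    dot = sum(a * b for a, b in zip(u, v))
    # clamp guards against round-off pushing dot/R^2 slightly outside [-1, 1]
    return R * acos(max(-1.0, min(1.0, dot / (R * R))))

R = 3.0
p = (R, 0.0, 0.0)
# points of the throat with x_1 = 0 (the intersection in Corollary [sl])
for q_pt in ((0.0, R, 0.0), (0.0, 0.0, -R), (0.0, R / sqrt(2), R / sqrt(2))):
    assert isclose(great_circle_distance(p, q_pt, R), pi * R / 2)
```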
The correct picture for the second interpretation for bounded (respectively, infinite) time is given in our Fig. 2 (respectively, Fig. 3). Note also that the observer's event horizon (see Fig. 2) consists of all isotropic geodesics lying in the corresponding hyperplane in *Mink*$^{\,n+1}$ passing through the zero event. Earlier, V. N. Berestovskii formulated the above statements about the past event horizon, without proof, in his plenary talk, but they are absent from the published text [@Ber1] of that talk. ![Fig. 4.18 of (real) observer's event horizon from Hawking's book [@Hokrus][]{data-label="fig:fig1"}](hok3.jpg){width="13.5cm"} ![Correct (eternal) observer's event horizon for $0\leq t \leq t_0$[]{data-label="fig:fig2"}](hok2.jpg){width="13.5cm"} ![(Eternal) observer's event horizon for $0\leq t \leq \infty$[]{data-label="fig:fig3"}](hok1.jpg){width="13.5cm"} [1]{} Berestovskii V.N., Zubareva I.A. [*Functions with (non-)time-like gradient on a space-time*]{}, to appear in Siberian Advances in Mathematics. Berestovskii V.N. [*On a problem of V. A. Toponogov and its generalizations*]{}, (Russian) Plenary talk at the International Conference "Petrov 2010 Anniversary Symposium on General Relativity and Gravitation", 1–6 November 2010, Kazan. Relativity, Gravity and Geometry. Contributed papers. P. 62–65. Beem J., Ehrlich P. [*Global Lorentzian geometry*]{}, Marcel Dekker Inc., New York and Basel, 1981. Rosenfeld B.A. [*Non-Euclidean geometry*]{}, (Russian) M.: GITTL, 1953. Hawking S.W., Ellis G.F.R. [*The large scale structure of space-time*]{}, Cambridge University Press, Cambridge, 1973. Hawking S. [*The Universe in a Nutshell*]{}, Bantam Books, New York, Toronto, London, Sydney, Auckland, 2001.
Berestovskii Valerii Nikolaevich, Sobolev Institute of Mathematics SB RAS, Omsk Department, 644099, Omsk, Pevtsova street, 13, Russia Zubareva Irina Aleksandrovna, Sobolev Institute of Mathematics SB RAS, Omsk Department, 644099, Omsk, Pevtsova street, 13, Russia [^1]: The first author is partially supported by a Grant of the Government of the Russian Federation for state support of scientific investigations (Agreement no. 14.B25.31.0029) and by RFBR grant 11-01-00081-a
--- abstract: 'HST-1, a knot along the M87 jet located $0.85\arcsec$ from the nucleus of the galaxy, has experienced dramatic and unexpected flaring activity since early 2000. We present analysis of Hubble Space Telescope Near-Ultraviolet (NUV) imaging of the M87 jet from 1999 May to 2006 December that reveals that the NUV intensity of HST-1 has increased to 90 times its quiescent level, outshining the core of the galaxy. The NUV light curve that we derive is synchronous with the light curves derived in other wavebands. The correlation of the X-ray and NUV light curves during the HST-1 flare confirms the synchrotron origin of the X-ray emission in the M87 jet. The outburst observed in HST-1 is at odds with the common picture of AGN variability, usually linked to blazars and originating in close proximity to the central black hole. In fact, the M87 jet is not aligned with our line of sight, and HST-1 is located at one million Schwarzschild radii from the super-massive black hole in the core of the galaxy.' author: - 'Juan P. Madrid' title: Hubble Space Telescope observations of an extraordinary flare in the M87 jet --- Introduction ============ M87, the cD galaxy of the Virgo cluster, is a giant elliptical famed for its spectacular galactic-scale plasma jet. Due to its proximity, high resolution images of the M87 jet have revealed a profusion of distinct knots, or regions of enhanced emission, along the whole length of the jet. These knots have been clearly detected at radio, optical, UV, and X-ray wavelengths. The detection of these UV and X-ray emission regions hundreds of parsecs away from the AGN proves that they are regions of in situ particle acceleration within the jet, because such high energy emission fades rapidly. The radiative half-lives of synchrotron X-ray emitting electrons are of the order of years, and the cooling times for UV emitting particles are of the order of decades (Harris & Krawczynski 2006).
High energy emission would be confined to a small region without re-acceleration along the jet. Thus these knots must be sites of acceleration distinct from the AGN. Until February 2000, HST-1 was an inconspicuous knot of the M87 jet located $0.85\arcsec$ from the nucleus of the galaxy (Waters & Zepf 2005). Since that date, HST-1 has shown unexpectedly rapid variability in all wavebands. More strikingly, in 2003 HST-1 became brighter than the nucleus of the galaxy, which is known to harbor a super-massive black hole of $3.2 \pm 0.9\times10^9 M_{\odot}$ (Macchetto et al. 1997). HST-1 is also the most probable site of production of the TeV $\gamma$ rays emanating from M87 recently reported by the HESS collaboration (Aharonian et al. 2006, see also Cheung et al. 2007). Specific observations of HST-1 have been carried out across the electromagnetic spectrum, and a particularly detailed study of the flaring of HST-1 has been conducted with the Chandra X-ray Observatory by D. E. Harris and collaborators (Harris et al. 2003, 2006, 2008). The X-ray intensity of this peculiar knot has increased more than 50 times in the past five years and peaked in 2005. There is a wealth of high-quality HST NUV imaging data of the M87 jet that has been only succinctly presented in the past (Madrid et al. 2007). The NUV light curve for HST-1 that we present here has broadly the same shape as the X-ray light curve presented by Harris et al. (2006): the flare rises, peaks, and declines simultaneously in the X-rays and the NUV. HST-1 is located one million Schwarzschild radii away from the galactic nucleus, but if M87 were at a greater distance, or if our telescopes had lower resolution, this flare would have been interpreted as variability intrinsic to the central black hole and its immediate vicinity. This blazar-like behavior is clearly isolated from the central engine and is not directly beamed, as the M87 jet is misaligned with respect to the line of sight (Harris et al.
2006, Cheung et al. 2007). A detailed characterization of this flare is thus important to better understand blazar variability. We describe the Hubble Space Telescope view of the remarkable flaring of the HST-1 knot with high resolution imaging taken over more than seven years of observations, and aim to present the visually striking NUV data that bridge the gap between the X-ray (Harris et al. 2006) and radio (Cheung et al. 2007) observations of HST-1. Observations & Data Reduction ============================= We present observations obtained with two instruments on board HST: the Space Telescope Imaging Spectrograph (STIS) and the Advanced Camera for Surveys (ACS). STIS stopped functioning in 2004 August due to an electronics failure in the redundant (Side 2) power supply system. All observations after 2004 August were taken with the ACS. Even though each of these two instruments has unique characteristics, they cover the same wavebands and provide data that are easily compared. Moreover, the ACS images have the same file structure as STIS images, making the data reduction procedure very similar between the two instruments. The discrepancy between the STIS and ACS absolute photometric calibrations does not exceed 2% (Bohlin, 2007). The STIS observations were carried out using the NUV/MAMA detector, which has a field of view of $24.7\arcsec\times24.7\arcsec$ and a $0.024\arcsec$ pixel size. The M87 jet was imaged with the F25QTZ filter, which has its maximum throughput wavelength at $2364.8\mbox{\AA}$ and a width of $995.1\mbox{\AA}$. Due to the nature of the detector, NUV/MAMA images are free of cosmic rays (Kim Quijano, 2003). The ACS High Resolution Camera (HRC) is a CCD instrument with a field of view of $29\arcsec\times25\arcsec$ and a scale of $\sim0.025\arcsec$ per pixel. We analyze images acquired with the F220W and F250W filters, the two broadband NUV filters with characteristics most similar to the STIS F25QTZ filter.
The F220W filter has its pivot wavelength at $2255.5\mbox{\AA}$ and a width of $187.3\mbox{\AA}$; for F250W these values are $2715.9\mbox{\AA}$ and $239.4\mbox{\AA}$, respectively (Mack et al. 2003, Gonzaga et al. 2005). We obtained the flatfielded science files (FLT) from the HST public archive for data acquired by both instruments. These science-ready files are processed through the automatic reduction and calibration pipeline (CALACS) before they are retrieved from the public archive. The pipeline subtracts the bias and dark current and applies the flatfield to the raw CCD data (Sirianni et al. 2005). Subsequent data reduction was performed using the software package Space Telescope Science Data Analysis System (STSDAS). We analyzed data taken over a period of more than seven years, from 1999 May through 2006 December. Each image, at all epochs, is the product of the combination of four single exposures taken within the same orbit. This allows us to eliminate cosmic rays in the ACS images and improve the signal to noise. We used the STSDAS task [multidrizzle]{} to apply the geometric distortion correction, eliminate cosmic rays, and align and combine the individual exposures of every epoch. The distortion correction was computed with up-to-date distortion coefficient tables retrieved from the Multimission Archive at the Space Telescope (MAST). During the data reduction process we preserved the native pixel size. The final output images generated by [multidrizzle]{} have units of counts per second for the STIS data and units of electrons per second for the ACS data (Koekemoer et al. 2002). We derive fluxes and errors by performing aperture photometry with [phot]{} with an aperture radius of 10 pixels, or $0.25\arcsec$. At the distance of M87, 16.1 Mpc (Tonry et al. 2001), $1\arcsec$ corresponds to 77 pc.
We convert the number of counts obtained with [phot]{} into flux and flux errors by using [photflam]{}, or inverse sensitivity, for each instrument, found in the updated ACS zeropoint tables maintained by the STScI or in the headers of the STIS images: [photflam$_{STIS/F25QTZ}$]{}=5.8455$\times$10$^{-18}$ erg cm$^{-2}$ Å$^{-1}$\ [photflam$_{ACS/F220W}$]{}=8.0721$\times$10$^{-18}$ erg cm$^{-2}$ Å$^{-1}$\ [photflam$_{ACS/F250W}$]{}=4.7564$\times$10$^{-18}$ erg cm$^{-2}$ Å$^{-1}$\ Once the calibrated fluxes and Poisson errors are derived, they are transformed into millijanskys using the pyraf task [calcphot]{} of the synthetic photometry ([synphot]{}) software package under STSDAS. We also scale the flux measurements obtained with different bands using [synphot]{} (Laidler et al. 2005). We expect that no additional errors are introduced by [synphot]{} when performing the transformation to millijanskys. We assume that the spectrum of HST-1 is described by a power law with index $\alpha$ (Perlman et al. 2001), and we define the flux density as $S_{\nu}\propto\nu^{-\alpha}$. The background light was estimated by measuring the flux with the same circular aperture at the same radial distance from the nucleus as HST-1, but on the side of the jet. The aperture corrections were performed using the values published by Proffitt et al. (2003) for STIS and Sirianni et al. (2005) for the ACS. We use the reddening towards M87 determined by Schlegel et al. (1998), E(B-V)=0.022, and the extinction relations from Cardelli et al. (1989) to derive the extinction in the HST filters. We find the following values for the extinction: $A_{F25QTZ}=0.190$, $A_{F220W}=0.220$, and $A_{F250W}=0.134$. Results ======= HST-1 was dormant until 2000 February, when its flaring activity began (Waters & Zepf 2003). We see this in Figure 1, which is a zoom of the inner regions of the M87 jet and displays, on the left, the early evolution of HST-1.
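The counts-to-flux-density conversion described above can be sketched as follows (our own illustrative code, not the actual [synphot]{} pipeline; the function name and the use of the standard relation $F_\nu = F_\lambda\,\lambda^2/c$ at the quoted filter wavelengths are our assumptions, with the [photflam]{} values taken from the text):

```python
# Sketch of a counts -> flux-density conversion (not the synphot pipeline).
# photflam values (erg cm^-2 A^-1 per count) and reference wavelengths (A)
# are the instrument constants quoted in the text.
C_ANGSTROM_PER_S = 2.99792458e18    # speed of light in Angstrom / s

FILTERS = {
    "STIS/F25QTZ": {"photflam": 5.8455e-18, "wavelength": 2364.8},
    "ACS/F220W":   {"photflam": 8.0721e-18, "wavelength": 2255.5},
    "ACS/F250W":   {"photflam": 4.7564e-18, "wavelength": 2715.9},
}

def count_rate_to_mjy(rate, band):
    """Convert a background-subtracted count rate (counts/s) to flux in mJy."""
    f = FILTERS[band]
    f_lambda = rate * f["photflam"]                            # erg cm^-2 s^-1 A^-1
    f_nu = f_lambda * f["wavelength"] ** 2 / C_ANGSTROM_PER_S  # erg cm^-2 s^-1 Hz^-1
    return f_nu / 1.0e-26          # 1 mJy = 1e-26 erg cm^-2 s^-1 Hz^-1

# the conversion is linear in the count rate
assert abs(count_rate_to_mjy(2.0, "ACS/F250W")
           - 2 * count_rate_to_mjy(1.0, "ACS/F250W")) < 1e-12
```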
Three main emission loci are visible in this zoom; from left to right, these are: the nucleus of the galaxy, HST-1, and knot D. The images in the left column of Fig. 1 were acquired with STIS/F25QTZ, while the images in the right column were taken with the ACS in the F220W band. In the STIS image of 1999 May (top left) HST-1 was an unremarkable knot along the M87 jet. The brightening of HST-1 is already noticeable in 2001 July. The images in the lower left were taken in 2002 February and 2002 July, respectively, and show the slow brightening of HST-1 during this year. At the end of 2002, HST-1 was 15 times brighter than in 1999 May. In 2003 HST-1 became dramatically variable. The image at the top right was taken in 2003 April, as HST-1 continued to rise in flux. HST-1 reached its highest recorded brightness in 2005 May, when we measured its highest NUV flux, namely 0.54 mJy. At this point in time the NUV flux of HST-1 was four times the measured flux of the central engine of the galaxy. The peak of the X-ray flux based on Chandra observations was reported in 2005 April (Harris et al. 2006). HST-1 attained a NUV flux 90 times its quiescent level in 2005 May. The HST data acquired in 1999 give us a measure of the brightness of HST-1 in its latent state and allow us to measure the total factor by which the brightness changed during the outburst. Chandra had just been launched in 1999, and VLBI radio observations date back only to 2000 (Cheung et al. 2007). After 2005 May, HST-1 declined in intensity with a decay time similar to the rise time. HST-1 experienced a second and also unexpected outburst in 2006 November. This second outburst was fainter than the first one of 2005 May. The image at the lower right of Fig. 1 shows HST-1 during this second, fainter outburst in 2006 November. It is evident from Figure 1 that the jet itself is better mapped by the STIS images. Table 1 presents the log of observations of the M87 jet taken with STIS and ACS.
This table also presents the NUV intensities and Poisson errors of the nucleus of the galaxy and of HST-1 at all epochs studied here. Although magnitudes in the Space Telescope system or erg/s/cm$^2$/Hz would be more natural units, we decided to plot our light curve in mJy to facilitate comparison with observations at other wavebands (Waters & Zepf 2005, Perlman et al. 2003, Harris et al. 2006). Figure 2 shows the light curve of HST-1 and of the nucleus of the galaxy. The HST-1 light curve is bumpy in the radio and the X-rays, and the NUV is no exception. The dramatic flaring of HST-1 can be clearly appreciated in this figure. Table 2 contains the doubling and halving times for HST-1 calculated for the NUV following the prescriptions of Harris et al. (2006). We calculate $y=I_{2}/I_{1}$, the flux ratio, and the time elapsed, $\Delta$t, between two consecutive observations. The doubling time is calculated using DT=$[\frac{1}{y-1}]\Delta$t and the halving time by HT=$[\frac{0.5}{1-y}]\Delta$t. The bumpiness attributed to synchrotron losses in the X-ray persists in the NUV. These rapid variations in brightness are consistent with the month-scale variability reported by Perlman et al. (2003) for the early stages of the flare, found here to persist through time. The X-ray and NUV light curves of HST-1 are plotted together in Figure 3. We performed a formal correlation analysis of these two light curves by taking thirty simultaneous values of the NUV and X-ray fluxes and deriving the Spearman rank correlation coefficient $\rho$. This coefficient is a non-parametric measure of correlation taking values $-1\leq\rho\leq 1$; the closer the value is to 1, the more significant the positive correlation (Wall & Jenkins 2003). Simultaneous measurements of the flux of HST-1 in the X-ray and the NUV yield $\rho=0.966$, reflecting a strong correlation. Discussion ========== In Harris et al.
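The doubling and halving times of Table 2 follow directly from these formulas; a small sketch (ours, with the fluxes of Table 1) reproduces, for example, the NUV halving time for the 2005 June 22 to 2005 August 01 interval:

```python
def doubling_time(i1, i2, dt):
    """DT = dt / (y - 1) for a rising interval, with y = i2 / i1 > 1."""
    return dt / (i2 / i1 - 1.0)

def halving_time(i1, i2, dt):
    """HT = 0.5 dt / (1 - y) for a declining interval, with y = i2 / i1 < 1."""
    return 0.5 * dt / (1.0 - i2 / i1)

# 2005 Jun 22 -> 2005 Aug 01: 0.530 mJy -> 0.449 mJy over ~0.11 yr (Table 1)
assert round(halving_time(0.530, 0.449, 0.11), 2) == 0.36   # matches Table 2
```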
(2003), synchrotron loss models based on the NUV data available at that time predicted the optical decay time of the flare to be a factor of 10 larger than the X-ray decay time. On the other hand, Perlman et al. (2003) predicted a similar decay timescale for both the optical and X-ray light curves. The simultaneous rise and fall of the flare at NUV and X-ray wavelengths supports the first plausible hypothesis for the physical origin of the HST-1 flare postulated by Harris et al. (2006), namely, that a simple compression caused the HST-1 outburst. Compression increases both the magnetic field strength and the particle energy at all wavelengths equally, leading to simultaneous flaring in all wavebands. The magnetic field vectors in HST-1 are perpendicular to the jet direction, also consistent with a shock (Perlman et al. 2003). The overall rise and fall timescales, similar in both bands, and the lack of a large delay between bands suggest a rapid expansion as a probable cause of the decrease in luminosity. However, a more rigorous analysis of the rise and fall timescales shows that expansion is not the dominant mechanism of energy loss for HST-1, see below (Harris et al. 2008). A more elaborate theoretical interpretation of the origin of HST-1 was presented by Stawarz et al. (2006) and supported by the observations of Cheung et al. (2007). This newer hypothesis claims that HST-1 originates in a nozzle throat of the M87 jet that creates reconfinement of magnetic field lines, liberating large amounts of energy, similar to the process responsible for solar flares. The gravitational influence of the central AGN on the velocity dispersion of the stars in the innermost regions of this galaxy has been well documented with early HST observations (Lauer et al. 1992, see also Macchetto et al. 1997). The hot thermal gas can be expected to follow the distribution of the stars in this inner region and create a reconfinement shock in the jet due to an enhanced thermal pressure.
This reconfinement should happen at roughly the same distance as the well-known stellar cusp, precisely where HST-1 lies (Stawarz et al. 2006). The doubling and halving time scales of the NUV presented in this paper and the X-ray ones published by Harris et al. (2006) do not always perfectly overlap in time, as the HST and Chandra observations were not taken simultaneously. However, it is evident from the values of Table 2 and the values of Tables 3 and 6 of Harris et al. (2006) that the rise and fall timescales are consistently larger in the NUV than in the X-rays. See, for instance, the time interval between 2005 June 21 and 2005 August 06, when the halving time for the X-rays is 0.21, while in the NUV between 2005 June 22 and 2005 August 01 the decay time is 0.36. Harris et al. (2008) make a detailed analysis of rise and fall time scales for HST-1 and conclude that this longer decay time in the NUV is an indication that expansion is not the primary energy loss mechanism for the charged particles emitting within the HST-1 region. The detection of polarized emission, as well as synchrotron emission models fitted to flux measurements, provided evidence that the physical process responsible for the radio to UV emission in the knots of the M87 jet is synchrotron radiation of electrons accelerated by the magnetic field of the jet (Perlman et al. 2001). The very strong correlation between the NUV and X-ray light curves of the HST-1 flare proves that the same physical phenomenon and the same electrons are responsible for the emission in both bands. Therefore the X-ray emission is also synchrotron in origin. The injection of fresh particles into the flaring volume is not needed to explain the high-energy emission; the X-ray emission is well interpreted as the high energy extension of the radio to optical spectrum (Perlman & Wilson 2005, Harris et al. 2006).
The observations presented here rule out inverse Compton (IC) up-scattering of photons by lower energy electrons as the cause of the high energy emission of this flare in particular, and of the other emission knots along the M87 jet. Moreover, high-energy photons produced by IC up-scattering would take much longer (10000 years) than the observed time to decrease in flux (Harris, 2003). We can thus safely state that synchrotron emission is the physical process responsible for the high energy emission in the M87 jet. The encircled energy distribution of HST-1 follows the pattern of the detector's PSF at all epochs. In the NUV, HST-1 remains unresolved, with an upper limit on its size of $\sim 0.025\arcsec$, i.e. $\sim 1.9$ pc. The HESS collaboration recently reported a detection of TeV $\gamma$ rays emanating from M87, but the Cherenkov telescopes used for this detection lack the resolution to determine the exact position of the TeV emitting region. Cheung et al. (2007) favor HST-1 over the nucleus as the site of origin of the TeV emission. They note that the light curve of the TeV emission in M87 is roughly similar to the light curve of the HST-1 flare seen in the radio and X-ray. The NUV peaks simultaneously with the X-rays and the $\gamma$ rays, and only HST-1 shows flaring behavior in the NUV. The nucleus shows only the characteristic low amplitude variability, see Figure 2. These facts support the hypothesis of HST-1 as the site of origin of the $\gamma$ rays through IC up-scattering of ambient photons by high-energy electrons produced during the outburst. After 2003 May, and for more than four years, the flux of HST-1 dominates the NUV emission of M87, patently overpowering the emission of the central engine (see Fig. 3). Given that within radio galaxies the principal sources of particle acceleration are the core and the jet, the large flux from this flare plays, as we have shown here, a crucial role in determining the NUV flux, and therefore the spectrum, of the entire galaxy.
As part of the Chandra Cen A Very Large Project, Hardcastle et al. (2007) searched, to no avail, for HST-1-like variability in the X-ray jet of this galaxy (D = 3.7 Mpc). Hardcastle et al. aimed at answering an important question: is HST-1 a feature unique to M87, or is this extreme variability ubiquitous, or at least frequent, in AGN jets? If an HST-1-type outburst occurred in a more distant AGN, it would not be resolved with current optical and X-ray instruments and would probably be associated with Doppler boosting of emission emanating from a jet close to the line of sight, or with events related to variability of the accretion disk of the black hole. However, the angle of the M87 jet with the line of sight is about 26–30 deg (Cheung et al. 2007, Bicknell & Begelman 1996), allowing only modest beaming. Also, given its large distance from the core, i.e. more than 65 pc or one million Schwarzschild radii, intrinsic black hole variability has no direct relation to this flaring. Outbursts similar to HST-1 can be responsible for variability associated with high redshift blazars while remaining completely unresolved. This research has made use of the NASA Astrophysics Data System Bibliographic services. I wish to thank Jennifer Mack and Marco Sirianni (STScI) for answering an endless list of questions about the ACS. I am grateful to Ethan Vishniac (McMaster) for believing in my understanding of accretion disks. The anonymous referee gave a very constructive report that helped to improve this paper. Laura Schwartz (JHU) encouraged me to carry out this project and many others.
[llccccc]{}
1999 May 17 & STIS/F25QTZ & 0.079 $\pm$ 0.008 & 0.006 $\pm$ 0.001\
2001 Jul 30 & STIS/F25QTZ & 0.116 $\pm$ 0.010 & 0.024 $\pm$ 0.002\
2002 Feb 27 & STIS/F25QTZ & 0.078 $\pm$ 0.008 & 0.044 $\pm$ 0.003\
2002 Jul 17 & STIS/F25QTZ & 0.098 $\pm$ 0.009 & 0.061 $\pm$ 0.004\
2002 Nov 30 & ACS/F220W & 0.094 $\pm$ 0.011 & 0.092 $\pm$ 0.006\
2002 Dec 22 & ACS/F220W & 0.094 $\pm$ 0.011 & 0.090 $\pm$ 0.006\
2003 Feb 02 & ACS/F220W & 0.097 $\pm$ 0.012 & 0.077 $\pm$ 0.005\
2003 Mar 06 & ACS/F220W & 0.100 $\pm$ 0.012 & 0.075 $\pm$ 0.005\
2003 Mar 31 & ACS/F250W & 0.081 $\pm$ 0.007 & 0.107 $\pm$ 0.005\
2003 Apr 07 & ACS/F220W & 0.084 $\pm$ 0.011 & 0.094 $\pm$ 0.006\
2003 May 10 & ACS/F250W & 0.079 $\pm$ 0.007 & 0.090 $\pm$ 0.004\
2003 Jun 7 & STIS/F25QTZ & 0.067 $\pm$ 0.007 & 0.088 $\pm$ 0.010\
2003 Jul 27 & STIS/F25QTZ & 0.070 $\pm$ 0.007 & 0.106 $\pm$ 0.010\
2003 Nov 29 & ACS/F220W & 0.089 $\pm$ 0.011 & 0.159 $\pm$ 0.007\
2004 Feb 07 & ACS/F220W & 0.076 $\pm$ 0.010 & 0.209 $\pm$ 0.009\
2004 May 05 & ACS/F220W & 0.085 $\pm$ 0.011 & 0.180 $\pm$ 0.011\
2004 Jul 30 & ACS/F220W & 0.099 $\pm$ 0.012 & 0.251 $\pm$ 0.009\
2004 Nov 28 & ACS/F220W/F250W & 0.158 $\pm$ 0.014 & 0.409 $\pm$ 0.012\
2004 Dec 26 & ACS/F250W & 0.187 $\pm$ 0.010 & 0.435 $\pm$ 0.010\
2005 Feb 09 & ACS/F250W & 0.145 $\pm$ 0.009 & 0.447 $\pm$ 0.010\
2005 Mar 27 & ACS/F250W & 0.168 $\pm$ 0.010 & 0.530 $\pm$ 0.011\
2005 May 09 & ACS/F220W/F250W & 0.141 $\pm$ 0.009 & 0.542 $\pm$ 0.014\
2005 Jun 22 & ACS/F250W & 0.141 $\pm$ 0.009 & 0.530 $\pm$ 0.011\
2005 Aug 01 & ACS/F250W & 0.100 $\pm$ 0.007 & 0.449 $\pm$ 0.010\
2005 Nov 29 & ACS/F220W/F250W & 0.112 $\pm$ 0.008 & 0.400 $\pm$ 0.012\
2005 Dec 26 & ACS/F250W & 0.121 $\pm$ 0.008 & 0.398 $\pm$ 0.009\
2006 Feb 08 & ACS/F220W/F250W & 0.103 $\pm$ 0.008 & 0.309 $\pm$ 0.008\
2006 Mar 30 & ACS/F250W & 0.119 $\pm$ 0.008 & 0.259 $\pm$ 0.007\
2006 May 23 & ACS/F220W/F250W & 0.094 $\pm$ 0.007 & 0.225 $\pm$ 0.007\
2006 Nov 28 & ACS/F220W/F250W & 0.159 $\pm$ 0.009 & 0.323 $\pm$ 0.010\
2006 Dec 30 & ACS/F250W & 0.143 $\pm$ 0.009 & 0.278 $\pm$ 0.007\

[lcccc]{}
1999 May 17 - 2001 Jul 30 & 2.21 & 3.00 & 0.73 &\
2001 Jul 30 - 2002 Feb 27 & 0.58 & 0.83 & 0.70 &\
2002 Feb 27 - 2002 Jul 17 & 0.38 & 0.39 & 0.97 &\
2002 Jul 17 - 2002 Nov 30 & 0.37 & 0.51 & 0.72 &\
2002 Nov 30 - 2002 Dec 22 & 0.06 & 0.02 & & 1.50\
2002 Dec 22 - 2003 Feb 02 & 0.12 & 0.14 & & 0.43\
2003 Feb 02 - 2003 Mar 06 & 0.09 & 0.03 & & 1.50\
2003 Mar 06 - 2003 Mar 31 & 0.07 & 0.43 & 0.16 &\
2003 Mar 31 - 2003 Apr 07 & 0.02 & 0.12 & & 0.08\
2003 Apr 07 - 2003 May 10 & 0.09 & 0.04 & & 1.13\
2003 May 10 - 2003 Jun 07 & 0.08 & 0.22 & & 0.18\
2003 Jun 07 - 2003 Jul 27 & 0.14 & 0.21 & 0.67 &\
2003 Jul 27 - 2003 Nov 29 & 0.34 & 0.50 & 0.68 &\
2003 Nov 29 - 2004 Feb 07 & 0.19 & 0.31 & 0.61 &\
2004 Feb 07 - 2004 May 05 & 0.24 & 0.14 & & 0.85\
2004 May 05 - 2004 Jul 30 & 0.24 & 0.39 & 0.61 &\
2004 Jul 30 - 2004 Nov 28 & 0.33 & 0.63 & 0.52 &\
2004 Nov 28 - 2004 Dec 26 & 0.08 & 0.06 & 1.33 &\
2004 Dec 26 - 2005 Feb 09 & 0.12 & 0.03 & 4.44 &\
2005 Feb 09 - 2005 Mar 27 & 0.12 & 0.19 & 0.63 &\
2005 Mar 27 - 2005 May 09 & 0.12 & 0.02 & 5.22 &\
2005 May 09 - 2005 Jun 22 & 0.12 & 0.02 & & 5.22\
2005 Jun 22 - 2005 Aug 01 & 0.11 & 0.15 & & 0.36\
2005 Aug 01 - 2005 Nov 29 & 0.33 & 0.11 & & 1.50\
2005 Nov 29 - 2005 Dec 26 & 0.07 & 0.01 & & 3.50\
2005 Dec 26 - 2006 Feb 08 & 0.12 & 0.22 & & 0.27\
2006 Feb 08 - 2006 Mar 30 & 0.14 & 0.16 & & 0.44\
2006 Mar 30 - 2006 May 23 & 0.15 & 0.13 & & 0.57\
2006 May 23 - 2006 Nov 28 & 0.52 & 0.44 & 1.18 &\
2006 Nov 28 - 2006 Dec 30 & 0.09 & 0.14 & & 0.32\

Aharonian, F., et al. 2006, Science, 314, 1424
Bicknell, G. V., & Begelman, M. C. 1996, ApJ, 467, 597
Bohlin, R. C. 2007, Photometric Calibration of the ACS CCD Cameras, ACS Instrument Science Report 2007-06 (Baltimore: STScI)
Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
Cheung, C. C., Harris, D. E., & Stawarz, L. 2007, ApJL, 663, L65
Gonzaga, S., et al. 2005, ACS Instrument Handbook, Version 6.0 (Baltimore: STScI)
Hardcastle, M. J., et al. 2007, ApJL, 670, L81
Harris, D. E., & Krawczynski, H. 2002, ApJ, 565, 244
Harris, D. E. 2003, New Astronomy Reviews, 47, 617
Harris, D. E., Biretta, J. A., Junor, W., Perlman, E. S., Sparks, W. B., & Wilson, A. S. 2003, ApJ, 586, L41
Harris, D. E., Cheung, C. C., Biretta, J. A., Sparks, W. B., Junor, W., Perlman, E. S., & Wilson, A. S. 2006, ApJ, 640, 211
Harris, D. E., & Krawczynski, H. 2006, ARA&A, 44, 463
Harris, D. E., Cheung, C. C., Stawarz, L., & Perlman, E. S. 2008, submitted
Kim Quijano, J., et al. 2003, STIS Instrument Handbook, Version 7.0 (Baltimore: STScI)
Koekemoer, A. M., Fruchter, A. S., Hook, R. N., & Hack, W. 2002, in The 2002 HST Calibration Workshop: Hubble after the Installation of the ACS and the NICMOS Cooling System, ed. S. Arribas, A. Koekemoer, & B. Whitmore (Baltimore: STScI), 339
Laidler, V., et al. 200, Synphot User's Guide, Version 5.0 (Baltimore: STScI)
Lauer, T. R., et al. 1992, AJ, 103, 703
Mack, J., et al. 2003, ACS Data Handbook, Version 2.0 (Baltimore: STScI)
Madrid, J. P., Sparks, W. B., Harris, D. E., Perlman, E. S., Macchetto, D., & Biretta, J. 2007, Ap&SS, 311, 329
Perlman, E. S., Biretta, J. A., Sparks, W. B., Macchetto, F. D., & Leahy, J. P. 2001, ApJ, 551, 206
Perlman, E. S., Harris, D. E., Biretta, J. A., Sparks, W. B., & Macchetto, F. D. 2003, ApJ, 599, L65
Proffitt, C. R., Brown, T. M., Mobasher, B., & Davies, J. 2003, STIS Instrument Science Report 2003-01 (Baltimore: STScI)
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
Sirianni, M., et al. 2005, PASP, 117, 1049
Stawarz, L., et al. 2006, MNRAS, 370, 981
Ulrich, M.-H., Maraschi, L., & Urry, C. M. 1997, ARA&A, 35, 445
Tonry, J. L., et al. 2001, ApJ, 546, 681
Wall, J. V., & Jenkins, C. R. 2003, Practical Statistics for Astronomers (Cambridge: Cambridge University Press)
Waters, C. Z., & Zepf, S. E. 2005, ApJ, 624, 656
--- author: - 'M. Gillon$^{1,2}$, B.-O. Demory$^{2}$, A. H. M. J. Triaud$^{2}$, T. Barman$^3$, L. Hebb$^{4}$, J. Montalbán$^1$, P. F. L. Maxted$^{5}$, D. Queloz$^{2}$, M. Deleuil$^6$, P. Magain$^{1}$' date: 'Received date / accepted date' title: 'VLT transit and occultation photometry for the bloated planet CoRoT-1b[^1]$^{, }$[^2]' --- Introduction ============ Transiting planets play an important role in the study of planetary objects outside our solar system. Not only can we infer their density and use it to constrain their composition, but several other interesting measurements are possible for these objects (see e.g. the review by Charbonneau et al. 2007). In particular, their thermal emission can be measured during their occultation, allowing the study of their atmosphere without spatially resolving their light from that of the host star. The $Spitzer$ $Space$ $Telescope$ (Werner et al. 2004) has produced a flurry of such planetary emission measurements, all at wavelengths longer than 3.5 $\mu$m. From the ground, several attempts to obtain occultation measurements at wavelengths shorter than the $Spitzer$ spectral window were made (Richardson et al. 2003a,b; Snellen 2005; Deming et al. 2007; Knutson et al. 2007; Snellen & Covino 2007; Winn et al. 2008). Very recently, two of them were successful: Sing & López-Morales (2009) obtained a $\sim$ 4 $\sigma$ detection of the occultation of OGLE-TR-56b in the $z$-band (0.9 $\mu$m), while de Mooij & Snellen (2009) detected at $\sim$ 6 $\sigma$ the thermal emission of TrES-3b in the K-band (2.2 $\mu$m). It is important to obtain more such measurements to improve our understanding of the atmospheric properties of short-period extrasolar planets. CoRoT-1b (Barge et al. 2008, hereafter B08) was the first planet detected by the CoRoT space transit survey (Baglin et al. 2006). With an orbital period of 1.5 days, this Jupiter-mass planet orbits at only $\sim$ 5 stellar radii from its G0V host star.
Due to this proximity, its stellar irradiation is clearly large enough ($\sim$ $3.9 \times 10^{9}$ erg s$^{-1}$ cm$^{-2}$) for it to join OGLE-TR-56b, TrES-3b and a few other planets within the pM theoretical class proposed by Fortney et al. (2008). Under this theory, pM planets receive a stellar flux large enough to have high-opacity compounds like TiO and VO present in their gaseous form in the day-side atmosphere. These compounds should be responsible for a stratospheric thermal inversion, with re-emission on a very short time-scale of a large fraction of the incoming stellar flux, resulting in a poor efficiency of the day-side to night-side heat distribution and in enhanced infrared planetary fluxes at orbital phases close to the occultation. Like the other pM planets, CoRoT-1b is thus a good target for near-infrared occultation measurements. Furthermore, CoRoT-1b belongs to the subgroup of planets with a radius larger than predicted by basic models of irradiated planets (e.g. Burrows et al. 2007, Fortney et al. 2007). Tidal heating has been proposed by several authors (e.g. Bodenheimer et al. 2001, Jackson et al. 2008b) as a possible extra source of energy able to explain the radius anomaly shown by these hyper-bloated planets. As shown by Jackson et al. (2008b) and Ibgui & Burrows (2009), even a tiny orbital eccentricity can produce intense tidal heating in very short period planets. Occultation photometry not only allows a measurement of the planetary thermal emission, but also strongly constrains the orbital eccentricity (see e.g. Charbonneau et al. 2005). Such an occultation measurement for CoRoT-1b could thus help explain its low density. These reasons motivated us to measure an occultation of CoRoT-1b with the Very Large Telescope (VLT). We also decided to obtain a precise VLT transit light curve for this planet to better constrain its orbital elements.
Furthermore, the CoRoT transit photometry presented in B08 is exquisite, but it is important to obtain an independent measurement of similar quality to check its reliability and to assess the presence of any systematic effect in the CoRoT photometry. We present in Section 2 our new VLT data and their reduction. Section 3 presents our analysis of the resulting photometry and our determination of the system parameters. Our results are discussed in Section 4, before we give our conclusion in Section 5. Observations ============ VLT/FORS2 transit photometry ---------------------------- A transit of CoRoT-1b was observed on 2008 February 28 with the FORS2 camera (Appenzeller et al. 1998) installed at the VLT/UT1 (Antu). The FORS2 camera has a mosaic of two 2k $\times$ 4k MIT CCDs and is optimized for observations in the red, with a very low level of fringes. It has been used several times in the past to obtain high-precision transit photometry (e.g. Gillon et al. 2007a, 2008). The high-resolution mode was used to optimize the spatial sampling, resulting in a 4.6’ $\times$ 4.6’ field of view with a pixel scale of 0.063”/pixel. Airmass increased from 1.08 to 1.77 during the run, which lasted from 1h16 to 4h30 UT. The night was photometric. Due to scheduling constraints, only a small number of observations were performed before and after the transit, and the total out-of-transit (OOT) part of the run is only $\sim$ 50 minutes. 114 images were acquired in the R\_SPECIAL filter ($\lambda_{eff}= 655 $ nm, FWHM = 165 nm) with an exposure time of 15 s. After a standard pre-reduction, the stellar fluxes were extracted from all the images with the [IRAF]{}[^3] [DAOPHOT]{} aperture photometry software (Stetson 1987). We noticed that CoRoT-1 was saturated in 11 images because of seeing and transparency variations, and we rejected these images from our analysis.
Several sets of reduction parameters were tested, and we kept the one giving the most precise photometry for stars of similar brightness to CoRoT-1. After a careful selection of reference stars, differential photometry was obtained. A linear fit of magnitude $vs$ airmass to the OOT data was used to correct the photometry for differential extinction. The corresponding fluxes were then normalized using the OOT part of the photometry. The resulting transit light curve is shown in Fig. 1. After subtraction of the best-fit model (see next section), the residuals show a $rms$ of $\sim$ 520 ppm, very close to the photon noise limit ($\sim$ 450 ppm). \[fig:a\] ![$Top$: VLT/FORS2 R-band transit light curve with the best-fitting transit + trend model superimposed. $Bottom$: residuals of the fit.](a.ps "fig:"){width="9cm"} VLT/HAWK-I occultation photometry --------------------------------- We observed an occultation of CoRoT-1b with HAWK-I ([*High Acuity Wide-field K-band Imager*]{}, Pirard et al. 2004, Casali et al. 2006), a cryogenic near-IR imager recently installed at the VLT/UT4 (Yepun). HAWK-I provides a relatively large field of view of 7.5’ x 7.5’. The detector is kept at 75 K and is composed of a mosaic of four Hawaii-2RG 2048x2048 pixel chips. The pixel scale is 0.106"/pixel, providing good spatial sampling even for the excellent seeing conditions of Paranal (seeing down to 0.3 arcsec measured in the K-band). Instead of using a broad-band K or K$_s$ filter, we chose to observe with the narrow-band filter NB2090 (central wavelength = 2.095 $\mu$m, width = 0.020 $\mu$m).
This filter avoids absorption bands at the edge of the K-band; its small width minimizes the effect of differential extinction, and its bandpass shows a much smaller sky emission than that of the nearby Br$\gamma$ bandpass (central wavelength = 2.165 $\mu$m, width = 0.030 $\mu$m), leading to a background-to-star flux ratio more than twice better than in the Br$\gamma$ or K-band filters. Because of the large aperture of the VLT and the relative brightness of CoRoT-1, the expected stellar count in this narrow filter is still high enough to allow a theoretical noise of less than 0.15% for a 1 minute integration. Observations took place on 2009 January 06 from 1h54 to 7h56 UT. Atmospheric conditions were very good, and the mean seeing measured on the images was 0.47". Airmass decreased from 1.36 to 1.08, then rose to 1.65. Each exposure was composed of 4 integrations of 11 s each. A random jitter pattern within a square 45"-sized box was applied to the telescope. This strategy aimed to obtain for each image an accurate sky map from the neighboring images. Indeed, the near-IR background shows a large spatial variability on different scales, and an accurate subtraction of this complex background is crucial, except when this background has an amplitude negligible compared to the stellar count (see e.g. Alonso et al. 2008). In total, 318 images were obtained during the run. After a standard pre-reduction (dark subtraction, flat-field division), a sky map was constructed and removed for each image using a median-filtered set of the ten adjacent images. The resulting sky-subtracted images were aligned and then compared on a per-pixel basis to the median of the 10 adjacent images in order to detect any spurious values due, e.g., to a cosmic-ray hit or a damaged pixel. The affected pixels had their value replaced by the one obtained by linear interpolation using the 10 adjacent images. Two different methods were tested to extract the stellar fluxes.
Aperture photometry was obtained using the [IRAF DAOPHOT]{} software and compared to deconvolution photometry obtained with the algorithm [DECPHOT]{} (Gillon et al. 2006 and 2007b, Magain et al. 2007). We obtained a significantly ($\sim$ 25%) better result with [DECPHOT]{}. We attribute this improvement to the fact that [DECPHOT]{} optimizes the separation of the stellar flux from the background contribution, while aperture photometry simply sums the counts within an aperture. In order to avoid any systematic noise due to the different characteristics of the HAWK-I chips, we chose to use only reference stars located on the same chip as our target to obtain the differential photometry. As CoRoT-1 lies in a dense field of the Galactic plane, we have in any case enough reference flux on one single chip to reach the desired photometric precision. After a careful selection of the reference stars, the obtained differential light curve clearly shows an eclipse with the expected duration and timing (Fig. 2). We could not find any firm correlation of the OOT photometric values with airmass or time, so we simply normalized the fluxes using the OOT part without any further correction. The OOT $rms$ is 0.32%, much larger than the mean theoretical error of 0.13%. This difference implies the existence of an extra noise source of $\sim$ 0.3%. We attribute this noise to the sensitivity and cosmetic inhomogeneity of the detector combined with our jitter strategy. In the optical, one can avoid this noise by staring at the same exact position during the whole run, i.e. by keeping the stars on the same pixels. In the near-IR, dithering is needed to properly remove the large, complex and variable background. This background varies in time at frequencies similar to that of the transit, so any poor background removal can introduce correlated noise into the resulting photometry.
It is thus much preferable to optimize the background subtraction by using a fast random jitter pattern even if this brings extra noise, because the latter is dominated by frequencies much higher than that of the searched signal and is thus unable to produce a fake detection or modify the eclipse shape. \[fig:b\] ![$Top$: VLT/HAWK-I 2.09 $\mu$m occultation light curve binned per 10 minutes, with the best-fitting occultation + trend model superimposed. $Bottom$: residuals of the fit.](b.ps "fig:"){width="9cm"} Analysis ======== Data and model -------------- To obtain an independent determination of the system parameters, we decided to use only our VLT R-band transit and 2.09 $\mu$m occultation photometry, in addition to the SOPHIE (Bouchy et al. 2006) radial velocities (RV) presented in B08, as data for our analysis. These data were used as input for a Markov Chain Monte Carlo (MCMC; see e.g. Tegmark 2004, Gregory 2005, Ford 2006) code. MCMC is a Bayesian inference method based on stochastic simulations that provides the $a$ $posteriori$ probability distribution of adjusted parameters for a given model. Here the model is based on a star and a transiting planet on a Keplerian orbit about their center of mass. More specifically, we used a classical Keplerian model for the RV variations and we fitted independent offsets for the two epochs of the SOPHIE observations to account for the drift between them mentioned in B08. To fit the VLT photometry, we used the photometric eclipse model of Mandel & Agol (2002) multiplied by a trend model. In order to obtain reliable error bars for our fitted parameters, it is indeed preferable to consider the possible presence of a low-amplitude time-dependent systematic in our photometry due, e.g., to an imperfect differential extinction correction or a low-amplitude, low-frequency stellar variability. We chose to model this trend as a second-order time polynomial for both the FORS2 and HAWK-I photometry.
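The structure of this photometric model (eclipse curve times a low-order polynomial baseline) can be sketched as follows. This is a minimal illustration, not the authors' code: the trapezoid below is a crude stand-in for the actual Mandel & Agol (2002) limb-darkened model, and the baseline form $A + Bt^2 + Ct$ is assumed from the units of the trend coefficients listed in Table 1.

```python
import numpy as np

def trend(t, a, b, c):
    # Assumed second-order polynomial baseline: A + B*t^2 + C*t
    # (B in day^-2, C in day^-1, matching the units in Table 1).
    return a + b * t ** 2 + c * t

def trapezoid_eclipse(t, t0, depth, width, ingress):
    # Crude stand-in eclipse: flat bottom of given depth, linear
    # ingress/egress of the given duration; width is first-to-last contact.
    x = np.abs(np.asarray(t, dtype=float) - t0)
    return 1.0 - depth * np.clip((width / 2.0 - x) / ingress, 0.0, 1.0)

def model(t, t0, depth, width, ingress, a, b, c):
    # Eclipse light curve multiplied by the trend model.
    return trapezoid_eclipse(t, t0, depth, width, ingress) * trend(t, a, b, c)
```

In the actual fit the trapezoid would be replaced by the Mandel & Agol model evaluated with the limb-darkening coefficients of Sec. 3.2, with one set of trend coefficients per time series.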
Limb-darkening -------------- For the transit, a quadratic limb darkening law was assumed, with initial coefficients $u_1$ and $u_2$ interpolated from Claret’s tables (2000; 2004) for the R-band photometric filter and for $T_{eff} = 5950 \pm 150$ K, log $g$ = $4.25 \pm 0.30$ and \[Fe/H\] = $-0.30 \pm 0.25$ (B08). We used the partial derivatives of $u_1$ and $u_2$ as a function of the spectroscopic parameters in Claret’s tables to obtain their errors $\sigma_{u_1}$ and $\sigma_{u_2}$ via: $$\label{eq:1} \sigma_{u_x}= \sqrt{\sum_{i=1}^3\,\big(\frac{\delta u_x}{\delta S_i}\sigma_{S_i}\big)^2 }\textrm{,}$$ where $x$ is 1 or 2, while $S_i$ and $\sigma_{S_i}$ are the $i^{th}$ ($i=1,3$) spectroscopic parameter and its error from B08. We obtained $u_1 = 0.279 \pm 0.033$ and $u_2 = 0.351 \pm 0.016$ as initial values. We allowed $u_1$ and $u_2$ to float in our MCMC analysis, using as jump parameters not these coefficients themselves but the combinations $c_1 = 2 \times u_1 + u_2$ and $c_2 = u_1 - 2 \times u_2$ to minimize the correlation of the obtained uncertainties (Holman et al. 2006). The following Bayesian prior on $c_1$ and $c_2$ was added to our merit function: $$\label{eq:2} BP_{\rm limb-darkening} = \sum_{i=1,2} \bigg(\frac{c_i - c'_i}{\sigma_{c'_i}} \bigg)^2$$where $c'_i$ is the initial value deduced for the coefficient $c_i$ and $\sigma_{c'_i}$ is its error computed from $\sigma_{u_1}$ and $\sigma_{u_2}$. We let $c_1$ and $c_2$ be free parameters under the control of a Bayesian prior to propagate the uncertainty on the limb-darkening to the deduced transit parameters. 
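The change of variables to $c_1$ and $c_2$ is linear, so the prior widths follow from simple Gaussian error propagation. A minimal sketch (the input values are the initial $u_1$, $u_2$ quoted above; the function name is illustrative):

```python
import numpy as np

def to_jump_params(u1, u2, sig_u1, sig_u2):
    # Linear map to the less-correlated jump parameters of Holman et al. (2006):
    # c1 = 2*u1 + u2, c2 = u1 - 2*u2.
    c1 = 2.0 * u1 + u2
    c2 = u1 - 2.0 * u2
    # Gaussian error propagation, assuming independent errors on u1 and u2.
    sig_c1 = np.sqrt((2.0 * sig_u1) ** 2 + sig_u2 ** 2)
    sig_c2 = np.sqrt(sig_u1 ** 2 + (2.0 * sig_u2) ** 2)
    return c1, sig_c1, c2, sig_c2

# Initial values from the text: u1 = 0.279 +/- 0.033, u2 = 0.351 +/- 0.016.
c1, s1, c2, s2 = to_jump_params(0.279, 0.351, 0.033, 0.016)
```

This reproduces the prior values $c'_1 = 0.909 \pm 0.067$ and $c'_2 = -0.423 \pm 0.046$ listed in Table 1.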
Jump parameters --------------- The other jump parameters in our MCMC simulation were: the transit timing (time of minimum light) $T_0$, the planet/star area ratio $(R_p/R_s)^2 $, the transit width (from first to last contact) $W$, the impact parameter $b'=a\cos{i}/R_\ast$, three coefficients per photometric time series for the low-frequency systematic, one systemic RV for each of the two SOPHIE epochs, and the two parameters $e\cos{\omega}$ and $e\sin{\omega}$, where $e$ is the orbital eccentricity and $\omega$ is the argument of periastron. The RV orbital semi-amplitude $K$ was not used as a jump parameter; instead we used the following parameter: $$\label{eq:3} K_2 = K \sqrt{1-e^2} \textrm{ } P^{1/3} = (2\pi G)^{1/3} \frac{M_p \sin i}{(M_p + M_\ast)^{2/3}}\textrm{,}$$to minimize the correlation with the other jump parameters. We note that our jump parameter $b'$ is equal to the actual transit impact parameter $b$ only for a circular orbit. For a non-zero eccentricity, it is related to the actual impact parameter $b$ via: $$\label{eq:4} b = b' \textrm{ } \frac{1 - e^2}{1+ e \sin{\omega}}\textrm{.}$$Here too, the goal of using $b'$ instead of $b$ is to minimize the correlation between the jump parameters. The orbital period $P$ was left free in our analysis, constrained not only by the data presented above but also by the timings determined independently by Bean (2009) for each of the 35 CoRoT transits. In practice, we added the following Bayesian penalty $BP_{\rm timings}$ to our merit function: $$\label{eq:5} BP_{\rm timings} = \sum_{i=1,35} \bigg(\frac{T_0 + N_i \times P - T_i}{\sigma_{T_i}} \bigg)^2$$where $T_i$ is the transit timing determined by Bean (2009) for the $i^{th}$ CoRoT transit, $\sigma_{T_i}$ is its error and $N_i$ is its differential epoch compared to our VLT transit. This procedure relies on the reasonable assumption that the timings determined by Bean (2009) are uncorrelated with the other transit parameters.
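Eq. (4) above is a one-line conversion; a minimal sketch (with $\omega$ in radians, function name illustrative):

```python
import numpy as np

def impact_parameter(b_prime, e, omega):
    # Eq. (4): recover the true transit impact parameter b from the jump
    # parameter b' = a*cos(i)/R_*, given eccentricity e and argument of
    # periastron omega (radians). For e = 0, b == b'.
    return b_prime * (1.0 - e ** 2) / (1.0 + e * np.sin(omega))

# Plugging in the fitted values of Table 1 (b' = 0.398, e = 0.071,
# omega = 276.7 deg) recovers b_transit = 0.426 as listed there.
b = impact_parameter(0.398, 0.071, np.radians(276.7))
```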
Photometric correlated noise and RV jitter noise ------------------------------------------------ Our analysis was done in four steps. First, a single MCMC chain was performed. This chain was composed of 10$^6$ steps, the first 20% being considered as the burn-in phase and discarded. The best-fitting model found in this first chain was used to estimate the level of correlated noise in each photometric time series and a jitter noise in the RV time series. For both photometric time series, the red noise was estimated as described in Gillon et al. (2006), by comparing the $rms$ of the unbinned and binned residuals. We used a bin size corresponding to a duration of 20 minutes, similar to the timescale of the ingress/egress of the transit. The obtained results were compatible with purely Gaussian noise for both time series. Still, it is possible that a low-amplitude correlated noise affecting only the eclipse part had been ‘swallowed’ by our best-fitting model, so we preferred to be conservative and to add quadratically a red noise of 100 ppm to the theoretical uncertainties of each photometric time series. The deduced RV jitter noise was high: 23 m.s$^{-1}$. Nevertheless, we noticed that it goes down to zero if we discard the second RV measurement of the first SOPHIE epoch. Furthermore, this measurement has a significantly larger error bar than the others; we thus decided to consider it as doubtful and not to use it in our analysis. A theoretical jitter noise of 3.5 m.s$^{-1}$ was then added quadratically to the error bars of the other SOPHIE measurements, a typical value for a quiet solar-type star like CoRoT-1 (Wright 2005). Determination of the stellar density ------------------------------------ Then, 10 new MCMC chains were performed using the updated measurement error bars.
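The binned-vs-unbinned $rms$ comparison used to estimate the red noise can be sketched as follows. This is a minimal illustration of the idea in Gillon et al. (2006), not the authors' implementation: for pure white noise the $rms$ of residuals binned by $N$ points scales as $\sigma_1/\sqrt{N}$, and any excess is attributed to correlated noise.

```python
import numpy as np

def red_noise_estimate(residuals, nbin):
    # Compare the rms of residuals binned by `nbin` points with the
    # white-noise expectation sigma_1/sqrt(nbin); the quadrature excess
    # is the red-noise amplitude (zero if binning averages down as white).
    n = len(residuals) // nbin * nbin
    r = np.asarray(residuals[:n], dtype=float)
    sigma1 = r.std()
    binned = r.reshape(-1, nbin).mean(axis=1)
    sigma_n = binned.std()
    expected_white = sigma1 / np.sqrt(nbin)
    return np.sqrt(max(sigma_n ** 2 - expected_white ** 2, 0.0))
```

In practice the bin size would be chosen to span $\sim$20 minutes of data, the ingress/egress timescale, and the derived amplitude added in quadrature to the theoretical uncertainties.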
These 10 chains were then combined, using the Gelman and Rubin statistic (Gelman & Rubin 1992) to verify that they were well converged and sufficiently mixed; the best-fitting values and error bars for each parameter were then obtained from their distribution. The goal of this MCMC run was to provide us with an improved estimation of the stellar density $\rho_\ast $ (see e.g. Torres 2007). The stellar density that we obtained was $\rho_\ast = 0.84^{+0.11}_{-0.07}$ $\rho_{\odot}$. Stellar-evolutionary modeling ----------------------------- The deduced stellar density and the spectroscopic parameters were then used to better constrain the stellar mass and age via a comparison with theoretical stellar evolution models. Two independent stellar analyses were performed in order to assess the impact of the stellar evolution models used on the final system parameters: - Our first analysis was based on Girardi’s evolution models (Girardi et al. 2000). We first perform a linear interpolation between the solar (Z=0.019) and subsolar (Z=0.008) metallicity theoretical models to derive a set of mass tracks at the metallicity of the host star (\[M/H\]=-0.3). We then compare the effective temperature and the inverse cube root of the stellar density to the same values in the host-star metallicity models. We interpolate linearly along the mass tracks to generate an equal number of age points between the zero-age main sequence and the point corresponding to core hydrogen exhaustion. We then interpolate between the tracks along equivalent evolutionary points to find the mass, $M=0.94$ $M_{\odot}$, and age, $\tau$ = 7.1  Gyr, of the host star that best match the measured temperature and stellar density. We repeat the above prescription using the extreme values of the observed parameters to determine the uncertainties on the derived mass and age.
The large errors on the spectroscopic parameters, particularly the $\pm 0.25$ dex uncertainty on the metallicity, lead to a 15-20 % error on the stellar mass ($M=0.94^{+0.19}_{-0.16} M_{\odot}$) and an age for the system no more precise than older than 0.5 Gyr. Fig. 3 presents the deduced position of CoRoT-1 in a $R/M^{1/3}$-$T_{eff}$ diagram. - In the second analysis we apply the Levenberg-Marquardt minimization algorithm to derive the fundamental parameters of the host star. The merit function is defined by: $$\chi^2=\sum_{i=1}^{3}\,\frac{(O_i^{obs}-O_i^{theo})^2}{(\sigma_i^{obs})^2}$$ The observables ($O_i^{obs}$) we take into consideration are the effective temperature, the surface metallicity and the mean density. The corresponding observational errors are $\sigma_i^{obs}$. The theoretical values ($O_i^{theo}$) are obtained from stellar evolution models computed with the code CLES (Code Liégeois d’Evolution Stellaire, Scuflaire et al. 2008). Several fits were performed; in all of them we use the mixing-length theory (MLT) of convection (Böhm-Vitense 1958) and the most recent equation of state from OPAL (OPAL05, Rogers & Nayfonov 2002). Opacity tables are those from OPAL (Iglesias & Rogers 1996) for two different solar mixtures: the standard one from Grevesse & Noels (1993, GN93) and the recently revised solar mixture from Asplund, Grevesse & Sauval (2005, AGS05). In the first case $(Z/X)_\odot$=0.0245, in the second $(Z/X)_\odot$=0.0167. These tables are extended at low temperatures with the Ferguson et al. (2005) opacity values for the corresponding metal mixtures. The surface boundary conditions are given by grey atmospheres with an Eddington law. Microscopic diffusion (Thoul et al. 1994) is included in the stellar model computation. The parameters of the stellar model are the mass, the initial hydrogen ($X_{\rm i}$) and metal ($Z_{\rm i}$) mass fractions, the age, and the parameters of convection ($\alpha_{\rm MLT}$ and the overshooting parameter).
Since we have only three observational constraints, we decided to fix the $\alpha_{\rm MLT}$ and $X_{\rm i}$ values to those derived from the solar calibration for the same input physics. Furthermore, given the low mass we expect for the host star, all the models were computed without overshooting. The values of stellar mass and age obtained for the two different solar mixtures are: $M=0.90\pm 0.21\,M_{\odot}$ with GN93 and $M=0.92\pm 0.18 M_{\odot}$ with AGS05, with respectively $\tau=7.5\pm6.0$ Gyr and $\tau=6.9\pm 5.4$ Gyr. The results of our two independent stellar analyses are thus fully compatible, and the uncertainty due to the large errors on the spectroscopic parameters dominates the one coming from our imperfect knowledge of stellar physics. The large uncertainties affecting the stellar mass and age are mainly due to the lack of accuracy of the metallicity determination. We estimate from several tests that an improvement of the atmospheric parameters determination leading to an error in metallicity of 0.05 dex would translate into a reduction of the uncertainty by a factor of three for the stellar mass, and a factor of two for the stellar age. Moreover, decreasing the effective temperature error to 75 K would imply a subsequent reduction of the stellar parameter errors by an additional factor of two. Getting more high-SNR, high-resolution spectroscopic data for the host star is thus very desirable. Determination of the system parameters -------------------------------------- For the last part of our analysis, we decided to use 0.93 $\pm$ 0.18 $M_{\odot}$, i.e. the average of the values obtained with the two different evolution models, as our starting value for the stellar mass. A new MCMC run was then performed. This run was identical to the first one, with the exception that $M_\ast$ was also a jump parameter under the control of a Bayesian penalty based on $M_\ast$ = 0.93 $\pm$ 0.18 $M_\odot$.
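The Bayesian-penalty approach used throughout the analysis (the limb-darkening prior, the timing penalty, and now the prior on the stellar mass) amounts to adding Gaussian prior terms to the chi-square merit function. A minimal sketch, with illustrative names that are not from the paper's actual code:

```python
import numpy as np

def merit(data_resid, data_err, priors):
    # Chi-square over the data plus Gaussian "Bayesian penalty" terms.
    # data_resid, data_err: residuals (data - model) and their errors;
    # priors: list of (trial_value, prior_mean, prior_sigma) tuples,
    # e.g. the stellar-mass prior M_* = 0.93 +/- 0.18 M_sun.
    chi2 = float(np.sum((np.asarray(data_resid) / np.asarray(data_err)) ** 2))
    penalty = sum(((v - m) / s) ** 2 for v, m, s in priors)
    return chi2 + penalty

# A trial stellar mass one sigma away from the prior mean contributes
# exactly 1 to the merit function.
m = merit([0.0], [1.0], [(1.11, 0.93, 0.18)])
```

In the MCMC each step evaluates this merit function for the trial parameter set, so the external constraints pull the chains without fixing the parameters outright.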
At each step of the chains, the physical parameters $M_p$, $R_p$ and $R_\ast$ were computed from the relevant jump parameters, including the stellar mass. Table 1 shows the deduced values for the jump and physical parameters and compares them to the values presented in B08. It also shows the Bayesian penalties used in this second MCMC run. \[fig:c\] ![$R/M^{1/3}$ (in solar units) versus effective temperature for CoRoT-1 compared to the theoretical stellar evolutionary models of Girardi et al. (2000) interpolated at -0.3 metallicity. The labeled mass tracks are for 0.8, 0.9 and 1.0 $M_\odot$ and the isochrones are 100 Myr (solid), 5 Gyr (dotted), 10 Gyr (dashed), 16 Gyr (dot-dashed). We have interpolated the tracks at -0.2 metallicity and have included the uncertainty on the metallicity ($\pm$0.25) in the overall uncertainties on the mass and the age. ](c.ps "fig:"){width="9cm"} \[tab:params\] [lcccccl]{}
Parameter & Value & Bayesian penalty & B08 & Unit &\
$Jump$ $parameters$ & & & &\
Transit epoch $ T_0 $ & $ 2454524.62324^{+0.00009}_{-0.00013}$ & & 2454159.4532 $\pm$ 0.0001 & BJD\
Planet/star area ratio $ (R_p/R_s)^2 $ & $ 0.01906^{+0.00020}_{-0.00040} $ & & $0.01927 \pm 0.00058$ &\
Transit width $W$ & $ 0.10439 \pm 0.00094 $ & & & day\
2.09 $\mu$m occultation depth & $0.00278^{+ 0.00043}_{- 0.00066}$ & & &\
$ b'=a\cos{i}/R_\ast $ & $ 0.398^{+ 0.032}_{- 0.043} $ & & $0.420 \pm 0.043$ & $R_*$\
RV $K_2$ & $215^{+15}_{-16}$ & & $216 \pm 13$ &\
RV $\gamma_1$ & $23.366^{+0.020}_{-0.017}$ & & &\
RV $\gamma_2$ & $23.350^{+0.012}_{-0.011}$ & & &\
$e\cos{\omega}$ & $0.0083^{+0.0038}_{-0.0025}$ & & &\
$e\sin{\omega}$ & $-0.070^{+0.029}_{-0.042}$ & & &\
$A_{\rm transit}$ & $0.99963^{+0.00028}_{-0.00009}$ & & &\
$B_{\rm transit}$ & $0.017^{+0.003}_{-0.018}$ & & & day$^{-2}$\
$C_{\rm transit}$ & $-0.10^{+0.12}_{-0.02}$ & & & day$^{-1}$\
$A_{\rm occultation}$ & $1.00041^{+0.00096}_{-0.00052}$ & & &\
$B_{\rm occultation}$ & $-0.008^{+0.007}_{-0.023}$ & & & day$^{-2}$\
$C_{\rm occultation}$ & $0.029^{+0.079}_{-0.029}$ & & & day$^{-1}$\
Orbital period $ P$ & $ 1.5089686^{+ 0.0000005}_{- 0.0000006} $ & from timings in Bean (2009) & 1.5089557 $\pm$ 0.0000064 & day\
Stellar mass $ M_\ast $ & $ 1.01^{+0.13}_{-0.22}$ & 0.93 $\pm$ 0.18 & 0.95 $\pm$ 0.15 & $M_\odot$\
R-filter $c_1$ & $ 0.794^{+ 0.047}_{- 0.048}$ & 0.909 $\pm$ 0.067 & &\
R-filter $c_2$ & $ -0.444^{+ 0.054 }_{- 0.032}$ & -0.423 $\pm$ 0.046 & &\
$Deduced$ $parameters$ & & & &\
RV $K$ & $ 188 \pm 14 $ & & 188 $\pm$ 11 &\
$b_{transit}$ & $ 0.426^{+ 0.035}_{- 0.042} $ & & $0.420 \pm 0.043$ & $R_*$\
$b_{occultation}$ & $ 0.370^{+ 0.037}_{- 0.049} $ & & $0.420 \pm 0.043$ & $R_*$\
Orbital semi-major axis $ a $ & $ 0.0259 ^{+ 0.0011}_{- 0.0020} $ & & $0.0254 \pm 0.0014$ & AU\
Orbital inclination $ i $ & $ 85.66^{+0.62}_{-0.48} $ & & 85.1 $\pm$ 0.5 & degree\
Orbital eccentricity $ e $ & $ 0.071^{+0.042}_{-0.028} $ & & 0 (fixed) &\
Argument of periastron $ \omega $ & $276.7^{+5.9}_{-4.3}$ & & & degree\
Stellar radius $ R_\ast $ & $ 1.057^{+ 0.055}_{- 0.094} $ & & 1.11 $\pm$ 0.05 & $R_\odot$\
Stellar density $\rho_* $ & $0.86^{+ 0.13}_{- 0.08} $ & & $0.698 \pm 0.033$ & $\rho_\odot $\
R-filter $u_1$ & $ 0.229^{+ 0.025 }_{- 0.022}$ & & &\
R-filter $u_2$ & $ 0.336^{+ 0.012}_{- 0.020}$ & & &\
Planet radius $ R_p $ & $ 1.45 ^{+ 0.07}_{- 0.13} $ & & 1.49 $\pm$ 0.08 & $R_J$\
Planet mass $ M_p $ & $ 1.07 ^{+ 0.13}_{- 0.18} $ & & 1.03 $\pm$ 0.12 & $M_J$\
Planet density $ \rho_p $ & $0.350^{+0.077}_{-0.042}$ & & $0.31 \pm 0.06$ & $\rho_{J}$\
Discussion ========== The density and eccentricity of CoRoT-1b ---------------------------------------- As can be seen in Table 1, the transit parameters that we obtain from our VLT/FORS2 R-band photometry agree well with those presented in B08 and based on the CoRoT photometry. Our value for the transit impact parameter is in good agreement with the one obtained by B08, and has a similar uncertainty.
The planet/star area ratio that we deduce is within the error bar of the value obtained by B08, while our error bar is smaller. Our deduced physical parameters also agree very well with the ones presented in B08. Our analysis thus confirms the very low density of the planet (see Fig. 4) and its membership in the sub-group of short-period planets too large for current models of irradiated planets (Burrows et al. 2007; Fortney et al. 2008). In this context, it is worth noticing the marginal non-zero eccentricity that we deduce from our combined analysis: $e = 0.071^{+0.042}_{-0.028}$. As outlined by recent works (Jackson et al. 2008b, Ibgui & Burrows 2009), tidal heating could play a major role in the energy budget of very short period planets and help explain the very low density of some of them. Better constraining the orbital eccentricity of CoRoT-1b by obtaining more radial velocity measurements and occultation photometry is thus desirable. To test the amplitude of the constraint brought by the occultation on the orbital eccentricity, we made an analysis similar to the one presented in Sec. 3 but discarding the HAWK-I photometry. We obtained similar results for the transit parameters, but the eccentricity was poorly constrained: we obtained much less precise values for $e \cos{\omega}$ and $e \sin{\omega}$, respectively $0.020^{+0.024}_{-0.029}$ and $-0.170^{+0.062}_{-0.078}$. The HAWK-I occultation thus brings a strong constraint on these parameters, especially on $e \cos{\omega}$. Table 1 shows that our analysis does not agree with B08 for one important parameter: the stellar density. Indeed, the value presented in B08 is significantly smaller and more precise than ours. Still, B08 assumed a zero eccentricity in their analysis, while the stellar mean density deduced from transit observables depends on $e$ and $\omega$ (see e.g. Winn 2009).
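This dependence can be made concrete with a minimal sketch using the standard circular-orbit bias factor for the photometric stellar density (in the form given by e.g. Winn 2009; the formula is not reproduced from this paper) together with the $e$, $\omega$ and $\rho_\ast$ values of Table 1:

```python
import math

# Stellar density inferred from transit observables assuming a circular
# orbit differs from the true density by a factor depending on e and omega
# (standard relation, see e.g. Winn 2009):
#   rho_circ = rho_true * (1 + e*sin(omega))**3 / (1 - e**2)**1.5

def density_correction(e, omega_deg):
    """Ratio rho_circ / rho_true for eccentricity e and argument of
    periastron omega (in degrees)."""
    omega = math.radians(omega_deg)
    return (1.0 + e * math.sin(omega))**3 / (1.0 - e**2)**1.5

# Values from Table 1: e = 0.071, omega = 276.7 deg, rho_true = 0.86 rho_sun
factor = density_correction(0.071, 276.7)
rho_circ = 0.86 * factor
print(f"rho_circ ~ {rho_circ:.3f} rho_sun")  # close to the B08 value 0.698
```

Applying this factor to the eccentric-orbit density of Table 1 recovers a value close to the circular-orbit one, consistent with the comparison to B08 discussed here.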
To test the influence of the zero eccentricity assumption on the deduced stellar density, we made a new MCMC analysis assuming $e$ = 0. This time we obtained $\rho_\ast = 0.695^{+0.043}_{-0.030}$ $\rho_\odot$, in excellent agreement with the value $\rho_\ast = 0.698 \pm 0.033$ $\rho_\odot$ presented by B08. This shows nicely that not only are the VLT and CoRoT data fully compatible, but also that assuming a zero eccentricity can lead to an unreliable stellar density value and uncertainty. In our case, this has no significant impact on the deduced physical parameters because the large errors that we have on the stellar effective temperature and metallicity totally dominate the result of the stellar-evolutionary modeling (see Sec. 3.6). Still, the point is important. As shown by Jackson et al. (2008a), most published estimates of planetary tidal circularization timescales used inappropriate assumptions that led to unreliable values, and most close-in planets could probably keep a tiny but non-zero eccentricity during a major part of their lifetime. In this context, very precise transit photometry like the CoRoT one is not enough to reach the highest accuracy on the physical parameters of the system; a precise determination of $e$ and $\omega$ is also needed. This strengthens the case for obtaining complementary occultation photometry in addition to high-precision radial velocities to improve the characterization of transiting planets. \[fig:e\] ![Position of CoRoT-1b (in red) among the other transiting planets (black circles, values from http://exoplanet.eu) in a mass-radius diagram. The error bars are shown only for CoRoT-1b for the sake of clarity.
](d.ps "fig:"){width="8cm"}

The atmospheric properties of CoRoT-1b
--------------------------------------

The flux at 2.09 $\mu$m of this planet corresponds to a brightness temperature slightly larger than the (zero-albedo) equilibrium temperature, $\sim$ 2660 K, obtained if the star’s effective temperature is allowed to be as high as 6100 K (maximum within the 1-$\sigma$ error bars from B08). An irradiated planet atmosphere model (following Barman et al. 2005) for CoRoT-1b was computed adopting the maximum observationally allowed stellar effective temperature and radius and assuming that zero energy is transported to the night side. Solar metallicity was assumed and all other parameters were taken from Table 1. This model (Fig. 5) falls short of matching the observations within 1-$\sigma$, while a black body with the same equilibrium temperature as the irradiated planet model is in better agreement. The atmosphere model is hot enough for a significant temperature inversion to form for P $<$ 0.1 bar and is nearly isothermal from 0.1 down to $\sim$ 100 bar. A model which uniformly redistributes the absorbed stellar flux across the entire planet surface (lower curve in Fig. 5) is far too cool to match the observations and is excluded at $\sim$ 3 $\sigma$. The flux at 2.09 $\mu$m alone suggests that very little energy is redistributed to the night side; however, additional observations at shorter and/or longer wavelengths are needed to better estimate the bolometric flux of the planet’s day side. Occultation measurements in other bands will help provide limits on the day-side bolometric flux and determine the depth of any possible temperature inversion and the extent of the isothermal zone. Recently, Snellen et al. (2009) measured the dayside planet-star flux ratio of CoRoT-1 in the optical ($\sim$ 0.7 $\mu$m) to be $1.26 \pm 0.33 \times 10^{-4}$. The hot, dayside-only model shown in Fig. 5 predicts a value of $1.29 \pm 0.33 \times 10^{-4}$, which is fully consistent with the optical measurement.
Consequently, it appears as though very little energy is being carried over to the night side of this planet.

\[fig:e\] ![Comparison of our 2.09 $\mu$m occultation depth measured for CoRoT-1 with models of planet-star flux density ratios assuming that the absorbed stellar flux is redistributed across the dayside only (top curve) and uniformly redistributed across the entire planetary atmosphere (lower curve). A black body model is also shown (dotted) for T = 2365 K. ](e.ps "fig:"){width="9cm"}

Assessing the presence of another body in the system
----------------------------------------------------

As shown in Table 1, our deduced systemic RVs for each SOPHIE epoch agree well with each other: we do not confirm the RV drift mentioned in B08. Our combined analysis presented in Sec. 3 leads to a very precise determination of the orbital period: $1.5089686^{+0.0000005}_{-0.0000006}$ days, thanks to a lever arm of nearly one year between the CoRoT transits and the VLT one. A simple linear fit for timing $vs$ epoch based on the CoRoT and VLT transits leads to a similar level of precision, giving $P=1.5089686^{+0.0000003}_{-0.0000005}$ days. This fit has a reduced $\chi^2$ of 1.28 and the $rms$ of its residuals (see Fig. 6) is 36 s. These values are fully consistent with those reported by Bean (2009) for the CoRoT data alone. We also notice the same 3-$\sigma$ discrepancy with transit \#23 which, once removed, results in a reduced $\chi^2$ of 1.00, thereby confirming the remarkable periodicity of the transit signal.

\[timings\_corot1b\] ![$Top$: Residuals of the linear fit timing $vs$ epoch for CoRoT-1b (see text for details). The rightmost point is our VLT/FORS2 timing. $Bottom$: zoom on the CoRoT residuals.](f.ps "fig:"){width="8.0cm"}

While limits on additional planetary companions in the CoRoT-1 system have been extensively discussed for transit timing variations (TTVs) by Bean (2009), it is interesting to take limits from RVs into account.
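A weighted linear ephemeris fit of the kind used above to refine the period can be sketched as follows. The epochs and timing noise below are synthetic placeholders for illustration, not the measured CoRoT/VLT timings:

```python
import numpy as np

# Fit T_n = T0 + n * P by weighted least squares (illustrative data only)
P_true, T0_true = 1.5089686, 2454159.4532    # days, BJD
epochs = np.arange(0, 250, 10)
rng = np.random.default_rng(1)
sigma = 40.0 / 86400.0                        # ~40 s timing errors, in days
times = T0_true + P_true * epochs + rng.normal(0.0, sigma, epochs.size)

# Normal equations with design matrix [1, n] and weights 1/sigma^2
A = np.vstack([np.ones_like(epochs, dtype=float), epochs]).T
w = np.full(epochs.size, 1.0 / sigma**2)
cov = np.linalg.inv(A.T @ (A * w[:, None]))   # parameter covariance matrix
T0_fit, P_fit = cov @ (A.T @ (w * times))

resid = times - (T0_fit + P_fit * epochs)
chi2_red = np.sum((resid / sigma)**2) / (epochs.size - 2)
print(P_fit, chi2_red)
```

With a long lever arm in epoch, the period uncertainty shrinks roughly as the timing error divided by the epoch baseline, which is why the VLT point one year after the CoRoT transits tightens $P$ so much.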
Figure 7 illustrates the domains where additional planets could be found through TTVs (white) and through RV measurements (above the coloured curves). We focused on short-period objects since TTVs are most sensitive to perturbers near the known transiting planet. We assumed an eccentricity of 0.05 for a putative coplanar planet and used the $Mercury$ package described in Chambers (1999) to estimate by numerical integration the maximum TTV signal expected for CoRoT-1b. White is the domain with a $>$ 5-$\sigma$ detection through TTVs according to the CoRoT data $rms$, while the black area is below the 1-$\sigma$ detection threshold. Although approximate, this shows that for a typical 3 m/s accuracy on radial velocities (dashed curve in Fig. 7), routinely obtained with the HARPS spectrograph (Mayor et al. 2003), no room remains for planetary companion detection through TTVs alone. Thus, the TTV search method may best be applied to active stars and/or stars for which the accuracy of RV measurements is limited, probing a detectability area to which RVs are insensitive or far less sensitive. Each transit timing may be compared to a single RV measurement. The larger number of free parameters in a TTV orbital solution raises degeneracies that cannot be lifted with the same number of data points that would allow an orbital solution to be recovered with RVs. Determining a large number of consecutive transit timings, together with occultation timings, gives access to the uniqueness of the solution and relaxes the constraints on timing accuracy (Nesvorný & Morbidelli 2008). This is thus a high-cost approach that is potentially most rewarding for carefully selected target stars. ![Detectivity domain for a putative CoRoT-1c planet, assuming $e_c=0.05$. In white, the period-mass region where planets yield maximum TTV on CoRoT-1b above 100 s ($5\sigma$ detection based on CoRoT data). Companions in the black area yield maximum TTV below the $1\sigma$ threshold.
Solid, dashed and dotted curves show RV detection limits for 1, 3 and 10 m/s RMS. ](g.ps){width="7.0cm"}

Conclusion
==========

We have obtained new high-precision transit photometry for the planet CoRoT-1b. Our deduced system parameters are in very good agreement with the ones presented in B08, thus providing an independent verification of the validity of the CoRoT photometry. Due to the precision of the CoRoT and VLT transit photometry and the long baseline between them, the orbital period is now known to a precision better than 1/10th of a second. The precision on the planetary mass and radius is limited by the large errors on the stellar spectroscopic parameters, and a significant precision improvement should be made possible by obtaining new high-quality spectra of CoRoT-1. We have also successfully measured the occultation of the planet with HAWK-I, a new wide-field near-infrared imager recently mounted on the VLT. The large occultation depth that we measure is better reproduced by an atmospheric model with no redistribution of the absorbed stellar flux to the night side of the planet. This measurement firmly establishes the potential of the HAWK-I instrument for the study of exoplanetary atmospheres. At the time of writing, the $Spitzer$ cryogen is nearly depleted and soon only its 3.6 $\mu$m and 4.5 $\mu$m channels will remain available for occultation measurements, while the eagerly awaited JWST (Gardner et al. 2006) is not scheduled for launch before 2013. It is thus reassuring to note that ground-based near-infrared photometry is now able to perform precise planetary occultation measurements, bringing new independent constraints on the orbital eccentricity and on the atmospheric physics and composition of highly irradiated extrasolar planets. The authors thank the VLT staff for its support during the preparation and acquisition of the observations. In particular, J. Smoker and F. Selman are gratefully acknowledged for their help and support during the HAWK-I run. C.
Moutou and F. Pont are acknowledged for their preparation of the FORS2 observations. F. Bouchy, PI of the ESO Program 080.C-0661(B), is also gratefully acknowledged. M. Gillon acknowledges support from the Belgian Science Policy Office in the form of a Return Grant, and thanks A. Collier Cameron for his help during the development of his MCMC analysis method. J. Montalbán thanks A. Miglio for his implementation of the optimization algorithm used in her stellar-evolutionary modeling. We thank the referee Scott Gaudi for a critical and constructive report.

Alonso R., Barbieri M., Rabus M., et al., 2008, A&A, 487, L5
Appenzeller I., Fricke K., Furtig W., et al., 1998, The Messenger, 94, 1
Asplund M., Grevesse N., Sauval A. J., 2005, Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis, 336, 25
Baglin A., Auvergne M., Boisnard L., et al., 2006, 36th COSPAR Scientific Assembly, 36, 3749
Barge P., Baglin A., Auvergne M., et al., 2008, A&A, 482, L17
Barman T. S., Hauschildt P., Allard F., 2005, ApJ, 632, 1132
Bean J. L., 2009, A&A (accepted), arXiv:0903.1845
Bodenheimer P., Lin D. N. C., Mardling R. A., 2001, ApJ, 548, 466
Böhm-Vitense E., 1958, Zeitschrift für Astrophysik, 46, 108
Bouchy F., et al., 2006, in Tenth Anniversary of 51 Peg-b, 319
Burrows A., Hubeny I., Budaj J., Hubbard W. B., 2007, ApJ, 661, 502
Casali M., Pirard J.-F., Kissler-Patig M., et al., 2006, SPIE, 6269, 29
Chambers J. E., 1999, MNRAS, 304, 793
Charbonneau D., Allen L. E., Megeath S. T., et al., 2005, ApJ, 626, 523
Charbonneau D., Brown T. M., Burrows A., Laughlin G., 2007, Protostars and Planets V, B. Reipurth, D. Jewitt, and K. Keill (eds.), University of Arizona Press, Tucson, 701
Claret A., 2000, A&A, 363, 1081
Claret A., 2004, A&A, 428, 1001
Deming D., Richardson L. J., Harrington J., 2007, MNRAS, 378, 148
De Mooij E. J. W., Snellen I. A. G., 2009, A&A, 493, L35
Ferguson J. W., Alexander D. R., Allard F., Barman T., Bodnarik J. G., Hauschildt P.
H., Heffner-Wong A., Tamanai A., 2005, ApJ, 623, 585
Ford E., 2006, ApJ, 642, 505
Fortney J. J., Marley M. S., Barnes J. W., 2007, ApJ, 658, 1661
Fortney J. J., Lodders K., Marley M. S., Freedman R. S., 2008, ApJ, 678, 1419
Gardner J. P., Mather J. C., Clampin M., et al., 2006, Space Science Reviews, 123, 485
Gillon M., Pont F., Moutou C., et al., 2006, A&A, 459, 249
Gillon M., Pont F., Moutou C., et al., 2007, A&A, 466, 743
Gillon M., Magain P., Chantry V., et al., 2007, ASPC, 366, 113
Gillon M., Smalley B., Hebb L., et al., 2008, A&A, 496, 259
Girardi L., Bressan A., Bertelli G., Chiosi C., 2000, A&AS, 141, 371
Gelman A., Rubin D., 1992, Statistical Science, 7, 457
Gregory P. C., 2005, ApJ, 631, 1198
Grevesse N., Noels A., 1993, in La formation des éléments chimiques, AVCP, ed. R. D. Hauck B., Paltani S.
Holman M. J., Winn J. N., Latham D. W., et al., 2006, ApJ, 652, 1715
Ibgui L., Burrows A., 2009, ApJ (submitted), arXiv:0902.3998
Iglesias C. A., Rogers F. J., 1996, ApJ, 464, 943
Jackson B., Greenberg R., Barnes R., 2008a, ApJ, 678, 1396
Jackson B., Greenberg R., Barnes R., 2008b, ApJ, 681, 1631
Knutson H. A., Charbonneau D., Deming D., Richardson L. J., 2007, PASP, 119, 616
Mandel K., Agol E., 2002, ApJ, 580, L171
Magain P., Courbin F., Gillon M., et al., 2007, A&A, 461, 373
Mayor M., Pepe F., Queloz D., 2003, The Messenger, 114, 20
Nesvorný D., Morbidelli A., 2008, ApJ, 688, 636
Pirard J.-F., Kissler-Patig M., Moorwood A., et al., 2004, SPIE, 5492, 510
Richardson L. J., Deming D., Seager S., 2003a, ApJ, 597, 581
Richardson L. J., Deming D., Wiedemann G., et al., 2003b, ApJ, 584, 1053
Rogers F. J., Nayfonov A., 2002, ApJ, 576, 1064
Scuflaire R., Théado S., Montalbán J., Miglio A., et al., 2008, APSS, 316, 83
Sing D. K., López-Morales M., 2009, A&A, 493, L31
Snellen I. A. G., 2005, MNRAS, 363, 211
Snellen I. A. G., Covino E., 2007, MNRAS, 375, 307
Snellen I. A. G., de Mooij E. J. W., Albrecht S., 2009, Nature, 459, 543
Tegmark M., Strauss M.
A., Blanton M. R., 2004, Phys. Rev. D, 69, 103501
Thoul A., Bahcall J. N., Loeb A., 1994, ApJ, 421, 828
Torres G., 2007, ApJ, 671, L65
Werner M. W., Roellig T. L., Low F. J., 2004, ApJS, 154, 1
Winn J. N., Holman M. J., Shporer A., et al., 2008, AJ, 136, 267
Winn J. N., 2008, IAU 253 Transiting Planets, eds. F. Pont, Cambridge, USA, p. 99
Wright J. T., 2005, PASP, 117, 657

[^1]: Based on data collected with the VLT/FORS2 and VLT/HAWK-I instruments at ESO Paranal Observatory, Chile (programs 080.C-0661(B) and 382.C-0642(A)).

[^2]: The photometric time-series used in this work are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/

[^3]: [IRAF]{} is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
--- abstract: 'We analytically describe the architecture of randomly damaged uncorrelated networks as a set of successively enclosed substructures — $k$-cores. The $k$-core is the largest subgraph where vertices have at least $k$ interconnections. We find the structure of $k$-cores, their sizes, and their birth points — the bootstrap percolation thresholds. We show that in networks with a finite mean number $z_{2}$ of the second-nearest neighbors, the emergence of a $k$-core is a hybrid phase transition. In contrast, if $z_{2}$ diverges, the networks contain an infinite sequence of $k$-cores which are ultra-robust against random damage.' author: - 'S. N. Dorogovtsev' - 'A. V. Goltsev' - 'J. F. F. Mendes' title: '$k$-core organization of complex networks' --- *Introduction.*—Extracting and indexing highly interconnected parts of complex networks—communities, cliques, cores, etc.—as well as finding relations between these substructures is an issue of topical interest in network research, see, e.g., Refs. [@gm02; @pdfv05]. This decomposition helps one to describe the complex topologies of real-world networks. In this respect, the notion of $k$-core is of fundamental importance [@b84; @s83]. The $k$-core may be obtained in the following way. Remove from a graph all vertices of degree less than $k$. Some of the remaining vertices may then have less than $k$ edges. Remove these vertices as well, and so on until no further removal is possible. The result, if it exists, is the $k$[*-core*]{}. Thus, a network is organized as a set of successively enclosed $k$-cores, similarly to a Russian nesting doll. The $k$-core decomposition was recently applied to a number of real-world networks (the Internet, the WWW, cellular networks, etc.) [@aabv05; @k05; @wa05] and turned out to be an important tool for visualization of complex networks and interpretation of cooperative processes in them. Rich $k$-core architectures of real networks were revealed.
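The pruning procedure defined above is straightforward to implement; a minimal sketch (the example graph is our own illustration):

```python
from collections import deque

def k_core(adj, k):
    """Vertex set of the k-core of an undirected graph given as an
    adjacency dict {vertex: set of neighbours}: repeatedly remove
    vertices of degree < k until no further removal is possible."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    queue = deque(v for v, d in deg.items() if d < k)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:          # removing v lowers its neighbours' degrees
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return set(adj) - removed

# A triangle with one pendant vertex: the 2-core is the triangle.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(k_core(g, 2))  # {0, 1, 2}
```

Running the routine for increasing $k$ yields the nested "Russian doll" sequence of $k$-cores described in the text.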
Furthermore, a $k$-core related Jellyfish model [@tpsf01] is one of the popular models of the Autonomous System graph of the Internet. The notion of the $k$-core is a natural generalization of the giant connected component in the ordinary percolation [@ajb00; @cah02; @cnsw00] (for another possible generalization, see clique percolation in Ref. [@dpv05]). Impressively, the giant connected component of an infinite network with a heavy-tailed degree distribution is robust against random damage of the net. The $k$-core percolation implies the emergence of a giant $k$-core below a threshold concentration of vertices or edges removed at random. In physics, the $k$-core percolation (bootstrap percolation) on the Bethe lattice was introduced in Ref. [@clr79] for describing some magnetic materials. Note that the $k\geq 3$-core percolation is an unusual, hybrid phase transition with a jump of the order parameter as at a first-order phase transition but also with strong critical fluctuations as at a continuous phase transition [@clr79; @slc04]. The $k$-core decomposition of a random graph was formulated as a mathematical problem in Refs. [@b84; @s83]. This attracted much attention of mathematicians [@psw96; @fr04], but actually only the criteria for the emergence of $k$-cores in basic random networks were found. In this Letter we derive exact equations describing the $k$-core organization of a randomly damaged uncorrelated network with an arbitrary degree distribution. This allows us to obtain the sizes and other structural characteristics of $k$-cores in a variety of damaged and undamaged random networks and find the nature of the $k$-core percolation in complex networks. We apply our general results to the classical random graphs and to scale-free networks, in particular, to empirical router-level Internet maps.
We find that not only the giant connected components in infinite networks with slowly decreasing degree distributions are resilient against random damage, as was known, but their entire $k$-core architectures are robust. *Basic equations.*—We consider an uncorrelated network—a maximally random graph with a given degree distribution $P(q)$—the so-called configuration model. We assume that a fraction $Q\equiv 1-p$ of the vertices in this network are removed at random. The $k$-core extracting procedure results in the structure of the network with a $k$-core depicted in Fig. \[sun\]. Taking into account the tree-like structure of the infinite sparse configuration model shows that the $k$-core coincides with the infinite $(k{-}1)$-ary subtree [@remark]. (The $m$-ary tree is a tree, where all vertices have branching at least $m$.) Let $R$ be the probability that a given end of an edge of a network is not the root of an infinite $(k{-}1)$-ary subtree. Then a vertex belongs to the $k$-core if at least $k$ of its neighbors are roots of infinite $(k{-}1)$-ary subtrees. So the probability that a vertex is in the $k$-core is $$M(k)=p\sum\limits_{q\geqslant k}P(q)\sum\limits_{n=k}^{q}C_{n}^{q}R^{q-n}(1-R)^{n} , \label{k-core}$$ where $C_{n}^{m}=m!/[(m-n)!\,n!]$. Note that for the ordinary percolation we must set $k=1$ in this equation. An end of an edge is not a root of an infinite $(k{-}1)$-ary subtree if at most $k{-}2$ of its children branches are roots of infinite $(k{-}1)$-ary subtrees. This leads to the following equation for $R$: $$R = 1-p+p\sum_{n=0}^{k-2} \left[\, \sum_{i=n}^\infty \frac{(i{+}1)P(i{+}1)}{z_{1}}\, C_n^i R^{i-n} (1{-}R)^n \right]. \label{R}$$ Let us explain this equation. (i) The first term, $1{-}p\equiv Q$, is the probability that the end of the edge is unoccupied.
(ii) $C_n^i R^{i-n} (1-R)^n$ is the probability that if a given end of the edge has $i$ children (i.e., other edges than the starting edge), then exactly $n$ of them are roots of infinite $(k{-}1)$-ary subtrees. $(i{+}1)P(i{+}1)/z_{1}$ is the probability that a randomly chosen edge leads to a vertex with branching $i$. $z_{1}=\sum\nolimits_{q}qP(q)$ is the mean number of the nearest neighbors of a vertex in the graph. Thus, in the square brackets, we present the probability that a given end of the edge has exactly $n$ edges, which are roots of infinite $(k-1)$-ary subtrees. (iii) Finally, we take into account that $n$ must be at most $k-2$. The sum $\sum_{n=0}^{k-2}$ in Eq. (\[R\]) may be rewritten as: $$\Phi _{k}(R) = \sum\limits_{n=0}^{k-2}\frac{(1-R)^{n}}{n!}\frac{d^{n}}{dR^{n}}G_{1}(R), \label{F1}$$ where $G_{1}(x)=z_{1}^{-1}\sum\nolimits_{q}P(q)qx^{q-1}=z_{1}^{-1}dG_{0}(x)/dx$, and $G_{0}(x)=\sum\nolimits_{q}P(q)x^{q}$ [@nsw01]. Then Eq. (\[R\]) takes the form: $$R=1-p+p\Phi _{k}(R). \label{R2}$$ In the case $p=1$, Eq. (\[R2\]) was recently obtained in [@fr04]. If Eq. (\[R2\]) has only the trivial solution $R=1$, there is no giant $k$-core. The emergence of a nontrivial solution corresponds to the birth of the giant $k$-core. It is the lowest nontrivial solution $R<1$ that describes the $k$-core. Let us define a function $$f_{k}(R)=[1-\Phi _{k}(R)]/(1-R) . \label{fk}$$ This function is positive in the range $R\in \lbrack 0,1)$ and, in networks with a finite mean number of the second neighbors of a vertex, $z_{2}=\sum_{q}q(q-1)P(q)$, it tends to zero in the limit $R\rightarrow 1$ as $f_{k}(R)\propto (1-R)^{k-2}$. In terms of the function $f_{k}(R)$, Eq. (\[R\]) is especially simple: $$pf_{k}(R)=1.
\label{f}$$ Depending on $P(q)$, with increasing $R$, $f_{k}(R)$ either (i) monotonically decreases from $f_{k}(0)<1$ to $f_{k}(1)=0$, or (ii) at first increases, then approaches a maximum at $R_{\max }\in (0,1)$, and finally tends to zero at $R\rightarrow 1$. Therefore Eq. (\[f\]) has a non-trivial solution $R<1$ if $$p\max_{R\in \lbrack 0,1)}f_{k}(R)\geqslant 1. \label{criterion}$$ This is the criterion for the emergence of the giant $k$-core in a randomly damaged uncorrelated network. The equality in Eq. (\[criterion\]) takes place at a critical concentration $p_{c}(k)$ when the line $y(R)=1/p_{c}(k)$ touches the maximum of $f_{k}(R)$. Therefore the threshold of the $k$-core percolation is determined by two equations: $$p_{c}(k)=1/f_{k}(R_{\max }),\ \ \ \ \ \ 0=f_{k}^{\prime }(R_{\max }). \label{cp1}$$ $R_{\max }$ is the value of the order parameter at the birth point of the $k$-core. At $p<p_{c}(k)$ there is only the trivial solution $R=1$. At $k=2$, Eq. (\[R2\]) describes the ordinary percolation in a random uncorrelated graph [@cah02; @cnsw00]. In this case, in infinite networks we have $R_{\max }\rightarrow 1$, and the criterion (\[criterion\]) is reduced to the standard condition for existence of the giant connected component: $pG_{1}^{\prime }(1)=pz_{2}/z_{1}\geqslant 1$. Let us find $R$ near the $k\geq 3$-core percolation transition in a network with a finite $z_{2}$. We examine Eq. (\[R2\]) for $R=R_{\max }+r$ and $p=p_{c}(k)+\epsilon $ with $\epsilon ,\left\vert r\right\vert \ll 1$. Note that at $k\geqslant 3$, $\Phi _{k}(R)$ is an analytical function in the range $R\in \lbrack 0,1)$. This means that the expansion of $\Phi _{k}(R+r)$ over $r$ contains no singular term at $R\in \lbrack 0,1)$. Substituting this expansion into Eq. (\[R2\]), in the leading order, we find $$R_{\max }-R\propto [p-p_{c}(k)]^{1/2}, \label{expR}$$ i.e., the combination of a jump and the square-root critical singularity.
The origin of this singularity is an intriguing problem of the hybrid phase transition. The structure of the $k$-core is essentially determined by its degree distribution, which we find to be $$P_{k}(q)=\frac{p}{M(k)}\sum\limits_{q^{\prime }\geqslant q}P(q^{\prime })C_{q}^{q^{\prime }}R^{q^{\prime }-q}(1-R)^{q}. \label{z1(k)}$$ The mean degree of the $k$-core vertices is $z_{1}(k)=\sum_{q\geq k}P_{k}(q)q$. The $k$-core of a given graph contains the $(k{+}1)$-core as a subgraph. Vertices which belong to the $k$-core, but do not belong to the $(k{+}1)$-core, form the $k$-shell of relative size $S(k)=M(k)-M(k+1)$. We apply our general results to two basic networks. *Erdős-Rényi (ER) graphs.*—These random graphs have the Poisson degree distribution $P(q)\!=\!z_{1}^{q}\exp(-z_{1})/q!$, where $z_{1}$ is the mean degree. In this case, $G_{0}(x)=G_{1}(x)=\exp [z_{1}(x\!-\!1)]$. In Eq. (\[R2\]), $\Phi _{k}(R)=\Gamma \lbrack k\!-\!1,z_{1}(1\!-\!R)]/\Gamma (k-1)$, where $\Gamma (n,x)$ is the incomplete gamma function. From Eq. (\[k-core\]) we get the size of the $k$-core: $$M(k)=p\{1-\Gamma \lbrack k,z_{1}(1-R)]/\Gamma (k)\} , \label{ER-core}$$ where $R$ is the solution of Eq. (\[R2\]). The degree distribution in the $k$-core is $P_{k}(q{\geq }k)=pz_{1}^{q}(1{-}R)^{q} e^{-z_1(1{-}R)}/[M(k)q!]$. Our numerical calculations revealed that at $p=1$, the index $k_h$ of the highest $k$-core increases almost linearly with $z_{1}$, namely, $k_{h}\approx 0.78z_{1}$ at $z_{1}\lesssim 500$. Furthermore, the mean degree $z_{1}(k)$ in the $k$-core weakly depends on $k$: $z_{1}(k)\approx z_{1}$. Fig. \[dER\] shows the dependence of the size of the $k$-cores, $M(k)$, on the concentration $Q=1-p$ of the vertices removed at random. Note that, counterintuitively, it is the highest $k$-core—the central, most interconnected part of a network—that is destroyed first. The inset of Fig. \[dER\] shows that with increasing damage $Q$, the mean degree $z_{1}(k)$ decreases.
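Equations (\[R2\]) and (\[ER-core\]) are easy to iterate numerically for the ER case. A minimal sketch (the value $z_1=10$ is an illustrative choice, not taken from the text), using SciPy's regularized upper incomplete gamma function for $\Phi_k$:

```python
from scipy.special import gammaincc  # regularized Q(a, x) = Gamma(a, x)/Gamma(a)

def kcore_size_ER(z1, k, p=1.0, tol=1e-10, max_iter=200000):
    """Relative size M(k) of the k-core of an ER graph with mean degree z1,
    a fraction 1 - p of vertices removed at random.  Solves
    R = 1 - p + p*Phi_k(R) with Phi_k(R) = Q(k-1, z1*(1-R)) by fixed-point
    iteration from R = 0, which converges to the lowest solution."""
    R = 0.0
    for _ in range(max_iter):
        R_new = 1.0 - p + p * gammaincc(k - 1, z1 * (1.0 - R))
        if abs(R_new - R) < tol:
            break
        R = R_new
    if 1.0 - R < 1e-8:   # only the trivial solution R = 1: no giant k-core
        return 0.0
    return p * (1.0 - gammaincc(k, z1 * (1.0 - R)))

# Undamaged graph with z1 = 10: the successively enclosed k-cores shrink
sizes = [kcore_size_ER(10.0, k) for k in range(3, 7)]

# Hybrid transition for k = 3: M(3) is sizeable at p = 0.4 but the k-core
# is already absent at p = 0.2 -- the order parameter appears with a jump
print(kcore_size_ER(10.0, 3, p=0.4), kcore_size_ER(10.0, 3, p=0.2))  # roughly 0.27 and 0.0
```

Scanning $p$ on a finer grid with this routine reproduces the discontinuous disappearance of the cores, starting from the highest one, discussed below.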
The $k$-cores disappear consecutively, starting from the highest core. The $k$-core structure of the undamaged ER graphs is displayed in Fig. \[kCores\]. *Scale-free networks.*—We consider uncorrelated networks with a degree distribution $P(q)\propto (q+c)^{-\gamma }$. Let us start with the case of $\gamma >3$, where $z_{2}$ is finite. It turns out that the existence of $k$-cores is determined by the complete form of the degree distribution including its low-degree region. It was proved in Ref. [@fr04] that there is no $k\geqslant 3$-core in a graph with the minimal degree $q_{0}=1$, $\gamma \geq 3$, and $c=0$. We find that the $k$-cores emerge as $c$ increases. The $k$-core structure of scale-free graphs is represented in Fig. \[kCores\]. The relative sizes of the giant $k$-cores in the scale-free networks are smaller than in the ER graphs. As $z_{2}$ is finite, the $k\geq 3$-core percolation at $\gamma >3$ is a hybrid phase transition. This is in contrast to the ordinary percolation in scale-free networks, where behavior is non-standard if $\gamma \leq 4$ [@cah02]. The case $2<\gamma \leqslant 3$ is realized in most important real-world networks. With $\gamma $ in this range, $z_{2}$ diverges if $N\rightarrow \infty $. In the leading order in $1-R\ll 1$, Eq. (\[fk\]) gives $f_{k}(R)\cong (q_{0}/k)^{\gamma -2}(1-R)^{-(3-\gamma )}$. From Eq. (\[f\]) we find the order parameter $R$. Substituting this solution into Eq. (\[k-core\]), in the leading order in $1-R$ we find that the size of the $k$-core decreases with increasing $k$: $$M(k)=p[q_{0}(1\!-\!R)/k]^{\gamma -1}=p^{2/(3-\gamma )}(q_{0}/k)^{(\gamma -1)/(3-\gamma )}\!. \label{k-core2}$$ The divergence of $f_{k}(R)$ at $R\rightarrow 1$ means that the percolation threshold $p_{c}(k)$ tends to zero as $N\rightarrow \infty $. The $k$-core percolation transition in this limit is of infinite order, similarly to the ordinary percolation [@cah02].
As $k_{h}(N\!\rightarrow \!\infty )\rightarrow \infty $, there is an infinite sequence of successively enclosed $k$-cores. One has to remove at random almost all vertices in order to destroy any of these cores. Eq. (\[z1(k)\]) allows us to find the degree distribution of $k$-cores in scale-free networks. For $\gamma >2$ and $k\gg 1$, $P_{k}(q\gg k)\approx (\gamma -1)k^{\gamma -1}q^{-\gamma }$. The mean degree $z_{1}(k)$ in the $k$-core grows linearly with $k$: $z_{1}(k)\approx kz_{1}/q_{0}$, in contrast to the Erdős-Rényi graphs. *Finite-size effect.*—The finiteness of the scale-free networks with $2<\gamma <3$ essentially determines their $k$-core organization. We introduce a size-dependent cutoff $q_{\text{cut}}(N)$ of the degree distribution. Here $q_{\text{cut}}(N)$ depends on details of a specific network. For example, for the configuration model without multiple connections, the dependence $q_{\text{cut}}(N) \sim \sqrt{N}$ is usually used if $2<\gamma <3$. It is this function that must be substituted into Eqs. (\[kh\]), (\[Mh\]), and (\[k-thr\]) below. A detailed analysis of Eq. (\[fk\]) shows that the cutoff dramatically changes the behavior of the function $f_{k}(R)$ near $R=1$. $f_{k}(R)$ has a maximum at $R_{\max }\cong 1-(3-\gamma )^{-1/(\gamma -2)}\,k/q_{\text{cut}}$ and tends to zero at $R\rightarrow 1$ instead of diverging. As a result, the $k$-core percolation again becomes a hybrid phase transition. The cutoff determines the highest $k$-core: $$k_{h}\cong p(\gamma -2)(3-\gamma )^{(3-\gamma )/(\gamma -2)}q_{\text{cut}}(q_{0}/q_{\text{cut}})^{\gamma -2}. \label{kh}$$ The sizes of the $k$-core at $q_{0}\,\ll \,k\,\ll \,k_{h}$ are given by Eq. (\[k-core2\]). The relative size of the highest $k$-core is $$M(k_{h})\cong p[(3-\gamma )^{-(\gamma -1)/(\gamma -2)}\!-\!1](q_{0}/q_{\text{cut}})^{\gamma -1}. \label{Mh}$$ Finally, the threshold of the $k$-core percolation is $$p_{c}(k)=1/f_{k}(R_{\max })\cong k/k_{h}.
\label{k-thr}$$ If $k\!\rightarrow \!k_{h}$, then $p_{c}(k)\rightarrow 1$, i.e., even minor random damage destroys the highest $k_{h}$-core. By using exact Eqs. (\[R\]) and (\[k-core\]), we calculated numerically $M(k)$ and $S(k)$ for a scale-free network with $\gamma =2.5$, see Fig. \[kCores\]. These curves agree with the asymptotic expressions (\[k-core2\]) and (\[Mh\]). *$k$-core organization of the router-level Internet.*—We consider the router-level Internet, which has lower degree-degree correlations than the Internet at the Autonomous Systems (AS) level. We substitute the empirical degree distribution of the router-level Internet as seen in skitter and iffinder measurements [@CAIDA] into our exact equations and compare our results with the direct $k$-core decomposition of this network. The calculated sizes of $k$-cores and $k$-shells are shown in Fig. \[kCores\]. The calculated dependence $S(k)$ \[Fig. \[kCores\](b), the IR curve\] is surprisingly similar to the dependence obtained by the direct $k$-core decomposition of, actually, a different network—the AS-level Internet—in Ref. [@k05]. On the other hand, one can see in Fig. \[kCores\] that the highest $k$-core with $k_{h}=10$ occupies about 2% of the network, while a direct $k$-core decomposition of the same router-level Internet map in Ref. [@aabv05] revealed $k$-cores up to $k_{h}=32$. This difference indicates the significance of degree–degree correlations, which we neglected. *Discussion and conclusions.*—It is important to indicate a quantity critically divergent at the $k$-core’s birth point. This is the mean size of clusters of $k$-core vertices that have exactly $k$ connections inside the $k$-core. One may show that it diverges as $-dM(k)/dp \sim (p-p_c)^{-1/2}$ and that the size distribution of these clusters is a power law at the critical point.
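The direct $k$-core decomposition referred to above is, in essence, the standard peeling algorithm; the following is a minimal sketch of it (our illustration, not the code used for the Internet maps):

```python
from collections import defaultdict

def core_numbers(edges):
    """Peeling: for k = 1, 2, ... repeatedly delete vertices whose remaining
    degree is below k. A vertex removed while peeling down to the k-core gets
    core number k - 1, the largest j such that it belongs to the j-core."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining, core, k = set(adj), {}, 0
    while remaining:
        k += 1
        queue = [v for v in remaining if len(adj[v]) < k]
        while queue:
            v = queue.pop()
            if v not in remaining:
                continue
            remaining.discard(v)
            core[v] = k - 1
            for w in adj[v]:
                adj[w].discard(v)
                if w in remaining and len(adj[w]) < k:
                    queue.append(w)
    return core

# a triangle with one pendant vertex: the triangle a-b-c is the 2-core,
# while the pendant vertex d is peeled away already at k = 2
core = core_numbers([("a", "b"), ("b", "c"), ("c", "a"), ("a", "d")])
```

Applied to an empirical edge list such as a router-level map, the returned core numbers immediately give the $k$-core and $k$-shell sizes $M(k)$ and $S(k)$.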
One should note that the $k$-core (or bootstrap) percolation is not related to the recently introduced $k$-clique percolation [@dpv05] despite the seemingly similar terms. The $k$-clique percolation is due to the overlapping of $k$-cliques—full subgraphs of $k$ vertices—by $k-1$ vertices. Therefore, the $k$-clique percolation is impossible in sparse networks with few loops, e.g., in the configuration model and in classical random graphs, considered here. In summary, we have developed the theory of $k$-core percolation in damaged uncorrelated networks. We have found that if the second moment of the degree distribution of a network is finite, the $k$-core transition is of a hybrid nature. In contrast, in the networks with infinite $z_{2}$, instead of the hybrid transition, we have observed an infinite-order transition, similar to the ordinary percolation in this situation. All $k$-cores in these networks are extremely robust against random damage. This indicates the remarkable robustness of the entire $k$-core architectures of infinite networks with $\gamma \leq 3$. Nonetheless, we have observed that the finite networks are less robust, and increasing failures successively destroy $k$-cores starting from the highest one. Our results can be applied to numerous cooperative models on networks: the formation of highly connected communities in social networks, the spread of diseases, and many others. This work was partially supported by projects POCTI: FAT/46241, MAT/46176, FIS/61665, and BIA-BCM/62662, and DYSONET. The authors thank D. Krioukov of CAIDA for information on the Internet maps. [99]{} M. Girvan and M.E.J. Newman, Proc. Natl. Acad. Sci. USA **99**, 7821 (2002). G. Palla, I. Derényi, I. Farkas, and T. Vicsek, Nature **435**, 814 (2005). B. Bollobás, in *Graph Theory and Combinatorics: Proc. Cambridge Combinatorial Conf. in honour of Paul Erdős* (B. Bollobás, ed.) (Academic Press, NY, 1984), p. 35. S.B. Seidman, Social Networks **5**, 269 (1983). J.I.
Alvarez-Hamelin, L. Dall'Asta, A. Barrat, and A. Vespignani, cs.NI/0504107; cs.NI/0511007. S. Kirkpatrick, Jellyfish and other interesting creatures of the Internet, http://www.cs.huji.ac.il/kirk/Jellyfish\_Dimes.ppt; S. Carmi, S. Havlin, S. Kirkpatrick, Y. Shavitt, and E. Shir, cond-mat/0601240. S. Wuchty and E. Almaas, BMC Evol. Biol. **5**, 24 (2005). S.L. Tauro, C. Palmer, G. Siganos, and M. Faloutsos, in [*Global Telecommunications Conference GLOBECOM ’01*]{} (IEEE, Piscataway, NJ, 2001), Vol. 3, p. 1667. R. Albert, H. Jeong, and A.-L. Barabási, Nature **406**, 378 (2000). R. Cohen, K. Erez, D. ben-Avraham, and S. Havlin, Phys. Rev. Lett. **85**, 4626 (2000); R. Cohen, D. ben-Avraham, and S. Havlin, Phys. Rev. E **66**, 036113 (2002). D.S. Callaway, M.E.J. Newman, S.H. Strogatz, and D.J. Watts, Phys. Rev. Lett. **85**, 5468 (2000). I. Derényi, G. Palla, and T. Vicsek, Phys. Rev. Lett. **94**, 160202 (2005). J. Chalupa, P.L. Leath and G.R. Reich, J. Phys. C **12**, L31 (1979). J.M. Schwarz, A.J. Liu, and L.Q. Chayes, cond-mat/0410595. B. Pittel, J. Spencer and N. Wormald, J. Combin. Theory B **67**, 111 (1996). D. Fernholz and V. Ramachandran, UTCS Technical Report TR04-13, 2004. Note that due to the tree-like structure of the infinite sparse configuration model, finite $k$-cores do not exist. M.E.J. Newman, S.H. Strogatz, and D.J. Watts, Phys. Rev. E **64**, 026118 (2001). CAIDA’s router-level topology measurements, http://www.caida.org/tools/measurements/skitter/ router\_topology/.
--- abstract: 'Lattice models are useful for understanding behaviors of interacting complex many-body systems. The lattice dimer model has been proposed to study the adsorption of diatomic molecules on a substrate. Here we analyze the partition function of the dimer model on a $2M \times 2N$ checkerboard lattice wrapped on a torus and derive the exact asymptotic expansion of the logarithm of the partition function. We find that the internal energy at the critical point is equal to zero. We also derive the exact finite-size corrections for the free energy, the internal energy, and the specific heat. Using the exact partition function and finite-size corrections for the dimer model on the finite checkerboard lattice, we obtain finite-size scaling functions for the free energy, the internal energy, and the specific heat of the dimer model. We investigate the properties of the specific heat near the critical point and find that the specific-heat pseudocritical point coincides with the critical point of the thermodynamic limit, which means that the specific-heat shift exponent $\lambda$ is equal to $\infty$. We also consider the limit $N \to \infty$, for which we obtain the expansion of the free energy for the dimer model on the infinitely long cylinder.' author: - 'Nickolay Sh. Izmailian' - 'Ming-Chya Wu' - 'Chin-Kun Hu' date: 'October 14, 2016' title: 'Finite-size corrections and scaling for the dimer model on the checkerboard lattice' --- Introduction ============ Lattice models are useful for understanding behaviors of interacting complex many-body systems.
For example, the Ising model [@onsager; @95jpa3dIsing; @96jpaIsing] can be used to understand the critical behavior of gas-liquid systems [@review; @2012jcp], the lattice model of interacting self-avoiding walks [@Orr47; @r1; @13eplISAW; @16cpc] can be used to understand the collapse and the freezing transitions of the homopolymer, and a charged H-P model [@10prlLiMS; @13jcpLiMS] can be used to understand aggregation of proteins. The lattice dimer model [@Fowler] has been proposed to study the adsorption of diatomic molecules on a substrate. In this paper, we will use analytic equations to study finite-size corrections and scaling of the dimer model [@Fowler] on the checkerboard lattice. Finite-size corrections and scaling for critical lattice systems [@Fowler; @onsager; @kasteleyn; @fisher1961], initiated more than four decades ago by Ferdinand, Fisher, and Barber [@ferdinand; @FerdFisher; @barber], have attracted much attention in recent decades (see Refs. [@privman; @hu] for reviews). Finite-size effects become of practical interest due to the recent progress in fine processing technologies, which has enabled the fabrication of nanoscale materials with novel shapes [@nano1; @nano2; @nano3]. In the quest to improve our understanding of realistic systems of finite extent, exactly solvable two-dimensional models play a key role in statistical mechanics, as they have long served as a testing ground to explore the general ideas of corrections and scaling under controlled conditions. Very few of them have been solved exactly, the dimer model [@Fowler; @kasteleyn; @fisher1961] being one of the most prominent examples. The classical dimer model was introduced in 1937 by Fowler and Rushbrooke as a model for the adsorption of diatomic molecules on a substrate [@Fowler]. Later it became a general problem studied in various scientific communities with a large spectrum of applications.
The dimer model has regained interest because of its quantum version, the so-called quantum dimer model, originally introduced by Rokhsar and Kivelson [@rokhsar]. Besides, a recent connection between dimer models and D-brane gauge theories has been discovered [@brane], providing a very powerful computational tool. From the mathematical point of view, the dimer model is extremely simple to define. We take a finite graph ${\mathcal L}$ and consider all arrangements of dimers (dominoes) such that every site of ${\mathcal L}$ is covered by exactly one dimer. This is the so-called close-packed dimer model. Here we focus on the dimer model on a checkerboard lattice (see Fig. \[fig\_1\]). The checkerboard lattice is a unique two-dimensional (2D) system of great current interest, a set-up which provides a tool to study the evolution of physical properties as the system transitions between different geometries. The checkerboard lattice is a simple rectangular lattice with anisotropic dimer weights $x_1, x_2, y_1$ and $y_2$. Each weight $a$ is simply the Boltzmann factor $e^{-E_a/k T}$ for a dimer on a bond of type $a$ with energy $E_a$. When one of the weights $x_1, x_2, y_1$, or $y_2$ on the checkerboard lattice is equal to zero, the partition function reduces to that for the dimer model on the one-dimensional strip. The dimer model on the checkerboard lattice was first introduced by Kasteleyn [@kasteleyn1], who showed that the model exhibits a phase transition. The exact expression for the partition function of the dimer model on finite $2M \times 2N$ checkerboard lattices with periodic boundary conditions has been obtained in Ref. [@ihk2015]. ![(Color online) The unit cell for the dimer model on the checkerboard lattice.[]{data-label="fig_1"}](fig_1.eps){width="32.00000%"} In the present paper, we are going to study the finite-size effects of the dimer model on the finite checkerboard lattice of Fig. \[fig\_1\].
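To make the close-packed constraint concrete, here is a brute-force enumeration of dimer coverings of a small $L_x\times L_y$ torus with all weights set to one (an illustrative sketch of ours; parallel bonds produced by a periodic direction of length $2$ are kept with their multiplicity):

```python
def count_coverings(Lx, Ly):
    """Number of close-packed dimer coverings of an Lx x Ly square lattice
    wrapped on a torus, with unit weights. Edges are stored as a list, so
    the doubled bonds of a length-2 periodic direction count twice."""
    sites = [(x, y) for x in range(Lx) for y in range(Ly)]
    edges = []
    for x, y in sites:
        edges.append(((x, y), ((x + 1) % Lx, y)))   # horizontal bond
        edges.append(((x, y), (x, (y + 1) % Ly)))   # vertical bond

    def count(uncovered):
        if not uncovered:
            return 1
        v = min(uncovered)   # match the first free site, then recurse
        total = 0
        for a, b in edges:
            if a == v and b != v and b in uncovered:
                total += count(uncovered - {a, b})
            elif b == v and a != v and a in uncovered:
                total += count(uncovered - {a, b})
        return total

    return count(frozenset(sites))
```

For the $2\times 2$ torus this gives $8$ coverings and for the $4\times 4$ torus $272$, counts that can be compared against the closed-form torus partition function quoted below.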
The detailed study of the finite-size effects for the free energy and specific heat of the dimer model began with the work of Ferdinand [@ferdinand] a few years after the exact solution, and has continued in a long series of articles using analytical [@blote; @nagle; @itzycson; @izmailian2006; @izmailian2007; @wu2011; @izmailian2014; @izmkenna; @ipph; @ioh03; @allegra; @ruelle] and numerical methods [@kong] for various geometries and boundary conditions. In particular, Ivashkevich, Izmailian, and Hu [@Ivasho] proposed a systematic method to compute finite-size corrections to the partition functions and their derivatives of free models on the torus, including the Ising model, the dimer model, and the Gaussian model. Their approach is based on relations between the terms of the asymptotic expansion and the so-called Kronecker’s double series, which are directly related to the elliptic $\theta$ functions. We will apply the algorithm of Ivashkevich, Izmailian, and Hu [@Ivasho] to derive the exact asymptotic expansion of the logarithm of the partition function. We will also derive the exact finite-size corrections for the free energy $F$, the internal energy $U$, and the specific heat $C(t)$. Using exact partition functions and finite-size corrections for the dimer model on the finite checkerboard lattice, we obtain finite-size scaling functions for the free energy, the internal energy, and the specific heat. We are particularly interested in the finite-size scaling behavior of the specific-heat pseudocritical point. The pseudocritical point $t_{\mathrm{pseudo}}$ is the value of the temperature at which the specific heat has its maximum for a finite $2M \times 2N$ lattice. One can determine this quantity as the point where the derivative of $C_{2M,2N}(t)$ vanishes.
Finite-size properties of the specific heat $C_{2M,2N}(t)$ for the dimer model are characterized by (i) the location of its peak, $t_{\mathrm{pseudo}}$, (ii) its height $C(t_{\mathrm{pseudo}})$, and (iii) its value at the infinite-volume critical point $C(t_c)$. The peak position $t_{\mathrm{pseudo}}$ is a pseudocritical point which typically approaches $t_c$ as $t_{\mathrm{pseudo}}-t_c \sim L^{-\lambda}$ when the characteristic size of the system $L=\sqrt{4 M N}$ tends to infinity, where $\lambda$ is the shift exponent. Usually the shift exponent $\lambda$ coincides with $1/\nu$, where $\nu$ is the correlation length critical exponent, but this is not always the case and it is not a consequence of the finite-size scaling (FSS) [@barber1]. In a classic paper, Ferdinand and Fisher [@FerdFisher] determined the behavior of the specific-heat pseudocritical point. They found that the shift exponent for the specific heat is $\lambda=1=1/\nu$, except for the special case of an infinitely long torus, in which case the pseudocritical specific-heat scaling behavior was found to be of the form $L^2 \ln L$ [@onsager]. Thus the actual value of the shift exponent depends on the lattice topology (see Ref. [@kenna] and references therein). Quite recently Izmailian and Kenna [@izmkenna] have found that the shift exponent can also depend on the parity of the number of lattice sites $N$ along a given lattice axis. They found for the dimer model on the triangular lattice that the shift exponent for the specific heat is equal to $1$ ($\lambda = 1$) for odd $N$, while for even $N$ the shift exponent is equal to infinity ($\lambda = \infty$). In the former case, therefore, the finite-size specific-heat pseudocritical point is size-dependent, while in the latter case it coincides with the critical point of the thermodynamic limit. A question we wish to address here is the corresponding status of the shift exponent in the dimer model on the checkerboard lattice.
Our objective in this paper is to study the finite-size properties of the dimer model on the plane checkerboard lattice using the same techniques developed in Refs. [@ioh03] and [@Ivasho]. The paper is organized as follows. In Sec. \[partition-function\] we introduce the dimer model on the checkerboard lattice with periodic boundary conditions. In Sec. \[asymptotic-expansion\] we derive the exact asymptotic expansions of the logarithm of the partition functions and their derivatives and write down the expansion coefficients up to second order. In Sec. \[dimer-finitite-T\] we numerically investigate the free energy, the internal energy, and the specific heat as functions of the temperature-like parameter $t$, and analyze the scaling functions of the free energy, the internal energy, and the specific heat. We also investigate the properties of the specific heat near the critical point and find that the specific-heat shift exponent $\lambda$ is equal to infinity, which means that the specific-heat pseudocritical point coincides with the critical point of the thermodynamic limit. In Sec. \[inf\_long\] we consider the limit $N \to \infty$, for which we obtain the expansion of the free energy for the dimer model on the infinitely long cylinder. From a finite-size analysis we find that the dimer model on a checkerboard lattice can be described by a conformal field theory having a central charge $c = - 2$. Our main results are summarized and discussed in Sec. \[summary-discussion\]. Partition function ================== In the present paper, we consider the dimer model on a $2M \times 2N$ checkerboard lattice, as shown in Fig. \[fig\_1\], under periodic boundary conditions. The partition function can be written as $$Z=\sum x_1^{N_{x_1}}x_2^{N_{x_2}}y_1^{N_{y_1}}y_2^{N_{y_2}}, \label{partition}$$ where $N_a$ is the number of dimers of type $a$ and the summation is over all possible dimer configurations on the lattice.
An explicit expression for the partition function of the dimer model on the $2M \times 2N$ checkerboard lattice under periodic boundary conditions is given by [@ihk2015] $$\begin{aligned} Z_{2M,2N}(t)&=&\frac{(x_1x_2)^{M N}}{2}\left\{ -Z_{0,0}^2(t)+Z_{\frac{1}{2},\frac{1}{2}}^2(t)+ Z_{\frac{1}{2},0}^2(t)+Z_{0,\frac{1}{2}}^2(t)\right\}, \label{stat}\\ Z_{\alpha,\beta}^2(t)&=&\prod_{m=0}^{M-1}\prod_{n=0}^{N-1} 4 \left\{t^2 +z^2 \sin^2 \left(\pi \frac{m+\beta}{M}\right)+ \sin^2 \left( \pi\frac{n+\alpha}{N}\right) \right\} , \label{twist}\end{aligned}$$ where $$t^2 =\frac{(x_1-x_2)^2+(y_1-y_2)^2}{4x_1 x_2}, \qquad \qquad z^2 = \frac{y_1 y_2}{x_1 x_2}.$$ Without loss of generality we can set $x_{1}x_{2}=1$ and $y_{1}y_{2}=1$, such that $z=1$. The dimer model on the checkerboard lattice has a singularity at $t=0$ ($x_1=x_2, y_1=y_2$). With the help of the identity $$\begin{aligned} 4\left|~\!{\sinh}\left(M\omega+i\pi\beta\right)\right|^2 =4\left[\,{\sinh}^2 M\omega + \sin^2\pi\beta\,\right]=\prod_{m=0}^{M-1}4\textstyle{ \left\{~\!{\sinh}^2\omega + \sin^2\left[\frac{\pi}{M}(m+\beta)\right]\right\}}, \label{ident}\end{aligned}$$ $Z_{\alpha,\beta}(t)$ can be transformed into a simpler form $$\begin{aligned} Z_{\alpha,\beta}(t)=\prod_{n=0}^{N-1} 2\left| \textstyle{~\!{\sinh}\left\{M\omega_t\!\left(\pi\frac{n+\alpha}{N}\right)+i\pi \beta \right\} }\right|, \label{Zab}\end{aligned}$$ where the lattice dispersion relation has appeared: $$\begin{aligned} \omega_t(k)={\rm arcsinh}\sqrt{\sin^2 k+t^2}. \label{SpectralFunction}\end{aligned}$$ The Taylor expansion of the lattice dispersion relation of Eq. (\[SpectralFunction\]) at the critical point is given by $$\omega_0(k)=k\left(\lambda_0+\sum_{p=1}^{\infty} \frac{\lambda_{2p}}{(2p)!}\;k^{2p}\right), \label{Spectral}$$ where $\lambda_0=1$, $\lambda_2=-2/3$, $\lambda_4=4$, etc.
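Eqs. (\[stat\])-(\[twist\]) are straightforward to evaluate numerically. A minimal sketch of ours, with $x_1x_2=y_1y_2=1$ so that $z=1$ and $t=0$ corresponds to all dimer weights equal to one:

```python
import math

def Z(M, N, t):
    """Z_{2M,2N}(t) from Eqs. (stat)-(twist) with x1*x2 = y1*y2 = 1 (z = 1).
    At t = 0 the (alpha, beta) = (0, 0) product vanishes through its
    (m, n) = (0, 0) factor."""
    def Z2(alpha, beta):
        p = 1.0
        for m in range(M):
            for n in range(N):
                p *= 4.0 * (t * t + math.sin(math.pi * (m + beta) / M) ** 2
                            + math.sin(math.pi * (n + alpha) / N) ** 2)
        return p
    return 0.5 * (-Z2(0.0, 0.0) + Z2(0.5, 0.5) + Z2(0.5, 0.0) + Z2(0.0, 0.5))
```

At $t=0$ this reproduces the direct dimer counts of the smallest tori: $Z=8$ for the $2\times 2$ torus ($M=N=1$) and $Z=272$ for the $4\times 4$ torus ($M=N=2$).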
We are interested in computing the asymptotic expansions for large $M$, $N$ with fixed aspect ratio $\rho=M/N$ of the free energy $F_{2M,2N}(t)$, the internal energy $U_{2M,2N}(t)$, and the specific heat $C_{2M,2N}(t)$. These quantities are defined as follows $$\begin{aligned} F_{2M,2N}(t) &=& \frac{1}{4M N} \ln Z_{2M,2N}(t) \label{def_free_energy}, \\ U_{2M,2N}(t) &=& \frac{\partial}{\partial t} F_{2M,2N}(t) \label{def_energy},\\ C_{2M,2N}(t) &=& \frac{\partial^2}{\partial t^2}F_{2M,2N}(t). \label{def_specific_heat}\end{aligned}$$ In addition to $F_{2M,2N}(t)$, $U_{2M,2N}(t)$, and $C_{2M,2N}(t)$, we will also consider higher derivatives of the free energy at criticality $$\begin{aligned} F^{(k)}_c = \left. \frac{\partial^k} {\partial t^k} F_{2M,2N}(t) \right|_{t=0}, \label{def_der_CH}\end{aligned}$$ with $k=3,4$. Asymptotic expansion of the free energy and its derivatives {#asymptotic-expansion} =========================================================== Asymptotic expansion of the free energy {#subsec:2a} --------------------------------------- \[subsec:2a0\] The exact asymptotic expansion of the logarithm of the partition function of the dimer model on the checkerboard lattice at the critical point $t=t_c=0$ can be obtained along the same lines as in Ref. [@Ivasho]. We do not repeat the calculations here and only give the final result: $$\ln Z_{2M,2N}(0)=\ln\frac{1}{2}\left\{Z_{\frac{1}{2},\frac{1}{2}}^2(0)+ Z_{\frac{1}{2},0}^2(0)+Z_{0,\frac{1}{2}}^2(0)\right\}. \label{asymptotic}$$ Here we use the fact that $Z_{0,0}^2(t)$ vanishes at the critical point.
The exact asymptotic expansion of $\ln Z_{\alpha,\beta}(0)$ for $(\alpha,\beta)= (0,\frac{1}{2}), (\frac{1}{2},0),(\frac{1}{2},\frac{1}{2})$ is given by [@Ivasho] $$\begin{aligned} \ln Z_{\alpha,\beta}(0)&=&\frac{S}{\pi}\int_{0}^{\pi}\!\!\omega_0(x)~\!{\rm d}x + \ln\left|\frac{\theta_{\alpha,\beta}(i\lambda \rho)}{\eta(i\lambda \rho)}\right| -2\pi\rho\sum_{p=1}^{\infty} \left(\frac{\pi^2\rho}{S}\right)^{p}\frac{\Lambda_{2p}}{(2p)!}\, \frac{{\tt Re}\;{\rm K}_{2p+2}^{\alpha,\beta}(i\lambda \rho)}{2p+2}. \label{ExpansionOflnZab}\end{aligned}$$ Here $\theta_{\alpha,\beta}$ is the elliptic theta function ($\theta_{0,\frac{1}{2}}(i\lambda \rho)=\theta_2(i\lambda \rho)\equiv \theta_2, \theta_{\frac{1}{2},\frac{1}{2}}(i\lambda \rho)=\theta_3(i\lambda \rho)\equiv \theta_3, \theta_{\frac{1}{2},0}(i\lambda \rho)=\theta_4(i\lambda \rho)\equiv \theta_4$), $\eta(i\lambda \rho)\equiv \eta$ is the Dedekind $\eta$-function, ${\rm K}_{2p+2}^{\alpha,\beta}(i\lambda \rho)$ is Kronecker’s double series (see Appendix D of Ref. [@Ivasho]), which can be expressed through the elliptic theta function (see Appendix F of Ref. [@Ivasho]), and the $\Lambda_{2p}$ are differential operators, which can be expressed via the coefficients $\lambda_{2p}$ of the expansion of the lattice dispersion relation at the critical point as $$\begin{aligned} {\Lambda}_{2}&=&\lambda_2, \nonumber\\ {\Lambda}_{4}&=&\lambda_4+3\lambda_2^2\,\frac{\partial}{\partial\lambda}, \nonumber \\ {\Lambda}_{6}&=&\lambda_6+15\lambda_4\lambda_2\,\frac{\partial}{\partial\lambda} +15\lambda_2^3\,\frac{\partial^2}{\partial\lambda^2}, \nonumber \\ &\vdots& \nonumber\end{aligned}$$ Now using Eqs. (\[asymptotic\]) and (\[ExpansionOflnZab\]) it is easy to write down all terms in the exact asymptotic expansion of the logarithm of the partition function of the dimer model.
Thus we find that the exact asymptotic expansion of the free energy at the critical point $F_c=F_{2M,2N}(0)$ can be written as $$\begin{aligned} F_c=F_{2M,2N}(0)=f_{\mathrm{bulk}} +\sum_{p=1}^\infty \frac{f_{p}(\rho)}{ S^{p}}, \label{expansion}\end{aligned}$$ where $S = 4 M N$. The expansion coefficients are: $$\begin{aligned} f_{\mathrm{bulk}}&=&\frac{G}{\pi}=0.2915607\dots, \label{fbulk}\\ f_1(\rho) &=& \ln\frac{\theta_2^2+\theta_3^2+\theta_4^2}{2\eta^2}, \label{fex1}\\ f_2(\rho)&=&\frac{2\pi^3\rho^2}{45}\frac{\frac{7}{8}(\theta_2^{10}+\theta_3^{10}+\theta_4^{10})+\theta_2^2\theta_3^2 \theta_4^2(\theta_2^2 \theta_4^2- \theta_3^2\theta_2^2-\theta_3^2\theta_4^2)}{\theta_2^2+\theta_3^2+\theta_4^2}, \label{fex2}\\ & \vdots & \nonumber\end{aligned}$$ where $G=0.915965\dots$ is the Catalan constant, and $$2\eta^3=\theta_2\theta_3\theta_4. \nonumber$$ For the case when the aspect ratio $\rho$ is equal to $1$, the coefficients $f_1(\rho)$ and $f_2(\rho)$ are given by $$f_1(\rho=1) = 0.881374\dots, \qquad f_2(\rho=1) = 0.805761\dots, \label{aspectratio}$$ which match very well with our numerical data (see Eq. (\[Fc\])). The values of $f_1$ and $f_2$ as functions of the aspect ratio $\rho$ are shown in Fig. \[fig\_2\]. ![image](fig_2a_f_f1.eps){width="36.00000%"} ![image](fig_2b_f_f2.eps){width="35.00000%"} Asymptotic expansion of the internal energy {#subsec:2b} ------------------------------------------- Now we will deal with the internal energy. The internal energy at the critical point can be computed directly from Eq. (\[def\_energy\]): $$\begin{aligned} U_c = \frac{2}{S} \frac{-Z_{0,0}^\prime Z_{0,0}+Z_{0,1/2}^\prime Z_{0,1/2}+Z_{1/2,0}^\prime Z_{1/2,0}+Z_{1/2,1/2}^\prime Z_{1/2,1/2}}{-Z_{0,0}^2+Z_{0,1/2}^2+Z_{1/2,0}^2+Z_{1/2,1/2}^2}, \label{critical_energy}\end{aligned}$$ Here $Z_{\alpha,\beta}^{\prime}=\left. \frac{d Z_{\alpha,\beta}(t)}{d t} \right|_{t=0}$ is the first derivative of $Z_{\alpha,\beta}(t)$ with respect to $t$ at criticality. 
In what follows we use the following notation $$\left. Z_{\alpha,\beta}(t)\right|_{t=0}=Z_{\alpha,\beta}, \left. Z_{\alpha,\beta}(t)^{\prime}\right|_{t=0}=Z_{\alpha,\beta}^{\prime}, \left. Z_{\alpha,\beta}(t)^{\prime \prime}\right|_{t=0}=Z_{\alpha,\beta}^{\prime \prime}, \left. Z_{\alpha,\beta}(t)^{\prime \prime \prime}\right|_{t=0}=Z_{\alpha,\beta}^{\prime \prime \prime}, \left. Z_{\alpha,\beta}(t)^{(4)}\right|_{t=0}=Z_{\alpha,\beta}^{(4)}.$$ Since $$Z_{0,0}=Z_{0,1/2}^\prime=Z_{1/2,0}^\prime=Z_{1/2,1/2}^\prime=0,$$ the internal energy at the critical point is equal to zero $$U_c=0.$$ Asymptotic expansion of the specific heat {#subsec:2c} ----------------------------------------- The specific heat at criticality is given by the following formula $$\begin{aligned} C_c &=& \frac{2}{S} \frac{-Z_{0,0}^{\prime^2}+Z_{0,1/2}Z_{0,1/2}^{\prime \prime}+Z_{1/2,0}Z_{1/2,0}^{\prime \prime}+Z_{1/2,1/2}Z_{1/2,1/2}^{\prime \prime}} {Z_{0,1/2}^2+Z_{1/2,0}^2+Z_{1/2,1/2}^2}. \label{def_CH}\end{aligned}$$ Following the same lines as in Ref. [@Ivasho], we have found that the exact asymptotic expansion of the specific heat can be written in the following form $$\begin{aligned} C_c &=& \frac{1}{2\pi}\ln{S}+ c_{b}+\sum_{p=1}^{\infty} \frac{c_{p}}{S^{p}}+ \cdots \nonumber \\ &=& \frac{1}{2\pi}\ln{S}+c_{b}+ \frac{c_{1}}{S}+\cdots .
\label{heatexpansion}\end{aligned}$$ where $$\begin{aligned} c_{b} &=& \frac{1}{\pi} \left(C_E-\frac{1}{2}\ln{\rho}-\ln{\pi}+\frac{3}{2}\ln{2}\right) - \frac{\rho}{2} \frac{ {\theta}_2^2{\theta}_3^2{\theta}_4^2}{{\theta}_2^2+{\theta}_3^2+{\theta}_4^2} - \frac{2}{\pi}\frac{\sum_{i=2}^4{\theta}_i^2 \ln{{\theta}_i}} {{\theta}_2^2+{\theta}_3^2+{\theta}_4^2}, \label{cbulk} \\ c_1 &=& \frac{\pi^2 \rho^2}{6}~\frac{{\theta}_2^2{\theta}_3^2{\theta}_4^2}{({\theta}_2^2+{\theta}_3^2+{\theta}_4^2)^2} \left[{\theta}_4^2({\theta}_2^4+{\theta}_3^4)\ln\frac{{\theta}_2}{{\theta}_3}+{\theta}_3^2({\theta}_2^4 -{\theta}_4^4)\ln\frac{{\theta}_4}{{\theta}_2}+{\theta}_2^2({\theta}_3^4+{\theta}_4^4)\ln\frac{{\theta}_4}{{\theta}_3}\right] \nonumber \\ &+& \frac{\pi^2 \rho^2}{18}~ \frac{{\theta}_3^4 {\theta}_4^4 (2{\theta}_2^2 -{\theta}_3^2-{\theta}_4^2)}{{\theta}_2^2+{\theta}_3^2+{\theta}_4^2} +\frac{\pi^3 \rho^3}{24} \frac{{\theta}_2^2 {\theta}_3^2 {\theta}_4^2 ({\theta}_2^{10}+{\theta}_3^{10}+{\theta}_4^{10})} {({\theta}_2^2+{\theta}_3^2+{\theta}_4^2)^2} \nonumber \\ &+& \frac{\pi \rho}{18}~ \frac{(\theta_2^2-\theta_4^2)({\theta}_3^4-\theta_2^2{\theta}_4^2+ {\theta}_2^2{\theta}_3^2+{\theta}_3^2{\theta}_4^2)}{{\theta}_2^2+{\theta}_3^2+{\theta}_4^2} \left(1+4 ~\rho\;\frac{\partial}{\partial \rho} \ln{\theta_2}\right). \label{c1}\end{aligned}$$ Here $C_{E}=0.5772156649\dots$ is the Euler constant and $$\frac{\partial}{\partial \rho} \ln{\theta_2}=-\frac{1}{2}\theta_3^2 E,$$ where $E$ is the elliptic integral of the second kind. Note that the $c_b$ and $c_1$ are functions of the aspect ratio $\rho$. For the case when the aspect ratio $\rho$ is equal to $1$ the coefficients $c_b$ and $c_1$ are given by $$c_{b} (\rho=1) = 0.0178829\dots, \qquad c_{1} (\rho=1)=0.240428\dots, \label{aspectratio1}$$ which match very well with our numerical data (see Eq. (\[cmax\_scaling\])). The values of $c_b$ and $c_1$ as functions of $\rho$ are shown in Fig. \[fig\_3\]. 
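The closed-form coefficients can be cross-checked numerically. The sketch below (ours, assuming the mpmath library) evaluates $f_1(\rho)$ of Eq. (\[fex1\]) and $c_b(\rho)$ of Eq. (\[cbulk\]) at $\rho=1$, with nome $q=e^{-\pi\rho}$ for the modular parameter $i\rho$ and the identity $2\eta^3=\theta_2\theta_3\theta_4$ for the Dedekind $\eta$-function:

```python
from mpmath import mp, jtheta, log, exp, pi, euler, mpf

mp.dps = 30
rho = mpf(1)
q = exp(-pi * rho)                            # nome for modular parameter i*rho
t2, t3, t4 = (jtheta(n, 0, q) for n in (2, 3, 4))
s = t2**2 + t3**2 + t4**2
eta = (t2 * t3 * t4 / 2) ** (mpf(1) / 3)      # from 2*eta^3 = theta2*theta3*theta4

f1 = log(s / (2 * eta**2))                                       # Eq. (fex1)
cb = ((euler - log(rho) / 2 - log(pi) + 3 * log(2) / 2) / pi     # Eq. (cbulk)
      - (rho / 2) * (t2 * t3 * t4)**2 / s
      - (2 / pi) * (t2**2 * log(t2) + t3**2 * log(t3) + t4**2 * log(t4)) / s)
```

Both values reproduce $f_1(\rho=1)=0.881374\dots$ and $c_b(\rho=1)=0.0178829\dots$ quoted above.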
![image](fig_3a_c_cb.eps){width="36.00000%"} ![image](fig_3b_c_c1.eps){width="35.00000%"} Asymptotic expansion of the higher derivatives of the free energy {#higher} ----------------------------------------------------------------- Using the fact that $$Z_{0,0}=Z_{0,1/2}^\prime=Z_{1/2,0}^\prime=Z_{1/2,1/2}^\prime=Z_{0,0}^{\prime \prime}=Z_{0,1/2}^{\prime \prime\prime}=Z_{1/2,0}^{\prime\prime \prime}=Z_{1/2,1/2}^{\prime\prime \prime}=0,$$ it is easy to show that the third derivative of the logarithm of the partition function at criticality, $F^{(3)}_c$, is equal to zero $$\begin{aligned} F^{(3)}_c &=&0. \end{aligned}$$ Let us now consider the fourth derivative of the logarithm of the partition function at criticality, $F^{(4)}_c$, which can be written as follows: $$\begin{aligned} F^{(4)}_c &=& -3 S\; C_c^2 +\frac{6}{S}\frac{Z_{0,1/2}^{\prime \prime^2}+Z_{1/2,0}^{\prime \prime^2}+Z_{1/2,1/2}^{\prime \prime^2}} {Z_{0,1/2}^2+Z_{1/2,0}^2+Z_{1/2,1/2}^2}-\frac{8}{S}\frac{Z_{0,0}^{\prime} Z_{0,0}^{\prime \prime \prime} } {Z_{0,1/2}^2+Z_{1/2,0}^2+Z_{1/2,1/2}^2} \nonumber \\ &+& \frac{2}{S}\frac{Z_{0,1/2} Z_{0,1/2}^{(4)}+Z_{1/2,0}Z_{1/2,0}^{(4)}+Z_{1/2,1/2} Z_{1/2,1/2}^{(4)}} {Z_{0,1/2}^2+Z_{1/2,0}^2+Z_{1/2,1/2}^2}.
\label{def_F4critchislo}\end{aligned}$$ We have found that the exact asymptotic expansion can be written in the following form $$\begin{aligned} F_c^{(4)} &=& g S -\frac{3}{2\pi} \ln{S}+g_0 +\sum_{p=1}^{\infty} \frac{g_{p}}{S^{p}} \nonumber \\ &=& g S -\frac{3}{2\pi}\ln{S} +g_{0}+\frac{g_{1}}{S}+\cdots , \label{freeenergy4expansion}\end{aligned}$$ where $$\begin{aligned} g(\rho) &=& \frac{12}{\pi^2}\;\frac{ {\theta}_3^2{\theta}_4^2\left(\ln{\frac{{\theta}_3}{{\theta}_4}}\right)^2 +{\theta}_2^2{\theta}_4^2\left(\ln{\frac{{\theta}_2}{{\theta}_4}}\right)^2 +{\theta}_2^2{\theta}_3^2\left(\ln{\frac{{\theta}_2}{{\theta}_3}}\right)^2} {({\theta}_2^2+{\theta}_3^2+{\theta}_4^2)^2} \nonumber \\ &-& \frac{3\rho}{4} \frac{ {\theta}_2^2{\theta}_3^2{\theta}_4^2}{{\theta}_2^2+{\theta}_3^2+{\theta}_4^2} \left[ \rho \frac{ {\theta}_2^2{\theta}_3^2{\theta}_4^2}{{\theta}_2^2+{\theta}_3^2+{\theta}_4^2}+\frac{8}{\pi} \left(\frac{\sum_{i=2}^4{\theta}_i^2 \ln{{\theta}_i}} {{\theta}_2^2+{\theta}_3^2+{\theta}_4^2}-\ln{2 \eta}\right)\right] \nonumber \\ &+& \frac{3}{16\pi^3 \rho}\;\frac{{\theta}_2^2 \left(\rho\frac{\partial}{\partial \rho }-1\right)R_4^{0,1/2}(\rho)+{\theta}_3^2 \left(\rho\frac{\partial}{\partial \rho }-1\right)R_4^{1/2,1/2}(\rho)+{\theta}_4^2 \left(\rho\frac{\partial}{\partial \rho }-1\right)R_4^{1/2,0}(\rho)}{{\theta}_2^2+{\theta}_3^2+{\theta}_4^2}. 
\label{sm3}\end{aligned}$$ $R_4^{\alpha,\beta}$ is given by $$\begin{aligned} R_4^{\alpha,\beta}(\rho) = -\psi^{\prime \prime}(\alpha)-\psi^{\prime \prime}(1-\alpha)+4\sum_{n=0}^{\infty}\sum_{m=1}^{\infty} \left\{\frac{e^{-2\pi m(\rho(n+\alpha)+i \beta)}}{(n+\alpha)^3}+(\alpha \to 1-\alpha)\right\},\end{aligned}$$ where $\psi^{\prime \prime}(x)$ is the second derivative of the digamma function $\psi(x)$ with respect to $x$, $$\psi^{\prime\prime}(1) = -2\zeta(3), \qquad \psi^{\prime\prime}(1/2)=-14\zeta(3).$$ Here $\zeta(n)$ is the zeta function $$\zeta(n) = \sum_{k=1}^{\infty}\frac{1}{k^n},$$ and for small $x$ $$\begin{aligned} \psi(x) &=& -C_E-\frac{1}{x} +x\sum_{k=1}^{\infty}\frac{1}{k(k+x)}, \nonumber \\ \psi^{\prime \prime}(x)&=&-\frac{2}{x^3}+2x\sum_{k=1}^{\infty}\frac{1}{k(k+x)^3}-2\sum_{k=1}^{\infty}\frac{1}{k(k+x)^2}. \nonumber\end{aligned}$$ One can show that $$\begin{aligned} \left(\rho\frac{\partial}{\partial \rho }-1\right)R_4^{0,\frac{1}{2}}(\rho)&=&-\frac{4\pi^3\rho}{3}-4\zeta(3)+\sum_{n=1}^{\infty}\frac{8}{n^3\left(1+e^{2\pi n\rho}\right)}+\sum_{n=1}^{\infty}\frac{4\pi\rho}{n^2\cosh^2(\pi n\rho)}, \nonumber \\ \left(\rho\frac{\partial}{\partial \rho }-1\right)R_4^{\frac{1}{2},\frac{1}{2}}(\rho)&=&-28\zeta(3)+\sum_{n=1}^{\infty}\frac{8}{\left(n+\frac{1}{2}\right)^3 \left(e^{2\pi\rho\left(n+\frac{1}{2}\right)}+1\right)}+ \sum_{n=1}^{\infty}\frac{4\pi\rho}{\left(n+\frac{1}{2}\right)^2\cosh^2{(\pi\rho\left(n+\frac{1}{2})\right)}}, \nonumber \\ \left(\rho\frac{\partial}{\partial \rho }-1\right)R_4^{\frac{1}{2},0}(\rho)&=&-28\zeta(3)-\sum_{n=1}^{\infty}\frac{8}{\left(n+\frac{1}{2}\right)^3 \left(e^{2\pi\rho\left(n+\frac{1}{2}\right)}-1\right)}- \sum_{n=1}^{\infty}\frac{4\pi\rho}{\left(n+\frac{1}{2}\right)^2\sinh^2{(\pi\rho\left(n+\frac{1}{2})\right)}}. \nonumber\end{aligned}$$ Accordingly, the value of the asymptotic expansion coefficient $g$ of Eq.(\[sm3\]) as a function of the aspect ratio $\rho$ can be determined and is shown in Fig. \[fig\_4\]. 
More explicitly, $g(\rho=1)=-0.032122\dots$, $g(\rho=2)=0.00762119\dots$, and $g(\rho=4)=-0.0346017\dots$. ![(Color online) The value of the asymptotic expansion coefficient $g$ in the fourth derivative of the free energy $F_c^{(4)}$ as a function of the aspect ratio $\rho$.[]{data-label="fig_4"}](fig_4_g-r.eps){width="40.00000%"} Dimer model on the checkerboard lattice at finite temperature {#dimer-finitite-T} ============================================================= Numerical calculations of thermodynamic variables ------------------------------------------------- Using the partition function of Eq. (\[stat\]) we plot the free energy $F_{2M,2N}(t)$, the internal energy $U_{2M,2N}(t)$, and the specific heat $C_{2M,2N}(t)$ as functions of $t$ for different lattice sizes in Figs. \[fig\_5\](a), \[fig\_5\](b), and \[fig\_5\](c), respectively. ![(Color online) (a) Free energy $F_{2M,2N}(t)$, (b) internal energy $U_{2M,2N}(t)$ and (c) specific heat $C_{2M,2N}(t)$ as functions of $t$. The aspect ratio $\rho=M/N$ has been set to unity.[]{data-label="fig_5"}](fig_5a_f2m2n.eps "fig:"){width="32.00000%"} ![(Color online) (a) Free energy $F_{2M,2N}(t)$, (b) internal energy $U_{2M,2N}(t)$ and (c) specific heat $C_{2M,2N}(t)$ as functions of $t$. The aspect ratio $\rho=M/N$ has been set to unity.[]{data-label="fig_5"}](fig_5b_u2m2n.eps "fig:"){width="32.00000%"} ![(Color online) (a) Free energy $F_{2M,2N}(t)$, (b) internal energy $U_{2M,2N}(t)$ and (c) specific heat $C_{2M,2N}(t)$ as functions of $t$. The aspect ratio $\rho=M/N$ has been set to unity.[]{data-label="fig_5"}](fig_5c_c2m2n.eps "fig:"){width="32.00000%"} The specific-heat curve becomes higher as the system size increases, while its peak is always located exactly at $t=0$. To study scaling behaviors of thermodynamic variables, we analyzed the variation of $F_{\mathrm{c}}=F(t_{\mathrm{c}})$ with respect to different system sizes $S=2M \times 2N$.
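Such an analysis can be generated end-to-end from the exact partition function; a sketch of ours (assuming NumPy), which evaluates $F_{\mathrm{c}}$ at $\rho=1$ for several sizes and fits a polynomial in $1/S$:

```python
import math
import numpy as np

def F_c(M, N):
    """Critical free energy F_{2M,2N}(0) from Eqs. (stat)-(twist); the
    (0,0) term vanishes at t = 0 and is omitted. Logarithms of the
    factors are summed to avoid overflow on large lattices."""
    def logZ2(alpha, beta):
        return sum(math.log(4.0 * (math.sin(math.pi * (m + beta) / M) ** 2
                                   + math.sin(math.pi * (n + alpha) / N) ** 2))
                   for m in range(M) for n in range(N))
    logs = [logZ2(0.5, 0.5), logZ2(0.5, 0.0), logZ2(0.0, 0.5)]
    top = max(logs)
    return (top + math.log(0.5 * sum(math.exp(l - top) for l in logs))) / (4 * M * N)

sizes = [8, 16, 32, 64]
x = np.array([1.0 / (4 * M * M) for M in sizes])       # x = 1/S at rho = 1
y = np.array([F_c(M, M) for M in sizes])
f2_fit, f1_fit, f_bulk_fit = np.polyfit(x, y, 2)       # F_c ~ f_bulk + f1/S + f2/S^2
```

The constant and linear coefficients recover $f_{\mathrm{bulk}}=G/\pi$ and $f_1(\rho=1)\approx 0.88137$, in line with the fit quoted below.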
Figure \[fig\_6\](a) shows $F_{\mathrm{c}}$ as a function of $1/S$. Fitting the data with a polynomial in $1/S$, the best fit is found to be $$F_{\mathrm{c}} = 0.29156 (\pm 0.00000001) + \frac {0.88138 (\pm 0.00001)}{S} + \frac{0.791 (\pm 0.004)}{S^2} + \cdots,$$ which can be approximately expressed as $$F_{\mathrm{c}} \approx \frac{G}{\pi} + \frac {0.88138 (\pm 0.00001)}{S} + \frac{0.791 (\pm 0.004)}{S^2}. \label{Fc}$$ This expression is consistent with Eq. (\[expansion\]) for the case $\rho = 1$, see Eq. (\[aspectratio\]). For the specific heat, we plotted $C_{\mathrm{c}} - \ln S/(2\pi)$ as a function of $1/S$ in Fig. \[fig\_6\](b). The data points can be well described by the polynomial fit $$C_{\mathrm{c}} - \frac{1}{2\pi} \ln S = 0.01788 (\pm 0.000000009) + \frac{0.2402 (\pm 0.001)}{S} - \frac{3.7 (\pm 1.9)}{S^2} \cdots. \label{cmax_scaling}$$ This expression is also consistent with Eq. (\[heatexpansion\]) for the case $\rho = 1$, see Eq. (\[aspectratio1\]). ![(Color online) (a) $F_{\mathrm{c}}$ as a function of $1/S$, and (b) $C_{\mathrm{c}}-(\ln S)/2\pi $ as a function of $1/S$. The aspect ratio $\rho=M/N$ has been set to unity.[]{data-label="fig_6"}](fig_6a_Fc.eps "fig:"){width="42.00000%"} ![(Color online) (a) $F_{\mathrm{c}}$ as a function of $1/S$, and (b) $C_{\mathrm{c}}-(\ln S)/2\pi $ as a function of $1/S$. The aspect ratio $\rho=M/N$ has been set to unity.[]{data-label="fig_6"}](fig_6b_Cc.eps "fig:"){width="42.00000%"} Scaling functions of the free energy, the internal energy and the specific heat {#scaling-functions} ------------------------------------------------------------------------------- Following the proposal for the scaling functions in Ref. [@wu2003] and using the exact expansions of the free energy in Eq. (\[expansion\]) and the specific heat in Eq. 
(\[heatexpansion\]), we define the scaling function of the free energy $\Delta _F (S, \rho, \tau)$ as $$\Delta _F (S, \rho, \tau) = S \left[ F_{2M,2N} - \left( f_{\mathrm{bulk}} + \frac{f_1}{S} + \frac{f_2}{S^2} \right) - \frac{1}{2S} \left( c_b + \frac{1}{2\pi} \ln S \right) \tau^2\right], \label{f_scaling}$$ where $\tau$, defined as $$\tau=t\cdot S^{\frac{1}{2}}, \label{rescaledt}$$ is a scaled variable. The scaling function $\Delta _F (S, \rho, \tau)$ as a function of $\tau$ for different system sizes $S$ with the aspect ratio $\rho=1, 2$, and $4$ is shown in Fig. \[fig\_7\](a). With the help of the first and second derivatives of the free energy, we obtain the exact expression of the scaling function at criticality for small $\tau$ $$\Delta _F (S, \rho, \tau) = \frac{1}{2} \left[ \frac{c_1}{S} + O \left( \frac{1}{S^2} \right) \right] \tau^2 + O \left( \frac{1}{S^2} \right) + O(\tau^4). \label{f_scaling_exp}$$ For $\rho=1$, we have $$\Delta _F (S, \rho=1, \tau) = \frac{1}{2} \left[ \frac{0.240428}{S} + O \left( \frac{1}{S^2} \right) \right] \tau^2 + O \left( \frac{1}{S^2} \right) + O(\tau^4), \label{f_scaling_exp1}$$ which describes the behavior of the scaling function in the critical region for small $\tau$ in Fig. \[fig\_7\](a). 
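The finite-size fits of Eqs. (\[Fc\]) and (\[cmax\_scaling\]) above are ordinary fits of a polynomial in $1/S$. A minimal sketch follows; the sizes and data are synthetic (generated from assumed coefficients, not the exact transfer-matrix values of the paper), and with as many data points as coefficients the fit reduces to solving a small linear system:

```python
# Extracting the coefficients of an expansion F_c = c0 + c1/S + c2/S^2 from
# finite-size data.  The data below are synthetic, built from assumed
# coefficients; with as many sizes as unknowns the least-squares fit reduces
# to a small linear solve.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system A x = b."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def fit_inv_S(sizes, values):
    """Solve sum_p c_p / S^p = F(S) for the coefficients c_p."""
    A = [[S ** -p for p in range(len(sizes))] for S in sizes]
    return solve(A, list(values))

sizes = [16.0, 64.0, 256.0]
data = [0.29156 + 0.88138 / S + 0.791 / S ** 2 for S in sizes]
c0, c1, c2 = fit_inv_S(sizes, data)
print(c0, c1, c2)  # recovers 0.29156, 0.88138, 0.791
```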
![(Color online) The scaling functions (a) $\Delta _F (S, \rho, \tau)$, (b) $\Delta _U (S, \rho, \tau)$ and (c) $\Delta _C (S, \rho, \tau)$, as functions of $\tau$ for different system sizes $S$ with the aspect ratio $\rho=1$, $2$, and $4$.[]{data-label="fig_7"}](fig_7a_fsc_f2m2n.eps "fig:"){width="32.00000%"} ![(Color online) The scaling functions (a) $\Delta _F (S, \rho, \tau)$, (b) $\Delta _U (S, \rho, \tau)$ and (c) $\Delta _C (S, \rho, \tau)$, as functions of $\tau$ for different system sizes $S$ with the aspect ratio $\rho=1$, $2$, and $4$.[]{data-label="fig_7"}](fig_7b_fsc_c2m2n.eps "fig:"){width="32.00000%"} ![(Color online) The scaling functions (a) $\Delta _F (S, \rho, \tau)$, (b) $\Delta _U (S, \rho, \tau)$ and (c) $\Delta _C (S, \rho, \tau)$, as functions of $\tau$ for different system sizes $S$ with the aspect ratio $\rho=1$, $2$, and $4$.[]{data-label="fig_7"}](fig_7c_fsc_c2m2n.eps "fig:"){width="33.00000%"} We further propose the scaling function of the internal energy $\Delta _U (S, \rho, \tau)$ as $$\Delta _U (S, \rho, \tau) = S^{\frac{1}{2}} \left[ U_{2M, 2N} - \frac{1}{S^{\frac{1}{2}}} \left( c_b + \frac{1}{2\pi} \ln S \right) \tau \right]. \label{u_scaling_exp}$$ The scaling function $\Delta _U (S, \rho, \tau )$ as a function of $\tau$ for different system sizes $S$ with aspect ratio $\rho=1, 2$, and $4$ is shown in Fig. \[fig\_7\](b). Similarly, using the expansion of the specific heat in Eq. (\[heatexpansion\]), we define the scaling function of the specific heat $\Delta _C (S, \rho, \tau)$ as $$\Delta _C (S, \rho, \tau) = C_{2M,2N} - \left( c_b + \frac{1}{2\pi} \ln S \right). \label{c_scaling}$$ The scaling function $\Delta _C (S, \rho, \tau)$ as a function of $\tau$ for different system sizes $S$ with the aspect ratio $\rho=1, 2$, and $4$ is shown in Fig. \[fig\_7\](c). 
Note that $\Delta _C (S, \rho, \tau)$ at small $\tau$ can be formulated as $$\Delta _C (S, \rho, \tau) = \frac{c_{1} (\rho)}{S}+ \left[ \frac{1}{2}g(\rho) -\frac{3}{4\pi}\frac{\ln{S}}{S} + \frac{1}{2}\frac{g_{0}}{S} + O \left( \frac{1}{S^2} \right) \right] \tau^2 + O \left( \frac{1}{S^2} \right) + O(\tau^4), \label{c_scaling_exp1}$$ analogous to the case of the Ising model [@wu2003], except that for the Ising model the leading term of the scaling function is linear in $\tau$ rather than quadratic. For $\rho=1$, $$\Delta _C (S, \rho=1, \tau) = \frac{0.240428}{S} - \left[ 0.016061 +\frac{3}{4\pi}\frac{\ln{S}}{S} + O \left( \frac{1}{S^2} \right) \right] \tau^2 + O \left( \frac{1}{S^2} \right) + O(\tau^4). \label{c_scaling_exp2_r1}$$ Equations (\[f\_scaling\_exp\]), (\[u\_scaling\_exp\]), and (\[c\_scaling\_exp1\]) and Fig. \[fig\_7\] generally suggest the following relations [@wu2003] $$\begin{aligned} \Delta _F (S, \rho, \tau) & \simeq & \frac{1}{2} \Delta _C (S, \rho) \tau^2 + O \left( \frac{1}{S^2} \right) + O(\tau^4), \label{f_and_c_scaling} \\ \Delta _U (S, \rho, \tau) & \simeq & \Delta _C (S, \rho) \tau + O \left( \frac{1}{S^2} \right) + O(\tau^3), \label{u_and_c_scaling} \\ \Delta _C (S, \rho, \tau) & \simeq & \Delta _C (S, \rho) + O \left( \frac{1}{S^2} \right) + O(\tau^2), \label{c_and_c_scaling}\end{aligned}$$ where $\Delta _C (S, \rho) = \Delta _C (S, \rho, \tau=0)$. Specific heat near the critical point {#spec_heat} ------------------------------------- Let us now consider the behavior of the specific heat near the critical point. The specific heat $C_{2M,2N}(t)$ of the dimer model on the $2M \times 2N$ checkerboard lattice is defined as the second derivative of the free energy in Eq. (\[def\_specific\_heat\]). The pseudocritical point $t_{\mathrm{pseudo}}$ is the value of the temperature at which the specific heat has its maximum for a finite $2M \times 2N$ lattice. One can determine this quantity as the point where the derivative of $C_{2M,2N}(t)$ vanishes. 
The pseudocritical point approaches the critical point $t_c=0$ as $L \to \infty$ in a manner dictated by the shift exponent $\lambda$, $$\begin{aligned} |t_{\mathrm{pseudo}}-t_c| \sim L^{-\lambda}, \label{lamb}\end{aligned}$$ where $L = \sqrt{4MN}$ is the characteristic size of the system. The coincidence of $\lambda$ with $1/\nu$, where $\nu$ is the correlation-length exponent, is common to most models, but it is not a direct consequence of finite-size scaling and is not always true. One can see from Eqs. (\[stat\]), (\[twist\]) and (\[def\_specific\_heat\]) that the partition function $Z_{2M,2N}(t)$ and the specific heat $C_{2M,2N}(t)$ are even functions of their argument $t$, so that $$\begin{aligned} C_{2M,2N}(t) = C(0) +\frac{t^2}{2}C^{(2)}(0)+\frac{t^4}{4!}C^{(4)}(0)+O(t^6). \label{expansion specific heat}\end{aligned}$$ Thus the first derivative of $C_{2M,2N}(t)$ vanishes exactly at $$\begin{aligned} t_{\mathrm{pseudo}}=0. \label{expansionmu0}\end{aligned}$$ In Fig. \[fig\_5\](c) we plot the $t$ dependence of the specific heat for different lattice sizes up to $512 \times 512$. We can see from Fig. \[fig\_5\](c) that the position of the specific heat peak $t_{\mathrm{pseudo}}$ is exactly zero. Therefore the maximum of the specific heat (the pseudocritical point $t_{\mathrm{pseudo}}$) always occurs at vanishing reduced temperature for any finite $2M \times 2N$ lattice and coincides with the critical point $t_c$ in the thermodynamic limit ($t_{\mathrm{pseudo}}=t_c=0$). From Eqs. (\[lamb\]) and (\[expansionmu0\]) we find that the shift exponent is infinite, $\lambda=\infty$. 
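The argument above — an even specific heat has vanishing odd derivatives at $t=0$, so the peak cannot shift with system size — is easy to illustrate numerically. The quartic below is an assumed toy specific heat, not the dimer expression:

```python
# Toy check that an even C(t) forces t_pseudo = 0: since C'(0) = 0 for any
# even function, the maximum of the specific heat sits exactly at t = 0.
# The quartic below is an assumed toy, not the dimer-model expression.

def C(t):
    return 0.5 - 0.3 * t ** 2 + 0.05 * t ** 4   # even toy specific heat

grid = [i / 1000.0 for i in range(-1000, 1001)]
t_pseudo = max(grid, key=C)
print(t_pseudo)  # 0.0
```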
Dimer on the infinitely long cylinder {#inf_long} ===================================== Conformal invariance of the model in the continuum scaling limit dictates that at the critical point the asymptotic finite-size scaling behavior of the critical free energy $f_c$ of an infinitely long two-dimensional cylinder of finite circumference ${\mathcal{N}}$ has the form $$f_c=f_{\mathrm{bulk}} + \frac{A}{{\mathcal{N}}^2} + \cdots, \label{freeenergystrip}$$ where $f_{\mathrm{bulk}}$ is the bulk free energy and $A$ is a constant. Unlike the bulk free energy, the constant $A$ is universal, although it may depend on the boundary conditions. In some 2D geometries, the value of $A$ is related to the conformal anomaly number $c$ and the highest conformal weights $\Delta, \bar \Delta$ of the irreducible highest weight representations of two commuting Virasoro algebras. These two dependencies can be combined into a function of the effective central charge $c_{\mathrm{eff}}=c-12(\Delta+\bar\Delta)$ [@Blote; @Affleck; @Cardy], $$\begin{aligned} A = \frac{\pi}{6}c_{\mathrm{eff}} = 2\pi\left(\frac{c}{12}-\Delta-\bar\Delta\right) \qquad \mbox{on a cylinder.} \label{Aperiod}\end{aligned}$$ Let us now consider the dimer model on the infinitely long cylinder of width $\mathcal{N} = 2 N$. Considering the logarithm of the partition function given by Eq. (\[Zab\]) at the critical point ($t=t_{c}=0$), we note that it can be transformed as $$\ln Z_{\alpha,\beta}(0)= M\sum_{n=0}^{N-1} \omega_0\!\left(\textstyle{\frac{\pi(n+\alpha)}{N}}\right)+ \sum_{n=0}^{N-1}\ln\left|\,1-e^{-2\big[\,M \omega_0\left(\frac{\pi(n+\alpha)}{N}\right)-i\pi\beta\,\big]}\right|. \label{lnZab}$$ The second sum here vanishes in the formal limit $M\to\infty$, when the torus turns into an infinitely long cylinder of circumference $2N$. Therefore, the first sum gives the logarithm of the partition function on that cylinder. 
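The asymptotic expansion of this sum is obtained below from the Euler–Maclaurin formula. Its structure — an extensive integral term plus power corrections weighted by Bernoulli polynomial values $\mathrm{B}_{2p+2}^{\alpha}$ — can be previewed with a toy example: for midpoint sampling ($\alpha=1/2$, so $\mathrm{B}_1(1/2)=0$ and $\mathrm{B}_2(1/2)=-1/12$) and an assumed toy dispersion $\omega(x)=\sin x$ (not the dimer one), the leading finite-size correction is $\pi/12N$:

```python
import math

# Toy illustration of the Euler-Maclaurin structure of the finite-size
# expansion: for a midpoint sum (alpha = 1/2) the leading correction to the
# integral term is
#   (B_2(1/2)/2!) * h * [w'(pi) - w'(0)],   h = pi/N,  B_2(1/2) = -1/12.
# Here w(x) = sin(x) is an assumed toy dispersion, not the dimer one.

def midpoint_sum(N):
    return sum(math.sin(math.pi * (n + 0.5) / N) for n in range(N))

N = 1000
integral_term = (N / math.pi) * 2.0                       # (1/h) * int_0^pi sin(x) dx
correction = midpoint_sum(N) - integral_term
predicted = (-1.0 / 12.0) / 2.0 * (math.pi / N) * (-2.0)  # = pi / (12 N)

print(correction, predicted)  # agree up to O(1/N^3)
```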
Its asymptotic expansion can be found with the help of the Euler-Maclaurin summation formula $$M\sum_{n=0}^{N-1}\omega\!\left(\textstyle{\frac{\pi(n+\alpha)}{N}}\right)= \frac{S}{\pi}\int_{0}^{\pi}\!\!\omega_0(x)~\!{\mathrm{d}}x-\pi\lambda_0\rho\,{\mathrm{B}}_{2}^\alpha- 2\pi\rho\sum_{p=1}^{\infty} \left(\frac{\pi^2\rho}{S}\right)^{p} \frac{\lambda_{2p}}{(2p)!}\;\frac{{\mathrm{B}}_{2p+2}^\alpha}{2p+2}, \label{EulerMaclaurinTerm}$$ where $\int_{0}^{\pi}\!\!\omega_0(x)~\!{\mathrm{d}}x = 2 G$ and ${\mathrm{B}}^{\alpha}_{p}$ are the so-called Bernoulli polynomials. Here we have also used the coefficients $\lambda_{2p}$ of the Taylor expansion of the lattice dispersion relation $\omega_0(k)$ at the critical point given by Eq. (\[Spectral\]). Thus one can easily write down all the terms of the exact asymptotic expansion for the $F_{\alpha,\beta}= \lim_{M \to \infty}\frac{1}{M}\ln Z_{\alpha,\beta}(0)$ $$\begin{aligned} F_{\alpha,\beta} &=& \lim_{M \to \infty}\frac{1}{M}\ln Z_{\alpha,\beta}(0)= \frac{2 G}{\pi}N - 2\sum_{p=0}^\infty \left(\frac{\pi}{N}\right)^{2p+1}\frac{\lambda_{2p}}{(2p)!} \frac{\mathrm{B}_{2p+2}^{\alpha}}{2p+2}. \label{AsymptoticExpansion1}\end{aligned}$$ From $F_{\alpha,\beta}$, we can obtain the asymptotic expansion of free energy per bond of an infinitely long cylinder of circumference $\mathcal{N}=2N$ $$f = \lim_{M \to \infty} \frac{1}{4 M N}\ln{Z_{2M,2N}(0)} = \lim_{M \to \infty} \frac{1}{2 M N}\ln {Z_{1/2, 0}(M, N)}=\frac{1}{2 N} F_{1/2,0}(N). \label{free2N}$$ From Eq. (\[free2N\]) using Eq. 
(\[AsymptoticExpansion1\]) one can easily obtain that for even ${\mathcal{N}}=2N$ the asymptotic expansion of the free energy is given by $$\begin{aligned} f &=& f_{\mathrm{bulk}} - \frac{1}{\pi}\sum_{p=0}^\infty \left(\frac{2 \pi}{\mathcal{N}}\right)^{2p+2}\frac{\lambda_{2p}}{(2p)!} \frac{\mathrm{B}_{2p+2}^{1/2}}{2p+2} \nonumber \\ &=& f_{\mathrm{bulk}} + \frac{\pi}{6}\frac{1}{\mathcal{N}^2}+\dots \quad ({\mathrm{for}}~ \mathcal{N}=2N), \label{2Nper}\end{aligned}$$ where $f_{\mathrm{bulk}}$ is given by Eq. (\[fbulk\]). Thus we can conclude from Eqs. (\[freeenergystrip\]), (\[Aperiod\]) and (\[2Nper\]) that $c_{\mathrm{eff}}=1$. Since the effective central charge $c_{\mathrm{eff}}$ is defined as a function of $c$, $\Delta$ and $\bar \Delta$, one cannot obtain the values of $c$, $\Delta$ and $\bar \Delta$ without some assumption about one of them. This assumption can be justified a posteriori if the conformal description obtained from it is fully consistent. It is easy to see that there are two consistent values of $c$ that can be used to describe the dimer model, namely, $c = -2$ and $c = 1$. For example, for the dimer model on an infinitely long cylinder of even circumference $\mathcal{N}$, one can obtain from Eqs. (\[freeenergystrip\]), (\[Aperiod\]) and (\[2Nper\]) that the central charge $c$ and the highest conformal weights $\Delta, \bar \Delta$ can take the values $c = 1$ and $\Delta = \bar \Delta = 0$ or $c = -2$ and $\Delta = \bar \Delta = -1/8$. Thus from the finite-size analysis we can see that two conformal field theories with the central charges $c = 1$ and $c = -2$ can be used to describe the dimer model on the checkerboard lattice. Summary and Discussion {#summary-discussion} ====================== We have analyzed the partition function of the dimer model on a $2M \times 2N$ checkerboard lattice wrapped on a torus. 
We have obtained exact asymptotic expansions for the free energy, the internal energy, the specific heat, and the third and fourth derivatives of the free energy of a dimer model on the square lattice wrapped on a torus at the critical point $t=0$. Using exact partition functions and finite-size corrections for the dimer model on finite checkerboard lattice we obtain finite-size scaling functions for the free energy, the internal energy, and the specific heat of the dimer model. From a finite size analysis we have found that the shift exponent $\lambda$ is infinity and the finite-size specific-heat pseudocritical point coincides with the critical point of the thermodynamic limit. This adds to the catalog of anomalous circumstances where the shift exponent is not coincident with the correlation-length critical exponent. We have also considered the limit $N \to \infty$ for which we obtain the expansion of the free energy for the dimer model on the infinitely long cylinder. One of us (N.Sh.I.) thanks Laboratory of Statistical and Computational Physics at the Institute of Physics, Academia Sinica, Taiwan, for hospitality during completion of this work. This work was partially supported by IRSES (Projects No. 612707-DIONICOS) within 7th European Community Framework Programme (N.Sh.I.) and by a grant from the Science Committee of the Ministry of Science and Education of the Republic of Armenia under Contract No. 15T-1C068 (N.Sh.I.), and by the Ministry of Science and Technology of the Republic of China (Taiwan) under Grant Nos. MOST 103-2112-M-008-008-MY3 and 105-2912-I-008-513 (M.C.W.), and MOST 105-2112-M-001-004 (C.K.H.), and NCTS of Taiwan. [99]{} L. Onsager, Phys. Rev. **65**, 117 (1944). H. W. J. Blöte, E. Luijten, and J. R. Heringa, J. Phys. A: Math. Gen. **28**, 6289 (1995). A. L. Talapov and H. W. J. Blöte, J. Phys. A: Math. Gen. **29**, 5727 (1996). J. V. Sengers and J. G. Shanks, J. Stat. Phys. **137**, 857 (2009). H. Watanabe, N. Ito, and C.-K. Hu, J. Chem. 
Phys. **136**, 204102 (2012). W. J. C. Orr, Trans. Faraday Soc. **43**, 12 (1947). J. H. Lee, S.-Y. Kim, and J. Lee, Phys. Rev. E **86**, 011802 (2012). C.-N. Chen, Y.-H. Hsieh, and C.-K. Hu, EPL **104**, 20005 (2013) Y.-H. Hsieh, C.-N. Chen, and C.-K. Hu, Comp. Phys. Communications (2016) http://dx.doi.org/10.1016/j.cpc.2016.08.006. M. S. Li, N. T. Co, G. Reddy, C. -K. Hu, J. E. Straub, and D. Thirumalai, Phys. Rev. lett. **105**, 218101 (2010). N. T. Co, C.-K. Hu, and M. S. Li, J. Chem. Phys. **138**, 185101 (2013). R. H. Fowler and G. S. Rushbrooke, Trans. Faraday Soc. **33**, 1272 (1937). P. W. Kasteleyn, Physica **27**, 1209 (1961). M. E. Fisher Phys. Rev. **124**, 1664 (1961). A. E. Ferdinand, J. Math. Phys. **8**, 2332 (1967). A. E. Ferdinand and M. E. Fisher, Phys. Rev. **185**, 832 (1969). M. Fisher and M. N. Barber, Phys. Rev. Lett. **28**, 1516 (1972). *Finite-size Scaling and Numerical Simulation of Statistical Systems*, V. Privman ed. (World Scientific, Singapore, 1990). C.-K. Hu, Chin. J. Phys. **52**, 1 (2014). S. Kawata, H.-B. Sun, T. Tanaka, and K. Takeda, Nature **412**, 697 (2001). V. F. Puntes, K. M. Krishnan, and A. P. Alivisatos, Science **291**, 2115 (2001). Y. Yin, R. M. Rioux, C. K. Erdonmez, S. Hughes, G. A. Somorjai, and A. P. Alivisatos, Science **304**, 711 (2004). D. S. Rokhsar and S. A. Kivelson, Phys. Rev. Lett. **61**, 2376 (1988). S. Franco, A. Hanany, D. Vegh, B. Wecht, and K. D. Kennaway, JHEP **01**, 096 (2006). P. W. Kasteleyn, J. Math. Phys. **4**, 287 (1963). N. Sh. Izmailian, C.-K. Hu, and R. Kenna, Phys. Rev. E **91**, 062139 (2015). B. Nienhuis, H. J. Hilhorst, and H. Blöte, J. Phys. A **17**, 3559 (1984). S. M. Bhattacharjee and J. F. Nagle, Phys. Rev. A **31**, 3199 (1985). C. Itzykson, H. Saleur, and J.-B. Zuber, EPL **2**, 91 (1986). N. Sh. Izmailian, K. B. Oganesyan, M.-C. Wu, and C.-K. Hu, Phys. Rev. E **73**, 016128 (2006). N. Sh. Izmailian, V. B. Priezzhev, and P. Ruelle, SIGMA **3**, 001 (2007). F. Y. Wu, W.-J. 
Tzeng, and N. Sh. Izmailian, Phys. Rev. E **83**, 011106 (2011). N. Sh. Izmailian, R. Kenna, W. Guo, and X. Wu, Nucl. Phys. B **884**, 157 (2014). N. Sh. Izmailian and R. Kenna, Phys. Rev. E **84**, 021107 (2011). N. Sh. Izmailian, V. B. Priezzhev, P. Ruelle, and C.-K. Hu, Phys. Rev. Lett. **95** 260602, (2005). N. Sh. Izmailian, K. B. Oganesyan, and C.-K. Hu, Phys. Rev. E **67**, 066114 (2003). N. Allegra, Nucl. Phys. B **893**, 685 (2015). A. Morin-Duchesne, J. Rasmussen, and P. Ruelle, J. Phys. A **49**, 174002 (2016). Y. Kong, Phys. Rev. E **74**, 011102 (2006); **74**, 061102 (2006). E. Ivashkevich, N. Sh. Izmailian, and C.-K. Hu, J. Phys. A: Math. Gen. **35**, 5543 (2002). M. N. Barber, in *Phase Transition and Critical Phenomena*, edited by C. Domb and J. Lebowitz (Academic, New York, 1983), Vol. VIII, Chap. 2, p. 157. W. Janke and R. Kenna, Phys. Rev. B **65**, 064110 (2002). M.-C. Wu, C.-K. Hu, and N. Sh. Izmailian, Phys. Rev. E **67**, 065103(R) (2003). H. W. J. Blöte, J. L. Cardy, and M. P. Nightingale, Phys. Rev. Lett. **56**, 742 (1986). I. Affleck, Phys. Rev. Lett. **56**, 746 (1986). J. Cardy, Nucl. Phys. B **275**, 200 (1986).
--- abstract: 'We propose a simple scheme to construct composition-dependent interatomic potentials for multicomponent systems that, when superposed onto the potentials for the pure elements, can reproduce not only the heat of mixing of the solid solution in the entire concentration range but also the energetics of a wider range of configurations including intermetallic phases. We show that an expansion in cluster interactions provides a way to systematically increase the accuracy of the model, and that it is straightforward to generalise this procedure to multicomponent systems. Concentration-dependent interatomic potentials can be built upon almost any type of potential for the pure elements including embedded atom method (EAM), modified EAM, bond-order, and Stillinger-Weber type potentials. In general, composition-dependent $N$-body terms in the total energy lead to explicit $N+1$-body forces, which potentially renders them computationally expensive. We present an algorithm that overcomes this problem and that can speed up the calculation of the forces for composition-dependent pair potentials in such a way as to make them computationally comparable in efficiency and scaling behaviour to standard EAM potentials. We also discuss the implementation in Monte-Carlo simulations. Finally, as an example, we review the composition-dependent EAM model for the Fe–Cr system \[PRL [**95**]{}, 075702 (2005)\].' author: - | B. Sadigh$^{\rm a}$, P. Erhart$^{\rm a}$, A. Stukowski$^{\rm b}$, and A. 
Caro$^{\rm a}$\ $^{\rm a}$ [*Condensed Matter and Materials Division, Lawrence Livermore\ National Laboratory, Livermore, CA*]{}; $^{\rm b}$ [*Institut für Materialwissenschaft,\ Technische Universität Darmstadt, Germany*]{} title: | Composition-dependent interatomic potentials:\ A systematic approach to modelling multicomponent alloys --- empirical potentials; multicomponent alloys; concentrated alloys; computer simulations; molecular dynamics; Monte Carlo; composition-dependent interatomic potentials; cluster interactions Introduction ============ Twenty-five years ago, the Finnis-Sinclair many-body potential [@FinSin84], the Embedded Atom Model of Daw and Baskes [@DawBas84], the Glue model of Ercolessi and Parrinello [@ErcParTos86], and the effective medium theory due to Puska, Nieminen and Nørskov [@PusNieMan81; @Nor82] marked the birth of modern atomic-scale computational materials science, enabling computer simulations at the multimillion-atom scale to become routine in materials science research. This family of many-body potentials shares the feature that the expression for the total energy contains nonlinear contributions of pair functions, thereby removing the limitations of the pair-potential formulation in describing realistic elastic properties. Alloys and compounds, for which thermodynamic information is of relevance, constitute one of the main fields in which these potentials have been applied. In the early days of many-body potentials the main alloy property fitted was the heat of solution of a single impurity [@FoiBasDaw86], [*i.e.*]{} the dilute limit of the heat of formation (HOF) of the alloy. 
However, when these potentials are applied to concentrated alloys the predictions are usually uncontrolled; they work well for systems with a mixing enthalpy that is nearly symmetric and positive over the entire concentration range, as for example in the cases of Fe–Cu [@LudFarPed98; @PasMal07], or Au–Ni [@FoiBasDaw86; @AstFoi96; @ArrCarCar02]. Alloys that show a strong asymmetry or even a sign inversion in the HOF, such as Fe–Cr or Pd–Ni, are beyond the scope of standard many-body potential models, and there is not yet a unique methodology suitable for their description. Similar limitations apply with respect to systems with a negative HOF which feature intermetallic phases. Frequently, such systems require different parametrisations for different phases, as in the case of Ni–Al with the B2 phase on one hand [@MisMehPap02], and the $\gamma$ and $\gamma'$ phases on the other [@Mis04]. Two schemes have been developed to deal with these shortcomings in the case of Fe–Cr, which displays an inversion in the HOF as a function of concentration, namely the composition-dependent embedded atom method (CD-EAM) [@CarCroCar05] and the two-band model (2BM) [@OlsWalDom05]. For neither of these schemes is it obvious how it can be extended to more than two components. The objective of this paper is to develop a framework for constructing interatomic potential models for multicomponent alloys based on an expansion in clusters of increasing size that can be practically implemented and systematically improved. Our methodology allows us to describe systems with arbitrary heat of mixing curves and includes intermetallic phases in a systematic and physically meaningful fashion. Thereby, we overcome the most important disadvantages of current alloy potential schemes and provide a framework for systems of arbitrary complexity. In our methodology the interatomic interactions are modified by composition-dependent functions. 
This introduces a dependence on the environment which is somewhat reminiscent of the bond-order potential (BOP) scheme developed by Abell and Tersoff [@Abe85; @Ter86; @Ter88a]. In this formalism the attractive pair potential is scaled by a (usually) angular dependent function (the “bond-order”) which describes the local structure. Thereby, it is possible to distinguish different lattice structures (face-centred cubic, body-centred cubic, cubic diamond [*etc.*]{}) and also to stabilise structures with low packing density such as diamond or zincblende lattices. (In fact, the BOP formalism has been successfully applied to model alloys such as Fe–Pt that feature intermetallic phases with different lattice structures [@MulErhAlb07b]). The composition-dependent interatomic potential (CDIP) scheme introduced in the present work and the BOP formalism thus both include explicit environment-dependent terms. However, in the CDIP approach this environment-dependence is used to distinguish different [ *chemical*]{} motifs while in the BOP scheme it is used to identify different [*structural*]{} motifs. This paper is organised as follows: In [Sect. \[sect:pair\_potentials\]]{} we introduce the basic terminology and present a systematic approach to fitting potentials for binary systems. Section \[sect:beyond\_pair\_potentials\] describes how by including higher order terms it is possible to fit e.g., intermetallic phases. In [Sect. \[sect:series\]]{} a series expansion is developed which generalises the concepts introduced in the previous sections and which is used in [Sect. \[sect:ternary\]]{} to obtain explicit expressions for a ternary system. The efficient computation of forces is discussed in [Sect. \[sect:forces\]]{} and an optimal implementation in Monte-Carlo simulations is the subject of [Sect. \[sect:monte\_carlo\]]{}. Finally, as an example, the composition-dependent embedded atom method potential for Fe–Cr is reviewed in [Sect. \[sect:FeCr\]]{}. 
Binary Systems ============== Pair Potentials {#sect:pair_potentials} --------------- For the sake of clarity of the following exposition, we assume EAM models throughout this paper. It is important to stress that the formalism to be developed hereafter can be applied to any potential model for the pure elements including modified embedded atom method (MEAM) [@Bas87; @Bas92], bond-order [@Abe85; @Ter86; @Ter88a], and Stillinger-Weber type [@StiWeb85] potentials. Consider a single-component system of atoms [*A*]{}, whose interactions are described by the EAM model, $$\begin{aligned} \label{eq:1} E_A = \sum_i U_A\left(\overline{\rho}_i \right) + \frac{1}{2} \sum_i\sum_j\phi_A\left(r_{ij}\right) \quad\text{with}\quad \overline{\rho}_i = \sum_{j\neq i} \rho(r_{ij}) . \label{eq:rho}\end{aligned}$$ The first term in [Eq. (\[eq:1\])]{} contains the embedding function $U_A(\overline{\rho}_i)$, which is a nonlinear function of the local electron density $\overline{\rho}_i$ around atom $i$. It accounts for cohesion due to band formation in the solid state and is constructed to reproduce the equation of state of system [*A*]{}. The second term represents the remainder of the interaction energy. It can be interpreted as the effective screened Coulomb interaction between pairs of ions in [*A*]{}. The EAM formalism can capture the energetics associated with density fluctuations in the lattice and has been successfully applied for modelling the formation of crystal defects such as vacancies, interstitials and their clusters. Consider now a binary system, where the pure phases are described by EAM potentials. It can be shown that the total energy expression for this type of potentials is invariant under certain scaling operations [@DawFoiBas93]. This “effective pair format” can be used to rescale the two EAM potentials, e.g. such that at the equilibrium volume for a certain lattice the electron density is 1, to ensure their compatibility. 
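As a concrete illustration of Eq. (\[eq:1\]), the following sketch evaluates the EAM total energy for a small cluster. The functional forms (exponential density, square-root embedding, exponential pair repulsion) are toy choices made up for illustration, not a fitted potential:

```python
import math

# Toy evaluation of the EAM total energy of Eq. (1):
#   E = sum_i U(rho_bar_i) + (1/2) sum_i sum_{j != i} phi(r_ij),
#   rho_bar_i = sum_{j != i} rho(r_ij).
# The functional forms below are illustrative toy choices, not a fitted model.

def rho(r):  return math.exp(-r)         # atomic electron density
def U(rb):   return -math.sqrt(rb)       # embedding function (band term)
def phi(r):  return math.exp(-2.0 * r)   # screened pair repulsion

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def eam_energy(positions):
    E = 0.0
    for i, ri in enumerate(positions):
        rb = sum(rho(dist(ri, rj)) for j, rj in enumerate(positions) if j != i)
        E += U(rb)
        E += 0.5 * sum(phi(dist(ri, rj)) for j, rj in enumerate(positions) if j != i)
    return E

# Dimer at separation r = 1: E = 2*U(rho(1)) + phi(1) = -2*exp(-1/2) + exp(-2)
E_dimer = eam_energy([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
print(E_dimer)  # approx -1.07773
```

The nonlinearity of the embedding function is what distinguishes this from a pure pair potential: doubling all densities does not simply double the embedding energy.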
One part of the total energy of the two-component system can be written as the superposition of the respective embedding terms and effective pair interactions: $$\begin{aligned} \label{eq:2} E_0 &=& \sum_{i\in A} U_A \left(\overline{\rho}_i^{A}+\mu_{A(B)}~\overline{\rho}_i^{B}\right) + \frac{1}{2} \sum_{i\in A}\sum_{j\in A}\phi_A\left(r_{ij}\right)\\ &+& \sum_{i\in B} U_B \left(\overline{\rho}_i^{B}+\mu_{B(A)}~\overline{\rho}_i^{A}\right) + \frac{1}{2} \sum_{i\in B}\sum_{j\in B}\phi_B\left(r_{ij}\right) ,\nonumber\end{aligned}$$ where $$\label{eq:rhobar} \overline{\rho}_i^{\mathcal{S}} = \sum_{j\in \mathcal{S},j\neq i} \rho^{\mathcal{S}}(r_{ij}).$$ Note that above we have not yet added any explicit $A-B$ interactions. Equation (\[eq:2\]) is a strict superposition of the interatomic potentials for the pure elements with the only caveat that the electron density of the $A$ ($B$) species in the embedding function of a $B$ ($A$) particle is scaled with a parameter $\mu_{B(A)}$ in order to account for the different local electron densities. Thereby, two EAM models can be calibrated with respect to each other. More elaborate schemes are possible, e.g. one can treat $\mu_A$ and $\mu_B$ as free parameters. Here for the sake of simplicity, we restrict ourselves to normalised electron densities. Starting from a parametrisation for $E_0$, we now devise a practical scheme for systematically improving the interaction model. Let us denote the true many-body energy functional of the binary system by $E_t$. Our goal is to construct an interatomic potential model for the difference energy functional $\Delta E^{(0)} = E_t-E_0$. We begin with the two dilute limits. Consider a lattice of $A$ particles and substitute the atom residing in the $i$-th site with a $B$ atom. Let us now assume that $\Delta E^{(0)}$ for this configuration can be satisfactorily represented by a pair potential between the $A-B$ pairs. 
In this limit $\Delta E^{(0)}$ can thus be written as $$\label{eq:4} \Delta E^{(0)}({\text{$A$-rich}}) = \sum_{j \in A} V^{A}_{AB}(r_{ij}).$$ (There is only one sum in this expression since we are dealing with a configuration that contains only one $B$ atom). A similar expression is obtained for the $B$-rich limit $$\label{eq:5} \Delta E^{(0)}({\text{$B$-rich}}) = \sum_{j \in B} V^{B}_{AB}(r_{ij}).$$ Since we do not require the pair potential models for the two dilute limits to coincide with each other, an interpolation is needed which preserves the energetics of the impurities. The main objective of the present paper is to devise such an interpolation scheme. The simplest ansatz for such an expression is $$\label{eq:dE} \Delta E^{(0)} = \sum_{i \in A}\sum_{j\in B} x^A_{ij} V^A_{AB}(r_{ij}) + \sum_{i \in A}\sum_{j\in B} x^B_{ij} V^B_{AB}(r_{ij})$$ Above, $x^{\mathcal{S}}_{ij}$ denotes the concentration of species $\mathcal{S}$ in the neighbourhood of an $A-B$ pair residing on the $i$ and $j$ sites. Ideally, we require this quantity to be easy to calculate and to be insensitive to the local density and topology, in other words it should separate chemistry from structure. In any case, $x^{\mathcal{S}}_{ij}$ has to represent an average over the neighbourhood of both centres $i$ and $j$. Before we derive the expression for $x^{\mathcal{S}}_{ij}$, it is instructive to discuss the corresponding one-centre quantity $x^{\mathcal{S}}_i$. It describes the local concentration of species $\mathcal{S}$ around atom $i$. A simple way to determine $x^{\mathcal{S}}_i$ is to choose a local density function $\sigma(r_{ij})$ and then to evaluate the following expression $$x^{\mathcal{S}}_i = \frac{\sum_{(j\in S,j\neq i)}\sigma(r_{ij})}{\sum_{j\neq i}\sigma(r_{ij})} = \frac{\overline{\sigma}^{\mathcal{S}}_i}{\overline{\sigma}_i}, \label{eq:conc_i}$$ which is indeed rather insensitive to the local geometry. This is most obvious in the dilute limits. 
The local concentration $x^{\mathcal{S}}_i$ at the site of an impurity atom $i$ is either 0 (if $\mathcal{S}$ is the minority species) or 1 (if $\mathcal{S}$ is the majority species) independent of the local structure. This is, however, strictly true only for the impurity atom. For the other atoms in the system $x^{\mathcal{S}}_j$ varies between 0 and 1 depending on the distance to the impurity atom. Also for these particles, atomic displacements may change the value of $x^{\mathcal{S}}_j$. A total decoupling of chemistry and structure is therefore not possible. The optimal choice for $\sigma(r_{ij})$ is the function that minimises the effect of local geometry on $x^{\mathcal{S}}_i$. Although it is possible to choose different $\sigma$-functions for the different species, we do not expect the quality of the final potential to depend crucially on the choice of $\sigma(r_{ij})$. In fact, we expect the best choice for $\sigma(r_{ij})$ to be the simplest one. ![ Schematic illustration of the connection between $x_i^{\mathcal{S}}$ and two-centre concentrations $x_{ij}^{\mathcal{S}}$ and their computation in a binary alloy according to Eqs. (\[eq:conc\_i\]) and (\[eq:conc\_ij\]). Here, the cutoff function $\sigma(r)$ which appears in [Eq. (\[eq:conc\_i\])]{} is assumed to be a step function which is 1 for $r<r_c$ and zero otherwise. []{data-label="fig:schematic_2comp"}](fig1.eps) It is now straightforward to extend [Eq. (\[eq:conc\_i\])]{} to define the concentration $x^{\mathcal{S}}_{ij}$ in the neighbourhood of a pair of atoms residing on sites $i$ and $j$. 
To this end, we first define a quantity $x^{\mathcal{S}}_{i(j)}$ to represent the concentration of the species $\mathcal{S}$ in the neighbourhood of atom $i$ excluding atom $j$: $$\begin{aligned} \label{eq:conc_ij} x^{\mathcal{S}}_{i(j)} &=& \frac{\sum_{(k\in \mathcal{S},k\neq i,k\neq j)}\sigma(r_{ik})}{ \sum_{(k\neq i,k\neq j)}\sigma(r_{ik})} = \frac{\overline{\sigma}^{\mathcal{S}}_i - \delta(\mathcal{S},t_j) \sigma(r_{ij})}{ \overline{\sigma}_i - \sigma(r_{ij})} \\ &=& x^{\mathcal{S}}_i \left\{ \begin{array}{ll} \displaystyle \frac{1-\sigma(r_{ij})/\overline{\sigma}^{\mathcal{S}}_i}{ 1-\sigma(r_{ij})/\overline{\sigma}_i} & ~ t_j = \mathcal{S} \\ \displaystyle \frac{1}{1-\sigma(r_{ij})/\overline{\sigma}_i} & ~ t_j \neq \mathcal{S} \end{array} \right. , \nonumber\end{aligned}$$ where $t_j$ denotes the type of atom $j$, and $\delta(s,t)$ is 1 if $s=t$ and zero otherwise. Using this quantity, the two-centre concentration $x^{\mathcal{S}}_{ij}$ can be defined as follows $$x^{\mathcal{S}}_{ij} = \frac{1}{2}\left(x^{\mathcal{S}}_{i(j)} + x^{\mathcal{S}}_{j(i)} \right). \label{eq:2cntr}$$ Hence, the two-centre concentration of the species $\mathcal{S}$ about the atom pair $(i,j)$ is the average concentration in the two separate neighbourhoods of sites $i$ and $j$ excluding both of these atoms. This definition, which is illustrated in [Fig. \[fig:schematic\_2comp\]]{}, has the important advantage that the interpolation scheme introduced in [Eq. (\[eq:dE\])]{} does not modify the interactions in the dilute limits, since $x^{\mathcal{S}}_{ij}$ is strictly 0 or 1 in the two limits irrespective of the local structure. Furthermore, it is straightforward to generalise [Eq. (\[eq:2cntr\])]{} to multi-centre concentrations. For example, in the next section, we will explicitly discuss the construction of interatomic potentials using three-centre concentrations. Let us now revisit [Eq. (\[eq:dE\])]{}. 
As mentioned earlier, this is the simplest ansatz for $\Delta E^{(0)}$ that can reproduce the energetics of both dilute limits. A more general expression is $$\label{eq:dE1} \Delta E^{(0)} = \sum_{i \in A}\sum_{j\in B} h_{AB}^A(x^A_{ij})~ V^A_{AB}(r_{ij}) + \sum_{i \in A}\sum_{j\in B} h_{AB}^B(x^B_{ij})~V^B_{AB}(r_{ij}) ,$$ where $h_{AB}^A(x)$ and $h_{AB}^B(x)$ are nonlinear functions with the property $h_{AB}^A(0) = h_{AB}^B(0) = 0$ and $h_{AB}^A(1) = h_{AB}^B(1) = 1$. By fitting these functions to the energetics of the concentrated alloys, the quality of the interatomic potential model for the binary can be improved drastically. In principle, one can stop here and have an interatomic potential model, $E_0 + \Delta E^{(0)}$, that can reproduce the energetics of the dilute limits as well as the solid solution of the binary. It is, however, also possible to further refine the above model. For this purpose, let us again define a difference energy functional $$\Delta E^{(1)} = E_t - E_0 - \Delta E^{(0)},$$ and construct an interatomic potential model for the energy functional $\Delta E^{(1)}$. Consider a lattice of $A$ particles and substitute two atoms, say $i$ and $j$, with $B$ particles. Assume that $\Delta E^{(1)}$ for this configuration can be well represented by a potential model describing the interaction of the $B$-$B$ pair with a lattice of $A$ particles. In this limit we can express $\Delta E^{(1)}$ as $$\label{eq:Arich1} \Delta E^{(1)}({\text{$A$-rich}}) = V_{BB}^A(r_{ij})+\sum_k V_{BBA}^A(r_{ijk}),$$ where $r_{ijk}$ is shorthand for the three sets of positions of the $i$, $j$ and $k$ atoms, i.e. $\{{\boldsymbol{r}}_{i},{\boldsymbol{r}}_{j},{\boldsymbol{r}}_{k}\}$. 
In the same way we obtain for the $B$-rich limit $$\label{eq:Brich1} \Delta E^{(1)}({\text{$B$-rich}}) = V_{AA}^B(r_{ij})+\sum_k V_{AAB}^B(r_{ijk}).$$ Note that $\Delta E^{(1)}$ has both a two-body and a three-body component and thus can be decomposed as follows $$\label{eq:15} \Delta E^{(1)} = \Delta E^{(1)}_{\text {pair}} + \Delta E^{(1)}_{\text {triplet}} .$$ In the next section we discuss how to incorporate the three-body contribution into the interatomic potential model. For now, we only consider $\Delta E^{(1)}_{\text {pair}}$. Following the same line of arguments that led to [Eq. (\[eq:dE1\])]{}, we obtain the expression $$\Delta E^{(1)}_{\text {pair}} = \frac{1}{2}\sum_{i \in B}\sum_{j\in B} h_{BB}^A(x^A_{ij})~ V^A_{BB}(r_{ij}) + \frac{1}{2}\sum_{i \in A}\sum_{j\in A} h_{AA}^B(x^B_{ij})~V^B_{AA}(r_{ij}) ,$$ where the factors of $\frac{1}{2}$ compensate for the double counting of each same-species pair, so that the contributions of the pair terms in the two limits given by Eqs. (\[eq:Arich1\]) and (\[eq:Brich1\]) are reproduced. The two non-linear functions have to fulfil the conditions $$\begin{aligned} h_{AA}^B(0) = h_{BB}^A(0) = 0 \\ h_{AA}^B(1) = h_{BB}^A(1) = 1.\end{aligned}$$ By fitting the functions $h_{AA}^B$ and $h_{BB}^A$ in the intermediate concentration range to the energetics of the concentrated alloy, one can obtain a further improvement for the interaction model for the binary system. Beyond Pair Potentials {#sect:beyond_pair_potentials} ---------------------- In this section, we show that the formalism introduced in the previous section can be extended to multi-body interaction potentials, which enables us to capture the energetics of a wider range of phases including ordered compounds. In the previous section, we outlined a scheme to construct composition-dependent pair potentials for the potential energy landscape $E_0 + \Delta E^{(0)} + \Delta E^{(1)}$. It was also observed that a proper formulation of $\Delta E^{(1)}$ requires incorporation of explicit three-body terms. 
In this section we describe how to construct such composition-dependent multi-body potentials. ![ Schematic illustration of the computation of three-centre concentrations in a binary alloy using Eqs. (\[eq:conc\_i\]) and (\[eq:conc\_ijk\]). Here, the cutoff function $\sigma(r)$ which appears in [Eq. (\[eq:conc\_i\])]{} is assumed to be a step function which is 1 for $r<r_c$ and zero otherwise. []{data-label="fig:schematic_2comp_trip"}](fig2.eps) First, we require an interpolation scheme to connect the two limits of the three-body term $\Delta E^{(1)}_{\text {triplet}}$ in [Eq. (\[eq:15\])]{}. The simplest ansatz for such an expression is $$\label{eq:triplet} \Delta E^{(1)}_{\text {triplet}} = \frac{1}{2}\sum_{i \in B}\sum_{j\in B}\sum_{k\in A} x^A_{ijk} V^A_{BBA}(r_{ijk}) + \frac{1}{2}\sum_{i \in A}\sum_{j\in A}\sum_{k\in B} x^B_{ijk} V^B_{AAB}(r_{ijk}) ,$$ where $x^{\mathcal {S}}_{ijk}$ denotes the concentration of species $\mathcal {S}$ in the neighbourhood of the triplet residing on sites $i$, $j$ and $k$, and the factors of $\frac{1}{2}$ again compensate for the double counting of the same-species pair. In analogy with the derivation of the two-centre concentration [Eq. (\[eq:2cntr\])]{}, we start from the one-centre concentration $x^{\mathcal {S}}_i$ and define the intermediate quantity $x^{\mathcal{S}}_{i(jk)}$ that represents the concentration centred around atom $i$ excluding atoms $j$ and $k$ $$\begin{aligned} x^{\mathcal{S}}_{i(jk)} &=& \frac{\sum_{(l\in \mathcal{S},l\neq i,l\neq j,l\neq k)}\sigma(r_{il})} {\sum_{(l\neq i,l\neq j,l\neq k)}\sigma(r_{il}) } = \frac{\overline{\sigma}^{\mathcal{S}}_i - \delta(\mathcal{S},t_j) \sigma(r_{ij}) - \delta(\mathcal{S},t_k) \sigma(r_{ik})}{ \overline{\sigma}_i - \sigma(r_{ij})-\sigma(r_{ik})} \\ &=& x^{\mathcal{S}}_i~\frac{1 -\left[ \delta(\mathcal{S},t_j) \sigma(r_{ij}) + \delta(\mathcal{S},t_k) \sigma(r_{ik}) \right] /\overline{\sigma}^{\mathcal{S}}_i}{ 1 - \left[\sigma(r_{ij}) + \sigma(r_{ik})\right]/\overline{\sigma}_i}, \end{aligned}$$ and now following the same line of arguments leading to [Eq. 
(\[eq:2cntr\])]{} we define the three-centre concentration $x^{\mathcal{S}}_{ijk}$ as follows $$x^{\mathcal{S}}_{ijk} = \frac{1}{3}\left(x^{\mathcal{S}}_{i(jk)} + x^{\mathcal{S}}_{j(ik)} + x^{\mathcal{S}}_{k(ij)}\right). \label{eq:conc_ijk}$$ A graphical illustration of the computation of this quantity is given in [Fig. \[fig:schematic\_2comp\_trip\]]{}. The three-centre concentration of the species $\mathcal{S}$ about the triplet $(i,j,k)$ is the average concentration (excluding the triplet) in three separate neighbourhoods, each of which is centred at one of the atoms in the triplet. Thanks to this definition, $x^{\mathcal{S}}_{ijk}$ is strictly 0 or 1 in the two dilute limits described in Eqs. (\[eq:Arich1\]) and (\[eq:Brich1\]), irrespective of the local structure. Hence, the interpolation scheme in [Eq. (\[eq:triplet\])]{} does not alter the interactions in Eqs. (\[eq:Arich1\]) and (\[eq:Brich1\]). Again, as in [Eq. (\[eq:dE1\])]{}, we can improve the simple interpolation scheme in [Eq. (\[eq:triplet\])]{} $$\label{eq:triplet1} \Delta E^{(1)}_{\text {triplet}} = \frac{1}{2}\sum_{i \in B}\sum_{j\in B}\sum_{k\in A} h_{BBA}^A(x^A_{ijk}) V^A_{BBA}(r_{ijk}) + \frac{1}{2}\sum_{i \in A}\sum_{j\in A}\sum_{k\in B} h_{AAB}^B(x^B_{ijk}) V^B_{AAB}(r_{ijk}) ,$$ where $h_{BBA}^A$ and $h_{AAB}^B$ are non-linear functions that can be fitted to the energetics of the concentrated alloys with the boundary conditions $$\begin{aligned} h_{BBA}^A(0) = h_{AAB}^B(0) = 0 \quad\text{and}\quad h_{BBA}^A(1) = h_{AAB}^B(1) = 1.\end{aligned}$$ Following this scheme, composition-dependent cluster interactions of arbitrary order can be included in the interatomic potential model. To summarise, to incorporate cluster interactions of order $n$, two cluster potentials are constructed, one for the configuration where the cluster is embedded in the $A$ lattice and one for the configuration where the cluster is embedded in the $B$ lattice. Subsequently these limits are interpolated using the $n$-centre concentrations. 
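The same exclusion construction extends to clusters of any order. A minimal sketch covering [Eq. (\[eq:2cntr\])]{}, [Eq. (\[eq:conc\_ijk\])]{} and the general $n$-centre case (step cutoff and toy geometry are invented for illustration):

```python
import math

def sigma_step(r, r_cut=1.5):
    return 1.0 if r < r_cut else 0.0

def n_centre_concentration(cluster, positions, types, species):
    """Average over the cluster atoms of the local concentration of `species`,
    excluding all cluster atoms -- Eq. (2cntr) for pairs, Eq. (conc_ijk) for
    triplets, and the obvious generalisation beyond."""
    acc = 0.0
    for i in cluster:
        total = partial = 0.0
        for k, (pos, t) in enumerate(zip(positions, types)):
            if k in cluster:
                continue
            w = sigma_step(math.dist(positions[i], pos))
            total += w
            if t == species:
                partial += w
        acc += partial / total
    return acc / len(cluster)

# Dilute limit of a binary: a B-B pair embedded in an A matrix.  The
# two-centre concentration of A about the pair is exactly 1, so the
# interpolation scheme leaves the dilute-limit potentials untouched.
pos = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (0.5, 1.0), (1.5, 1.0)]
kinds = ["A", "B", "B", "A", "A"]
print(n_centre_concentration([1, 2], pos, kinds, "A"))  # 1.0
```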
In the next section, we review this strategy in detail to show that a systematic series expansion in composition-dependent cluster interactions is possible for general multicomponent systems. Multicomponent Systems ====================== Series Expansion in Embedded Cluster Interactions {#sect:series} ------------------------------------------------- In the first sections of this paper, we have shown how to practically construct interatomic potentials for binary systems. First, mixed interatomic pair and triplet potentials are generated for the dilute limits which are subsequently extended to arbitrary concentrations by fitting interpolation functions that depend on the local concentration about the atomic pairs and triplets. The choice of specific potentials and dilute configurations was mainly driven by physical intuition. In this section we show that this procedure can be formalised and generalised to arbitrarily complex systems with more than two components. ![ Schematic illustration of $\mathcal{S}$-embedded coloured clusters of orders 2, 3, and 4 in a ternary alloy. The shaded region indicates the cutoff range around the central atom marked by an asterisk. []{data-label="fig:schematic_clusters1"}](fig3.eps) Consider an $n$-component mixture of $N$ particles that are distinguishable only through their species. Assign a unique colour to each of the species: $\{\mathcal{C}_1,\ldots,\mathcal{C}_n\}$. We define a colour cluster of order $m$ to be a set of $m$ particles with a specific colour combination. We use the occupation number formalism to identify colour schemes, i.e. $\left(\mathcal{C}_1^{k_1},\ldots,\mathcal{C}_n^{k_n}\right)$, where $k_i$ is the number of particles in the cluster with colour $\mathcal{C}_i$, and $\sum_i k_i = m$. For example, a cluster of order 3 consisting of one particle with the colour $\mathcal{C}_1$ and two particles with the colour $\mathcal{C}_3$, is denoted by $\left(\mathcal{C}_1,\mathcal{C}_3^2\right)$. 
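The occupation-number bookkeeping for colour combinations corresponds to multisets of colours; for instance, the distinguishable colour clusters of a given order can be enumerated directly with a standard library routine (a small side calculation, not part of the formalism itself):

```python
from itertools import combinations_with_replacement

def colour_schemes(colours, m):
    """All distinguishable colour clusters of order m, as occupation-number
    tuples (k_1, ..., k_n) with sum k_i = m."""
    schemes = []
    for combo in combinations_with_replacement(colours, m):
        schemes.append(tuple(combo.count(c) for c in colours))
    return schemes

# Ternary system, clusters of order 3: e.g. (1, 0, 2) is the cluster
# (C1, C3^2) from the text.
ternary = ["C1", "C2", "C3"]
print(len(colour_schemes(ternary, 3)))  # 10 distinguishable colour combinations
```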
Furthermore, we define an $\mathcal{S}$-embedded colour cluster of order $m$ to be a set of $m$ coloured particles embedded in a pure matrix of species $\mathcal{S}$. Three examples of such $\mathcal{S}$-embedded coloured clusters are shown in [Fig. \[fig:schematic\_clusters1\]]{}. The key idea is that the potential energy landscape of an alloy can be expanded in the basis set of elementary interaction potentials each of which is constructed to reproduce the energetics of a particular embedded colour cluster. The order of an interaction element in the series is determined by the order of the corresponding colour cluster. By progressively including higher order colour cluster interactions, one can systematically increase the accuracy of the model. ![ Schematic illustration of the connection between $x_i^{\mathcal{S}}$ and two-centre concentrations $x_{ij}^{\mathcal{S}}$ and their computation in a ternary alloy according to Eqs. (\[eq:conc\_i\]) and (\[eq:conc\_ij\]). Here, the cutoff function $\sigma(r)$ which appears in [Eq. (\[eq:conc\_i\])]{} is assumed to be a step function which is 1 for $r<r_c$ and zero otherwise. []{data-label="fig:schematic_3comp"}](fig4.eps) To recapitulate, we expand the potential energy landscape of multicomponent systems in the basis set of colour cluster interatomic potential functions $V^{\mathcal{S}}_{ \mathcal{C}_1^{k_1}\ldots\mathcal{C}_n^{k_n} } (\{ {\boldsymbol{r}} \} )$, where $\{ {\boldsymbol{r}} \}$ is the real-space configuration of the respective cluster. The expansion coefficient for each basis function is the interpolation function $h^{\mathcal{S}}_{ \mathcal{C}_1^{k_1}\ldots\mathcal{C}_n^{k_n}} (x^{\mathcal{S}})$, where $x^{\mathcal{S}}$ is the local concentration of the species $\mathcal{S}$ in the neighbourhood of the cluster. One of the innovations in this work is a simple and computationally expeditious way to determine $x^{\mathcal{S}}$ which is illustrated for the case of a ternary alloy in [Fig. 
\[fig:schematic\_3comp\]]{}. Formally, the total energy expression for an alloy of $n$ components and $N$ particles can be written as $$E = E_0 + \sum_m \underbrace{\sum_{k_1}\ldots\sum_{k_n}}_{\sum_{i=1}^n k_i=m}\sum_{\mathcal{S}} h^{\mathcal{S}}_{ \mathcal{C}_1^{k_1}\ldots\mathcal{C}_n^{k_n}} (x^{\mathcal{S}}) V^{\mathcal{S}}_{ \mathcal{C}_1^{k_1}\ldots\mathcal{C}_n^{k_n} } (\{ {\boldsymbol{r}} \} ),$$ where the first sum is over the order of the cluster potentials and the subsequent sums are over all distinguishable colour combinations of $m$-size clusters. Each term in the above expansion can be evaluated as follows $$\label{eq:27} h^{\mathcal{S}}_{ \mathcal{C}_1^{k_1}\ldots\mathcal{C}_n^{k_n}} (x^{\mathcal{S}}) V^{\mathcal{S}}_{ \mathcal{C}_1^{k_1}\ldots\mathcal{C}_n^{k_n} } (\{ {\boldsymbol{r}} \} ) = \underbrace{\sum_{i_1=1}^N\ldots\sum_{i_m=1}^N }_{ \{i_1\ldots i_m\}\in \{ \mathcal{C}_1^{k_1}\ldots\mathcal{C}_n^{k_n} \} } h^{\mathcal{S}}_{ \mathcal{C}_1^{k_1}\ldots\mathcal{C}_n^{k_n}} (x^{\mathcal{S}}_{i_1\ldots i_m}) V^{\mathcal{S}}_{ \mathcal{C}_1^{k_1}\ldots\mathcal{C}_n^{k_n} } (r_{i_1\ldots i_m}).$$ The sums in [Eq. (\[eq:27\])]{} are over all possible $m$-size atom clusters $\{i_1\ldots i_m\}$ in the system with the colour scheme $\left(\mathcal{C}_1^{k_1},\ldots,\mathcal{C}_n^{k_n}\right)$. The main advantage of this scheme is that the basis functions can be constructed sequentially and independently of the interpolation functions. The lower order terms can be constructed with no knowledge of the higher order terms and therefore need not be reparametrised when higher order cluster potentials are constructed. The higher order terms in the expansion become progressively smaller. Furthermore, addition of new terms in the series expansion is not likely to introduce unphysical behaviour, a problem that plagues most fitting schemes for interatomic potentials. 
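The interpolation functions serving as expansion coefficients are constrained only at the endpoints. One convenient, purely hypothetical parametrisation adds a correction proportional to $x(1-x)$, which preserves $h(0)=0$ and $h(1)=1$ for any values of the fit coefficients:

```python
def h_interp(x, a=0.0, b=0.0):
    """Endpoint-preserving interpolation function: h(0) = 0 and h(1) = 1 hold
    for any choice of the fit coefficients a and b (illustrative ansatz only,
    not a parametrisation from the text)."""
    return x + x * (1.0 - x) * (a + b * x)

# The endpoint conditions hold regardless of the fitted coefficients:
print(h_interp(0.0, a=1.3, b=-0.7), h_interp(1.0, a=1.3, b=-0.7))  # 0.0 1.0
```

The coefficients $a$ and $b$ would be fitted to the energetics of the concentrated alloys; more flexible corrections of the same $x(1-x)\,p(x)$ form can be used without disturbing the dilute limits.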
Explicit expressions for ternary alloys {#sect:ternary} --------------------------------------- In this section we illustrate the formal discussion in the previous section by constructing an expansion in embedded pair and triplet potentials for a ternary system. For simplicity we assume that the pure elements are described by EAM models. The extension to a larger number of components and higher order cluster potentials will be obvious. We consider a system of three components $A$, $B$ and $C$, and assume that three composition-dependent pair potentials for the binary systems $A-B$, $A-C$ and $B-C$ have already been constructed. Explicitly, the $A-B$ interaction is given by the following expression $$\begin{aligned} E^{pair}_{\text {A-B}} &=& \sum_{i\in A} U_A \left(\overline{\rho}_i^{A}+\mu_{A(B)}~\overline{\rho}_i^{B}\right) + \frac{1}{2} \sum_{i\in A}\sum_{j\in A}\left( h_{AA}^A(x^A_{ij}) \phi_A(r_{ij}) + h_{AA}^B(x^B_{ij}) V_{AA}^B(r_{ij}) \right) \nonumber\\ &+& \sum_{i\in B} U_B\left(\overline{\rho}_i^{B}+\mu_{B(A)}~\overline{\rho}_i^{A}\right) + \frac{1}{2} \sum_{i\in B}\sum_{j\in B}\left(h_{BB}^B(x^B_{ij}) \phi_B(r_{ij}) + h_{BB}^A(x^A_{ij}) V_{BB}^A(r_{ij}) \right) \nonumber\\ &+& \sum_{i\in A}\sum_{j\in B}\left( h_{AB}^A(x^A_{ij}) V_{AB}^A(r_{ij}) + h_{AB}^B(x^B_{ij}) V_{AB}^B(r_{ij}) \right).\end{aligned}$$ By now the notation above should be familiar. The interaction potentials for the two other pairs can be written in an analogous fashion. 
Now, we can spell out the expansion in embedded pair potentials for the ternary $A-B-C$ $$\begin{aligned} \label{eq:ternary} E^{pair}_{A-B-C} &=& \sum_{i\in A} U_A\left(\overline{\rho}_i^{A}+\mu_{A(B)}~\overline{\rho}_i^{B} + \mu_{A(C)}~\overline{\rho}_i^{C}\right) \\ &+& \sum_{i\in B} U_B\left(\overline{\rho}_i^{B}+\mu_{B(A)}~\overline{\rho}_i^{A} + \mu_{B(C)}~\overline{\rho}_i^{C}\right) \nonumber\\ &+& \sum_{i\in C} U_C\left(\overline{\rho}_i^{C}+\mu_{C(A)}~\overline{\rho}_i^{A}+ \mu_{C(B)}~\overline{\rho}_i^{B}\right) \nonumber \\ &+&\frac{1}{2} \sum_{i\in A}\sum_{j\in A}\left[ h_{AA}^A(x^A_{ij}) \phi_A(r_{ij}) + h_{AA}^B(x^B_{ij}) V_{AA}^B(r_{ij}) + h_{AA}^C(x^C_{ij}) V_{AA}^C(r_{ij})\right] \nonumber\\ &+&\frac{1}{2} \sum_{i\in B}\sum_{j\in B}\left[ h_{BB}^B(x^B_{ij}) \phi_B(r_{ij}) + h_{BB}^A(x^A_{ij}) V_{BB}^A(r_{ij}) + h_{BB}^C(x^C_{ij}) V_{BB}^C(r_{ij})\right] \nonumber\\ &+&\frac{1}{2} \sum_{i\in C}\sum_{j\in C}\left[ h_{CC}^C(x^C_{ij}) \phi_C(r_{ij}) + h_{CC}^A(x^A_{ij}) V_{CC}^A(r_{ij}) + h_{CC}^B(x^B_{ij}) V_{CC}^B(r_{ij})\right] \nonumber\\ &+& \sum_{i\in A}\sum_{j\in B}\left[ h_{AB}^A(x^A_{ij}) V_{AB}^A(r_{ij}) + h_{AB}^B(x^B_{ij}) V_{AB}^B(r_{ij}) + h_{AB}^C(x^C_{ij}) V_{AB}^C(r_{ij}) \right] \nonumber\\ &+& \sum_{i\in A}\sum_{j\in C}\left[ h_{AC}^A(x^A_{ij}) V_{AC}^A(r_{ij}) + h_{AC}^B(x^B_{ij}) V_{AC}^B(r_{ij}) + h_{AC}^C(x^C_{ij}) V_{AC}^C(r_{ij}) \right] \nonumber\\ &+& \sum_{i\in B}\sum_{j\in C}\left[ h_{BC}^A(x^A_{ij}) V_{BC}^A(r_{ij}) + h_{BC}^B(x^B_{ij}) V_{BC}^B(r_{ij}) + h_{BC}^C(x^C_{ij}) V_{BC}^C(r_{ij}) \right] \nonumber.\end{aligned}$$ The only unknowns in the above equation are $V_{AB}^C(r_{ij})$, $V_{AC}^B(r_{ij})$, $V_{BC}^A(r_{ij})$, $h_{AB}^C(x^{C}_{ij})$, $h_{AC}^B(x^{B}_{ij})$ and $h_{BC}^A(x^{A}_{ij})$. The potentials $V_{AB}^C(r_{ij})$, $V_{AC}^B(r_{ij})$ and $V_{BC}^A(r_{ij})$ describe the interaction between pairs of unlike species embedded in pure lattices of the third species of the ternary. 
In analogy with the previous section, it is reasonable to expect that we can construct these potentials separately in their respective dilute limits and subsequently fit the interpolation functions $h_{AB}^C(x^{C}_{ij})$, $h_{AC}^B(x^{B}_{ij})$, $h_{BC}^A(x^{A}_{ij})$ to the energetics of the concentrated ternary alloys. However, when the number of species increases certain complications can arise that are not present in the binaries. This is well illustrated in the situation above. We now show that it is in fact not possible to separately construct the three pair potentials $V_{AB}^C(r_{ij})$, $V_{AC}^B(r_{ij})$ and $V_{BC}^A(r_{ij})$ described above. ![ Schematic illustration of two and three-centre concentrations for a ternary alloy in the dilute limit. Note that the two-centre concentrations $x_{ij}$ in the [*dilute limit*]{} in a binary alloy are either one or zero. In contrast, in the case of a ternary alloy the two-centre concentrations in the same limit can be non-zero. The three-centre concentrations, however, are again either one or zero. []{data-label="fig:schematic_colored_cluster"}](fig5.eps) To this end, consider a pure lattice of $N$ particles of e.g., $C$ species. Substitute two nearest neighbour particles in this lattice with an $A$ particle and a $B$ particle respectively. The ternary energy [Eq. 
(\[eq:ternary\])]{} for a $C$-rich configuration containing one $A-B$ pair on the sites $i$ and $j$ respectively becomes $$\begin{aligned} \label{eq:3dilute} E^{pair}_{A-B-C} ({\text{$C$-rich}}) &=& \tilde{E}_0\nonumber\\ &+&\frac{1}{2} \sum_{k\in C}\sum_{l\in C}\left( h_{CC}^C(x^C_{kl}) \phi_C(r_{kl}) + h_{CC}^A(x^A_{kl}) V_{CC}^A(r_{kl}) + h_{CC}^B(x^B_{kl}) V_{CC}^B(r_{kl})\right) \nonumber\\ &+& V_{AB}^C(r_{ij}) + \sum_{k\in C}\left( h_{AC}^B(x^B_{ik}) V_{AC}^B(r_{ik}) + h_{AC}^C(x^C_{ik}) V_{AC}^C(r_{ik}) \right) \nonumber\\ &+& \sum_{k\in C} \left( h_{BC}^A(x^A_{jk}) V_{BC}^A(r_{jk}) + h_{BC}^C(x^C_{jk}) V_{BC}^C(r_{jk}) \right), \nonumber\end{aligned}$$ where for the sake of clarity we have replaced the three embedding terms in [Eq. (\[eq:ternary\])]{} by $\tilde{E}_0$. Observe that all three unknown potentials $V_{AB}^C(r_{ij})$, $V_{AC}^B(r_{ik})$ and $V_{BC}^A(r_{jk})$ as well as their corresponding interpolation functions appear in [Eq. (\[eq:3dilute\])]{}. This is in contrast to the binary case, e.g. Eqs. (\[eq:4\]), (\[eq:5\]), (\[eq:Arich1\]) and (\[eq:Brich1\]), where the potentials for the two dilute limits can be constructed independently of each other. This is because the two-centre concentrations in the [*dilute limit*]{} in a [*binary*]{} alloy are either one or zero. In contrast, in the case of a [*ternary*]{} alloy the two-centre concentrations in the same limit can be non-zero (see [Fig. \[fig:schematic\_colored\_cluster\]]{}). A straightforward solution to the above problem is to fit all three pair potentials simultaneously. A closer look at [Eq. (\[eq:3dilute\])]{}, however, suggests a simpler solution. Let us examine the interpolation functions $h_{AC}^B(x^B_{ik})$ and $h_{BC}^A(x^A_{jk})$. Note that since we are dealing here with an $A-B$ cluster in a $C$-rich system, $x^B_{ik}$ and $x^A_{jk}$ are close to zero. Remembering the boundary conditions on the interpolation functions, i.e. 
$h(1) = 1$ and $h(0) = 0$, we conclude that the contributions of the $V_{AC}^B(r_{ij})$ and $V_{BC}^A(r_{ij})$ potentials to the energetics of an $A-B$ pair embedded in a $C$ lattice are small. In fact, we can diminish the contribution of these potentials to [Eq. (\[eq:3dilute\])]{} by enforcing the interpolation functions to be 0 for $x < x_{th}$, where $x_{th}$ is the largest concentration of $B$ or $A$ particles found about any pair in the system. In this way, one can generally separate the construction of cluster potentials when they overlap in the dilute configurations. The problem of potential overlap in the dilute limit discussed above should not be neglected. On the other hand, it is quite benign and, as shown above, can be handled easily. Furthermore, more often than not, even for complex clusters and many components, there is no overlap. We illustrate this point by considering the simplest expansion in triplet cluster potentials for the ternary above: $$\begin{aligned} E^{\text{triplet}}_{A-B-C} &=& \sum_{i\in A}\sum_{j\in B}\sum_{k\in C} \left[ h^A_{ABC}(x_{ijk}^A)~V^A_{ABC}(r_{ijk}) \right. \\ &+& \left. h^B_{ABC}(x_{ijk}^B)~V^B_{ABC}(r_{ijk}) + h^C_{ABC}(x_{ijk}^C)~V^C_{ABC}(r_{ijk}) \right]. \nonumber\end{aligned}$$ Now consider again the same $C$ lattice as above, where an $A-B$ pair has been embedded at the sites $i$ and $j$. The triplet energy becomes $$E^{\text{triplet}}_{A-B-C}(\text{$C$-rich}) = \sum_{k\in C} V^C_{ABC}(r_{ijk}).$$ Since we have only contributions from $V^C_{ABC}(r_{ijk})$ for these configurations, we can construct these potentials separately from each other and independently of the interpolation functions. This is because in the dilute limit the three-centre concentrations are again either one or zero (see [Fig. \[fig:schematic\_colored\_cluster\]]{}). 
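The overlap issue can be checked numerically. For a toy $C$-rich cluster containing one $A$-$B$ pair (step cutoff and geometry are invented), the two-centre concentrations come out fractional while the three-centre concentrations are exactly 0 or 1, in line with [Fig. \[fig:schematic\_colored\_cluster\]]{}:

```python
import math

def sigma_step(r, r_cut=1.5):
    return 1.0 if r < r_cut else 0.0

def cluster_concentration(cluster, positions, types, species):
    """n-centre concentration of `species` about `cluster`, excluding it."""
    acc = 0.0
    for i in cluster:
        total = partial = 0.0
        for k, (pos, t) in enumerate(zip(positions, types)):
            if k in cluster:
                continue
            w = sigma_step(math.dist(positions[i], pos))
            total += w
            if t == species:
                partial += w
        acc += partial / total
    return acc / len(cluster)

# C-rich cluster with one A-B pair; all atoms mutually within the cutoff.
pos = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (0.0, 0.5), (0.5, 0.5), (1.0, 0.5)]
kinds = ["A", "B", "C", "C", "C", "C"]

x2 = cluster_concentration([0, 2], pos, kinds, "B")     # A-C pair still "sees" the B atom
x3 = cluster_concentration([0, 1, 2], pos, kinds, "C")  # A-B-C triplet
print(x2, x3)  # 0.25 1.0
```

The A-C pair carries a non-zero two-centre concentration of $B$, so its potential cannot be constructed in isolation, whereas the triplet concentration is exactly 1.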
Implementation of Forces in Molecular dynamics {#sect:forces} ============================================== Next to accuracy, the most important quality of an interatomic potential model is its computational efficiency when implemented into atomistic simulation codes. Due to the unconventional form of the interatomic potentials described in this work, it is important to discuss the efficient implementation of forces for molecular-dynamics simulations. We will see below that the straightforward derivation of the forces for composition-dependent pair potentials leads to explicit 3-body forces. In fact in general, composition-dependent $N$-body potentials lead to explicit $N+1$-body forces. Below we present an algorithm that considerably speeds up the calculation of forces for composition-dependent $N$-body potentials, making them comparable in efficiency to the corresponding $N$-body regular potentials. In the following, for the sake of clarity we limit our discussion to pair potentials. The extension to cluster potentials of higher order is straightforward. For reference, let us first consider a conventional mixed pair potential energy expression for a binary system, $$E_{\text{pp}} = \sum_{i\in A}\sum_{j\in B} V(r_{ij}).$$ Within this model the force on a particle $k$ of type $A$ is calculated as follows $$\frac{\partial E_{\text{pp}}}{\partial {\boldsymbol{r}}_k^A} = \sum_{j\in B} V'(r_{kj}) \frac{{\boldsymbol{r}}_{kj}}{r_{kj}}.$$ Let us now consider a typical composition-dependent pair potential model for the same binary system, $$E_{\text{cdpp}} = \sum_{i\in A}\sum_{j\in B} h(x^A_{ij})~V(r_{ij}),$$ where $x^A_{ij}$ is the two-centre concentration of the species $A$ about the $(i,j)$ pair. 
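Before deriving the forces it is instructive to verify that this formally pairwise expression is an implicit three-body potential. In the sketch below (all functional forms are invented stand-ins, not fitted potentials), moving a third atom changes the energy of an $A$-$B$ pair even though the pair distance stays fixed:

```python
import math

R_CUT = 1.5

def sigma(r):                       # step cutoff for the concentration weights
    return 1.0 if r < R_CUT else 0.0

def V(r):                           # toy mixed pair potential (invented)
    return (r - 1.0) ** 2 - 0.25 if r < R_CUT else 0.0

def h(x):                           # toy interpolation function, h(0)=0, h(1)=1
    return x * x

def x_excl(i, j, pos, types, species):
    """x_{i(j)}: concentration of `species` about i, excluding atoms i and j."""
    tot = part = 0.0
    for k, t in enumerate(types):
        if k in (i, j):
            continue
        w = sigma(math.dist(pos[i], pos[k]))
        tot += w
        if t == species:
            part += w
    return part / tot if tot > 0.0 else 0.0   # empty neighbourhood -> 0

def energy(pos, types):
    """E_cdpp: sum over A-B pairs of h(x^A_ij) V(r_ij)."""
    e = 0.0
    for i, ti in enumerate(types):
        for j, tj in enumerate(types):
            if ti == "A" and tj == "B":
                xij = 0.5 * (x_excl(i, j, pos, types, "A")
                             + x_excl(j, i, pos, types, "A"))
                e += h(xij) * V(math.dist(pos[i], pos[j]))
    return e

# The third A atom contributes no A-B pair energy of its own (its distance to
# the B atom exceeds the cutoff), yet moving it changes the local concentration
# about the pair and hence the total energy:
near = energy([(0.0, 0.0), (1.0, 0.0), (-1.0, 0.0)], ["A", "B", "A"])
far = energy([(0.0, 0.0), (1.0, 0.0), (-2.0, 0.0)], ["A", "B", "A"])
print(near, far)  # -0.0625 0.0
```

Since the energy of the $(i,j)$ pair depends on a third position, the analytic gradient necessarily contains explicit three-body terms, which is exactly the complication addressed in this section.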
Now the force on particle $k$ of type $A$ can be written $$\label{eq:force0} \frac{\partial E_{\text{cdpp}}}{\partial {\boldsymbol{r}}_k^A} = \sum_{j\in B} V'(r_{kj}) h(x^A_{kj}) \frac{{\boldsymbol{r}}_{kj}}{r_{kj}} + \sum_{i\in A}\sum_{j\in B}V(r_{ij})h'(x_{ij}^A)\frac{1}{2} \left( \frac{\partial x^A_{i(j)}}{\partial {\boldsymbol{r}}_k^A} + \frac{\partial x^A_{j(i)}}{\partial {\boldsymbol{r}}_k^A} \right),$$ for which after some algebra we obtain $$\frac{\partial x^A_{i(j)}}{\partial {\boldsymbol{r}}_k^A} = \frac{\overline{\sigma}_i^B-\delta(B,t_j)\sigma(r_{ij})}{ \left(\overline{\sigma}_i-\sigma(r_{ij})\right)^2}\sigma'(r_{ik}) \frac{{\boldsymbol{r}}_{ki}}{r_{ki}}.$$ All the quantities above have already been defined in Eqs. (\[eq:conc\_ij\]) and (\[eq:2cntr\]). The second term in [Eq. (\[eq:force0\])]{} contains contributions from two particles $i$ and $j$ to the forces on particle $k$. Hence composition-dependent pair potentials lead to explicit three-body forces, which usually implies significantly more expensive calculations. However, we will now show that in the case of expressions such as [Eq. (\[eq:force0\])]{} one can regroup the terms in such a way as to speed up the calculation of forces drastically. To this end, let us introduce a per-atom quantity that for an atom of type $A$ reads $$M_{i\in A}^{\mathcal{S}} = \sum_{j\in B} V(r_{ij}) h'(x_{ij}^A) \frac{\overline{\sigma}_i^{\mathcal{S}}-\delta(B,t_j)\sigma(r_{ij}) }{\left(\overline{\sigma}_i-\sigma(r_{ij})\right)^2},$$ and for an atom of type $B$ $$M_{i\in B}^{\mathcal{S}} = \sum_{j\in A} V(r_{ij}) h'(x_{ij}^A) \frac{\overline{\sigma}_i^{\mathcal{S}}-\delta(A,t_j)\sigma(r_{ij}) }{\left(\overline{\sigma}_i-\sigma(r_{ij})\right)^2}.$$ Substituting $M_i^{\mathcal{S}}$ into [Eq. 
(\[eq:force0\])]{} we obtain $$\label{eq:force1} \frac{\partial E_{\text{cdpp}}}{\partial {\boldsymbol{r}}_k^A} = \sum_{j\in B} V'(r_{kj}) h(x^A_{kj}) \frac{{\boldsymbol{r}}_{kj}}{r_{kj}} + \frac{1}{2} \sum_i M_i^B \sigma'(r_{ki})\frac{{\boldsymbol{r}}_{ki}}{r_{ki}}.$$ A similar derivation for the force on a particle $k$ of type $B$ leads to the expression $$\label{eq:force2} \frac{\partial E_{\text{cdpp}}}{\partial {\boldsymbol{r}}_k^B} = \sum_{j\in A} V'(r_{kj}) h(x^A_{kj}) \frac{{\boldsymbol{r}}_{kj}}{r_{kj}} + \frac{1}{2} \sum_i M_i^A \sigma'(r_{ki})\frac{{\boldsymbol{r}}_{ki}}{r_{ki}}.$$ Each quantity in the above force expressions can be calculated separately via pairwise summations. This allows for a very efficient three-step algorithm for the calculation of forces: ([*i*]{}) compute and store the local partial densities $\overline{\sigma}_i^{\mathcal{S}}$ for every atom, ([*ii*]{}) compute and store the quantities $M_i^{\mathcal{S}}$ for every atom, and ([*iii*]{}) compute the forces according to Eqs. (\[eq:force1\]) and (\[eq:force2\]). This method leads to computational efficiency comparable to standard EAM models. Linearised Models for efficient Monte-Carlo simulations {#sect:monte_carlo} ======================================================= Molecular dynamics simulations are limited when it comes to modelling phenomena such as precipitation, surface and grain boundary segregation, or ordering in alloys. Monte-Carlo (MC) methods, however, are ideally suited for such applications. The most common techniques are based on so-called swap trial moves, in which the chemical identity of a random particle is changed. The resulting change in potential energy, $\Delta E$, is used to decide whether the swap is accepted or rejected. The main task in an MC simulation is therefore to calculate the change in potential energy induced by swapping the type of a single atom. For short-range potentials this can be done very efficiently, since the type exchange only affects the atoms in the neighbourhood of the type swap. 
In the framework of the standard EAM model the situation is as follows: Changing the species of one atom directly affects (1) its embedding energy, (2) its pair-wise interactions with neighbouring atoms, and (3) indirectly changes the electron density at neighbouring atoms and therefore their embedding energies. All these quantities need to be recalculated by visiting the atoms affected by the type swap. In the case of composition-dependent models the situation turns out to be more laborious. To illustrate this let us again consider a typical composition-dependent pair potential model for a binary system: $$E_{\text{cdpp}} = \sum_{i\in A}\sum_{j\in B} h(x^A_{ij})~V(r_{ij}), \label{eq:cdpp1}$$ where $x^A_{ij}$ is the two-centre concentration of the species $A$ about the $(i,j)$ pair $$x^A_{ij} = \frac{1}{2}\left(x^A_{i(j)} + x^A_{j(i)} \right),$$ where $x^A_{i(j)}$ is the local concentration of $A$ about atom $i$ excluding atom $j$. From [Eq. (\[eq:conc\_ij\])]{} we observe that to a good approximation $x_{i(j)}\approx x_i$. Therefore, for the qualitative discussion below we replace $x_{i(j)}$ by $x_i$. In the energy expression [Eq. (\[eq:cdpp1\])]{}, the site energy $E_i$ of an atom $i$ does not only depend on the local concentration $x_i$, but also on the concentrations $x_j$ of all its neighbours $j$. This has a severe impact on the efficiency of the energy calculation. Changing the chemical identity of some atom $i$ alters the local concentrations $x_j$ of all its direct neighbours $j$, which in turn affects the mixed interaction of all atoms $j$ with all of their respective neighbour atoms $k$. All of these have to be re-evaluated to compute the total change in energy induced by the single swap operation. The interaction radius that has to be considered is therefore twice as large as the cutoff radius of the underlying EAM potential, which increases the computational costs by at least one order of magnitude. 
This issue can be resolved quite effectively if we linearise the interpolation function as follows $$h(x^A_{ij}) \rightarrow \frac{1}{2} \left(h(x^A_{i(j)}) + h(x^A_{j(i)}) \right).$$ Within the new linearised formulation, although a single pair interaction between two atoms $j$ and $k$ still depends on the concentration at both sites, the site energy can be recast in a form that is independent of the concentrations on the neighbouring sites. As a result, the site energy of atom $k$ is no longer affected by changing the type of an atom $i$ that is farther away than one cutoff radius. Note that the linearisation can be applied to interpolation functions of any $n$-centre concentrations. All composition-dependent models, independent of cluster size, can therefore be linearised. We have discussed the linearised model and its implementation for MD and MC at length in a recent publication [@StuSadErh09]. A practical example {#sect:FeCr} =================== To provide a practical illustration of the concepts developed in this paper, we now revisit the composition-dependent EAM potential for Fe–Cr [@CarCroCar05], which has already been successfully applied in a number of cases [@CarCarLop06b; @ErhCarCar08]. Application of composition-dependent embedded atom method to Fe–Cr ------------------------------------------------------------------ Iron alloys are materials with numerous technological applications. In particular Fe–Cr alloys are at the basis of ferritic stainless steels. It has recently been shown [@OlsAbrVit03] that the Fe–Cr alloy in the ferromagnetic phase exhibits an anomaly in the heat of formation, which changes sign from negative to positive at about 10% Cr and leads to the coexistence of an intermetallic phase [@ErhSadCar08] and segregation in the same alloy. This complexity results from a “magnetic frustration” of the Cr atoms in the Fe matrix [@KlaDraFin06] which leads to an effectively repulsive Cr-Cr interaction. 
Capturing this complexity with an empirical potential model has been an active subject of research in recent years. To model this system, Caro and coworkers used the following ansatz $$\begin{aligned} \label{eq:FeCr} E_{\text{Fe--Cr}} &=& \sum_{i\in \text{Fe}} U_{\text{Fe}} \left(\overline{\rho}_i^{\text{Fe}}+\overline{\rho}_i^{\text{Cr}}\right) + \frac{1}{2} \sum_{i\in \text{Fe}}\sum_{j\in \text{Fe}}\phi_{\text{Fe}}\left(r_{ij}\right)\\ &+& \sum_{i\in \text{Cr}} U_{\text{Cr}}\left(\overline{\rho}_i^{\text{Cr}} + \overline{\rho}_i^{\text{Fe}}\right) + \frac{1}{2} \sum_{i\in \text{Cr}}\sum_{j\in \text{Cr}}\phi_{\text{Cr}}\left(r_{ij}\right) \nonumber\\ &+& \sum_{i\in \text{Fe}}\sum_{j\in \text{Cr}} h\left(\frac{x_i + x_j}{2}\right) V_{\text{mix}}(r_{ij}),\nonumber\end{aligned}$$ where we use the same notation as in the earlier sections. The partial electron densities $\overline{\rho}^{\mathcal{S}}_i$ follow the same definition as in [Eq. (\[eq:rhobar\])]{}. Furthermore, the local concentration variable $x_i$ in [Eq. (\[eq:FeCr\])]{} is defined as $$x_i = \frac{\overline{\rho}^{\text{Cr}}_i}{\rho^{\text{Cr}}_i+\rho^{\text{Fe}}_i}.$$ The two densities $\rho^{\text{Fe}}(r_{ij})$ and $\rho^{\text{Cr}}(r_{ij})$ are normalised such that at the equilibrium lattice constant of each pure element, the respective partial electron density is 1. In this way the two EAM models for the pure elements are made compatible with each other. Equation (\[eq:FeCr\]) looks quite similar to the composition-dependent pair potential energy expressions discussed in [Sect. \[sect:pair\_potentials\]]{}. There are, however, three essential differences: (i) There is only one mixed pair potential $V_{\text{mix}}(r_{ij})$ as opposed to two in [Sect. \[sect:pair\_potentials\]]{} (one for each dilute limit). (ii) There are no boundary conditions on the interpolation function $h(x)$ at $x=0$ and $x=1$. 
(iii) The local concentration about the $(i,j)$ pair is simply the average of the one-centre concentrations about the two sites, rather than the two-centre concentration defined in [Eq. (\[eq:2cntr\])]{}. Of course, the more rigorous definition in [Eq. (\[eq:2cntr\])]{} would be a better measure of the local concentration about a pair of atoms, at no extra cost. On the other hand, [Eq. (\[eq:conc\_ij\])]{} shows that the one-centre concentration above is only a perturbation away from the more accurate quantity. The Fe–Cr CD-EAM model was the pioneering work that inspired the current paper. Here, we have tried to give a more rigorous foundation to the CD-EAM model; in fact, CD-EAM can be regarded as a simplified version of the present formalism. It works very well for the Fe–Cr system since the two elements are similar in size and chemical nature. It is therefore reasonable to make the approximation that the functional forms of the mixed pair potentials describing the two dilute limits are the same. Let us illustrate the last statement with the example of Lennard-Jones (LJ) potentials. These potentials are determined by two parameters, $\sigma$ and $\epsilon$: the first specifies the position of the minimum of the potential, in other words the particle size, and the second specifies the interaction strength. A mixture of two types of LJ particles with no size mismatch (same $\sigma$) but different cohesive energies can be described by the same potential that is merely scaled differently for the two particles. Extending this analogy to the Fe–Cr system, we can see why a single mixed potential can suffice. It is important to realise, however, that when only one potential is used, the function $h(x)$ provides the interaction strength, which in the case of Fe–Cr is positive in one dilute limit and negative in the other. Hence no boundary conditions exist at the two concentrations $x=0$ and $x=1$. 
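The structure of the mixed term in Eq. (\[eq:FeCr\]) can be sketched in a few lines. The functional forms of $h$ and $V_{\text{mix}}$, the geometry, and the local concentrations below are placeholder assumptions for illustration only; in the actual model $h(x)$ is fitted to the heat of mixing and changes sign across the concentration range.

```python
import numpy as np

def h(x):
    # Placeholder interpolation function; note it changes sign at x = 0.5,
    # mimicking the sign change of the Fe-Cr mixing energy (assumed form).
    return 1.0 - 2.0 * x

def V_mix(r):
    # Placeholder short-ranged mixed pair potential (assumed form).
    return np.exp(-r) / r

positions = np.array([[0.0, 0.0, 0], [1.0, 0.0, 0], [0.0, 1.0, 0], [1.0, 1.0, 0]])
species = np.array(["Fe", "Cr", "Fe", "Cr"])
x_local = np.array([0.2, 0.5, 0.25, 0.75])  # assumed local Cr concentrations

# E_mix = sum over Fe-Cr pairs of h((x_i + x_j)/2) * V_mix(r_ij)
E_mix = 0.0
for i in np.flatnonzero(species == "Fe"):
    for j in np.flatnonzero(species == "Cr"):
        r = np.linalg.norm(positions[i] - positions[j])
        E_mix += h(0.5 * (x_local[i] + x_local[j])) * V_mix(r)
print(E_mix)
```

The key point visible in the loop is that each pair energy is weighted by the averaged one-centre concentrations of the two sites, which is what makes the swap update local once the interpolation function is linearised.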
In the original CD-EAM model, there was a further simplification: the mixed potential $V_{\text{mix}}(r_{ij})$ was never fitted. Instead, it was taken as the average of the effective EAM pairwise interactions of the pure elements at their respective equilibrium volumes $$V_{\text{mix}}(r_{ij}) = \frac{1}{2}\left( \phi_{\text{Fe}}(r_{ij}) + 2U_{\text{Fe}}(\overline{\rho}^{\text{Fe}}_0)\rho^{\text{Fe}}(r_{ij}) + \phi_{\text{Cr}}(r_{ij}) + 2U_{\text{Cr}}(\overline{\rho}^{\text{Cr}}_0)\rho^{\text{Cr}}(r_{ij}) \right),$$ where $\overline{\rho}^{\mathcal{S}}_0$ is the electron density at the equilibrium lattice constant of species $\mathcal{S}$. Only the function $h(x)$ was fitted, to the heat of mixing of the solid solution. The success of this model in spite of all these simplifications is a testament to the power of this methodology. Molecular dynamics and Monte Carlo performance {#sect:MolecularDynamicsPerformance} ---------------------------------------------- ![ Comparison of the computation times for the CD-EAM models and the standard EAM model in a parallel molecular dynamics simulation. The benchmark simulation consists of a body-centred cubic crystal at 300K with 16,000 atoms per processor. []{data-label="fig:ScalingBenchmark"}](fig6.eps){width="0.6\linewidth"} In [Sect. \[sect:forces\]]{} we presented an algorithm for calculating forces within composition-dependent interatomic potential models which brings their efficiency on par with the standard EAM scheme. This was first discussed in a recent publication by the present authors [@StuSadErh09], where this algorithm was implemented for the Fe–Cr CD-EAM model in the popular massively-parallel MD code LAMMPS [@Pli95]. To benchmark its performance, we carried out MD simulations of a body-centred cubic (BCC) crystal at 300K using periodic boundary conditions. For the CD-EAM case we considered a random alloy with 50% Cr. For the standard EAM case, the sample contained only Fe. 
Simulations were run on 1, 8, 27, 64, and 512 processors with 16,000 atoms per processor (weak scaling). The results for the CD-EAM routines and the LAMMPS standard EAM routine are displayed in [Fig. \[fig:ScalingBenchmark\]]{}. The figure includes both the original CD-EAM model and its linearised version. We see that the two versions are between 60% (linearised model) and 70% (original model) slower than the standard EAM. This is a small price to pay considering that the CD-EAM expression actually contains explicit three-body forces. ![ Comparison of the timing in a MC simulation of a Fe–Cr alloy at 50% composition. The simulation cell contained 1024 atoms. []{data-label="fig:mc_performance"}](fig7.eps){width="0.6\linewidth"} In our recent publication [[@StuSadErh09]]{} we also studied the Monte Carlo performance of composition-dependent interatomic potentials, focusing on the comparison of the original and the linearised CD-EAM model. The performance gain due to the linearised formulation is illustrated in [Fig. \[fig:mc\_performance\]]{}, which compares the timing of the linearised and original CD-EAM models in a serial MC simulation of a random Fe–Cr alloy at 50% composition. We find that the linearised CD-EAM model is twelve times faster than the original formulation. This is an impressive performance gain, which clearly advocates for linearised composition-dependent interatomic potentials. Conclusions =========== The present work has come about in response to the need for a practical scheme for fitting interatomic potential models for multicomponent alloys. At present, when faced with the task of modelling the chemistry of, e.g., a ternary alloy, one is overwhelmed by the complexity of the problem. In this paper, we have presented a systematic methodology for the construction of alloy potentials, starting from pre-existing potentials for the constituent elements. 
The formalism represents a generalisation of the approach employed by one of the authors for the Fe–Cr system [@CarCroCar05]. We have shown that this formalism naturally extends to multicomponent systems. The main idea of the approach is to describe the energetics of dilute concentrations of solute atoms in the pure host in terms of pair and higher-order cluster interactions (see Figs. \[fig:schematic\_clusters1\] and \[fig:schematic\_clusters2\]). These interaction functions are then used as a basis set for expanding the potential energy of the alloy over the entire concentration range. To describe the energetics of concentrated alloys, the contributions of the basis functions are weighted by interpolation functions expressed in terms of local concentration variables. One of the innovations in this work is a novel measure of local composition around individual atoms in the system. This introduces an explicit dependence on the [*chemical*]{} environment. In this sense the composition-dependent interatomic potential scheme is reminiscent of the bond-order potential scheme developed by Abell and Tersoff [@Abe85; @Ter86; @Ter88a], which employs a measure of the bond order to distinguish between different [*structural*]{} motifs. The main advantage of the framework presented here is that the basis functions can be constructed sequentially and independently of the interpolation functions, leading to a scheme that can be practically implemented and systematically improved upon. The lower-order terms can be constructed with no knowledge of the higher-order terms and therefore need not be reparametrised when higher-order cluster potentials are constructed. The higher-order terms in the expansion become progressively smaller. In this way the model can be built step by step, starting from the lowest-order cluster potentials. 
Furthermore, the addition of new terms in the series expansion is not likely to introduce unphysical behaviour, a problem that plagues most fitting schemes for interatomic potentials. ![ Several examples of clusters used to construct higher-order interaction terms which can be extracted from the configuration shown on the left. []{data-label="fig:schematic_clusters2"}](fig8.eps) The practical determination of the basis functions and the interpolation functions proceeds by fitting to first-principles data. The expansion in cluster interactions may be reminiscent of the celebrated “cluster expansion” technique [@SanDucGra84] that has been used extensively during the past few decades to model the thermodynamics of multicomponent alloys from first principles. It is important to note, however, that the methodology presented in this paper has no relation to the cluster expansion technique. The latter reduces the continuous phase space of, e.g., a binary alloy onto the discrete configuration space of the corresponding Ising model. There is only one number associated with each cluster configuration, namely the free energy of that cluster. The so-called “effective cluster interactions” (ECIs) are usually obtained via an optimisation process from all the cluster free energies. A procedure of the sort proposed in this paper is not possible there, since there is no direct link between any single cluster free energy and an ECI. In contrast, when fitting, e.g., a $V_{AB}(r_{ij})$ interaction potential, a solute inclusion not only changes the total energy of the system, it also induces forces in the system and modifies the force constants of the host, all of which can be used to construct a continuous pair potential. Composition-dependent interatomic potentials are constructed by incorporating pair, triplet and higher-order cluster interactions that describe the energetics of clusters embedded in a pure host with a specific underlying lattice. 
One may now wonder whether, with this approach, a potential can be expected to handle systems that change lattice type as a function of concentration. For instance, the Ni–Al phase diagram contains phases with BCC-based crystal structures, while the pure metals are face-centred cubic (FCC). Following the approach described above, the basis functions are parametrised in terms of solute cluster energies in the constituent FCC structures. How can one then expect to obtain a reasonable model for the BCC-based NiAl phase? The answer lies in the interpolation functions: they are fitted to the energetics of the ordered and disordered compounds along the concentration range with arbitrary crystal structures. Acknowledgements {#acknowledgements .unnumbered} ================ Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. DOE-NNSA under Contract DE-AC52-07NA27344. Partial financial support from the LDRD office and the Fusion Materials Program as well as computer time allocations from NERSC at Lawrence Berkeley National Laboratory are gratefully acknowledged. [10]{} M. W. Finnis and J. E. Sinclair. . , 50:45, 1984. M. S. Daw and M. I. Baskes. Embedded-atom method: Derivation and application to impurities, surfaces and other defects in metals. , 29:6443, 1984. F. Ercolessi, M. Parrinello, and E. Tosatti. Au(100) reconstruction in the glue model. , 177:314, 1986. M. J. Puska, R. M. Nieminen, and M. Manninen. Atoms embedded in an electron gas: Immersion energies. , 24:3037, 1981. J. K. Nørskov. Covalent effects in the effective-medium theory of chemical binding: Hydrogen heats of solution in the 3d metals. , 26:2875, 1982. S. M. Foiles, M. I. Baskes, and M. S. Daw. Embedded-atom-method functions for the fcc metals Cu, Ag, Au, Ni, Pd, Pt, and their alloys. , 33:7983, 1986. M. Ludwig, D. Farkas, D. Pedraza, and S. Schmauder. 
Embedded atom potential for Fe–Cu interactions and simulations of precipitate–matrix interfaces. , 6:19, 1998. R. C. Pasianot and L. Malerba. Interatomic potentials consistent with thermodynamics: The Fe-Cu system. , 360:118, 2007. M. Asta and S. M. Foiles. . , 53:2389, 1996. E. Ogando Arregui, M. Caro, and A. Caro. . , 66:054201, 2002. Y. Mishin, M. J. Mehl, and D. A. Papaconstantopoulos. Embedded-atom potential for B2-NiAl. , 65:224114, 2002. Y. Mishin. Atomistic modeling of the $\gamma$ and $\gamma'$-phases of the NiAl system. , 52:1451, 2004. A. Caro, D. A. Crowson, and M. Caro. Classical many-body potential for concentrated alloys and the inversion of order in iron-chromium alloys. , 95:075702, 2005. P. Olsson, J. Wallenius, C. Domain, K. Nordlund, and L. Malerba. . , 72:214119, 2005. G. C. Abell. Empirical chemical pseudopotential theory of molecular and metallic bonding. , 31:6184, 1985. J. Tersoff. New Empirical Model for the Structural Properties of Silicon. , 56:632, 1986. J. Tersoff. New empirical approach for the structure and energy of covalent systems. , 37:6991, 1988. M. Müller, P. Erhart, and K. Albe. . , 76:155412, 2007. M. I. Baskes. Application of the embedded-atom method to covalent materials: A semiempirical potential for silicon. , 59:2666, 1987. M. I. Baskes. Modified embedded-atom potentials for cubic materials and impurities. , 46:2727, 1992. F. H. Stillinger and T. A. Weber. Computer simulation of local order in condensed phases of silicon. , 31:5262, 1985. M. S. Daw, S. M. Foiles, and M. I. Baskes. The embedded-atom method - A review of theory and applications. , 9:251, 1993. A. Stukowski, B. Sadigh, P. Erhart, and A. Caro. Efficient implementation of the concentration-dependent embedded atom method for molecular dynamics and Monte-Carlo simulations. , 17:075005, 2009. A. Caro, M. Caro, E. M. Lopasso, and D. A. Crowson. 
Implications of ab initio energetics on the thermodynamics of Fe-Cr alloys. , 89:121902, 2006. P. Erhart, A. Caro, M. Serrano de Caro, and B. Sadigh. . , 77:134206, 2008. P. Olsson, I. A. Abrikosov, L. Vitos, and J. Wallenius. Ab initio formation energies of Fe-Cr alloys. , 321:84, 2003. P. Erhart, B. Sadigh, and A. Caro. , 92:141904, 2008. T. P. C. Klaver, R. Drautz, and M. W. Finnis. Magnetism and thermodynamics of defect-free Fe-Cr alloys. , 74:094435, 2006. S. Plimpton. Fast parallel algorithms for short-range molecular dynamics. , 117:1, 1995. J. M. Sanchez, F. Ducastelle, and D. Gratias. Generalized cluster description of multicomponent systems. , 128:334, 1984.
--- abstract: 'We examine closely the solar Center-to-Limb variation of continua and lines and compare observations with predictions from both a 3-D hydrodynamic simulation of the solar surface (provided by M. Asplund and collaborators) and 1-D model atmospheres. Intensities from the 3-D time series are derived by means of the new synthesis code [Ass$\epsilon$t]{}, which overcomes limitations of previously available codes by including a consistent treatment of scattering and allowing for arbitrarily complex line and continuum opacities. In the continuum, we find very similar discrepancies between synthesis and observation for both types of model atmospheres. This is in contrast to previous studies that used a “horizontally” and time averaged representation of the 3-D model and found a significantly larger disagreement with observations. The presence of temperature and velocity fields in the 3-D simulation provides a significant advantage when it comes to reproduce solar spectral line shapes. Nonetheless, a comparison of observed and synthetic equivalent widths reveals that the 3-D model also predicts more uniform abundances as a function of position angle on the disk. We conclude that the 3-D simulation provides not only a more realistic description of the gas dynamics, but, despite its simplified treatment of the radiation transport, it also predicts reasonably well the observed Center-to-Limb variation, which is indicative of a thermal structure free from significant systematic errors.' author: - 'L. Koesterke, C. Allende Prieto[^1] and D. L. Lambert' title: 'Center-to-Limb Variation of Solar 3-D Hydrodynamical Simulations' --- Introduction ============ A few years back, it was realized that one of the most ’trusted’ absorption lines to gauge the oxygen abundance in the solar photosphere, the forbidden \[OI\] line at $\lambda$6300, was blended with a Ni I transition. 
These two transitions overlap so closely that only a minor distortion is apparent in the observed feature. Disentangling the two contributions with the help of a 3-D hydrodynamical simulation of surface convection led us to propose a reduction of the solar photospheric abundance by $\sim$ 30% [@allIII]. Using the same solar model, subsequent analysis of other atomic oxygen and OH lines confirmed the lower abundance, resulting in an average value $\log \epsilon$[^2] (O)$= 8.66 \pm 0.05$ [@asp04]. This reduction in the solar O/H ratio, together with a parallel downward revision for carbon [@all02; @asp05b], ruins the nearly perfect agreement between models of the solar interior and seismological observations [@bah05; @del06; @lin07]. A brief overview of the proposed solutions is given by @all07. Interior and surface models appear to describe two different stars. Supporters of the new hydrodynamical models, and the revised surface abundances, focus on their strengths: they include more realistic physics, and are able to reproduce extremely well detailed observations (oscillations, spectral line asymmetries and net wavelength blueshifts, granulation contrast and topology). Detractors emphasize the fact that the new models necessarily employ a simplified description of the radiation field and they have not been tested to the same extent as classical 1-D models. The calculation of spectra for 3-D time-dependent models is a demanding task, which is likely the main reason why some fundamental tests have not yet been performed for the new models. On the basis of 1-D radiative transfer calculations, @ayr06 suggest that the thermal profile of the solar surface convection simulation of Asplund et al. (2000) may be incorrect. 
@ayr06 make use of a 1-D average, both 'horizontal' and over time, of the 3-D simulation to analyze the center-to-limb variation in the continuum, finding that the averaged model performs much more poorly than the semi-empirical FAL C model of Fontenla et al. (1993). When the FAL C model is adopted, an analysis of CO lines leads to a much higher oxygen abundance, and therefore @ayr06 question the downward revision proposed earlier. @asp05b argue that when classical 1-D model atmospheres are employed, the inferred oxygen abundance from atomic features differs by only 0.05 dex between an analysis in 1-D and 3-D. The difference is even smaller for atomic carbon lines. When the hydrodynamical model is considered, there is good agreement between the oxygen abundance inferred from atomic lines and from OH transitions [@asp04; @sco06]. A high value of the oxygen abundance is derived only when considering molecular tracers in one-dimensional atmospheres, perhaps not a surprising result given the high temperature sensitivity of molecular dissociation. A low oxygen abundance, $\log \epsilon$(O) $= 8.63$, is also deduced from atomic lines and atmospheric models based on the inversion of spatially resolved polarimetric data [@soc07]. Although the balance seems favorable to the 3-D models and the low values of the oxygen and carbon abundances, a failure of the 3-D model to match the observed limb darkening, as suggested by the experiments of @ayr06, would be reason for serious concern. In the present paper, we perform spectral synthesis on the solar surface convection simulation of Asplund et al. (2000) with the goal of testing its ability to reproduce the observed center-to-limb variations of both the continuum intensities and the equivalent widths of spectral lines. We compare its performance with commonly-used theoretical 1-D model atmospheres. 
Our calculations are rigorous: they take into account the four-dimensionality of the hydrodynamical simulation, i.e. its 3-D geometry and its time dependence. After a concise description of our calculations in Section 2, §3 outlines the comparison with solar observations and §4 summarizes our conclusions. Models and Spectrum Synthesis ============================= We investigate the Center-to-Limb Variation (CLV) of the solar spectrum for the continuum and for lines. Snapshots taken from 3-D hydrodynamical simulations of the solar surface by @asp00 serve as model atmospheres. The synthetic continuum intensities and line profiles are calculated by means of the new spectrum synthesis code [Ass$\epsilon$t]{} (Advanced Spectrum Synthesis 3-D Tool), which is designed to solve accurately the equation of radiation transfer in 3-D. The new synthesis code will be described in detail by @koeI; only its key features are highlighted in the subsequent sections. Hydrodynamic Models ------------------- The simulation of solar granulation was carried out with a 3-D, time-dependent, compressible, radiative-hydrodynamics code [@nor90; @ste89; @asp99]. The simulation describes a volume of 6.0x6.0x3.8Mm (about 1Mm being above $\tau_{\rm cont} \approx 1$) with 200x200x82 equidistantly spaced grid points over two hours of solar time. About 10 granules are included in the computed domain at any given time. 99 snapshots were taken at 30s intervals from a shorter sequence of 50min. The grid points and the physical dimensions are changed to accommodate the spectrum synthesis: the horizontal sampling is reduced by omitting 3 out of 4 grid points in both directions; the vertical extension is decreased by omitting layers below $\tau_{\rm cont}^{\rm min} \approx 300$ while keeping the number of grid points in the $z$-direction constant, i.e. by increasing the vertical sampling and introducing a non-equidistant vertical grid. 
After these changes, a single snapshot covers approximately a volume of 6.0x6.0x1.7Mm with 50x50x82 grid points [@asp00]. Spectrum Synthesis ------------------ Compared to spectrum synthesis in one dimension, the calculation of emergent fluxes and intensities from 3-D snapshots is a tremendous task, even when LTE is assumed. Previous investigations (e.g., @asp00 [@lud07]) were limited to the calculation of a single line profile or a blend of very few individual lines on top of constant background opacities, and without scattering. In order to overcome these limitations, we devise a new scheme that is capable of dealing with arbitrary line blends, frequency-dependent continuum opacities, and scattering. The spectrum synthesis is divided into five separate tasks that are outlined below. A more detailed description, containing all essential numerical tests, will be given by @koeI. ### Opacity Interpolation {#sss_opai} For the 3-D calculations we face a situation in which we have to provide detailed opacities at $\approx 2\!\cdot\!10^7$ grid points for every single frequency under consideration. Under the assumption of LTE, the size of the problem can be reduced substantially by using an interpolation scheme to derive opacities from a dataset with orders-of-magnitude fewer points. We introduce an [*opacity*]{} grid that covers all grid points of the snapshots in the temperature-density plane. The grid points are regularly spaced in log$T$ and log$\rho$ with typical intervals of 0.018dex and 0.25dex, respectively. We use piecewise cubic Bezier polynomials that do not introduce artificial extrema [@aue03]. To enable 3$^{\rm rd}$-order interpolations close to the edges, additional points are added to the opacity grid. The estimated interpolation error is well below 0.1% for the setup used throughout the present paper. 
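The shape-limited interpolation idea can be sketched in one dimension. This is a minimal illustration in the spirit of the Auer (2003) scheme, not the actual [Ass$\epsilon$t]{} implementation; the node data stand in for tabulated log-opacities and are assumptions.

```python
import numpy as np

def bezier3_interp(x, y, xq):
    """Piecewise cubic Bezier interpolation whose control points are clamped
    to the local data range, so no artificial extrema are introduced."""
    x, y, xq = map(np.asarray, (x, y, xq))
    d = np.gradient(y, x)                               # derivative estimates
    i = np.clip(np.searchsorted(x, xq, side="right") - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    lo = np.minimum(y[i], y[i + 1])
    hi = np.maximum(y[i], y[i + 1])
    c0 = np.clip(y[i] + d[i] * h / 3.0, lo, hi)         # clamped control points
    c1 = np.clip(y[i + 1] - d[i + 1] * h / 3.0, lo, hi)
    return ((1 - t) ** 3 * y[i] + 3 * t * (1 - t) ** 2 * c0
            + 3 * t ** 2 * (1 - t) * c1 + t ** 3 * y[i + 1])

# Example: interpolate a stand-in for log-opacity tabulated on a log-T grid.
logT = np.linspace(3.5, 4.0, 11)
logk = np.sin(4.0 * logT)
vals = bezier3_interp(logT, logk, np.linspace(3.5, 4.0, 101))

# No new extrema: monotone node data stay inside the data range.
xm = np.arange(5.0)
ym = np.array([0.0, 0.1, 0.2, 2.0, 2.1])
vm = bezier3_interp(xm, ym, np.linspace(0.0, 4.0, 200))
```

Because all four Bezier ordinates lie in the local data range, the curve is a convex combination of values inside that range, which is exactly the overshoot protection described above.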
### Opacity Calculation {#sss_opac} We use a modified version of [SYNSPEC]{} [@hub95] to prepare frequency-dependent opacities for the relatively small number of grid points in the [*opacity*]{} grid. The modifications allow for the calculation of opacities on equidistant log($\lambda$) scales, the output of the opacities to binary files, and the option to skip the calculation of intensities. Two datasets are produced. Continuum opacities are calculated at intervals of about 1Å at 3000Å. Full opacities (continuum and lines) are provided at a much finer spacing of $0.3\,v_{\rm min}$, with $v_{\rm min}$ being the thermal velocity of an iron atom at the minimum temperature of all grid points in all snapshots under consideration. A typical step in wavelength is $2.7\!\cdot\!10^{-3}$Å at 3000Å, which corresponds to 0.27 km s$^{-1}$. We adopt the solar photospheric abundances recently proposed by @asp05a, with carbon and oxygen abundances of ${\rm log}\,\epsilon = 8.39$ and 8.66, respectively, which are about 30% lower than in earlier compilations [@gre98]. We account for bound-bound and bound-free opacities from hydrogen and from the first two ionization stages of He, C, N, O, Na, Mg, Al, Si, Ca and Fe. Bound-free cross sections for all metals but iron are taken from [TOPBASE]{} and smoothed as described by @allI. Iron bound-free opacities are derived from the photoionization cross-sections computed by the Iron Project (see, e.g., @nah95 [@bau97]), after smoothing. Bound-bound ${\rm log}\,(gf)$ values are taken from Kurucz, augmented by damping constants from @bar00 where available. We also account for bound-free opacities from H$^{-}$, H$_2^+$, CH and OH, and for a few million molecular lines from the nine most prominent molecules in the wavelength range from 2200Å to 7200Å. Thomson and Rayleigh (atomic hydrogen) scattering are considered as well, as described below in §\[sss\_scat\]. The equation of state is solved considering the first 99 elements and 338 molecular species. 
Chemical equilibrium is assumed for the calculation of the molecular abundances, and the atomic abundances are updated accordingly (private comm. from I. Hubeny). ### Scattering {#sss_scat} We employ a background approximation, calculating the radiation field $J_\nu$ at the sparse continuum frequency points for which we have calculated the continuum opacity, without any contribution from spectral lines. The calculation starts at the bluemost frequency, and the velocity field is neglected at this point: no frequency coupling is present. The opacities for individual grid points are derived by interpolation from the [*opacity*]{} grid, and the emissivities are calculated assuming LTE. As mentioned above, we include electron (Thomson) scattering and Rayleigh scattering by atomic hydrogen. An Accelerated Lambda Iteration (ALI) scheme is used to obtain a consistent solution of the mean radiation field $J_\nu$ and the source function $S_\nu$ at all grid points. In turn, $J_\nu$ is calculated from $S_\nu$ and vice versa, accelerating the iteration by amplifying $\Delta J_\nu = J_\nu^{\rm New} - J_\nu^{\rm Old}$ by the factor $1\,/\,(1-\Lambda^*)$, with $\Lambda^*$ being the approximate lambda operator [@ols87]. Generally, the mean radiation field from the previous frequency point, i.e. the adjacent frequency to the blue, serves as the initial guess of $J_\nu$ at the current frequency. At the first frequency point, the iteration starts with $J_\nu=S_\nu$. The formal solution, i.e. the solution of the equation $J_\nu=\Lambda\,S_\nu$, is obtained by means of a short-characteristics scheme [@ols87]. For all grid points the angle-dependent intensity $I_\nu^\mu$ is derived by integrating the source function along the ray between the grid point itself and the closest intersection of the ray with a horizontal or vertical plane of the mesh. The operator $\Lambda^*$ needed for the acceleration is calculated within the formal solution. 
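The acceleration step can be illustrated with a toy scattering problem, $S_\nu = (1-\epsilon) J_\nu + \epsilon B_\nu$, in which a small dense matrix stands in for the short-characteristics formal solver. The kernel, grid size, and $\epsilon$ are illustrative assumptions; the correction is amplified by $1/(1-(1-\epsilon)\Lambda^*)$, the diagonal (Jacobi-type) variant of the factor quoted above.

```python
import numpy as np

n, eps = 40, 0.1
z = np.arange(n)
K = np.exp(-np.abs(z[:, None] - z[None, :]) / 2.0)
Lam = 0.95 * K / K.sum(axis=1, keepdims=True)   # toy Lambda: row sums 0.95 < 1
Lstar = np.diag(Lam)                            # approximate lambda operator
B = np.ones(n)                                  # Planck source (toy)

J = B.copy()
for _ in range(200):
    S = (1.0 - eps) * J + eps * B               # scattering source function
    dJ = Lam @ S - J                            # formal solution minus guess
    J = J + dJ / (1.0 - (1.0 - eps) * Lstar)    # amplified ALI correction

# Direct solution of (I - (1-eps) Lam) J = eps Lam B for comparison.
J_direct = np.linalg.solve(np.eye(n) - (1.0 - eps) * Lam, eps * (Lam @ B))
```

The fixed point of the loop satisfies $J = \Lambda S$ exactly, and the diagonal amplification makes the iteration converge far faster than plain lambda iteration would for scattering-dominated conditions.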
For the present calculations, $J_\nu$ is integrated from $I_\nu^\mu$ at 48 angles (6 in $\mu$, 8 in $\phi$). The integration in $\mu$ is performed by a three-point Gaussian quadrature for each half-space, i.e. for rays pointing to the outer and the inner boundary, respectively. The integration in $\phi$ is trapezoidal. The opacities and source functions are assumed to vary linearly (1$^{\rm st}$-order scheme) along the ray. In order to integrate the intensity between the grid point and the point of intersection where the ray leaves the grid cell, the opacity, source function and specific intensity ($\kappa_\nu$, $S_\nu$, $I_\nu^\mu$) have to be provided at both ends of the ray. Since the point where the ray leaves the cell is generally not a grid point itself, an interpolation scheme has to be employed to derive the required quantities. We perform interpolations in two dimensions on the surfaces of the cuboids, again applying Bezier polynomials with control values that avoid the introduction of artificial extrema. The interpolation may introduce noticeable numerical inaccuracies. Detailed tests, using an artificial 3-D structure constructed by horizontally replicating a 1-D model, revealed that a 3$^{\rm rd}$-order interpolation scheme provides sufficient accuracy where linear interpolation fails in reproducing the radiation field: the mean relative errors are 0.5% and 0.05% for linear and cubic interpolation, respectively. It would be feasible, in terms of computing time, to calculate $J_\nu$ from the full opacity dataset at all frequencies (our 'fine' sampling). However, since the total effect of scattering for the solar case in the optical is quite small, the differences between the two methods are negligible. Therefore, we apply the faster method throughout this paper. Note that in both approximations (using background or full opacities), the calculation of the mean radiation field does not account for any frequency coupling. 
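The 48-ray angular quadrature described above is easy to reproduce: a three-point Gauss rule in $\mu$ for each half-space combined with a uniform (periodic trapezoidal) rule in $\phi$. The test intensity field is an illustrative assumption; for an isotropic field the quadrature must return $J = (1/4\pi)\oint I\,d\Omega = 1$ exactly.

```python
import numpy as np

# Three-point Gauss-Legendre nodes/weights mapped from [-1, 1] to mu in [0, 1].
x, w = np.polynomial.legendre.leggauss(3)
mu = 0.5 * (x + 1.0)
wmu = 0.5 * w                   # weights sum to 1 per half-space
nphi = 8
wphi = 2.0 * np.pi / nphi       # uniform weights: periodic trapezoid rule
phi = 2.0 * np.pi * np.arange(nphi) / nphi

def mean_intensity(I):
    """J = (1/4pi) * sum over 48 rays of w * I(mu, phi), both half-spaces."""
    total = 0.0
    for sign in (+1.0, -1.0):   # outgoing and ingoing rays
        for m, wm in zip(sign * mu, wmu):
            for p in phi:
                total += wm * wphi * I(m, p)
    return total / (4.0 * np.pi)

J_iso = mean_intensity(lambda m, p: 1.0)   # isotropic field: J should be 1
```

A three-point Gauss rule integrates polynomials in $\mu$ up to degree five exactly, which is why so few $\mu$ nodes suffice for the smooth angular dependence of the continuum radiation field.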
### Calculation of Intensities and Fluxes {#ss_cif} The emergent flux is calculated from the opacities of the full dataset provided at the fine frequency grid. Again, the opacities for individual grid points are derived by interpolation from the [*opacity*]{} grid, and the emissivities are calculated assuming LTE. The mean background radiation field $J_\nu$ is interpolated from the coarser continuum frequency grid to the actual frequency, and contributes to the source function at all grid points via the Thomson and Rayleigh (atomic hydrogen) scattering opacities. The integration along a ray is performed in the observer's frame by following long characteristics from the top layer down to optical depths of $\tau_{\rm Ray}> 20$. Frequency shifts due to the velocity field are applied to the opacities and source functions. Each ray starts at a grid point of the top layer and is built from the points of intersection of the ray with the mesh. At these points of intersection an interpolation in three dimensions is generally performed, i.e. a 2-D geometric interpolation in the X-Y, X-Z or Y-Z plane, respectively, is combined with an interpolation in frequency necessitated by the presence of the velocity field. Additional points are inserted into the ray to ensure full frequency coverage of the opacities. This is done when the difference of the velocity field projected onto the ray between the entry and exit points of a grid cell exceeds the frequency spacing of the opacity. Without these additional points and in the presence of large velocity gradients, line opacities could be underestimated along the ray (a line could be shifted to one side at the entry point and to the other side at the exit point), leaving only neighboring continuum opacities visible to both points while the line is hidden within the cell. Similar to the calculation of the mean radiation field $J$ described in Sect. 
\[sss\_scat\], all interpolations in both space and frequency are based on piecewise cubic Bezier polynomials. It is worth stressing that, for the accurate calculation of the emergent intensities, the application of a high-order interpolation scheme is much more important than it is for the calculation of the mean background radiation field (Sect. \[sss\_scat\]). Here we are calculating precisely the quantity we are interested in, i.e. specific intensities. But, in addition to that, we deal with interpolations in three dimensions (2-D in space, 1-D in frequency) instead of a 2-D interpolation in space. Hence, any quantity is derived from 21 1-D interpolations rather than just 5. In the standard setup of the 3-D calculations, 20 rays are used for the integration of the flux $F_\nu$ from the intensities $I_\nu^\mu$. Similar to the integration of $J$ described in Sect. \[sss\_scat\], the integration in $\mu$ is a three-point Gaussian quadrature, while the integration in $\phi$ is trapezoidal. Eight angles in $\phi$ are assigned to the first two of the $\mu$ angles while the last and most inclined angle, with by far the smallest (flux) integration weight, has 4 contributing $\phi$ angles. Note that for the investigation of the Center-to-Limb variation, the number of angles and their distribution in $\mu$ and $\phi$ differs considerably from this standard setup, as explained below (Sect. \[ss\_continuum\]). Spectra in 1-D {#ss_1d} -------------- To facilitate consistent comparisons of spectra from 3-D [*and*]{} 1-D models, the new spectrum synthesis code [Ass$\epsilon$t]{} also accepts 1-D structures as input. Consistency is achieved by the use of the same opacity data (cf. Sect. \[sss\_opac\]) and its interpolation (if desired in 1-D, cf. Sect. \[sss\_opai\]) and by the application of the same radiation transfer solvers, i.e. 1$^{\rm st}$-order short and long-characteristic schemes (cf. Sect. \[sss\_scat\] and \[ss\_cif\], respectively).
All angle integrations are performed by means of a three-point Gaussian formula. This leaves the interpolations inherent to the radiation transfer scheme in 3-D as the only major inconsistency between the spectra in 1-D and 3-D. Numerical tests have revealed that these remaining inconsistencies are quite small, as we will report in an upcoming paper. Solar Model {#ss_km} ----------- We chose not to use a semi-empirical model of the solar atmosphere as the 1-D comparison for the 3-D hydrodynamical simulation, but a theoretical model atmosphere. Semi-empirical models take advantage of observations to constrain the atmospheric structure, a fact that would constitute an unfair advantage over the 3-D simulation. Some semi-empirical models, in particular, use observed limb darkening curves, and of course it is meaningless to test their ability to reproduce the same or different observations of the center-to-limb variation in the continuum. Consequently we are using models from Kurucz, the MARCS group, and a horizontal- and time-averaged representation of the 3-D hydrodynamical simulations. We have derived a 1-D solar reference model from the Kurucz grid [@kur93]. The reference model is derived from 3$^{\rm rd}$-order interpolations in $\tau$, $T_{\rm eff}$, ${\rm log}\,g$, and $Z$. Details of the interpolation scheme will be presented elsewhere. We have adopted the usual values of $T_{\rm eff}=5777\,{\rm K}$ and ${\rm log}\,g=4.437$ (cgs) but a reduced metallicity of ${\rm log}\,(Z/Z_\sun)=-0.2$ in an attempt to account globally for the difference between the solar abundances (mainly iron) used in the calculation of the model and more recent values, as described by @all06. To avoid a biased result by using a single 1-D comparison model, we have also experimented with a solar MARCS model kindly provided by M. Asplund, and a solar model interpolated from the more recent ODFNEW grid from Kurucz (available on his website[^3]).
No metallicity correction was applied to these newer solar models. In earlier investigations [@ayr06], a 1-D representation [@asp05b] of the 3-D time series, i.e. a ’horizontal’ average[^4] over time, has been used to study the thermal profile of the 3-D model. While this approximation allows easy handling by means of a 1-D radiation transfer code, the validity of this approach has never been established. In order to investigate the limitations of this shortcut, we compare its Center-to-Limb variation in the continuum with the exact result from the 3-D radiation transfer on the full series of snapshots. Center-to-Limb Variation ======================== Continuum {#ss_continuum} --------- @nec94 [@nec03; @nec05] investigated the Center-to-Limb variation of the Sun based on observations taken at the National Solar Observatory/Kitt Peak in 1986 and 1987. They describe the observed intensities across the solar disk as a function of the heliocentric distance by 5$^{\rm th}$-order polynomials for 30 frequencies between 303nm and 1099nm. Similar observations by @pet84 and @els07 (with a smaller spectral coverage) indicate that @nec94 may have overcorrected for scattered light, but confirm a level of accuracy of $\approx$0.4%. We have calculated fluxes and intensities for small spectral regions ($\pm5\,{\rm km\,s^{-1}}$) around eight frequencies (corresponding to standard air wavelengths of 3033.27Å, 3499.47Å, 4163.19Å, 4774.27Å, 5798.80Å, 7487.10Å, 8117.60Å, 8696.00Å) and compare monochromatic synthetic intensities with the data from @nec94. Because the spectral regions are essentially free from absorption lines, the width of the bandpass of the observations, varying between 1.5 ${\rm km\,s^{-1}}$ in the blue (3030Å) and 1.9 ${\rm km\,s^{-1}}$ in the red (10990Å), is irrelevant. The fluxes were integrated from 20/3 angles (cf. Sects. \[ss\_cif\] and \[ss\_1d\]) for the 3-D/1-D calculations, respectively.
For the study of the CLV, intensities (as a function of $\mu$) were calculated for 11 positions on the Sun ($\mu \equiv {\rm cos}\theta=1.0, 0.9, ..., 0.1, 0.05$) averaging over 4 directions in $\phi$ and all horizontal (X-Y) positions. All 99 snapshots were utilized for the 3-D calculations. The eight frequencies cover a broad spectral range. Although some neighboring features are poorly matched by our synthetic spectra, the solar flux spectrum of [@kur84] is reproduced well at the frequencies selected by Neckel & Labs, and therefore modifications of our linelist were deemed unnecessary (see Fig. \[fig\_fluxc\]). The normalization of the synthetic spectra was achieved by means of “pure-continuum” fluxes that were derived from calculations lacking all atomic and molecular line opacities; Fig. \[fig\_fluxc\] shows that Neckel & Labs did a superb job selecting continuum windows. Comparisons of observed and synthetic CLV’s are conducted with datasets that are normalized with respect to the intensity at the disk center, i.e. all intensities are divided by the central intensity. We show the residual CLV’s $${\rm R\!-\!CLV} \equiv I_\mu^{\rm obs}/I_{\mu=1}^{\rm obs} - I_\mu^{\rm syn}/I_{\mu=1}^{\rm syn},$$ in Fig. \[fig\_clvc\]. The R-CLV’s within each group are quite homogeneous. In addition to the data derived from our 1-D Kurucz model (cf. Sect. \[ss\_km\]), we also show data from two other 1-D models, i.e. a MARCS model (1$^{\rm st}$ panel) from Asplund (priv. comm.) and an alternative (odfnew) Kurucz model (2$^{\rm nd}$ panel) from a different model grid (http://kurucz.harvard.edu/grids.html). The Center-to-Limb variations from both alternatives show much larger residuals and are not used for the comparison with the 3-D data. However, the scatter within the 1-D data demonstrates vividly the divergence that still persists among different 1-D models. Our reference 1-D model (3$^{\rm rd}$ panel) describes the observed CLV’s well down to $\mu \approx 0.5$.
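The residual CLV defined above reduces to a one-liner; a sketch (the function name is ours):

```python
import numpy as np

def residual_clv(mu, i_obs, i_syn):
    """R-CLV: normalize each dataset to its own disk-center (mu = 1)
    intensity, then difference observed minus synthetic."""
    mu, i_obs, i_syn = map(np.asarray, (mu, i_obs, i_syn))
    c = int(np.argmax(mu))  # index of disk center, mu = 1
    return i_obs / i_obs[c] - i_syn / i_syn[c]
```

By construction the residual vanishes at the disk center, so only the relative shape of the limb darkening is compared.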
Closer to the rim the R-CLV’s rise to $\approx 0.1$ at $\mu=0.2$ followed by a sharp decline at the rim. In 3-D (4$^{\rm th}$ panel) we find on average a linear trend of the R-CLV’s with $\mu$, showing a maximum residual of $\gtrsim 0.2$ close to the rim. The investigation of the Center-to-Limb variation of the continuum is an effective tool to probe the continuum forming region at and above $\tau \approx 1$. Deviations from the observed CLV’s indicate that the temperature gradient around $\tau \approx 2/3$ is incorrectly reproduced by the model atmosphere. This can, of course, mean that the gradient in the model is inaccurate, but it can also signal that the opacity used for the construction of the model atmosphere differs significantly from the opacity used for the spectrum synthesis. In that case, the temperature gradient is tested at the wrong depth due to the shift of the $\tau$-scales. Our spectrum calculations suffer from an inconsistency introduced by the fact that the abundance pattern and the opacity cross-sections might differ from what was used when the model was constructed. In our reference 1-D model, we compensate for the new solar iron abundance ($\epsilon_{\rm Fe}: 7.63 \rightarrow 7.45$) and interpolate to ${\rm log}\,(Z/Z_\sun)=-0.2$ in the Kurucz model grid (cf. Sect. \[ss\_km\]). The 3-D model has been constructed based on the @gre98 solar abundances (cf. @asp00) with $\epsilon_{\rm Fe} = 7.50$ and, to first order, no compensation is necessary. (And the same is true for the other two 1-D models considered in Fig. \[fig\_clvc\].) The changes in carbon and oxygen abundances do not affect the continuum opacities, which are dominated in the optical by H and H$^-$. Consequently, only metals that contribute to the electron density and therefore to the H$^-$ population (i.e. Fe, Si and Mg) are relevant.
In order to investigate the impact of changes of the opacity on the Center-to-Limb variation we have calculated the R-CLV’s for the 3-D and our reference 1-D models at 3499.47Å with two different Fe abundances ($\pm 0.3$ dex). The purpose of the test is to demonstrate the general effect of opacity variations that can come from different sources, i.e. uncertainties of abundances and uncertainties of bound-free cross-sections of all relevant species (not only iron). However, to simplify the procedure we have modified only the abundance of iron, which stands for the cumulative effect of all uncertainties. In the example the total opacity is increased by 50% and decreased by 22%, respectively. Increased opacity, i.e. increased iron abundance, results in large negative residuals while decreased opacity produces large positive residuals (Fig. \[fig\_clvcfe\]). Both models are affected in a similar way, but the strength of the effect is slightly smaller, by about 20%, for the 3-D calculation (cf. Fig. \[fig\_clvcfe\], lower panel). A change in opacity has a significant effect on the CLV but it does not eliminate the discrepancies. To estimate the effect of a varied temperature gradient on the R-CLV’s we have calculated the CLV at 3499.47Å for two artificially modified 1-D models (Fig. \[fig\_clvct\]). The temperature structure around $\tau_{\rm Ross}=2/3$ is changed such that the gradient in temperature is increased and decreased by 1%, respectively. At $\mu=0.2$, i.e. the position of the largest discrepancy, the residual of $0.0085$ is changed by $\approx 0.0035$, i.e. by roughly 1/3, indicating a maximum error of the 1-D temperature gradient of about 3%. Again, a simple change does not lead to perfect agreement, especially when more than one frequency is considered. Finally, we compare the CLV in the continuum for the average (’horizontal’ and over time) 3-D model with the exact data derived from the radiation transfer in 3-D. Fig.
\[fig\_clvm3d\] shows the residual CLV’s for both models. The discrepancies with the observations are much more severe for the average 3-D model and it becomes obvious that it does not represent the original 3-D time series at all. Although a 1-D representation would obviously be highly desirable because it would make it possible to quickly calculate spectra by means of a 1-D radiation transfer code, this turns out to be a very poor approximation in this case. @ayr06 have carefully investigated the rotational-vibrational bands of carbon monoxide (CO) in the solar spectrum and have derived oxygen abundances from three models, i.e. the Fal C model [@fon93], a 1-D model that is especially adapted to match the Center-to-Limb variation of the CO-bands (COmosphere), and from the averaged 3-D time series. In all three cases, temperature fluctuations are accounted for in a so-called 1.5-D approximation, in which profiles from 5 different temperature structures are averaged. By assuming a C/O ratio of 0.5, @ayr06 derive a high oxygen abundance close to the “old” value from @gre98 from both the Fal C and the COmosphere model, discarding the low oxygen abundance derived from the mean 3-D model because its temperature gradient is too steep around $\tau_{\rm 0.5\,\mu m}\approx 1$ and fails to reproduce the observed Center-to-Limb variations. Our current study documents that the mean 3-D model is not a valid approximation of the 3-D time series, and therefore its performance cannot be taken as indicative of the performance of the 3-D model, and in particular of its temperature profile. We find that the Center-to-Limb variation of the continuum predicted by the 3-D simulation matches the observations reasonably well (i.e. similarly to the best 1-D model in our study). The results by Scott et al., based on 3-D radiative transfer on the same hydrodynamical simulations used here, indicate that the observed CO ro-vibrational lines are consistent with the low oxygen and carbon abundances.
Our results show that there is no reason to distrust the 3-D-based abundances on the basis of the simulations having a wrong thermal profile. Lines {#s_lines} ----- We study the Center-to-Limb variation of a number of lines by comparing observations of the quiet Sun taken at 6 different heliocentric angles to synthetic profiles derived from 3-D and 1-D models. The observations are described in detail by @allII and were previously used to investigate the effect of inelastic collisions with neutral hydrogen on oxygen lines formed under non-LTE conditions[^5]. The observations cover 8 spectral regions obtained at 6 different positions on the Sun. The first 5 slit positions are centered at heliocentric angles of $\mu \equiv {\rm cos}\,\theta=$ 1.00, 0.97, 0.87, 0.71 and 0.50. The last position varies between $\mu=$ 0.26 and 0.17 for different wavelength regions. This translates to distances of the slit center from the limb of the Sun in arcmin of 16.00’, 12.11’, 8.11’, 4.74’, 2.14’, 0.54’ and 0.24’, assuming a diameter of the Sun of 31.99’. For both of these last positions the slit extends beyond the solar disk and the center of the illuminated slit corresponds to $\mu=$ 0.34 and 0.31 (0.96’ and 0.78’). We have calculated a variety of line profiles for the 6 positions defined by the center (in $\mu$) of the illuminated slit. Although the slit length, 160arcsec, is rather large, test calculations show that averaging the spectrum from six discrete $\mu$-angles spanning the slit length gives virtually the same equivalent width as the spectrum from the central $\mu$. For $\mu=0.5$, the second-to-last angle, the difference amounts to a marginal change of the log-$gf$ value of about 0.01. To further reduce the computational burden we have derived the average 3-D profiles from calculations taking only 50 (every other) of the 99 snapshots into account. We have selected 10 seemingly unblended lines from 5 different neutral ions. The list of lines is compiled in Table \[tabI\].
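The quoted slit-center distances follow from simple disk geometry, $d = R_\sun\,(1-\sqrt{1-\mu^2})$ with $R_\sun = 31.99'/2$; a quick check (the function name is ours):

```python
import numpy as np

R_SUN_ARCMIN = 31.99 / 2.0   # quoted solar diameter of 31.99'

def limb_distance_arcmin(mu):
    """Projected distance from the limb for a disk position mu = cos(theta):
    d = R (1 - sin(theta)) = R (1 - sqrt(1 - mu^2))."""
    mu = np.asarray(mu, dtype=float)
    return R_SUN_ARCMIN * (1.0 - np.sqrt(1.0 - mu * mu))

d = limb_distance_arcmin([1.00, 0.97, 0.87, 0.71, 0.50])
# reproduces the quoted 16.00', 12.11', 8.11', 4.74', 2.14' to ~0.01'
```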
The log-$gf$ values for most lines were adopted from laboratory measurements at Oxford (e.g. @bla95 and references therein) and by @obr91.

\begin{tabular}{cccccccc}
Ion & $\lambda$ \[Å\] & $R'$ & max($\theta$) \[deg\] & log-$gf$ & log $\Gamma_{\rm Rad}$ & log $\Gamma_{\rm Stark}$ & log $\Gamma_{\rm VdW}$ \\
FeI & 5242.5 & 56000 & 75 & $-0.970$ & 7.76 & $-6.33$ & $-7.58$ \\
FeI & 5243.8 & 56000 & 75 & $-1.050$ & 8.32 & $-4.61$ & $-7.22$ \\
FeI & 5247.0 & 56000 & 75 & $-4.946$ & 3.89 & $-6.33$ & $-7.82$ \\
FeI & 6170.5 & 77000 & 80 & $-0.380$ & 8.24 & $-5.59$ & $-7.12$ \\
FeI & 6200.3 & 206000 & 75 & $-2.437$ & 8.01 & $-6.11$ & $-7.59$ \\
FeI & 7583.8 & 176000 & 80 & $-1.880$ & 8.01 & $-6.33$ & $-7.57$ \\
CrI & 5247.6 & 56000 & 75 & $-1.627$ & 7.72 & $-6.12$ & $-7.62$ \\
NiI & 5248.4 & 56000 & 75 & $-2.426$ & 7.92 & $-4.64$ & $-7.76$ \\
SiI & 6125.0 & 77000 & 80 & $-0.930$ & & & \\
TiI & 6126.2 & 77000 & 80 & $-1.425$ & 6.85 & $-6.35$ & $-7.73$ \\
\end{tabular}

We are interested in how synthetic line profiles deviate from observations as a function of the position angle $\mu$ for two reasons. First of all, any clear trend with $\mu$ would reveal shortcomings of the theoretical model atmospheres similar to our findings presented in Sect. \[ss\_continuum\]. But, arguably, even more relevant is the fact that any significant deviation (scatter) would add to the error bar attached to a line-based abundance determination. In our present study we compare synthetic line profiles from 3-D and 1-D models with the observations. Due to the inherent deficiencies of the 1-D models, i.e. no velocity fields and correspondingly narrow and symmetric line profiles, etc., we focus on line strengths and compare observed and synthetic line equivalent widths, rather than comparing the line profiles in detail. To be able to detect weak deviations, we have devised the following strategy.
We have identified, around each line under consideration, the wavelength interval contributing to the line equivalent width, and have calculated series of synthetic line profiles in 1-D and 3-D with varied log-$gf$ values that bracket the observations with respect to their equivalent widths. That allowed us to determine by interpolation the log-$gf$ value required to match the observed line equivalent widths separately for each position angle (“Best-Fit”). To keep interpolation errors at a marginal level we have applied a small step of $\Delta$(log-$gf)=0.05$ for these series of calculations. A simple normalization scheme has been applied. All profiles have been divided by the maximum intensity found in the vicinity of the line center (within $\pm15\,{\rm km\,s^{-1}}$). We convolved the synthetic profiles with a Gaussian so as to mimic the instrumental profile (see Table 1). An additional Gaussian broadening is applied to the line profiles from the 1-D calculation to account for macro-turbulence; this value was adjusted for each line in order to reproduce the line profiles observed at the disk center. Finally we have translated variations of line strength into variations of abundance, i.e. we have identified $\Delta$log-$gf$ = $\Delta$log-$\epsilon$. This approximation is valid because the impact of slight changes in a metal abundance on the continuum in the optical is marginal. Note that it is not the intent of this study to derive metal abundances from individual lines. Such an endeavor would require a more careful consideration regarding line blends, continuum normalization, and non-LTE effects. All calculations described in this section are single-line calculations, i.e. no blends with atomic or molecular lines are accounted for. The observations did not have information on the absolute wavelength scale (see Allende Prieto et al. 2004), but that is not important for our purposes and the velocity scales in Figs.
\[fig\_l1\] and \[fig\_l2\] are relative to the center of the line profiles. The individual synthetic profiles were convolved with a Gaussian profile to match the observed profiles (cf. Table \[tabI\]). We were generally able to achieve a better fit of the observations when slightly less broadening was applied to the 3-D profiles (0.3% in the case of FeI 5242.5). Since we know from previous investigations that the theoretical profiles derived from 3-D Hydro-models match the observations well, we argue that the resolution of the observations is actually slightly higher than estimated by @allII. An alternative explanation would be that the amplitude of the velocity field in the models is too high. Such a finding, if confirmed, deserves a deeper investigation but is beyond the scope of this study since line equivalent widths are only marginally (if at all) affected. We introduce the lines under consideration by showing the observed center-disk line profiles and the “Best-Fits” derived from the 3-D calculations of the six FeI lines and the four lines from other ions in Figs. \[fig\_l1\] and \[fig\_l2\], respectively. In Fig. \[fig\_lex\] we exemplify the fitting process by means of the FeI line at 5242.5Å and show the relative difference between the observation and a variety of model calculations for all 6 angles under consideration. The “Best-Fit” log-$gf$ values are derived by interpolation to match the observed equivalent widths from the spectral region around the line profile. We have obtained “Best-Fits” for all 10 lines (cf. Table \[tabI\]) and present the log-$gf$ values as a function of $\mu$ in Fig. \[fig\_clvl\]. Recall that the aim of this study is not the measurement of absolute abundances: we focus on relative numbers and normalize our results with respect to the disk center ($\mu=1$). For improved readability we subdivide our findings presented in Fig. \[fig\_clvl\] into 4 distinct groups, i.e.
iron/non-iron lines and 1-D/3-D calculations, respectively. We focus our discussion on the first five data points because we have some indications that the data obtained for the shallowest angle are less trustworthy than the data from the other angles: [*i*]{}) the relative contribution of scattered light was estimated from the comparison of the center-disk spectrum with the FTS spectrum taken from @braI, and the outermost position was the only one for which the entire slit was not illuminated, [*ii*]{}) for all 10 lines the fit of the line profiles for this particular angle is the worst (cf. Fig. \[fig\_lex\]) and [*iii*]{}) the scatter in our data presented in Fig. \[fig\_clvl\] is the largest for this angle. Fortunately, the flux integration is naturally biased towards the center of the disk. We find this systematic behavior for all 6 iron lines: $\Delta{\rm log} (\epsilon)$ is larger or equal in 1-D compared to 3-D, for all but one line (FeI 6170.5Å) $\Delta{\rm log} (\epsilon)$ is positive or zero for the 1-D calculations, and $\Delta{\rm log} (\epsilon)$ is negative or zero for all 3-D calculations. The FeI line at 6170.5Å stands out in both comparisons. In 1-D it is the only line with a negative $\Delta{\rm log} (\epsilon)$ and in 3-D it shows by far the largest negative $\Delta{\rm log} (\epsilon)$. This might be related to the noticeable line blend (cf. Fig. \[fig\_l1\]). The iron lines calculated in 3-D indicate a uniform trend of decreased log-$gf$ values with increased distance from the disk center. The average decrease at $\mu=0.5$ for this group is -0.015 (FeI 6170.5Å excluded). From the 1-D calculations we derive the opposite trend for the same group of lines and obtain an average of 0.103. Obviously, the 3-D model performs significantly better than the 1-D reference model regarding the center-to-limb variation of Fe I lines, even when equivalent widths, and not line asymmetries or shifts, are considered.
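Folding the $\mu=0.5$ offsets just quoted ($+0.103$ in 1-D versus $-0.015$ in 3-D for the Fe I group) through a three-point Gauss flux quadrature gives a rough estimate of the implied difference for flux-based abundances. This sketch assumes, as a simplification, that the 1-D/3-D agreement at the disk center carries over to the $\mu\approx0.89$ node, and it neglects the $\mu\approx0.11$ node entirely:

```python
import numpy as np

# 3-point Gauss-Legendre rule mapped from [-1, 1] to mu in [0, 1]
x, w = np.polynomial.legendre.leggauss(3)
mu = (x + 1.0) / 2.0            # nodes ~0.11, 0.50, 0.89
w_flux = (w / 2.0) * mu * 2.0   # weights for F ~ 2 Int_0^1 I(mu) mu dmu
assert abs(w_flux.sum() - 1.0) < 1e-12   # normalized disk-flux weights

# Offsets Delta log(eps) (1-D minus 3-D) at the three nodes, in node order:
# neglected at mu ~ 0.11 (by far the smallest weight),
# 0.103 - (-0.015) = 0.118 dex at mu = 0.5, assumed zero at mu ~ 0.89.
delta = np.array([0.0, 0.103 + 0.015, 0.0])
correction = float(np.dot(w_flux, delta))   # ~0.05 dex
```

The resulting $\approx$0.05 dex is of the same order as the flux-based estimate discussed in the text.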
For these five FeI lines we obtain an average difference (1-D vs. 3-D) of 0.12 at $\mu=0.5$. To estimate the impact on abundance determinations based on solar fluxes we apply a 3-point Gaussian integration, neglecting the shallowest angle at $\mu=0.11$ (which has, by far, the smallest integration weight) for which we have no data, and assuming that the good agreement between the 1-D and the 3-D calculations for the central ray implies an equally good agreement for the first angle at $\mu=0.89$. These estimates lead to an abundance correction of approximately 0.06 dex between 1-D and 3-D models due to their different center-to-limb variation. @asp00 found a similar correction from the comparison of 1-D and 3-D line profiles at the disk center. For the 4 non-iron lines we find a uniform trend of increasing log-$gf$ values with decreasing $\mu$ for both the 1-D and the 3-D datasets. The systematic behavior is similar to what we find for the iron lines, but now the performance of the 1-D and 3-D models is similar, and the offsets are in the same sense: larger abundances would be found towards the limb for both models. Conclusion ========== The photosphere of cool stars and the Sun can be described by stellar atmospheres in 1-D and 3-D. Since the 3-D models add more realistic physics, i.e. the hydrodynamic description of the gas, they can truly be seen as an advance over the 1-D models. However, this refinement increases the computational effort by many orders of magnitude. In fact, the computational workload becomes so demanding that the description of the radiation field has to be cut back to very few frequencies, i.e. to a rudimentary level that had been surpassed by 1-D models over 30 years ago.
Overall we are left with the astonishing situation that a stellar photosphere can be modeled by either an accurate description of the radiation field with the help of a makeshift account of stellar convection (Mixing-length theory), or by an accurate description of the hydrodynamic properties augmented by a rudimentary account of the radiation field. It is evident that individual line profiles can be described to a much higher degree and without any artificial micro- or macro-turbulence by the 3-D Hydro models, as the simulations account for Doppler-shifts from differential motions within the atmosphere. We know from detailed investigations of line profiles that the velocity field is described quite accurately and that the residuals of fits to line profiles are reduced by about a factor of 10. However, it is not obvious how the 3-D models compare to their 1-D counterparts when it comes to reproducing spectral energy distributions and line strengths. We study the solar Center-to-Limb variation for several lines and continua, to probe the temperature structure of 3-D models. The work is facilitated by the new code [Ass$\epsilon$t]{}, which allows for the fast and accurate calculation of spectra from 3-D structures. In comparison to other programs (e.g. @asp00 [@lud07]), the attributes of the new code are a greater versatility, i.e. the ability to handle arbitrarily complicated line blends on top of non-constant background opacities, higher accuracy due to the proper incorporation of scattering and improved (higher-order) interpolation schemes, and a higher computational speed. In our study we find that regarding center-to-limb variations, the overall shortcomings of the 3-D model are roughly comparable to the shortcomings of the 1-D models. Firstly, we conclude from the investigation of the continuum layers that the models’ temperature gradient is too steep around $\tau\approx2/3$.
This behavior is more pronounced for the 3-D model which shows a drop in intensity (with $\mu$) that is about twice the size of the drop displayed by our reference 1-D model, but at the same time smaller than the discrepancies found for two other (newer!) 1-D structures. Secondly, the line profiles for different position angles on the Sun cannot be reproduced by a single abundance. For Fe I lines, the abundance variation between the disk center and $\mu=0.5$ is about 0.1 dex for our reference 1-D model, but only $0.015$ dex (and with the opposite sign) for the 3-D simulations, although the calculations for lines of other neutral species suggest a more balanced outcome. Overall we conclude that the 1-D and the 3-D models match the observed temperature structure to a similar degree of accuracy. This is somewhat surprising but it might be that the improved description of the convective energy transport is offset by deficiencies introduced by the poor radiation transfer. Once new Hydro models based on an upgraded radiation transfer scheme (i.e. more frequencies and angles, better frequency binning) become available in the near future (Asplund, priv. comm.), we will be able to test this hypothesis. It will become clear whether focusing on refining the radiation transfer will be enough to achieve better agreement with observations, or whether the hydrodynamics needs to be improved as well. We thank M. Asplund for providing us with the 3-D hydrodynamical simulation and the 1-D MARCS model, and M. Bautista, I. Hubeny, and S. Nahar for crucial assistance computing opacities. We extend our thanks to the late John Bahcall, Andy Davis, and Marc Pinsonneault for their interest in our tests of the solar simulations, which enhanced our motivation to carry out this work. Continuing support from NSF (AST-0086321), NASA (NAG5-13057 and NAG5-13147), and the Welch Foundation of Houston is greatly appreciated. Allende Prieto, C., Lambert, D. L., Asplund, M.
2001, , 556, L63
Allende Prieto, C., Lambert, D. L., Asplund, M. 2002, , 567, 544
Allende Prieto, C., Lambert, D. L., Hubeny, I., Lanz, T. 2003, , 147, 363
Allende Prieto, C., Asplund, M., Bendicho, P. F. 2004, , 423, 1109
Allende Prieto, C., Beers, T. C., Wilhelm, R., Newberg, H. J., Rockosi, C. M., Yanny, B., Lee, Y. S. 2006, , 636, 804
Allende Prieto, C. 2007, Invited review to appear in the proceedings of the 14th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun; G. van Belle, ed. (Pasadena, November 2006), astro-ph/0702429
Asplund, M., Nordlund, Å, Trampedach, R., & Stein, R. F. 1999, , 346, L17
Asplund, M., Nordlund, Å, Trampedach, R., Allende Prieto, C., & Stein, R. F. 2000, , 359, 729
Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C., Kiselman, D. 2004, , 417, 751
Asplund, M., Grevesse, N., & Sauval, A. J. 2005a, Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis, 336, 25
Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C., Kiselman, D. 2005b, , 461, 693
Auer, L. 2003, Formal Solution: EXPLICIT Answers, in [*Stellar Atmosphere Modeling*]{}, ASP Conference proceedings, Vol. 288, Hubeny I., Mihalas D., Werner K. (eds)
Ayres, T. R., Plymate, C., Keller, C. U. 2006, , 165, 618
Bahcall, J. N., Basu, S., Pinsonneault, M., & Serenelli, A. M. 2005, , 618, 1049
Barklem, P. S., Piskunov, N., O’Mara, B. J. 2000, , 142, 467
Bautista, M. A. 1997, , 122, 167
Blackwell, D. E., Lynas-Gray, A. E., & Smith, G. 1995, , 296, 217
Brault, J., & Neckel, H. 1987, Spectral Atlas of Solar Absolute Disk-Averaged and Disk-Center Intensity from 3290 to 12150Å, unpublished, Tape copy from KIS IDL library
Delahaye, F., & Pinsonneault, M. H. 2006, , 649, 529
Elste, G., Gilliam, L. 2007, , 240, 9
Fontenla, J. M., Avrett, E. H., Loeser, R. 1993, , 406, 319
Grevesse, N., & Sauval, A. J. 1998, in: Frölich C., Huber M.C.E., Solanki S.K., von Steiger R. (eds), Solar composition and its evolution — from core to corona, Dordrecht: Kluwer, p.
161
Gray, D. F. 1992, The Observation and Analysis of Stellar Photospheres, 2nd edition, Cambridge University Press, Cambridge
Hubeny, I., Lanz, T. 1995, “SYNSPEC — A User’s Guide”, http://nova.astro.umd.edu/Tlusty2002/pdf/syn43guide.pdf
Koesterke, L., Allende Prieto, C., Lambert, D. L. 2006, submitted to
Koesterke, L., Allende Prieto, C., Lambert, D. L. 2006, in prep. for
Kurucz, R. L., Furenlid, I., Brault, J., & Testerman, L. 1984, National Solar Observatory Atlas, Sunspot, New Mexico: National Solar Observatory
Kurucz, R. 1993, ATLAS9 Stellar Atmosphere Programs and 2 km/s grid, Kurucz CD-ROM No. 13, Cambridge, Mass.: Smithsonian Astrophysical Observatory
Lin, C.-H., Antia, H. M., & Basu, S. 2007, ApJ, in press (ArXiv e-prints, 706, arXiv:0706.3046)
Ludwig, H.-G., Steffen, M. 2007, arXiv:0704.1176
Nahar, S. N. 1995, , 293, 967
Neckel, H. 2003, , 212, 239
Neckel, H. 2005, , 229, 13
Neckel, H., & Labs, D. 1994, , 153, 91
Nordlund, Å, & Stein, R. F. 1990, Comp. Phys. Comm., 59, 119
O’Brian, T. R., Wickliffe, M. E., Lawler, J. E., Whaling, J. W., & Brault, W. 1991, Journal of the Optical Society of America B Optical Physics, 8, 1185
Olson, G. L., & Kunasz, P. B. 1987, , 38, 325
Petro, D. L., Foukal, P. V., Rosen, W. A., Kurucz, R. L., Pierce, A. K. 1984, , 283, 426
Scott, P. C., Asplund, M., Grevesse, N., & Sauval, A. J. 2006, , 456, 675
Socas-Navarro, H., & Norton, A. A. 2007, , 660, L153
Stein, R. F., & Nordlund, Å, 1989, , 342, L95

[^1]: Present address: Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Surrey, RH5 6NT, UK

[^2]: $\epsilon({\rm X})= {\rm N(X)/N(H)}\!\cdot\!10^{12}$

[^3]: http://kurucz.harvard.edu/

[^4]: Horizontal average, in this context, refers to the mean value over a surface with constant vertical optical depth, rather than over a constant geometrical depth.

[^5]: Data available at http://hebe.as.utexas.edu/izana
--- abstract: 'We establish well-posedness results for multidimensional non-degenerate $\alpha$-stable driven SDEs with time inhomogeneous singular drifts in $\bL^r-{\mathbb B}_{p,q}^{-1+\gamma}$ with $\gamma<1$ and $\alpha$ in $(1,2]$, where $\bL^r$ and ${\mathbb B}_{p,q}^{-1+\gamma} $ stand for Lebesgue and Besov spaces respectively. Precisely, we first prove the well-posedness of the corresponding martingale problem and then give a precise meaning to the dynamics of the SDE. Our results rely on the smoothing properties of the underlying PDE, which is investigated by combining a perturbative approach with duality results between Besov spaces.' author: - '**Paul-Éric Chaudru de Raynal**[^1] **and**  **Stéphane Menozzi**[^2]' bibliography: - 'bibli.bib' title: '[**On Multidimensional stable-driven Stochastic Differential Equations with Besov drift**]{}' --- Introduction ============= Statement of the problem ------------------------ We are here interested in providing a well-posedness theory for the following *formal* $d$-dimensional stable driven SDE: $$\label{SDE} X_t=x+\int_0^t F(s,X_s)ds+\mathcal W_t,$$ where in the above equation $({\mathcal{W}}_s)_{s\ge 0} $ is a $d$-dimensional symmetric $\alpha$-stable process, for some $\alpha$ in $(1,2]$. The main point here comes from the fact that the drift $F$ is only supposed to belong to the space $\mathbb{L}^r([0,T],{\mathbb{B}}_{p,q}^{-1+\gamma}({\mathbb{R}}^d,{\mathbb{R}}^d))$, where ${\mathbb{B}}_{p,q}^{-1+\gamma}({\mathbb{R}}^d,{\mathbb{R}}^d)$ denotes a Besov space (see Section 2.6.4 of [@trie:83]), for parameters $(p,q,\gamma,r)$ s.t. $1/2<\gamma<1$, $p,q,r\ge 1$, with some constraints to be specified later on. Importantly, assuming the parameter $\gamma$ to be strictly less than $1$ implies that $F$ may not even be a function, but just a distribution, so that it is not clear that the integral part in has any meaning, at least as such.
This is the reason why, at this stage, we talk about “*formal* $d$-dimensional stable SDE”. There are many approaches to tackle such a problem, which mainly depend on the choice of the parameters $p,q,\gamma,r,\alpha$ and $d$. Let us now try to review some of them.\ **The Brownian setting: $\alpha=2$.** There already exists a rather large literature about singular/distributional SDEs of type . Let us first mention the work by Bass and Chen [@bass_stochastic_2001] who derived in the Brownian scalar case the strong well-posedness of when the drift writes (still formally) as $F(t,x)=F(x)=aa'(x)$, for a spatial function $a$ being $\beta$-Hölder continuous with $\beta>1/2 $ and for a multiplicative noise associated with $a^2$, i.e. the additive noise $\mathcal W_t$ in must be replaced by $\int_0^t a(X_s) d\mathcal W_s$. The key point in this setting is that the underlying generator associated with the SDE writes as $L= (1/2) \partial_x \big(a^2 \partial_x \big) $. From this specific divergence form structure, the authors manage to use the theory of Dirichlet forms of Fukushima *et al.* (see [@fuku:oshi:take:10]) to give a proper meaning to . Importantly, the formal integral corresponding to the drift has to be understood as a Dirichlet process. Also, in the particular case where the distributional derivative of $a$ is a signed Radon measure, the authors give an explicit expression of the drift of the SDE in terms of the local time (see Theorem 3.6 therein). In the multi-dimensional Brownian case, Bass and Chen have also established weak well-posedness of SDEs of type when the homogeneous drift belongs to the Kato class, see [@bass:chen:03]. Many authors have also recently investigated SDEs of type in both the scalar and multidimensional Brownian setting for time inhomogeneous drifts in connection with some physical applications. From these works, it clearly appears that handling time inhomogeneous distributional drifts can be a more challenging question.
Indeed, in the time homogeneous case, denoting by $\gF $ an antiderivative of $F$, one can observe that the generator of can be written in the form $(1/2) \exp(-2\gF(x))\partial_x\big( \exp(2\gF(x))\partial_x\big) $ and the dynamics can again be investigated within the framework of Dirichlet forms (see e.g. the works by Flandoli, Russo and Wolf, [@flan:russ:wolf:03], [@flan:russ:wolf:04]). The crucial point is that in the time inhomogeneous case such a connection breaks down. In this framework, we can mention the work by Flandoli, Issoglio and Russo [@flandoli_multidimensional_2017] for drifts in fractional Sobolev spaces. The authors establish therein the existence and uniqueness of what they call *virtual solutions* to : such solutions are defined through the diffeomorphism induced by the Zvonkin transform in [@zvonkin_transformation_1974] which is precisely designed to get rid of the *bad* drift through Itô’s formula. Namely, after having established appropriate smoothness properties of the underlying PDE: $$\label{PDE_FIRST_ZVONKIN} \left\{\begin{array}{l} \partial_t u+F\cdot D u+\frac12 \Delta u-(\lambda+1)u=-F,\\ u(T)=0, \end{array}\right.$$ introducing $\Phi(t,x)=x+u(t,x) $ (Zvonkin transform), it is indeed seen from Itô’s formula that, at a formal level, $Y_t=\Phi(t,X_t) $ satisfies: $$\label{DYN_ZVONKIN_TRANSFORM} Y_t=\Phi(0,x)+\int_0^t (\lambda+1)u(s, \Phi^{-1}(s,Y_s)) ds+\int_0^t D \Phi(s,\Phi^{-1}(s,Y_s)) d{\mathcal{W}}_s,$$ which itself has a unique weak solution from the smoothness of $u$ solving . Since $\Phi $ can be shown to be a $C^1$-diffeomorphism for $\lambda $ large enough, a *virtual* solution to is then rigorously defined setting $X_t=\Phi^{-1}(t,Y_t) $. We can also refer to the work of Zhang and Zhao [@ZZ17], who derived the well-posedness of the martingale problem for the generator associated with , which also contains a non-trivial, smooth enough diffusion coefficient.
Therein, they obtained as well some Krylov type density estimates in Bessel potential spaces for the solution. Also, they manage to obtain a more precise description of the limit drift in the dynamics in , which is interpreted as a suitable limit of a sequence of mollified drifts, i.e. $\lim_n \int_0^t F_n(s,X_s) ds $ for a sequence of smooth functions $(F_n)_{n\ge 1} $ converging to $F$. The key point in these works, which heavily rely on PDE arguments, is to establish that the product $F\cdot D u$ in is in some sense meaningful, which is a real issue since $F$ is meant to be a distribution. This in particular requires deriving some sufficient smoothness properties for the gradient $D u $. Such estimates are usually obtained, and this will also be the case in the current work, through a Duhamel type perturbative argument (or mild representation of the solution) that leads to some *natural* constraints on the parameters of the space to which the drift is assumed to belong. To make things simple, if $F $ is the derivative in space of a Hölder function $\gF $ with Hölder exponent $\gamma$, it follows from the usual parabolic bootstrap that the gradient of the solution $u$ to can only be expected to live in a Hölder space of regularity index $-1+\gamma+\alpha-1$, with $\alpha=2$ here. Thus, in order to give a meaning to the product $F\cdot Du$ as a distribution (more specifically as an element of a suitable Besov-Hölder space) in , one has to assume that $\gamma$ is such that $-1+\gamma+\alpha-1+\gamma>1 \Leftrightarrow \gamma >1/2$, recalling that $\alpha=2$. This is indeed the threshold appearing in [@flandoli_multidimensional_2017] and [@ZZ17] as well as the one previously obtained in [@bass_stochastic_2001]. Note that in such a case, the product also makes sense as a Young integral, i.e. $\int_{{\mathbb{R}}^d}Du(s,y) \cdot F(s,y) dy=\int_{{\mathbb{R}}^d} Du(s,y ) d\gF(s,y) $, which is again coherent with the thresholds.
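For the record, this regularity count can be displayed compactly (our restatement of the argument above; the rule used is that a product of Besov-Hölder distributions of regularity indices $s$ and $t$ is well defined as soon as $s+t>0$):

```latex
% Regularity count for the product F . Du in the Brownian case alpha = 2.
F(t,\cdot)\in {\mathbb{B}}^{-1+\gamma}_{\infty,\infty},
\qquad
Du(t,\cdot)\in {\mathbb{B}}^{-1+\gamma+\alpha-1}_{\infty,\infty}
            = {\mathbb{B}}^{\gamma}_{\infty,\infty}
\quad (\alpha=2),
\\[4pt]
\underbrace{(-1+\gamma)}_{\text{index of }F}
+\underbrace{(\gamma+\alpha-2)}_{\text{index of }Du}
=2\gamma+\alpha-3>0
\;\Longleftrightarrow\;
\gamma>\frac{3-\alpha}{2}
\;\underset{\alpha=2}{=}\;\frac12.
```

The same count with a general $\alpha\in(1,2]$ yields the threshold $\gamma>(3-\alpha)/2$ appearing below in the pure-jump setting.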
To bypass such a threshold, one therefore has to use a suitable theory in order to give a meaning to the product $F\cdot Du$. This is, for instance, precisely the aim of either rough paths, regularity structures or paracontrolled calculus. However, as a price to pay to enter this framework, one has to add some structure to the drift, assuming that the latter can be enhanced into a rough path structure. In the scalar Brownian setting, and in connection with the KPZ equation, Delarue and Diel [@dela:diel:16] used such a specific structure to extend the previous results for an inhomogeneous drift which can be viewed as the generalized derivative of $\gF$ with Hölder regularity index greater than $1/3$ (i.e. assuming that $F$ belongs to $\bL^{\infty}([0,T],{\mathbb{B}}^{(-1/3)^+}_{\infty,\infty})$). Importantly, in [@dela:diel:16] the authors derived a very precise description of the meaning of the *formal* dynamics : they show that the drift of the solution may be understood as a stochastic-Young integral against a mollification of the distribution by the transition density of the underlying noise. As far as we know, it appears to us that such a description is the most accurate that can be found in the literature on stochastic processes (see [@catellier_averaging_2016] for a pathwise version and Remark 18 in [@dela:diel:16] for some comparisons between the two approaches). With regard to the martingale problem, the result of [@dela:diel:16] has then been extended to the multidimensional setting by Cannizzaro and Chouk [@canni:chouk:18], but nothing is said therein about the dynamics.\ **The pure jump case: $\alpha<2$.** In the pure jump case, there are a few works concerning the well-posedness in the singular/distributional case. Even for drifts that are functions, strong uniqueness was shown rather recently.
Let us distinguish two cases: the *sub-critical case* $\alpha \ge 1$, in which the noise dominates the drift (in terms of self-similarity index $\alpha$), and the *super-critical* case where the noise does not dominate. In the first case, we can refer for bounded Hölder drifts to Priola [@prio:12], who proved that strong uniqueness holds for time homogeneous functions $F$ in which are $\beta$-Hölder continuous provided $\beta >1-\alpha /2$. In the second case, the strong well-posedness has been established under the same previous condition by Chen *et al.* [@CZZ17]. In the current distributional framework, the martingale problem associated with the formal generator of has been recently investigated by Athreya, Butkovski and Mytnik [@athr:butk:mytn:18] for $\alpha> 1$ and a time homogeneous $F \in {\mathbb{B}}_{\infty,\infty}^{-1+\gamma} $ under the condition: $-1+\gamma>(1-\alpha)/2$. After specifying how the associated dynamics can be understood, viewing namely the drift as a Dirichlet process (similarly to what was already done in the Brownian case in [@bass_stochastic_2001]), they eventually manage to derive strong uniqueness under the previous condition. Note that results in that direction have also been derived by Bogachev and Pilipenko in [@bo:pi:15] for drifts belonging to a certain Kato class.\ Again, the result obtained by Athreya, Butkovski and Mytnik relies on the Zvonkin transform and hence on a suitable theory for the associated PDE. In the pure-jump framework, it writes $$\label{PDE_FIRST_ZVONKIN_S} \left\{\begin{array}{l} \partial_t u+F\cdot D u+L^\alpha u-(\lambda+1)u=-F,\\ u(T)=0, \end{array}\right.$$ where $L^\alpha$ is the generator of a non-degenerate $\alpha $-stable process.
Reproducing the previous reasoning concerning the expected parabolic bootstrap properties induced by the stable process, we can now expect that, when $F(t,\cdot) $ is the generalized derivative of a Hölder function $\gF$ with regularity index $\gamma $ (or, in Besov space terminology, $F\in \bL^\infty([0,T],{\mathbb{B}}_{\infty,\infty}^{-1+\gamma} )$), the gradient of the solution of the above PDE has Hölder regularity index $-1+\gamma+\alpha-1$: we gain the stability index as regularity order. Again, in order to give a meaning to the product $F\cdot Du$ as a distribution (more specifically as an element of a suitable Besov-Hölder space) in , one has to assume that $\gamma$ is such that $-1+\gamma+\alpha-1+\gamma>1 \Leftrightarrow \gamma >(3-\alpha)/2$. This is precisely the threshold that will guarantee that weak well-posedness holds for a drift $F\in \bL^\infty([0,T],{\mathbb{B}}_{\infty,\infty}^{-1+\gamma} )$. Aim of the paper. ----------------- In the current work, we aim at investigating a large framework by considering the $d$-dimensional case $d\ge1$, with a distributional, singular, time inhomogeneous drift (in $\bL^r([0,T],{\mathbb{B}}^{-1+\gamma}_{p,q})$) when the noise driving the SDE is a symmetric $\alpha$-stable process, $\alpha$ in $(1,2]$, covering both the Brownian and the pure-jump case. As done for the aforementioned results, our strategy relies on the analysis of suitable *a priori* estimates on an associated underlying PDE of type or . Namely, we will provide a Schauder type theory for the mild solution of such a PDE for a large class of data.
This result is also part of the novelty of our approach since these estimates are obtained thanks to a rather robust methodology based on heat-kernel estimates on the transition density of the driving noise together with duality results between Besov spaces viewed through their thermic characterization (see Section \[SEC\_BESOV\] below and Triebel [@trie:83] for additional properties on Besov spaces and their characterizations). This approach does not distinguish the pure-jump and Brownian setting provided the heat-kernel estimates hold.\ Our first main result consists in deriving the well-posedness of the martingale problem introduced in Definition \[DEF\_MPB\] under suitable conditions on the parameters $p,q,r$ and $\gamma $, see Theorem \[THEO\_WELL\_POSED\]. As a by-product of our proof, we also manage to obtain through Krylov type estimates that the canonical process associated with the solution of the martingale problem also possesses a density belonging to an appropriate Lebesgue-Besov space (see Corollary \[COROL\_KRYLOV\]). Then, under slightly reinforced conditions on $p,q,r$ and $\gamma $, we are able to reconstruct the dynamics for the canonical process associated with the solution of the martingale problem, see Theorem \[THEO\_DYN\], specifying how the Dirichlet process associated with the drift writes. In the spirit of [@dela:diel:16], we in particular exhibit a main contribution in this drift that could be useful to investigate the numerical approximations of those singular SDEs (see equations and ) and the recent work by De Angelis *et al.* [@dean:germ:isso:19].\ Let us conclude by mentioning that, while finishing the preparation of the present manuscript, we discovered a brand new preprint of Ling and Zhao [@ling:zhao:19] which somehow presents some overlaps with our results.
Therein, the Authors investigate *a priori* estimates for the elliptic version of the PDE of type or with (homogeneous) drift belonging to Hölder-Besov spaces with negative regularity index (i.e. in ${\mathbb{B}}_{\infty,\infty}^{-1+\gamma} $) and including a non-trivial diffusion coefficient provided the spectral measure of the driving noise is absolutely continuous. As an application, they derive the well-posedness of the associated martingale problem and prove that the drift can be understood as a Dirichlet process. They also obtained quite sharp regularity estimates on the density of the solution and succeeded in including the limit case $\alpha=1$. In comparison with their results, we here manage to handle the case of drifts with additional space singularities, since the integrability indexes $p,q$ of the Besov space are not supposed to be $p=q=\infty$. Although we did not include it, we could also handle in our framework an additional non-trivial diffusion coefficient under their standing assumptions; we refer to Remarks \[REM\_DIFF\_PRELI\] and \[REM\_COEFF\_DIFF\] below concerning this point. It also turns out that we obtain a more accurate version of the dynamics of the solution, which is here, as mentioned above, tractable enough for practical purposes. We eventually mention that, as a main difference with our approach, the controls in [@ling:zhao:19] are mainly obtained through Littlewood-Paley decompositions whereas we rather exploit the thermic characterization and the parabolic framework for the PDE. In this regard, we truly think that the methodologies to derive the *a priori* estimates in both works can be seen as complementary.\ The paper is organized as follows. We introduce our main assumptions and state our results in the next paragraph.
Section \[SEC\_PREUVE\] is dedicated to the proof of the main results concerning the SDE: we state in Subsection \[SEC\_PDE\] the key *a priori* controls for the underlying PDE (with both the mollified and initial rough coefficients) and then describe in Subsection \[SDE\_2\_PDE\] how to pass from the PDE results to the SDE itself, following somehow the procedure considered by Delarue and Diel [@dela:diel:16]. In Section \[SEC\_PDE\_PROOF\], we prove the *a priori* controls for the PDE, introducing to this end the auxiliary mathematical tools needed (heat kernel estimates, thermic characterization of Besov spaces). Section \[SEC\_RECON\_DYN\] is then devoted to the reconstruction of the dynamics from the solution to the martingale problem and Section \[SEC\_PATH\_UNIQUE\] to the pathwise uniqueness in dimension one. Eventually, we postpone to Appendix \[SEC\_APP\_TEC\] the proof of some technical results. Assumptions and main results ---------------------------- **Framework.** We will denote by $L^\alpha $ the generator associated with the driving stable process $({\mathcal{W}}_s)_{s\ge 0} $. When $\alpha=2 $, $L^2=(1/2) \Delta $ where $\Delta$ stands for the usual Laplace operator on ${\mathbb{R}}^d $. In the pure-jump stable case $\alpha \in (1,2) $, for all $\varphi\in C_0^\infty({\mathbb{R}}^d,{\mathbb{R}}) $: $$\label{GENERATEUR_STABLE} L^\alpha \varphi(x) ={\rm p.v.}\int_{{\mathbb{R}}^d} \big[\varphi(x+z)-\varphi(x)\big]\nu(dz),$$ where, writing in polar coordinates $z= \rho \xi,\ (\rho,\xi)\in {\mathbb{R}}_+\times {\mathbb S}^{d-1}$, the Lévy measure decomposes as $\nu(dz)=\frac{d\rho\,\mu(d\xi)}{\rho^{1+\alpha}} $ with $\mu $ a symmetric non-degenerate measure on the sphere ${\mathbb S}^{d-1} $. Precisely, we assume: There exists $\kappa\ge 1$ s.t.
for all $\lambda \in {\mathbb{R}}^d $: $$\label{NON_DEG} \kappa^{-1}|\lambda |^\alpha \le \int_{{\mathbb S}^{d-1}} |\langle \lambda, \xi\rangle|^\alpha \mu(d\xi)\le \kappa|\lambda|^\alpha.$$ Observe in particular that a rather large class of spherical measures $\mu $ satisfy , from the Lebesgue measure, which actually leads to $L^\alpha=-(-\Delta)^{\alpha/2} $ (the usual fractional Laplacian of order $ \alpha$ corresponding to the stable process), to sums of Dirac masses in each direction, i.e. $\mu_{{\rm Cyl}} =\sum_{j=1}^d c_j(\delta_{e_j}+\delta_{-e_j})$, with $(e_j)_{j\in \leftB 1,d\rightB} $ standing for the canonical basis vectors, which for $c_j= 1/2 $ then yields $L^\alpha =-\sum_{j=1}^d (-\partial_{x_j}^2)^{\alpha/2} $ corresponding to the cylindrical fractional Laplacian of order $\alpha $ associated with the sum of scalar symmetric $\alpha $-stable processes in each direction. In particular, it is clear that under [[**(UE)**]{}]{}, the process ${\mathcal{W}}$ admits a smooth density in positive time (see e.g. [@kolo:00]). Correspondingly, $L^\alpha$ generates a semi-group that will be denoted from now on by $P_t^\alpha=\exp(tL^\alpha)$. Precisely, for all $\varphi \in B_b({\mathbb{R}}^d,{\mathbb{R}})$ (space of bounded Borel functions), and all $t>0$: $$\label{DEF_HEAT_SG} P_{t}^\alpha[\varphi](x) := \int_{{\mathbb{R}}^d} dy p_\alpha(t,y-x) \varphi(y),$$ where $p_\alpha(t,\cdot) $ stands for the density of ${\mathcal{W}}_t $. Further properties associated with the density $p_\alpha $, in particular concerning the integrability properties of its derivatives, are stated in Section \[SEC\_MATH\_TOOLS\].\ **Main results.** As already mentioned, the SDE is stated at a *formal* level. Indeed, the drift being only a distribution, the dynamics cannot have a clear meaning at this stage. Our first main result concerns the weak well-posedness for in terms of the Stroock and Varadhan formulation of the martingale problem (see [@stro:vara:79]).
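As an aside, the non-degeneracy condition [[**(UE)**]{}]{} can be checked numerically for the cylindrical example above. This sketch is entirely ours and not part of the paper's argument: for $\mu_{\rm Cyl}$ with $c_j=1/2$ one has $\int_{{\mathbb S}^{d-1}}|\langle\lambda,\xi\rangle|^\alpha\mu_{\rm Cyl}(d\xi)=\sum_{j}|\lambda_j|^\alpha$, and the equivalence of the $\ell^\alpha$ and $\ell^2$ norms on ${\mathbb{R}}^d$ suggests the constant $\kappa=d^{1-\alpha/2}$ for $\alpha\in(1,2]$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, alpha = 3, 1.5
# Norm-equivalence constant for mu_Cyl with c_j = 1/2 and alpha in (1, 2]:
# |lambda|^alpha <= sum_j |lambda_j|^alpha <= d^{1 - alpha/2} |lambda|^alpha.
kappa = d ** (1 - alpha / 2)

for _ in range(1000):
    lam = rng.normal(size=d)
    # Integral against mu_Cyl: (1/2) * sum over +/- e_j of |<lambda, xi>|^alpha.
    integral = 0.5 * sum(abs(lam @ e) ** alpha + abs(lam @ (-e)) ** alpha
                         for e in np.eye(d))
    stable_norm = np.linalg.norm(lam) ** alpha  # |lambda|^alpha
    assert stable_norm / kappa <= integral <= kappa * stable_norm + 1e-12
```

The loop silently succeeding is the check: every sampled direction satisfies the two-sided bound with this $\kappa$.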
\[DEF\_MPB\] Let $\alpha \in (1,2] $. For any given fixed $T> 0$, we say that the martingale problem with data $\big(L^\alpha,F,x\big)$, $x\in {\mathbb{R}}^d $, is well posed if there exists a unique probability measure ${\mathbb{P}}^\alpha $ on $\mathcal C([0,T],{\mathbb{R}}^d)$ if $\alpha=2 $ and on the Skorokhod space $\mathcal D([0,T],{\mathbb{R}}^d)$ of ${\mathbb{R}}^d $-valued càdlàg functions if $\alpha \in (1,2) $, s.t. the canonical process $(X_t)_{0\leq t \leq T}$ satisfies the following conditions: - $\mathbb{P}^\alpha(X_0=x) =1$ - For any $f\in \mathcal C([0,T], \bL^\infty({\mathbb{R}}^d))$, the process $$\Big(u(t,X_t) - \int_0^t f(s,X_s) ds - u(0,x) \Big)_{0\leq t \leq T}$$ is a ${\mathbb{P}}^\alpha $-martingale where $u \in \mathcal C^{0,1}([0,T], {\mathbb{R}}^d)$ is the mild solution of $$\begin{aligned} \label{Asso_PDE_PMG} \p_t u(t,x) + L^\alpha u(t,x) + F(t,x) \cdot D u(t,x) &=& f(t,x),\quad \text{on }[0, T)\times {\mathbb{R}}^d, \notag \\ u( T,x) &=& 0 ,\quad \text{on }{\mathbb{R}}^d.\end{aligned}$$ Having such a definition at hand, we may state our first existence and uniqueness result related to . \[THEO\_WELL\_POSED\] Let $p,q,r\ge 1 $, $\alpha \in \left( \frac{1+ [d/p]}{1-[1/r]},2\right]$. Then, for all $\gamma \in \left(\frac{3-\alpha + [d/p] + [\alpha/r]}{2}, 1\right)$, for all $x \in {\mathbb{R}}^d$ the martingale problem with data $\big(L^\alpha,F,x\big)$ is well posed in the sense of Definition \[DEF\_MPB\]. As a consequence of the proof of Theorem \[THEO\_WELL\_POSED\] we also derive the following corollary. \[COROL\_KRYLOV\] Under the previous assumptions, the following Krylov type estimate holds for the canonical process $(X_t)_{t\ge 0} $.
Define: $$\label{DEF_THETA} \theta=\gamma-1+\alpha-\frac dp-\frac \alpha r.$$ For all $f\in C^{\infty}$, $$\label{EST_KRYLOV} |{\mathbb{E}}^{{\mathbb{P}}^\alpha}[\int_0^T f(s,X_s)ds]|\le C \|f\|_{\bL^r([0,T],{\mathbb{B}}_{p,q}^{\theta-\alpha})},$$ with $r>\alpha/(\theta- d/p)>1$ and $T>0 $. This in particular implies that $X_t $ admits for almost all $t>0 $ a density ${\mathfrak p}_\alpha(\cdot,x,\cdot):(t,y) \mapsto {\mathfrak p}_\alpha(t,x,y) $ in $\bL^{r'}([0,T],{\mathbb{B}}_{p',q'}^{-\theta+\alpha}) $ with $1/m+1/{m'}=1,\ m\in \{p,q,r\} $. Note that there is no constraint on the parameter $q$. This comes from the fact that such a parameter does not play any role in the estimate. The density $\mathfrak p_\alpha$ thus belongs to $\bL^{r'}([0,T],{\mathbb{B}}_{p',\infty}^{-\theta+\alpha}) $. We emphasize that this estimate does not seem optimal to us. Roughly speaking, the expected regularity should be the one needed to define pointwise the gradient of the solution of the associated PDE . As suggested by the analysis done in point **(i)** of , one may be able to prove that the density belongs to $\bL^{r'}([0,T],{\mathbb{B}}_{p',\infty}^{\alpha+\gamma-2-\alpha/r-d/p})$. Note that when $p=r=\infty$, this threshold is, at least formally, the one that could be obtained through the result of Debussche and Fournier [@de:four:13] where density estimates for (time homogeneous) stable SDEs with Hölder diffusion coefficients and bounded measurable drift are obtained. We refrain from going further in that direction as such estimates are not the main concern of our work. The following theorem connects the solution of the martingale problem with the dynamics of the *formal* SDE . Namely, it specifies, in our current singular framework, how the dynamics of has to be understood.
We decompose it into two terms: the first one is the driving $\alpha$-stable process and the other one is a drift obtained as the stochastic-Young limit of a regularized version of the initial drift by the density of the driving process. \[THEO\_DYN\] If we now reinforce the assumptions of Theorem \[THEO\_WELL\_POSED\], assuming $$\label{THRESHOLDS_FOR_DYN} \gamma \in \left(\frac{3-\alpha + [2d/p] + [2\alpha/r]}{2}, 1\right) ,$$ it then holds that: $$\label{DYNAMICS} X_t=x+\int_0^t \mathscr F(s,X_s,ds)+{\mathcal W}_t,$$ where for any $0\leq v \leq s \leq T$, $x\in {\mathbb{R}}^d$, $$\begin{aligned} \label{DECOMP_DRIFT} \mathscr F(v,x,s-v)&=&\int_v^s dr \int_{{\mathbb{R}}^d}dy F(r,y) p_\alpha(r-v,y-x),\end{aligned}$$ with $p_\alpha$ the (smooth) density of ${\mathcal{W}}$ and where the integral in is understood as a $\bL^\ell$ limit of the associated Riemann sums (called $\bL^\ell$ stochastic-Young integral), $1\leq \ell <\alpha$. \[INTEG\_STO\] Under the above assumptions, for any $1\leq \ell < \alpha$ one can define a stochastic-Young integral w.r.t. the quantities in . Namely, for any $1 \leq \ell < \alpha$, there exist $q \geq \ell$ and $q' \ge 1$ satisfying $1/q'+1/q=1/\ell$ such that for any predictable process $(\psi_s)_{s\in [0,t]} $, $(1-1/\alpha-\varepsilon_2) $-Hölder continuous in $\bL^{q'}$ with $0<\varepsilon_2<(\theta-1)/\alpha$, one has $$\int_0^t \psi_s dX_s= \int_0^t \psi_s \mathscr F(s,X_s,ds)+ \int_0^t \psi_s d {\mathcal W}_s.$$ Eventually, in the particular case $d=1$, we are able to derive pathwise uniqueness for the solution of under suitable conditions. We hence recover and generalize part of the previous existing results of Bass and Chen [@bass_stochastic_2001] and Athreya *et al.* [@athr:butk:mytn:18]. \[THEO\_STRONG\] Under the assumption of Theorem \[THEO\_DYN\], when $d=1$, pathwise uniqueness holds for the formal equation , i.e.
two weak solutions $(X,{\mathcal{W}})$ and $(X',{\mathcal{W}})$ satisfying are a.s. equal. \[REM\_MEASURABILITY\] Pay attention that, in the above result, we do not claim that strong uniqueness holds. This mainly comes from a measurability argument. In [@athr:butk:mytn:18], the Authors build the drift as a Dirichlet process and then recover the noise part of the dynamics as the difference between the solution and the drift, allowing them in turn to work under a more standard framework (in terms of measurability), and thus to use the Yamada-Watanabe Theorem. Here, we mainly recover the noise in a canonical way, through the martingale problem, and then build the drift as the difference between the solution and the noise. Such a construction allows us to give a precise meaning to the drift and the loss of measurability can be seen as the price to pay for it. Nevertheless, at this stage, one may restart with the approach of Athreya *et al.* [@athr:butk:mytn:18] to define an ad hoc noise as the difference between the process and the drift (which reads as a Dirichlet process), identify the objects obtained with the two approaches and then obtain suitable measurability conditions to apply the Yamada-Watanabe Theorem. **Notations.** In the following, we denote by $c,c',\ldots$ some positive constants depending on [[**(UE)**]{}]{} and on the set of parameters $\{\alpha,p,q,r,\gamma\}$. Proof of the main results {#SEC_PREUVE} ========================= The underlying PDE {#SEC_PDE} ------------------ As underlined by Definition \[DEF\_MPB\], it turns out that the well-posedness of the martingale problem associated with heavily relies on the derivation of a suitable theory for the Cauchy problem .
Hence, we start this part by introducing, for data $T>0$, $f: {\mathbb{R}}_+\times {\mathbb{R}}^d \to {\mathbb{R}}$ and $g:{\mathbb{R}}^d\to {\mathbb{R}}$, the following *formal* Cauchy problem: $$\begin{aligned} \label{Asso_PDE} \p_t u(t,x) + L^\alpha u(t,x) + F(t,x) \cdot D u(t,x) &=& f(t,x),\quad \text{on }[0,T]\times {\mathbb{R}}^d, \notag \\ u(T,x) &=& g(x),\quad \text{on }{\mathbb{R}}^d,\end{aligned}$$ with $L^\alpha$ as in . Obviously, as is, it is not clear that the scalar product $F(t,x) \cdot D u(t,x)$ makes sense, and this is why the above PDE is, for the time being, only stated formally. Here, the data $f$ and $g$ are functions belonging to some spaces to be specified later on.\ The aim of this section is to provide a “$(p,q,r,\gamma)-{\rm{ well\, posedness\, theory}}$” for the PDE which will in turn allow us to establish our main results for the *formal* SDE . As a key intermediate tool we need to introduce what we will later on call the *mollified* PDE associated with . Namely, denoting by $(F_m)_{m \in \mathbb N}$ a sequence of smooth functions such that $\|F-F_m\|_{\mathbb L^r([0,T], \mathbb B_{p,q}^{-1+\gamma})}\to 0$ when $m\to \infty$, we introduce the *mollified* PDE: $$\begin{aligned} \label{Asso_PDE_MOLL} \p_t u_m(t,x) + L^\alpha u_m(t,x) + F_m(t,x) \cdot D u_m(t,x) &=& f(t,x),\quad \text{on }[0,T]\times {\mathbb{R}}^d, \notag \\ u_m(T,x) &=& g(x),\quad \text{on }{\mathbb{R}}^d,\end{aligned}$$ for which we are able to obtain the following controls. \[PROP\_PDE\_MOLL\] Let $(u_m)_{m \geq 0}$ denote the sequence of classical solutions of the *mollified* PDE .
It satisfies that, under the conditions $$\label{MR_PDE} \forall\, p,q,r\geq 1,\, \forall\, \alpha \in \left( \frac{1+\frac dp}{1-\frac 1r},2\right],\, \forall \gamma \in \left(\frac{3-\alpha + \frac{d}{p} + \frac{\alpha} r}{2}, 1\right),$$ which ensure that $\theta-1:=\gamma - 2+\alpha - d/p - \alpha/r >0 $, there exist positive constants $C:=C(\|F\|_{\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})})$, $C_T:=C(T,\|F\|_{\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})})$, $\varepsilon>0$, s.t. for all $m\ge 0 $ $$\begin{aligned} |u_m(t,x)|&\le &C(1+|x|),\notag\\ \|D u_m\|_{\bL^{\infty}\left({\mathbb{B}}^{\theta-1 - \varepsilon}_{\infty,\infty}\right)} &\le& C_T(\|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}}+\|f\|_{\bL^\infty({\mathbb{B}}_{\infty,\infty}^{\theta-\alpha })}),\label{CTR_SCHAUDER_LIKE}\\ \forall 0\le t\le s\le T,\ x\in {\mathbb{R}}^d,\ |u_m(t,x)-u_m(s,x)|&\le& C |t-s|^{\frac{\theta}{\alpha}},\ |Du_m(t,x)-Du_m(s,x)|\le C |t-s|^{\frac{\theta-1}{\alpha}},\notag\end{aligned}$$ where $\varepsilon \ll 1$ can be chosen as small as desired and $T \mapsto C_T$ is a non-decreasing function. \[REM\_SHAU\_EST\_MOLL\] \[COR\_ZVON\_THEO\] Let $k$ be in $\{1,\ldots,d\}$ and consider the *mollified* PDE with terminal condition $g\equiv 0$ and source $f=F_m^k$ (the $k^{\rm{th}}$ component of $F_m$). Under the above assumptions, it holds that: $$\begin{aligned} \label{ZVON_CTR_MOLL_PDE} \|u_m^k\|_{\bL^\infty(\bL^{\infty})}+ \|D u_m^k\|_{\bL^{\infty}\left({\mathbb{B}}^{\theta-1- \varepsilon}_{\infty,\infty}\right)} \le C_T,\end{aligned}$$ where $C_T\downarrow 0$ when $T\downarrow 0$. Moreover, there exists $C:=C(T,\|F\|_{\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})})>0$ such that holds. Here, we carefully point out that: $$\theta=\gamma - 1+\alpha - \frac dp - \frac{\alpha}r>1.$$ The exponent $\theta$ reflects the spatial smoothness of the underlying PDE. In particular, the condition $\theta>1 $ provides a pointwise gradient estimate for the solution of the mollified PDE.
This key condition rewrites: $\theta>1\iff \gamma-2+\alpha-[d/p]-[\alpha/r]> 0 $. It will be implied by assuming that $\gamma>[3-\alpha+d/p+\alpha/r]/2 $, since in this case $[3-\alpha+d/p+\alpha/r]/2 -2+\alpha-[d/p]-[\alpha/r]> 0 \iff \alpha >[1+d/p]/[1-1/r]$. Of course, to derive strong well-posedness in the multidimensional setting some controls of the second order derivatives are needed. This is what is done in [@krylov_strong_2005] in the Sobolev setting. Let us also specify that, in connection with Theorem 5 and Remark 3, in the scalar setting weak and strong uniqueness are somehow closer since, from the PDE viewpoint, they do not require going up to second order derivatives. Indeed, the strategy is then to develop for two weak solutions $X^1, X^2 $ of (1.15), a regularized version of $|X_t^1-X_t^2|$, which somehow makes a kind of “*local-time*" term appear, which is handled through the Hölder controls on the gradients (see the proof of Theorem 5 and e.g. Proposition 2.9 in \[ABP18\]), whereas in the multidimensional setting, for strong uniqueness, the second derivatives come into play. This, in turn, allows us to derive a well-posedness theory, in the mild sense, for the *formal* PDE summarized in the following theorem. \[THE\_PDE\] Let the assumptions of Theorem \[THEO\_WELL\_POSED\] hold. For $\theta $ defined in , so that in particular $\theta-1>0 $, we assume that $g$ has linear growth and $Dg \in {\mathbb{B}}_{\infty,\infty}^{\theta-1}$ and $f\in \bL^\infty ([0,T],{\mathbb{B}}_{\infty,\infty}^{\theta-\alpha})$. Then, the PDE admits a unique mild solution $$u(t,x) = P_{T-t}^\alpha[g](x) + \int_t^T ds P_{s-t}^\alpha[\left\{ f + F \cdot D u\right\}](s,x),$$ with $(P_t^\alpha)_{t\ge 0}$ the semi-group generated by $L^\alpha$. Furthermore, the unique mild solution satisfies the bounds of Proposition \[PROP\_PDE\_MOLL\] (replacing $u_m$ by $u$).
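The interplay between these thresholds is easy to sanity-check numerically. The following sketch (the parameter grid and variable names are ours) verifies that whenever $\alpha>(1+d/p)/(1-1/r)$ and $\gamma$ lies strictly above the lower bound of Theorem \[THEO\_WELL\_POSED\], one indeed has $\theta=\gamma-1+\alpha-d/p-\alpha/r>1$:

```python
import itertools

d = 2
for p, r, alpha in itertools.product([2, 5, 100], [2, 5, 100], [1.2, 1.5, 2.0]):
    if alpha <= (1 + d / p) / (1 - 1 / r):   # standing assumption on alpha
        continue
    gamma_low = (3 - alpha + d / p + alpha / r) / 2
    if gamma_low >= 1:                        # admissible range for gamma is empty
        continue
    # Sample gamma strictly inside the open interval (gamma_low, 1).
    for frac in (0.01, 0.5, 0.99):
        gamma = gamma_low + frac * (1 - gamma_low)
        theta = gamma - 1 + alpha - d / p - alpha / r
        assert theta > 1, (p, r, alpha, gamma)
```

Every admissible combination on the grid passes the assertion, matching the algebraic equivalence stated above.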
\[REM\_DIFF\_PRELI\] Observe that, when $p=r=+\infty $, we almost have a Schauder type result, namely $\theta=\gamma-1+\alpha$, and we end up with the corresponding parabolic bootstrap effect for both the solution of the mollified PDE and the mild solution of , up to the exponent $\varepsilon $ which can be chosen arbitrarily small. It should be noted at this point that we are confident about the extension of the results to a differential operator $L^\alpha$ involving a non-trivial diffusion coefficient, provided the latter is Hölder continuous in space. Sketches of proofs in this direction are given in Remark \[REM\_COEFF\_DIFF\] following the proofs of Proposition \[PROP\_PDE\_MOLL\], Theorem \[THE\_PDE\] and Corollary \[COR\_ZVON\_THEO\]. However, we refrain from investigating this direction, for the sake of clarity and in order to focus on the more unusual drift component.

From PDE to SDE results {#SDE_2_PDE}
-----------------------

We here state the procedure to go from the “$(p,q,r,\gamma)-{\rm{ well\, posedness\, theory}}$” deriving from Proposition \[PROP\_PDE\_MOLL\], Corollary \[COR\_ZVON\_THEO\] and Theorem \[THE\_PDE\] to the corresponding one for the SDE.\ It is quite standard to derive well-posedness results for a probabilistic problem through PDE estimates. When the drift is a function, such a strategy goes back to e.g. Zvonkin [@zvonkin_transformation_1974] or Stroock and Varadhan [@stro:vara:79]. Such a strategy has been made quite systematic in the distributional setting by Delarue and Diel in [@dela:diel:16], who provide a very robust framework. Points **(i)** to **(iii)** below allow us to derive the rigorous proof of Theorem \[THEO\_WELL\_POSED\], provided Proposition \[PROP\_PDE\_MOLL\], Corollary \[COR\_ZVON\_THEO\] and Theorem \[THE\_PDE\] hold. Point **(iv)** concerns the meaning of the *formal* dynamics and gives some insight into the (more involved) proof of Theorem \[THEO\_DYN\].
Eventually, we explain in point **(v)** how the PDE results obtained in Proposition \[PROP\_PDE\_MOLL\], Corollary \[COR\_ZVON\_THEO\] and Theorem \[THE\_PDE\] can be used to derive pathwise uniqueness for the *formal* SDE (or more precisely for the stochastic dynamical system obtained in point **(iv)**). This gives a flavor of the proof of Theorem \[THEO\_STRONG\].\ **(i) Tightness of the sequence of probability measures induced by the solutions of the mollified SDE .** Here, we consider the regular framework induced by the *mollified* PDE . Note that in this regularized framework, for any $m$, the martingale problem associated with $L^\alpha_m$ is well posed. We denote by $\mathbb{P}^\alpha_m$ the associated solution. Let us generically denote by $(X_t^m)_{t\ge 0} $ the associated canonical process. Note that the underlying space where such a process is defined differs according to the value of $\alpha$: when $\alpha=2$ the underlying space is $\mathcal C([0,T],{\mathbb{R}}^d) $, while it is $\mathcal D([0,T],{\mathbb{R}}^d)$ when $\alpha<2$. Assume w.l.o.g. $s>v $ and let $u_m=(u^1_m,\ldots,u^d_m)$, where each $\textcolor{black}{u_m^i}$ is the solution of with terminal condition $g\equiv 0$ and source term $f=F_m^i$ (i.e. the $i^{{\rm th}}$ component of $F_m$). Let us define for any $s\geq v$ in $ [0,T]^2$ and for any $\alpha \in (1,2]$ the process $$\label{def_de_m_alpha} M_{v,s}(\alpha,u_m,X^m)= \left\lbrace\begin{array}{llll} \displaystyle \int_v^s D u_m(r,X_r^m)\cdot dW_r,\\ \text{ where } W\text{ is a Brownian motion, if }\alpha=2;\\ \displaystyle \int_v^s \int_{{\mathbb{R}}^d \backslash\{0\} } \{u_m(r,X_{r^-}^{\textcolor{black}{m}}+x) - u_m(r,X_{r^-}^{\textcolor{black}{m}}) \}\tilde N(dr,dx), \\ \text{ where }\ \tilde N\text{ is \textcolor{black}{the} compensated Poisson measure, if }\alpha<2. \end{array} \right.$$ Note that this process makes sense since the solution $u_m$ of the *mollified* PDE is bounded.
Next, applying Itô’s formula we obtain $$\begin{aligned} \label{rep_canoproc_PDE} X_s^m-X_v^m&=&M_{v,s}(\alpha,u_m,X^m)+ {\mathcal{W}}_s-{\mathcal{W}}_v - [u_m(v,X_v^m)-u_m(s,X_s^m)].\end{aligned}$$ In order to prove that $({\mathbb{P}}^\alpha_m)_{m\in {\mathbb{N}}} $ actually forms a tight sequence of probability measures on $\mathcal C([0,T],{\mathbb{R}}^d) $ (resp. on $\mathcal D([0,T],{\mathbb{R}}^d)$), it is sufficient to prove that there exist $c,\tilde p$ and $\eta>0$ such that ${\mathbb{E}}^{{\mathbb{P}}^\alpha_{\textcolor{black}{m}}}[|X^m_s - X^m_v|^{\tilde p}] \leq c|v-s|^{1+\eta}$ (resp. ${\mathbb{E}}^{{\mathbb{P}}^\alpha_{\textcolor{black}{m}}}[|X^m_s - X^m_0|^{\tilde p}] \leq cs^{\eta}$), thanks to the Kolmogorov (resp. Aldous) criterion. We refer e.g. for the latter to Proposition 34.9 in Bass [@bass:11]. Writing $$\begin{aligned} [u_m(v,X_v^m)-u_m(s,X_s^m)] = u_m(v,X_v^m)-u_m(v,X_s^m) +u_m(v,X_s^m)-u_m(s,X_s^m),\end{aligned}$$ the result follows in small time thanks to Corollary \[COR\_ZVON\_THEO\] (choosing $1< \tilde p<\alpha$ in the pure jump setting).\ **(ii) Identification of the limit probability measure.** Let us now prove that the limit is indeed a solution of the martingale problem associated with $L^\alpha$. Let $f:[0,T] \times {\mathbb{R}}^d \to {\mathbb{R}}$ be a measurable function, continuous in time and bounded in space, and let $u_m$ be the classical solution of the *mollified* PDE with source term $f$ and terminal condition $g\equiv 0$. Applying Itô’s formula to each $u_m(t,X_t^m)$ we obtain that $$\begin{aligned} u_m(t,X_t^m) - u_m(0,x_0) - \int_0^t f(s,X_s^m) d s = M_{\textcolor{black}{0},t}(\alpha,u_m,X^m),\end{aligned}$$ where $M(\alpha,u_m,X^m)$ is defined by $\eqref{def_de_m_alpha}$. From this definition, if we are able to control, uniformly in $m$, the modulus of continuity of $u_m$ and of $D u_m$, then from the Arzelà-Ascoli theorem we know that we can extract a subsequence $(m_k)_{k\ge 0}$ s.t.
$(u_{m_k})_{k\geq 0}$ and $(D u_{m_k})_{k\geq 0}$ converge uniformly on compact subsets of $[0,T] \times {\mathbb{R}}^d$ to functions $u$ and $D u$ respectively. In particular, equation holds for the limit functions $u$, $Du$. Hence, this implies that $u$ is the unique mild solution of PDE . Thus, together with a uniform control of the moments of $X^m$ (which also follows from and the above conditions on $u_m$), we deduce that $$\label{eq:mgprop} \left(u(t,X_t) - \int_0^t f(s,X_s) d s - u(0,x_0) \right)_{0\leq t \leq T},$$ is a $\mathbb{P}^\alpha$-martingale, by letting the regularization parameter $m$ tend to infinity.\ **(iii) Uniqueness of the limit probability measure.** We now come back to the canonical space (which again depends on the current value of $\alpha$), and let $\mathbb{P}^\alpha$ and $\tilde{\mathbb{P}}^\alpha$ be two solutions of the martingale problem associated with the data ($L^\alpha,\textcolor{black}{F},x_0$), $x_0$ in ${\mathbb{R}}^d$. Thus, for all functions $f :[0,T] \times {\mathbb{R}}^d \to {\mathbb{R}}$ which are continuous in time and measurable and bounded in space, we have, setting again $g \equiv 0$, from Theorem \[THE\_PDE\] $$u(0,x_0) = {\mathbb{E}}^{\mathbb{P}^\alpha}\left[ \int_0^T f(s,X_s) d s\right] = {\mathbb{E}}^{\tilde{\mathbb{P}}^\alpha}\left[ \int_0^T f(s,X_s) d s\right],$$ so that the marginal laws of the canonical process are the same under $\mathbb{P}^\alpha$ and $\tilde{\mathbb{P}}^\alpha$. We extend the result to ${\mathbb{R}}_+$ thanks to regular conditional probabilities, see Chapter 6.2 in [@stro:vara:79]. Uniqueness then follows from Corollary 6.2.4 of [@stro:vara:79].\ **(iv) Reconstructing the dynamics associated with the *formal* SDE .** This part requires introducing an enhanced martingale problem (considering $(X,{\mathcal{W}}) $ as the canonical process).
Working within this enlarged setting allows us to recover the drift part of the dynamics by studying the difference between the increments of the process and of the associated stable noise on small time intervals, which are further meant to be infinitesimal. It turns out that, considering for any time interval $[v,s] $ the quantity $\mathfrak f(v,X_v,s-v):=u^{s}(v,X_v)-X_v $, which can be suitably expanded, we establish sufficient quantitative controls to be able to give a meaning, through stochastic-Young type integration, to $\int_0^t \mathfrak f(v,X_v,dv)$, which in turn is the limit drift of the dynamics. Observe as well that, on a small time interval, the Euler approximation of the drift writes as ${\mathscr F}(v,X_v,s-v)=\textcolor{black}{\int_v^s dr \int_{{\mathbb{R}}^d}dy F(r,y) p_\alpha(r-v,y-X_v)}$, which is nothing but the convolution of the initial distributional drift with the density of the driving noise. This representation could also be useful in order to derive numerical approximations for the SDE. We can to this end mention the recent work by De Angelis *et al.* [@dean:germ:isso:19], who considered some related issues in the scalar Brownian case.\ **(v) About the strong well-posedness for .** It is tempting to wonder whether pathwise uniqueness holds for the SDE, or even whether it admits a strong solution. We know in particular that the drift part in reads as a Dirichlet process. The point is then to apply the Itô formula for Dirichlet processes to expand any weak solution of the SDE along the solution of the *mollified* PDE with source term $F_m$ and terminal condition 0. This yields $$\begin{aligned} \label{DYN_ZVON} X_t^{Z, \textcolor{black}{m}} := X_t-u_m(t,X_t) = x-u_m(0,x) + \mathcal{W}_t- M_{0,t}(\alpha,u_m,X) + R_{0,t}(\alpha,F_m,\mathscr F, X),\end{aligned}$$ where $M_{0,t}(\alpha,u_m,X)$ is as in with $X$ instead of $X^m$ therein and $R_{0,t}(\alpha,F_m,\mathscr F, X):= \int_0^t \big(\mathscr F(s,X_s,ds) - F_m(s,X_s)ds\big)$.
This would lead to strong well-posedness as soon as the parameters satisfy the previous condition in Theorem \[THEO\_DYN\].

PDE analysis {#SEC_PDE_PROOF}
============

This part is dedicated to the proofs of Proposition \[PROP\_PDE\_MOLL\], Corollary \[COR\_ZVON\_THEO\] and Theorem \[THE\_PDE\]. It is thus the core of this paper, as these results allow us to recover, and extend, most of the previous results on SDEs with distributional drifts discussed in the introduction. Notably, as carried out here, the proofs are essentially the same in the diffusive ($\alpha=2$) and pure jump ($\alpha<2$) settings, as they only require heat kernel type estimates on the density of the associated underlying noise. We start by introducing the mathematical tools in Section \[SEC\_MATH\_TOOLS\]. Then, we provide a primer on the PDE by investigating the smoothing properties of the Green kernel associated with the stable noise in Section \[SEC\_GREEN\]. Eventually, we derive in Section \[SEC\_PERTURB\] the proofs of Proposition \[PROP\_PDE\_MOLL\], Corollary \[COR\_ZVON\_THEO\] and Theorem \[THE\_PDE\].

Mathematical tools {#SEC_MATH_TOOLS}
------------------

In this part, we give the main mathematical tools needed to prove Proposition \[PROP\_PDE\_MOLL\] and Theorem \[THE\_PDE\].

### Heat kernel estimates for the density of the driving process.

Under [[**(UE)**]{}]{}, it is rather well known that the following properties hold for the density $p_\alpha $ of ${\mathcal{W}}$. For the sake of completeness we provide a proof. \[SENS\_SING\_STAB\] There exists $C:=C({{\bf (A)}})$ s.t.
for all $\ell \in \{1,2\} $, $t>0$, and $ y\in {\mathbb{R}}^d $: $$\label{SENSI_STABLE} |D_y^\ell p_\alpha(t,y)|\le \frac{C}{t^{\ell/\alpha}} q_\alpha(t,y),\ |\partial_t^\ell p_\alpha(t,y)|\le \frac{C}{t^{\ell}} q_\alpha(t,y),$$ where $\big( q_\alpha(t,\cdot)\big)_{t>0}$ is a family of probability densities on ${\mathbb{R}}^d $ such that $q_\alpha(t,y) = {t^{-d/ \alpha}} \, q_\alpha (1, t^{- 1/\alpha} y)$, $ t>0$, $y \in {\mathbb{R}}^d$, and for all $\gamma \in [0,\alpha) $, there exists a constant $C_{\gamma}:=C(\alpha,\eta,\gamma)$ s.t. $$\label{INT_Q} \int_{{\mathbb{R}}^d}q_\alpha(t,y)|y|^\gamma dy \le C_{\gamma}t^{\frac{\gamma}{\alpha}},\;\;\; t>0.$$ \[NOTATION\_Q\] From now on, for the family of stable densities $\big(q(t,\cdot)\big)_{t>0} $, we also use the notation $q(\cdot):=q(1,\cdot) $, i.e. without any specified argument $q(\cdot)$ stands for the density $q\textcolor{black}{(t,\cdot)}$ at time $\textcolor{black}{t=1}$. *Proof.* Let us recall that, for a given fixed $t>0$, we can use an Itô-Lévy decomposition at the associated characteristic stable time scale for ${\mathcal{W}}$ (i.e. the truncation is performed at the threshold $t^{\frac {1} {\alpha}} $) to write ${\mathcal{W}}_t:=M_t+N_t$, where $M_t$ and $N_t $ are independent random variables. More precisely, $$\label{dec} N_s = \int_0^s \int_{ |x| > t^{\frac {1} {\alpha}} } \; x N(du,dx), \;\;\; \; M_s = {\mathcal{W}}_s - N_s, \;\; s \ge 0,$$ where $N$ is the Poisson random measure associated with the process ${\mathcal{W}}$; for the considered fixed $t>0$, $M_t$ and $N_t$ correspond to the *small jumps part* and *large jumps part* respectively. A similar decomposition has already been used in [@wata:07], [@szto:10] and [@huan:meno:15], [@huan:meno:prio:19] (see in particular Lemma 4.3 therein).
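Before proceeding, note that in the one explicit case $\alpha=2$, $d=1$ (Gaussian density of Brownian motion), the scaling property $q_\alpha(t,y) = t^{-d/\alpha}q_\alpha(1,t^{-1/\alpha}y)$ and the moment bound hold with explicit constants; a minimal numerical sanity check (the helper names are ours):

```python
import math

def p2(t, x):
    # Gaussian density of W_t for d = 1, alpha = 2 (Brownian motion)
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

# self-similarity: p_2(t, x) = t^{-1/2} p_2(1, t^{-1/2} x)
for t in (0.25, 1.7, 9.0):
    for x in (-2.0, 0.3, 5.0):
        assert abs(p2(t, x) - t ** -0.5 * p2(1.0, x / math.sqrt(t))) < 1e-12

# moment bound, with equality here: E|W_t|^gamma = c_gamma t^{gamma/2} for gamma = 1
gamma = 1.0
c_gamma = math.sqrt(2.0 / math.pi)          # E|W_1| for a standard Gaussian
h, L = 1e-3, 30.0
for t in (0.5, 2.0):
    mom = sum(abs(-L + (i + 0.5) * h) ** gamma * p2(t, -L + (i + 0.5) * h)
              for i in range(int(2 * L / h))) * h   # midpoint quadrature
    assert abs(mom - c_gamma * t ** (gamma / 2.0)) < 1e-5
```

For general $\alpha<2$ the density has no closed form, but the same scaling follows from the Fourier representation, as used in the proof below.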
It is useful to note that the cutting threshold in precisely yields, for the considered $t>0$, that: $$\label{ind} N_t \overset{({\rm law})}{=} t^{\frac 1\alpha} N_1 \;\; \text{and} \;\; M_t \overset{({\rm law})}{=} t^{\frac 1\alpha} M_1.$$ To check the assertion about $N$ we start with $${\mathbb{E}}[e^{i \langle \lambda , N_t \rangle}] = \exp \Big( t \int_{{\mathbb S}^{d-1}} \int_{t^{\frac 1\alpha}}^{\infty} \Big(\cos (\langle \lambda, r\xi \rangle) - 1 \Big) \, \frac{dr}{r^{1+\alpha}} \mu_{S}(d\xi) \Big), \;\; \lambda \in {\mathbb{R}}^d$$ (see [@sato:99]). Changing variable $r/t^{1/\alpha} =s$ we get that ${\mathbb{E}}[e^{i \langle \lambda , N_t \rangle}]$ $= {\mathbb{E}}[e^{i \langle \lambda , t^{ 1/\alpha} N_1 \rangle}]$ for any $\lambda \in {\mathbb{R}}^d$, and this shows the assertion (similarly we get the statement for $M$). The density of ${\mathcal{W}}_t$ then reads $$\label{DECOMP_G_P} p_\alpha(t,x)=\int_{{\mathbb{R}}^d} p_{M}(t,x-\xi)P_{N_t}(d\xi),$$ where $p_M(t,\cdot)$ corresponds to the density of $M_t$ and $P_{N_t}$ stands for the law of $N_t$. From Lemma A.2 in [@huan:meno:prio:19] (see as well Lemma B.1 in [@huan:meno:15]), $p_M(t,\cdot)$ belongs to the Schwartz class ${\mathscr S}({\mathbb{R}}^d) $ and satisfies that for all $m\ge 1 $ and all $\ell \in \{0,1,2\} $, there exist constants $\bar C_m,\ C_{m}$ s.t.
for all $t>0,\ x\in {\mathbb{R}}^d $: $$\label{CTR_DER_M} |D_x^\ell p_M(t,x)|\le \frac{\bar C_{m}}{t^{\frac{\ell}{\alpha} }} \, p_{\bar M}(t,x),\;\; \text{where} \;\; p_{\bar M}(t,x) := \frac{C_{m}}{t^{\frac{d}{\alpha}}} \left( 1+ \frac{|x|}{t^{\frac 1\alpha}}\right)^{-m},$$ where $C_m$ is chosen so that *$p_{\bar M}(t,\cdot ) $ is a probability density.* We carefully point out that, to establish the indicated results, since we are led to consider potentially singular spherical measures, we only focus on integrability properties, similarly to [@huan:meno:prio:19], and not on pointwise density estimates as for instance in [@huan:meno:15]. The main idea thus consists in exploiting the decomposition and the bounds above. The derivatives on which we want to obtain quantitative bounds will be expressed through derivatives of $p_M(t,\cdot)$, which also give the corresponding time singularities. However, as for general stable processes, the integrability restrictions come from the large jumps (here $N_t $) and only depend on its index $\alpha$. A crucial point then consists in observing that the convolution $\int_{{\mathbb{R}}^d}p_{\bar M}(t,x-\xi)P_{N_t}(d\xi) $ actually corresponds to the density of the random variable $$\label{we2} \bar {\mathcal{W}}_t:=\bar M_t+N_t,\;\; t>0$$ (where $\bar M_t $ has density $p_{\bar M}(t,\cdot)$ and is independent of $N_t $; to have such a decomposition one can define each $\bar {\mathcal{W}}_t$ on a product probability space). Then, the integrability properties of $\bar M_t+N_t $, and more generally of all random variables appearing below, come from those of $\bar M_t $ and $N_t$.
One can easily check that $p_{\bar M}(t,x) = {t^{-\frac d\alpha}} \, p_{\bar M} (1, t^{-\frac 1\alpha} x),$ $ t>0, \, $ $x \in {\mathbb{R}}^d.$ Hence $$\bar M_t \overset{({\rm law})}{=} t^{\frac 1\alpha} \bar M_1,\;\;\; N_t \overset{({\rm law})}{=} t^{\frac 1\alpha} N_1.$$ By independence of $\bar M_t$ and $N_t$, using the Fourier transform, one can easily prove that $$\label{ser1} \bar {\mathcal{W}}_t \overset{({\rm law})}{=} t^{\frac 1\alpha} \bar {\mathcal{W}}_1.$$ Moreover, $ {\mathbb{E}}[|\bar {\mathcal{W}}_t|^\gamma]={\mathbb{E}}[|\bar M_t+N_t|^\gamma]\le C_\gamma t^{\frac\gamma \alpha}({\mathbb{E}}[|\bar M_1|^\gamma]+{\mathbb{E}}[| N_1|^\gamma])\le C_\gamma t^{\frac\gamma \alpha}, \; \gamma \in (0,\alpha). $ This shows that the density of $\bar {\mathcal{W}}_t$ verifies . The controls on the spatial derivatives are derived similarly, using for $\ell\in \{1,2\} $ the same previous argument. The bound for the time derivatives follows from the Kolmogorov equation $\partial_t p_\alpha(t,z)=L^\alpha p_\alpha (t,z) $ and from the fact that for all $x\in {\mathbb{R}}^d,\ |L^\alpha p_M(t,x)|\le C_m t^{-1 }p_{\bar M}(t,x) $ (see again Lemma 4.3 in [@huan:meno:prio:19] for details).

### Thermic characterization of Besov norms. {#SEC_BESOV}

In the sequel, we will intensively use the thermic characterization of Besov spaces, see e.g. Section 2.6.4 of Triebel [@trie:83].
Precisely, for $\vartheta \in {\mathbb{R}}$, $q\in (0,+\infty]$, $p \in (0,\infty] $, ${\mathbb{B}}_{p,q}^\vartheta({\mathbb{R}}^d):=\{f\in {\mathcal S}'({\mathbb{R}}^d): \|f\|_{{\mathcal H}_{p,q}^\vartheta,\alpha}<+\infty \} $, where ${\mathcal S}({\mathbb{R}}^d) $ stands for the Schwartz class and $$\label{strong_THERMIC_CAR_DEF_STAB} \|f\|_{{\mathcal H}_{p,q}^\vartheta,\alpha}:=\|\varphi(D) f\|_{\bL^p({\mathbb{R}}^d)}+ \Big(\int_0^1 \frac {dv}{v} v^{(n-\frac \vartheta\alpha )q} \|\partial_v^n \tilde p_\alpha (v,\cdot)\star f\|_{\bL^p({\mathbb{R}}^d)}^q \Big)^{\frac 1q},$$ with $\varphi \in C_0^\infty({\mathbb{R}}^d)$ (a smooth function with compact support) s.t. $\varphi(0)\neq 0 $, and $\varphi(D)f := (\varphi \hat f)^{\vee} $, where $\hat f$ and $(\varphi \hat f)^\vee $ respectively denote the Fourier transform of $f$ and the inverse Fourier transform of $\varphi \hat f $. The parameter $n$ is an integer s.t. $n> \vartheta/ \alpha $ and, for $v>0$, $z\in {\mathbb{R}}^d $, $\tilde p_\alpha(v,\cdot) $ denotes the density of the $d$-dimensional isotropic stable process at time $v$. In particular $\tilde p_\alpha(v,\cdot) $ satisfies the bounds of Lemma \[SENS\_SING\_STAB\], and in that case the upper-bounding density can be specified. Namely, in that case holds with $q_\alpha(t,x)=C_\alpha t^{-d/\alpha}(1+|x|/t^{1/\alpha})^{-(d+\alpha)}$. Importantly, it is well known that ${\mathbb{B}}_{p,q}^\vartheta({\mathbb{R}}^d,{\mathbb{R}})$ and ${\mathbb{B}}_{p',q'}^{-\vartheta}({\mathbb{R}}^d,{\mathbb{R}})$, where $p',q' $ are the conjugates of $p,q$ respectively, are in duality. Namely, for $(p,q)\in (1,\infty]^2 $, ${\mathbb{B}}_{p,q}^\vartheta=({\mathbb{B}}_{p',q'}^{-\vartheta})^*$, see e.g. Theorem 4.1.3 in [@adam:hedb:96].
In particular, for all $(f,g)\in {\mathbb{B}}_{p,q}^{\vartheta}({\mathbb{R}}^d,{\mathbb{R}})\times {\mathbb{B}}_{p',q'}^{-\vartheta}({\mathbb{R}}^d,{\mathbb{R}}) $ which are also functions: $$\label{EQ_DUALITY} |\int_{{\mathbb{R}}^d} f(y) g(y) dy|\le \|f\|_{{\mathbb{B}}_{p,q}^\vartheta}\|g\|_{{\mathbb{B}}_{p',q'}^{-\vartheta}}.$$ In the following we call *thermic part* the second term in the right hand side of . This contribution will be denoted by $\mathcal{T}_{p,q}^{\vartheta}[f]$. \[REM\_PART\_NONTHERMIC\] As will become clear in the following, the first part of the r.h.s. in will be the easiest part to handle (in our case) and will give negligible contributions. For that reason, we will only focus on the estimation of the *thermic part* of the Besov norm below. See Remark \[GESTION\_BESOV\_FIRST\] in the proof of Lemma \[LEM\_BES\_NORM\] in Appendix \[SEC\_APP\_TEC\].

### Auxiliary estimates

We here provide some useful estimates whose proofs are postponed to Appendix \[SEC\_APP\_TEC\]. We refer to the next section for a flavor of the proofs as well as for an application of these results. \[LEM\_BES\_NORM\] Let $\Psi : [0,T]\times {\mathbb{R}}^d \to {\mathbb{R}}^d$. Assume that for all $s$ in $[0,T]$ the map $y \mapsto \Psi(s,y)$ is in $\mathbb B_{\infty,\infty}^\beta({\mathbb{R}}^d)$ for some $\beta \in \textcolor{black}{(0,1]}$. Define for any $\alpha$ in $(1,2]$ and for all $\eta \in \{0,1,\alpha\}$ the differential operator $\mathscr D^\eta$ through its Fourier symbol $$\hat{\mathscr D}^\eta := \left\lbrace\begin{array}{lll} {\rm Id} \text{ if }\eta =0,\\ -i\textcolor{black}{\xi} \text{ if }\eta =1,\\ |\xi|^\alpha \text{ if }\eta =\alpha, \end{array}\right.$$ and let $p_\alpha(t,\cdot)$ be the density of ${\mathcal{W}}_t$ defined in .
Then, there exists a constant $C := C({{\bf (UE)}},T)>0$ such that for any $\gamma$ in $(1-\beta,1)$, any $p',q'\ge 1$, all $t<s$ in $[0,T]^2$ and all $x$ in ${\mathbb{R}}^d$: $$\label{Esti_BES_NORM} \| \Psi(s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x) \|_{{\mathbb{B}}_{p',q'}^{\textcolor{black}{1-\gamma}}} \leq \|\Psi(s,\cdot)\|_{{\mathbb{B}}_{\infty,\infty}^\beta} \frac{C}{(s-t)^{\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+\frac {\eta}{\alpha}\right]}},$$ where $p$ is the conjugate of $p'$. Also, for any $\gamma$ in $(1-\beta,1]$, all $t<s$ in $[0,T]^2$ and all $x,x'$ in ${\mathbb{R}}^d$, it holds that $$\label{Esti_BES_HOLD} \| \Psi(s,\cdot) \big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) - \mathscr D^\eta p_\alpha(s-t,\cdot-x')\big) \|_{{\mathbb{B}}_{p',q'}^{\textcolor{black}{1-\gamma}}} \leq \|\Psi(s,\cdot)\|_{ {\mathbb{B}}_{\infty,\infty}^\beta}\frac{C}{(s-t)^{\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+\frac {\eta+\beta'}{\alpha}\right]}}|x-x'|^{\beta'},$$ up to a modification of $C\textcolor{black}{:=C({{\bf (UE)}},T,\beta')}$.

A primer on the PDE: reading almost optimal regularity through Green kernel estimates {#SEC_GREEN}
----------------------------------------------------------------------------------

Equation can be rewritten as $$\begin{aligned} \p_t u(t,x) + L^\alpha u(t,x) &=& f(t,x)- F(t,x) \cdot D u(t,x),\quad \text{on }[0,T]\times {\mathbb{R}}^d, \notag \\ u(T,x) &=& g(x),\quad \text{on }{\mathbb{R}}^d,\label{ALTERNATE_PDE}\end{aligned}$$ viewing the first order term as a source (depending here on the solution itself).
In order to understand what type of smoothing effect can be expected for a rough source, we first investigate the smoothness of the solution of the following equation: $$\begin{aligned} \label{LIN_WITH_ROUGH_SOURCE} \p_t w(t,x) + L^\alpha w(t,x) &=& \textcolor{black}{\Phi}(t,x),\quad \text{on }[0,T]\times {\mathbb{R}}^d,\notag \\ w(T,x) &=& 0,\quad \text{on }{\mathbb{R}}^d.\end{aligned}$$ The parallel with the initial problem , rewritten in , is rather clear. We will aim at applying the results obtained below for the solution of to $\textcolor{black}{\Phi}=f-F\cdot Du $ (where the roughest part of the source will obviously be $ F\cdot Du$). Given a map $\textcolor{black}{\Phi}$ in $\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})$, we now specifically concentrate on the gain of regularity which can be obtained through the fractional operator $L^\alpha $ for the solution $w$ of w.r.t. the data $\textcolor{black}{\Phi}$. Having a lot of parameters at hand, this will provide a primer to understand what could be, at best, attainable for the *target* PDE -. The solution of corresponds to the Green kernel associated with $\textcolor{black}{\Phi}$, defined as: $$\label{DEF_GREEN} G^\alpha \textcolor{black}{\Phi}(t,x) = \int_t^T ds \int_{{\mathbb{R}}^d} dy \textcolor{black}{\Phi}(s,y) p_\alpha(s-t,y-x).$$ Since, to address the well-posedness of the martingale problem, we are led to control, in some sense, gradients, we will here try to do so for the Green kernel introduced in , solving the linear problem with *rough* source. Namely, for a multi-index $\eta\in {\mathbb{N}}^d$, $|\eta|:=\sum_{i=1}^d \eta_i\le 1 $, we want to control $\textcolor{black}{D_x^\eta} G^\alpha \textcolor{black}{\Phi}(t,x) $. Avoiding harmonic analysis techniques, which could in some sense allow one to average non-integrable singularities, our approach allows us to obtain the *almost optimal regularity* thresholds that could be attainable on $u$.
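Before estimating, it may help to see the Green kernel in action in the one explicit case $\alpha=2$, $d=1$, where $p_2$ is the Gaussian density of Brownian motion. The sketch below (the toy smooth source $\Phi(s,y)=\sin y$ and the function names are our own illustrative choices, standing in for the rough $\Phi$) checks numerically that, since $\int \sin(y)\, p_2(s-t,y-x)\,dy = e^{-(s-t)/2}\sin x$, the kernel evaluates to $G^2\Phi(t,x)=2\big(1-e^{-(T-t)/2}\big)\sin x$:

```python
import math

def p2(s, z):
    # Gaussian density of W_s (alpha = 2, d = 1)
    return math.exp(-z * z / (2.0 * s)) / math.sqrt(2.0 * math.pi * s)

def green(t, x, T, n_t=100, n_y=2000, L=10.0):
    # G Phi(t,x) = int_t^T ds int Phi(s,y) p_2(s-t, y-x) dy, toy source Phi(s,y) = sin(y)
    ht, hy = (T - t) / n_t, 2.0 * L / n_y
    total = 0.0
    for i in range(n_t):
        s = t + (i + 0.5) * ht                      # midpoint rule in time
        inner = sum(math.sin(x + (-L + (j + 0.5) * hy)) * p2(s - t, -L + (j + 0.5) * hy)
                    for j in range(n_y))            # int sin(y) p_2(s-t, y-x) dy, y = x + z
        total += inner * hy * ht
    return total

t, T, x = 0.0, 0.5, 1.1
exact = 2.0 * (1.0 - math.exp(-(T - t) / 2.0)) * math.sin(x)
assert abs(green(t, x, T) - exact) < 1e-4
```

For a genuinely distributional $\Phi$ no such pointwise evaluation is available, which is exactly why the duality estimates below are needed.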
Thanks to the Hölder inequality (in time) and the duality on Besov spaces (see equation ) we have that: $$\begin{aligned} \left|\textcolor{black}{D_x^\eta} G^\alpha \textcolor{black}{\Phi}(t,x)\right| &=& \left|\int_t^T ds \int_{{\mathbb{R}}^d} dy \textcolor{black}{\Phi}(s,y) \textcolor{black}{D_x^\eta}p_\alpha(s-t,y-x)\right|\\ &\leq& \| \textcolor{black}{\Phi} \|_{\bL^r((t,T],{\mathbb{B}}_{p,q}^{-1+\gamma})} \|\textcolor{black}{D_x^\eta}p_\alpha(\cdot-t,\cdot-x) \|_{\bL^{r'}((t,T],{\mathbb{B}}_{p',q'}^{1-\gamma})},\end{aligned}$$ where $p'$, $q'$ and $r'$ are the conjugate exponents of $p$, $q$ and $r$. Let us first focus, for $s\in (t,T]$, on the thermic part of $\|\textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x) \|_{{\mathbb{B}}_{p',q'}^{1-\gamma}} $. We have with the notations of Section \[SEC\_BESOV\]: $$\begin{aligned} \Big(\mathcal{T}_{p',q'}^{1-\gamma}[\textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)]\Big)^{q'}&=&\int_0^1 \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \partial_v\tilde p_\alpha(v,\cdot) \star \textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)\|_{\bL^{p'}}^{q'}\\ &=& \int_0^{(s-t)} \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \partial_v\tilde p_\alpha(v,\cdot) \star \textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)\|_{\bL^{p'}}^{q'}\\ &&+ \int_{(s-t)}^1 \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \partial_v\tilde p_\alpha(v,\cdot) \star \textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)\|_{\bL^{p'}}^{q'}\\ &&=:\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)]|_{[0,(s-t)]}\Big)^{q'}+\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)]|_{[(s-t),1]}\Big)^{q'}.\notag \end{aligned}$$ In the above equation, we split the time interval into two parts. On the upper interval, for which there are no time singularities, we use directly convolution inequalities and the available controls for the derivatives of the heat kernel (see Lemma \[SENS\_SING\_STAB\]).
On the lower interval we have to equilibrate the singularities in $v$ and use cancellation techniques involving the sensitivities of $\textcolor{black}{D_x^\eta} p_\alpha $ (which again follow from Lemma \[SENS\_SING\_STAB\]). Let us begin with the upper part. Using the $\bL^1-\bL^{p'}$ convolution inequality, we have from Lemma \[SENS\_SING\_STAB\]: $$\begin{aligned} \Big(\mathcal{T}_{p',q'}^{1-\gamma}[\textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)]|_{[(s-t),1]}\Big)^{q'}&\leq&\int_{(s-t)}^1 \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \|\partial_v \tilde p_\alpha(v,\cdot) \|_{\bL^1}^{q'} \| \textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)\|_{\bL^{p'}}^{q'}\notag\\ &\leq&\frac{C}{(s-t)^{(\frac d{p\alpha}+\frac {|\eta|}{\alpha})q'}} \int_{(s-t)}^1 \frac{dv}{v} \frac{1}{v^{\frac{1-\gamma}{\alpha}q'}} \notag\\ &\leq & \frac{C}{(s-t)^{\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+\frac {|\eta|}{\alpha}\right]q'}}.\label{CTR_COUP_HAUTE_GREEN}\end{aligned}$$ Indeed, for the second inequality we used that: $$\begin{aligned} \|\textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)\|_{\bL^{p'}}&=&\Big(\int_{{\mathbb{R}}^d} \big|\partial_x^\eta p_\alpha(s-t,x) \big|^{p'} dx\Big)^{1/p'}\notag\\ &\le& \frac{C_{p'}}{(s-t)^{\frac{|\eta|}{\alpha}}} \Big((s-t)^{-\frac d \alpha(p'-1)}\int_{{\mathbb{R}}^d} \frac{dx}{(\textcolor{black}{s-t})^{\frac d\alpha}}\big(q_\alpha(1, \frac{x}{(s-t)^{\frac 1\alpha}})\big)^{p'}\Big)^{1/p'}\notag\\ &\le & C_{p'}(s-t)^{-[\frac{d}{\alpha p}+\frac{|\eta|}\alpha]}\Big( \int_{{\mathbb{R}}^d} d\tilde x \big(q(1,\tilde x)\big)^{p'}\Big)^{1/p'}\le \bar C_{p'}(s-t)^{-[\frac{d}{\alpha p}+\frac{|\eta|}{\alpha}]}, \label{INT_LP_DENS_STABLE}\end{aligned}$$ recalling that $p^{-1}+(p')^{-1}=1$ and $p\in (1,+\infty] $, $p'\in [1,+\infty) $ for the last inequality.
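The scaling in is exact in the Gaussian case $\alpha=2$, $d=1$; a small numeric sketch (our own function names) confirming that $\|D p_2(t,\cdot)\|_{\bL^{p'}}$ scales as $t^{-[\frac{d}{\alpha p}+\frac{|\eta|}{\alpha}]}$ with $|\eta|=1$:

```python
import math

def dp2(t, x):
    # spatial derivative of the 1-d Gaussian density (alpha = 2)
    return -(x / t) * math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def lp_norm(t, p_prime, h=2e-3, L=40.0):
    # midpoint approximation of ||D p_2(t, .)||_{L^{p'}}
    s = sum(abs(dp2(t, -L + (i + 0.5) * h)) ** p_prime for i in range(int(2 * L / h)))
    return (s * h) ** (1.0 / p_prime)

p_prime = 2.0                      # hence the conjugate p = 2 as well
p = p_prime / (p_prime - 1.0)
t1, t2 = 0.5, 2.0
ratio = lp_norm(t1, p_prime) / lp_norm(t2, p_prime)
# predicted exponent: d/(alpha p) + |eta|/alpha = 1/(2p) + 1/2, here 3/4
predicted = (t1 / t2) ** (-(1.0 / (2.0 * p) + 0.5))
assert abs(ratio / predicted - 1.0) < 1e-6
```

For the Gaussian the scaling is an identity, so the ratio of norms at two times matches the predicted power law up to quadrature error.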
Hence, the map $s \mapsto \mathcal{T}_{p',q'}^{1-\gamma}[\textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)]|_{[(s-t),1]}$ belongs to $\bL^{r'}((t,T],{\mathbb{R}}^+)$ as soon as $$\label{FIRST_COND_ETA} -r'\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+\frac {|\eta|}{\alpha}\right] >-1 \Longleftrightarrow |\eta|< \alpha(1-\frac 1r)+\gamma-1-\frac dp. $$ On the other hand, still from (see again the proof of Lemma 4.3 in [@huan:meno:prio:19] for details), one derives that there exists $C$ s.t. for all $\beta\in (0,1] $ and all $(x,y,z)\in ({\mathbb{R}}^d)^3 $, $$\begin{aligned} |\textcolor{black}{D_x^\eta} p_\alpha(s-t,z-x)- \textcolor{black}{D_x^\eta} p_\alpha(s-t,y-x)|\le \frac{C}{(s-t)^{\frac{\beta+|\eta|}{\alpha}}} |z-y|^\beta \Big( q_\alpha(s-t,z-x)+q_\alpha(s-t,y-x)\Big).\label{CTR_BETA_GREENPART}\end{aligned}$$ Indeed, is direct if $|z-y|\ge (1/2) (s-t)^{1/\alpha} $ (off-diagonal regime). It suffices to exploit the bound for $\textcolor{black}{D_x^\eta} p_\alpha(s-t,y-x) $ and $\textcolor{black}{D_x^\eta} p_\alpha(s-t,z-x) $ and to observe that $\big(|z-y|/(s-t)^{1/\alpha}\big)^{\beta}\ge 1 $.
If now $|z-y|\le (1/2)(s-t)^{1/\alpha} $ (diagonal regime), it suffices to observe from that, with the notations of the proof of Lemma \[SENS\_SING\_STAB\] (see in particular ), for all $\lambda\in [0,1] $: $$\begin{aligned} |\textcolor{black}{D_x^\eta} D p_M(s-t,y-x+\lambda(z-y))|&\le& \frac{C_m}{(s-t)^{\frac{|\eta|+1}\alpha}}p_{\bar M}(s-t,y-x+\lambda(z-y))\notag\\ &\le& \frac{C_m}{(s-t)^{\frac{|\eta|+1+d}\alpha}}\frac{1}{\Big( 1+\frac{|y-x+\lambda(z-y)|}{(s-t)^{\frac 1\alpha}} \Big)^{m}} \notag\\ &\le& \frac{C_m}{(s-t)^{\frac{|\eta|+1+d}\alpha}}\frac{1}{\Big( \frac 12+\frac{|y-x|}{(s-t)^{\frac 1\alpha}} \Big)^{m}}\le 2\frac{C_m}{(s-t)^{\frac {|\eta|+1}\alpha}} p_{\bar M}(s-t,y-x).\notag\\ \label{MIN_JUMP_GREENPART}\end{aligned}$$ Therefore, in the diagonal case the bound follows from by writing $|\textcolor{black}{D_x^\eta} p_\alpha(s-t,z-x)- \textcolor{black}{D_x^\eta} p_\alpha(s-t,y-x)|\le \int_0^1 d\lambda | \textcolor{black}{D_x^\eta} D p_\alpha(s-t,y-x+\lambda(z-y)) \cdot (z-y)| \le 2 C_m (s-t)^{-(|\eta|+1)/\alpha} q_{\alpha}(s-t,y-x)|z-y|\le \tilde C_m (s-t)^{-(|\eta|+\beta)/\alpha} q_{\alpha}(s-t,y-x)|z-y|^\beta$ for all $\beta \in [0,1] $ (exploiting again that $|z-y|\le (1/2) (s-t)^{1/\alpha} $ for the last inequality).
From we now derive: $$\begin{aligned} &&\| \partial_v \tilde p_\alpha(v,\cdot) \star \textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)\|_{L^{p'}}\notag\\ &=&\Big(\int_{{\mathbb{R}}^d } dz |\int_{{\mathbb{R}}^d}dy \partial_v \tilde p_\alpha(v,z-y)\textcolor{black}{D_x^\eta}p_\alpha(s-t,y-x)|^{p'} \Big)^{1/p'}\notag\\ &=&\Big(\int_{{\mathbb{R}}^d } dz \Big|\int_{{\mathbb{R}}^d}dy \partial_v \tilde p_\alpha(v,z-y)\Big[\textcolor{black}{D_x^\eta}p_\alpha(s-t,y-x)-\textcolor{black}{D_x^\eta}p_\alpha(s-t,z-x)\Big]\Big|^{p'} \Big)^{1/p'}\notag \\ &\le & \frac{1}{(s-t)^{\frac{|\eta|+\beta}{\alpha}}}\Big(\int_{{\mathbb{R}}^d } dz \Big|\int_{{\mathbb{R}}^d}dy|\partial_v \tilde p_\alpha(v,z-y)|\ |z-y|^\beta\big[q_\alpha(s-t,y-x) + q_\alpha(s-t,z-x) \big]\Big|^{p'}\Big)^{1/p'}\notag \\ &\le & \frac{C_{p'}}{(s-t)^{\frac{|\eta|+\beta}{\alpha}}}\Bigg[\Big(\int_{{\mathbb{R}}^d } dz \Big|\int_{{\mathbb{R}}^d}dy |\partial_v \tilde p_\alpha(v,z-y)|\ |z-y|^\beta q_\alpha(s-t,y-x)\Big|^{p'}\Big)^{1/p'} \notag \\ &&+ \Big(\int_{{\mathbb{R}}^d} dz \big(q_\alpha(s-t,z-x)\big)^{p'}\Big(\int_{{\mathbb{R}}^{d}} dy |\partial_v \tilde p_\alpha(v,y-z)|\ |y-z|^\beta \Big)^{p'} \Big)^{1/p'}\Bigg].\label{PREAL_CANC_GREEN}\end{aligned}$$ From the $\bL^1-\bL^{p'}$ convolution inequality and Lemma \[SENS\_SING\_STAB\] we thus obtain: $$\| \tilde p_\alpha(v,\cdot) \star \textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)\|_{\bL^{p'}} \le \frac{C_{p'}}{(s-t)^{\frac{|\eta|+\beta+\frac dp}{\alpha}}} v^{-1+\frac \beta \alpha}.$$ Hence, $$\begin{aligned} \Big(\mathcal{T}_{p',q'}^{1-\gamma}[\textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)]|_{[0,(s-t)]}\Big)^{q'}&\leq&\frac{C}{(s-t)^{\left[\frac{d}{p\alpha}+\frac{|\eta|}{\alpha}+\frac{\beta}{\alpha}\right]q'}}\int_0^{(s-t)^{}} \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha}-1+\frac{\beta}{\alpha})q'} \notag \\ &\leq& \frac{C}{(s-t)^{\left[\frac{d}{p\alpha}+\frac{|\eta|}{\alpha}+\frac{\beta}{\alpha} + 
\frac{1-\gamma-\beta}{\alpha}\right]q'}}=\frac{C}{(s-t)^{\left[\frac{d}{p\alpha}+\frac{|\eta|}{\alpha} + \frac{1-\gamma}{\alpha}\right]q'}},\label{CTR_CANC_GREEN}\end{aligned}$$ provided $\beta+\gamma>1$ for the second inequality (which can be assumed since we can choose $\beta$ arbitrarily in $(0,1) $). The map $s \mapsto \mathcal{T}_{p',q'}^{1-\gamma}[\textcolor{black}{D_x^\eta}p_\alpha(s-t,\cdot-x)]|_{[0,(s-t)]}$ hence belongs to $\bL^{r'}((t,T],{\mathbb{R}}^+)$ under the same condition on $\eta $ as before. The condition in then precisely gives that the gradient of the Green kernel will exist pointwise (with uniform bound depending on the Besov norm of $\textcolor{black}{\Phi} $) as soon as: $$\label{COND_GRAD_PONCTUEL} 1< \alpha(1-\frac 1r)+\gamma-1-\frac dp \iff \gamma >2- \alpha (1-\frac 1r)+\frac dp.$$ In particular, provided holds, the same type of arguments would also lead to a Hölder control of the gradient in space of index $\zeta< \alpha(1-1/r)+\gamma-1- d/p-1 $. The previous computations somehow provide the almost optimal regularity that could be attainable for $u$ (through what can be derived from $w$ solving ). The purpose of the next section will precisely be to prove that these arguments can be adapted to that framework. The price to pay will be some additional constraint on $\gamma $ because we will precisely have to handle the product $F\cdot Du $. Eventually, we emphasize that the parameter $q$ does not play a key role in the previous analysis. Indeed, all the thresholds appearing do not depend on this parameter. Since for all $\gamma,p$ and all $q< q'$ we have $B^{\gamma}_{p,q} \subset B^{\gamma}_{p,q'}$, the above analysis suggests that it could be enough to consider the case $q=\infty$. Nevertheless, as it does not create any additional difficulty, we let the parameter $q$ vary in the following. Uniform estimates of the solution of the mollified version of the PDE and associated (uniform) Hölder controls.
{#SEC_PERTURB} ----------------------------------------------------------------------------------------------------------- This part is dedicated to the proof of Proposition \[PROP\_PDE\_MOLL\] and Corollary \[COR\_ZVON\_THEO\]. It is known that, under [[**(UE)**]{}]{} and for $\vartheta>\alpha ,$ if $g\in {\mathbb{B}}_{\infty,\infty}^\vartheta $ is also bounded and $f\in {\mathbb{B}}_{\infty,\infty}^{\vartheta-\alpha}({\mathbb{R}}^d,{\mathbb{R}}) $, there exists a unique classical solution $u:=u_m\in \bL^\infty([0,T],{\mathbb{B}}_{\infty,\infty}^{\vartheta}({\mathbb{R}}^d,{\mathbb{R}}))$ to the *mollified* PDE . This is indeed the usual Schauder estimates for sub-critical stable operators (see e.g. Priola [@prio:12] or Mikulevicius and Pragarauskas who also address the case of a multiplicative noise [@miku:prag:14]). It is clear that the following Duhamel representation formula holds for $u_m $. With the notations of : $$\label{DUHAMEL} u_m(t,x) = P_{T-t}^{\alpha} [g] (x) + G^\alpha f(t,x) + \mathfrak r_m(t,x),$$ where the Green kernel $G^\alpha$ is defined by and where : $$\label{REMAINDER} \mathfrak r_m(t,x) := \int_t^T ds P_{T-s}^{\alpha} [\langle F_m(s,\cdot), D u_m(s,\cdot)\rangle](x).$$ It is plain to check that, if we now relax the boundedness assumption on $g$, supposing it can have linear growth, there exists $C:=C(d)>0$ such that $$\begin{aligned} \left\|\textcolor{black}{D}P_{T-t}^\alpha [g]\right\|_{\bL^\infty([0,T],{\mathbb{B}}_{\infty,\infty}^{\vartheta\textcolor{black}{-1}})} + \left\|G^\alpha f\right\|_{\bL^\infty([0,T],{\mathbb{B}}_{\infty,\infty}^{\vartheta})} \leq C\big(\|f\|_{\bL^\infty ([0,T],{\mathbb{B}}_{\infty,\infty}^{\vartheta-\alpha})} + \|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\vartheta-1}}\big).\end{aligned}$$ We also refer to the section concerning the smoothness in time below for specific arguments related to a terminal condition with linear growth.\ In the following, . 
In order to keep the notations as clear as possible, we drop the superscript $m$ associated with the mollifying procedure for the rest of the section. **(i) Gradient bound.** Let us first control the terminal condition. We have, integrating by parts and using usual cancellation arguments, $$\begin{aligned} |D P_{T-t}^\alpha [g](x) | \leq \sum_{j=1}^d |\p_{x_j}P_{T-t}^\alpha [g](x) |&\leq & \sum_{j=1}^d \left| \int_{{\mathbb{R}}^d} dy \p_{j}g(y) p_\alpha(T-t,y-x) \right|\leq \sum_{j=1}^d C \| Dg \|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}}.\label{BD_GRAD_TERCOND}\end{aligned}$$ We now turn to the control of the Green kernel part. Write $$\begin{aligned} |D G^\alpha f(t,x) | \leq \sum_{j=1}^d |\p_{x_j}G^\alpha f(t,x) |&= & \sum_{j=1}^d \left| \int_t^T ds \int_{{\mathbb{R}}^d} dy f(s,y) \p_{x_j}p_\alpha(s-t,y-x) \right|\notag\\ &\leq & \sum_{j=1}^d \| f \|_{\bL^{\infty}({\mathbb{B}}_{\infty,\infty}^{\theta-\alpha})} \| \p_{x_j} p_\alpha(\cdot-t,\cdot-x) \|_{\bL^{1}({\mathbb{B}}_{1,1}^{\alpha-\theta})}.\notag\end{aligned}$$ From the very definition of $\theta$ we have $\theta-\alpha+1<1$ and $(\theta-\alpha+1)+1>1$. We can thus apply Lemma \[LEM\_BES\_NORM\] (see eq. with $\gamma=\theta-\alpha+1$, $\beta =1$ and $\eta=1$ therein) to obtain $$\| \p_{x_j} p_\alpha(s-t,\cdot-x) \|_{{\mathbb{B}}_{1,1}^{\alpha-\theta}\big({\mathbb{R}}^d\big)} \leq \frac{C}{(s-t)^{\left[\frac{\alpha-\theta}{\alpha}+\frac {1}{\alpha}\right]}}.$$ Integrating in time, we thus obtain $$\left\|DG^\alpha f\right\|_{\bL^\infty} \leq C (T-t)^{\frac{\theta-1}{\alpha}} \|f\|_{\bL^\infty ([0,T],{\mathbb{B}}_{\infty,\infty}^{\theta-\alpha})}.\label{BD_GRAD_GREEN}$$ Let us now focus on the first gradient estimate of $\mathfrak r$.
Using Hölder inequality and then Besov duality we have, $$\begin{aligned} |D \mathfrak r(t,x) | \leq \sum_{j=1}^d |\p_{x_j}\mathfrak r(t,x) |&\leq & \sum_{j=1}^d \sum_{k=1}^d \left| \int_t^T ds \int_{{\mathbb{R}}^d} dy F_k(s,y) \p_{y_k} u(s,y) \p_{x_j}p_\alpha(s-t,y-x) \right|\notag\\ &\leq & \sum_{j=1}^d \sum_{k=1}^d \| F_k \|_{\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})} \| \p_{k} u \p_{x_j} p_\alpha(\cdot-t,\cdot-x) \|_{\bL^{r'}({\mathbb{B}}_{p',q'}^{1-\gamma})},\label{BD_GRAD}\end{aligned}$$ so that the main issue consists in establishing the required control on the map $(t,T] \ni s \mapsto \| \p_{k} u(s,\cdot) \p_{x_j} p_\alpha(\cdot-t,\cdot-x) \|_{{\mathbb{B}}_{p',q'}^{1-\gamma}}$ for any $j,k$ in $\leftB 1,d\rightB$. Note that since for all $s$ in $[0,T]$ the map $y \mapsto \textcolor{black}{ u(s,y)}$ is in ${\mathbb{B}}_{\infty,\infty}^{\vartheta}$ for any $ \vartheta \in (\alpha, \alpha+1]$, we have in particular from the very definition of $\theta$ (see eq. ) and assumptions on $\gamma$ that there exists $\varepsilon >0$ such that $\theta-1-\varepsilon>0$, $\theta-1-\varepsilon + \gamma >1$ and for all $s$ in $[0,T]$ the map $y \mapsto \p_k u(s,y)$ is in ${\mathbb{B}}_{\infty,\infty}^{\theta-1-\varepsilon}$. One can hence apply Lemma \[LEM\_BES\_NORM\] so that (see eq. with $\beta =\theta - 1-\varepsilon $, $\eta=1$ therein) $$\| \p_k u(s,\cdot) \p_{x_j} p_\alpha(s-t,\cdot-x) \|_{{\mathbb{B}}_{p',q'}^{\textcolor{black}{1-\gamma}}} \leq \|\p_k u (s,\cdot)\|_{{\mathbb{B}}_{\infty,\infty}^{ \textcolor{black}{\theta-1-\varepsilon}}} \frac{C}{(s-t)^{\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+\frac {1}{\alpha}\right]}}.$$ This map hence belongs to $\bL^{r'}((t,T],{\mathbb{R}}_+)$ as soon as $$\label{THRESHOLD_GRAD} -r'\left[\frac{d}{p\alpha}+\frac{1}{\alpha} + \frac{1-\gamma}{\alpha}\right] >-1 \Leftrightarrow \gamma > 2 -\alpha +\frac{\alpha}{r} + \frac dp,$$ which follows from the assumptions on $\gamma$. 
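The integrability criterion above is mechanical: $s\mapsto (s-t)^{-e}$ lies in $\bL^{r'}$ near $s=t$ iff $e\,r'<1$. A small cross-check (helper names are ours, not the paper's) that this matches the closed-form threshold $\gamma > 2-\alpha+\alpha/r+d/p$:

```python
def integrable_near_t(alpha, gamma, p, r, d=1):
    """True iff s -> (s-t)^{-e} is in L^{r'} near s = t, with
    e = d/(p*alpha) + 1/alpha + (1-gamma)/alpha and 1/r + 1/r' = 1."""
    r_prime = r / (r - 1.0)
    e = d / (p * alpha) + 1.0 / alpha + (1.0 - gamma) / alpha
    return e * r_prime < 1.0

def above_threshold(alpha, gamma, p, r, d=1):
    # Closed-form reformulation of the same integrability condition.
    return gamma > 2.0 - alpha + alpha / r + d / p

# Sweep a grid of parameters (values chosen away from the exact boundary
# to avoid floating-point ties) and check both criteria agree.
grid = [(a, g, p, r)
        for a in (1.2, 1.5, 1.8, 2.0)
        for g in (0.31, 0.61, 0.93)
        for p in (2.0, 10.0, 1e6)
        for r in (2.0, 10.0, 1e6)]
agree = all(integrable_near_t(a, g, p, r) == above_threshold(a, g, p, r)
            for (a, g, p, r) in grid)
```

The equivalence is elementary ($e\,r'<1 \iff d/p + 2 - \gamma < \alpha(1-1/r)$), so the sweep is only a guard against algebra slips.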
We then obtain, after taking the $\bL^{r'}((t,T],{\mathbb{R}}_+)$ norm of the above estimate, that $$\begin{aligned} \label{upp_bound_Du} |D \mathfrak r(t,x)| \leq CT^{\frac{\theta-1}{\alpha}} \|D u \|_{\bL^{\infty}({\mathbb{B}}^{ \textcolor{black}{\theta-1-\varepsilon}}_{\infty,\infty})} .\end{aligned}$$ **(ii) Hölder norm of the gradient.** Since in the above proof we obtained gradient bounds depending on the spatial Hölder norm of $D u$, we now have to estimate this quantity precisely: $$\begin{aligned} |D \mathfrak r(t,x) -D \mathfrak r(t,x')| &\leq& \sum_{j=1}^d |\p_{j}\mathfrak r(t,x) -\p_{j}\mathfrak r(t,x') |\\ &\leq & \sum_{j=1}^d \sum_{k=1}^d \left| \int_t^T ds \int_{{\mathbb{R}}^d} dy F_k(s,y) \left(\p_{y_k} u(s,y) \left(\p_{x_j}p_\alpha(s-t,y-x) - \p_{x_j}p_\alpha(s-t,y-x')\right)\right) \right|\\ &\leq & \sum_{j=1}^d \sum_{k=1}^d \| F_k \|_{\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})} \| \p_{k} u\left(\p_{x_j} p_\alpha(\cdot-t,\cdot-x) -\p_{x_j} p_\alpha(\cdot-t,\cdot-x')\right)\|_{\bL^{r'}({\mathbb{B}}_{p',q'}^{1-\gamma})},\end{aligned}$$ using again the Hölder inequality and duality between the considered Besov spaces (see Section \[SEC\_BESOV\]). Hence, the main issue consists in establishing the required control on the map $$(t,T] \ni s \mapsto \| \p_{k} u(s,\cdot) \left(\p_{x_j} p_\alpha(\textcolor{black}{s}-t,\cdot-x) -\p_{x_j} p_\alpha(\textcolor{black}{s}-t,\cdot-x')\right) \|_{\bL^{r'}({\mathbb{B}}_{p',q'}^{1-\gamma})},$$ for any $j,k$ in $\leftB 1,d\rightB$. Since $\theta-1-\varepsilon<1$, one can again apply Lemma \[LEM\_BES\_NORM\] so that (see eq.
with $\beta = \theta-1-\varepsilon$, $\beta'=\theta-1-\varepsilon$ and $\eta=1$ therein): $$\begin{aligned} &&\| \p_k u (s,\cdot) \big(\p_{x_j} p_\alpha(s-t,\cdot-x) - \p_{x_j} p_\alpha(s-t,\cdot-x')\big) \|_{{\mathbb{B}}_{p',q'}^{\textcolor{black}{1-\gamma}}} \\ &\leq& \|\p_k u(s,\cdot)\|_{ {\mathbb{B}}_{\infty,\infty}^{\theta-1-\varepsilon}}\frac{C}{(s-t)^{\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+\frac {1+(\theta-1-\varepsilon)}{\alpha}\right]}}|x-x'|^{\theta-1-\varepsilon}\le \textcolor{black}{\frac{C\|\p_k u\|_{ \bL^\infty({\mathbb{B}}_{\infty,\infty}^{\theta-1-\varepsilon})}}{(s-t)^{\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+\frac {1+(\theta-1-\varepsilon)}{\alpha}\right]}}|x-x'|^{\theta-1-\varepsilon}}.\end{aligned}$$ The above map hence belongs to $\bL^{r'}((t,T],{\mathbb{R}}^+)$ as soon as $$\label{THRESHOLD_HOLDER} -r'\left[\frac{d}{p\alpha}+\frac{1+(\theta-1-\varepsilon)}{\alpha} + \frac{1-\gamma}{\alpha}\right] >-1 \Leftrightarrow \theta-1-\varepsilon < \gamma - \left( 2 -\alpha +\frac{\alpha}{r} + \frac dp\right),$$ which readily follows from the very definition of $\theta$ (see eq. ) and the fact that $\varepsilon >0$. We then obtain $$\begin{aligned} \label{upp_bound_Du_hold} |D \mathfrak r(t,x) -D \mathfrak r(t,x') | \leq CT^{\frac {\textcolor{black}{\varepsilon}} \alpha} \|D u \|_{\textcolor{black}{\bL^{\infty}({\mathbb{B}}^{\theta-1-\varepsilon}_{\infty,\infty})}} |x-x'|^{\theta-1-\varepsilon}.\end{aligned}$$ Note that assuming that $\theta$ is fixed, we readily obtain from together with the constraint $\theta-1-\varepsilon + \gamma >1$ the initial constraint $$\label{bound_gamma} \gamma > \frac{3-\alpha + \frac{d}{p} + \frac{\alpha} r}{2}.$$ In comparison with the threshold obtained when investigating the smoothing effect of the Green kernel (see eq. and the related discussion) this regularity allows to define the product $F\cdot Du$.
Indeed, if one wants to define it *e.g.* as a Young integral, one has to require the sum of the local regularity indexes of the two maps to be greater than one: $\theta-1-\varepsilon + \gamma >1$. Extensions are possible and there already exist robust theories to bypass such a constraint (rough paths in dimension 1, paracontrolled distributions or regularity structures) but, to the best of our knowledge, it requires the map $F$ to be enhanced to a rough distribution $\tilde F$, which significantly restricts the possible choices of the drift. Let us eventually estimate the Hölder moduli of the first and second terms in the Duhamel representation . We first note that, for the Green kernel, one can proceed as above. When doing so, we obtain that $$\label{BD_HOLD_GREEN} |D G^\alpha f(t,x) - D G^\alpha f(t,x')| \leq CT^{\varepsilon}\|f\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-\alpha}} |x-x'|^{\theta-1-\varepsilon}.$$ Concerning the terminal condition we have, on the one hand, when : $$\begin{aligned} |D P_{T-t}^\alpha [g](x) - D P_{T-t}^\alpha [g](x')| &= & \left| \int_{{\mathbb{R}}^d} dy Dg(y) \big(p_\alpha(T-t,y-x)-p_\alpha(T-t,y-x')\big) \right|\notag\\ &\leq & \Bigg| \int_{{\mathbb{R}}^d} dy \big(Dg(y) - Dg(x)\big) p_\alpha(T-t,y-x) +Dg(x)-Dg(x')\notag\\ && - \int_{{\mathbb{R}}^d} dy \big(Dg(y) - Dg(x')\big) p_\alpha(T-t,y-x') \Bigg|\notag\\ &\leq & C \|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}} |x-x'|^{\theta-1-\varepsilon}.\end{aligned}$$ On the other hand, when , we have using cancellation arguments $$\begin{aligned} &&|D P_{T-t}^\alpha [g](x) - D P_{T-t}^\alpha [g](x')| \\ &\leq &\big |\int_{{\mathbb{R}}^{d}} [ p_\alpha(T-t,y-x) -p_\alpha(T-t,y-x') ] Dg(y) dy \big | \\ &\leq& \big |\int_0^1 d\mu \int_{{\mathbb{R}}^{d}} [D_{x} p_\alpha\big(T-t,y-\textcolor{black}{(x'+\mu(x-x'))}\big) \cdot (x-x') ][D g(y)\textcolor{black}{-Dg(x'+\mu(x-x'))}] dy \big |\\ &\leq& \|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}} (T-t)^{-\frac 1 \alpha + \frac{\theta-1}{\alpha}}|x-x'| \leq
C(T-t)^{\varepsilon} \|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}} |x-x'|^{\theta-1-\varepsilon}.\end{aligned}$$ Hence $$\label{BD_HOLD_TERCOND} |D P_{T-t}^\alpha [g](x) - D P_{T-t}^\alpha [g](x')| \leq C(T^{\varepsilon}+1)\|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}} |x-x'|^{\theta-1-\varepsilon}.$$ Putting together estimates , , , , and we deduce that $$\label{THE_BD_GRAD_EN_THETA_1_EPS} \forall\, \alpha \in \left( \frac{1+\frac dp}{1-\frac 1r},2\right],\, \forall \gamma \in \left(\frac{3-\alpha + \frac{d}{p} + \frac{\alpha} r}{2}, 1\right],\ \exists C(T)>0 \text{ s.t. }\| D u \|_{\bL^{\infty}\left({\mathbb{B}}^{\gamma - 2+\alpha - \frac dp - \frac{\alpha}r-\varepsilon}_{\infty,\infty}\right)} < C_T.$$ In particular, $\lim C_T =0$ when $T$ tends to $0$. **(iii) Smoothness in time for $u$ and $Du$.** We restart here from the Duhamel representation . Namely, $$u(t,x) = P_{\textcolor{black}{T-t}}^{\alpha} [g] (x) + G_{}^{\alpha} [ f](t,x) + \mathfrak r(t,x),$$ where from , : $$\mathfrak r(t,x) = \int_t^T ds \int_{{\mathbb{R}}^d} dy \langle F(s,y), D u(s,y)\rangle p_\alpha(s-t,y-x).$$ We now want to control for a fixed $x\in {\mathbb{R}}^d $ and $0\le t< t'\le T $ the difference: $$\begin{aligned} \label{DIFF_U} u(t',x)-u(t,x) = \big(P_{\textcolor{black}{T-t'}}^{\alpha}-P_{\textcolor{black}{T-t}}^{\alpha} \big) [g] (x) +\big(G^{\alpha}f(t',x)- G^{\alpha}f(t,x)\big) + \big(\mathfrak r(t',x)- \mathfrak r(t,x)\big).\end{aligned}$$ For the first term in the r.h.s.
of we write: $$\begin{aligned} \big(P_{\textcolor{black}{T-t'}}^{\alpha}-P_{\textcolor{black}{T-t}}^{\alpha} \big) [g] (x)&=&\int_{{\mathbb{R}}^d} \big[ p_\alpha(T-t',y-x)-p_\alpha(T-t,y-x)\big] g(y)dy\\ &=&-\int_{{\mathbb{R}}^d} \int_0^1 d\lambda \big[ \partial_s p_\alpha(s,y-x)\big]\Big|_{s=T-t-\lambda (t'-t)} g(y)dy (t'-t).\end{aligned}$$ From Fubini’s theorem and usual cancellation arguments we get: $$\begin{aligned} \big(P_{\textcolor{black}{T-t'}}^{\alpha}-P_{\textcolor{black}{T-t}}^{\alpha} \big) [g] (x)&=&-(t'-t) \int_0^1 d\lambda \Big[\int_{{\mathbb{R}}^d} \partial_s p_\alpha(s,y-x)\big( g(y) -g(x)-Dg(x)\cdot(y-x)\big)dy\Big] \Big|_{s=T-t-\lambda (t'-t)}. \end{aligned}$$ We indeed recall that, because of the symmetry of the driving process ${\mathcal{W}}$, and since $\alpha>1 $, one has for all $s>0$, $\int_{{\mathbb{R}}^d} p_\alpha(s,y-x)(y-x)dy=0 $. Recalling as well that we assumed $Dg\in {\mathbb{B}}_{\infty,\infty}^{\theta-1} $, we therefore derive from Lemma \[SENS\_SING\_STAB\]: $$\begin{aligned} |\big(P_{\textcolor{black}{T-t'}}^{\alpha}-P_{\textcolor{black}{T-t}}^{\alpha} \big) [g] (x)|&\le& (t'-t)\int_{0}^1 d\lambda \Big[\frac{C\|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}}}s \int_{{\mathbb{R}}^d} q_\alpha(s,y-x) |y-x|^{\theta}dy\Big]\Big|_{s=T-t-\lambda (t'-t)}\notag\\ &\le & C (t'-t)\|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}} \int_{0}^1 d\lambda s^{-1+\frac \theta \alpha} \big|_{s=T-t-\lambda (t'-t)}.\end{aligned}$$ Observe now that since $0\le t<t'\le T $, one has $s=T-t-\lambda(t'-t)\ge (1-\lambda)(t'-t) $ for all $\lambda\in[0,1] $.
Hence, $$\begin{aligned} |\big(P_{\textcolor{black}{T-t'}}^{\alpha}-P_{\textcolor{black}{T-t}}^{\alpha} \big) [g] (x)|&\le&C (t'-t)\|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}} \int_{0}^1 \frac{d\lambda}{(1-\lambda)^{1-\frac\theta\alpha}}(t'-t)^{-1+\frac\theta\alpha} \notag\\ &\le & C (t'-t)^{\frac \theta\alpha}\|Dg\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1}},\label{REG_TEMPS_COND_TERM}\end{aligned}$$ which is the expected control. We now focus on the remainder term $r$ since the control of the Green kernel is easier and can be derived following the same lines of reasoning. Write $$\begin{aligned} \mathfrak r(t',x)- \mathfrak r(t,x)=\int_{t'}^T ds \big(P_{\textcolor{black}{s-t'}}^{\alpha}-P_{\textcolor{black}{s-t}}^{\alpha} \big) [\langle F(s,\cdot) Du(s,\cdot)\rangle] (x)+\int_t^{t'} ds P_{\textcolor{black}{s-t}}^{\alpha} [\langle F(s,\cdot) Du(s,\cdot)\rangle] (x).\label{DIFF_R}\end{aligned}$$ From Lemma \[LEM\_BES\_NORM\] (see eq. with $\beta = \theta-1-\varepsilon$ and $\eta=0$) it can be deduced (see computations in point **(i)** of the current section) that $$\label{REG_TEMPS_REMAINDER} |\int_t^{t'} ds P_{\textcolor{black}{s-t}}^{\alpha} [\langle F(s,\cdot), Du(s,\cdot)\rangle] (x)|\le C |t-t'|^{\frac{\theta}{\alpha}}.$$ Let us now focus on $$\begin{aligned} \int_{t'}^T ds \big(P_{\textcolor{black}{s-t'}}^{\alpha}-P_{\textcolor{black}{s-t}}^{\alpha} \big) [\langle F(s,\cdot) Du(s,\cdot)\rangle] (x)&=&\int_{t'}^T ds \int_{0}^1 d\lambda \Big\{\partial_w P_{\textcolor{black}{s-w}}^\alpha [\langle F(s,\cdot) Du(s,\cdot)\rangle] (x)\Big\} \Big|_{w=t+\lambda (t'-t)}(t'-t)\notag\\ &=& \int_{0}^1 d\lambda \int_{t'}^T ds \Big\{L^\alpha P_{\textcolor{black}{s-w}}^\alpha [\langle F(s,\cdot) Du(s,\cdot)\rangle] (x)\Big\} \Big|_{w=t+\lambda (t'-t)}(t'-t).\notag\\ \label{DEV_SENSI_EN_TEMPS}\end{aligned}$$ We have $$\begin{aligned} &&\int_{t'}^T ds|L^\alpha P_{\textcolor{black}{s-w}}^\alpha [\langle F(s,\cdot) Du(s,\cdot)\rangle] (x) | \notag\\ &\leq & \sum_{k=1}^d 
\int_{t'}^Tds \Big|\int_{{\mathbb{R}}^d} dy F_k(s,y) \p_{y_k} u(s,y) L^\alpha p_\alpha(s-w,y-x) \Big| \notag \\ &\leq & \sum_{k=1}^d \| F_k \|_{\bL^r([t',T],{\mathbb{B}}_{p,q}^{-1+\gamma})} \| \p_{k} u L^\alpha p_\alpha(\cdot-w,\cdot-x) \|_{\bL^{r'}([t',T],{\mathbb{B}}_{p',q'}^{1-\gamma})}.\label{DUALITY_TO_BE_INTEGRATED_FOR_TIME_SMOTHNESS}\end{aligned}$$ Lemma \[LEM\_BES\_NORM\] ( with $\beta = \theta-1-\varepsilon$ and $\eta=\alpha$ therein), : $$\| \p_{k} u(s,\cdot) L^\alpha p_\alpha(s-\textcolor{black}{w},\cdot-x) \|_{{\mathbb{B}}_{p',q'}^{\textcolor{black}{1-\gamma}}} \leq \|\p_{k} u(s,\cdot)\|_{{\mathbb{B}}_{\infty,\infty}^{\textcolor{black}{ \theta-1-\varepsilon}}} \frac{C}{(s-\textcolor{black}{w})^{\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+1\right]}}.$$ Thus, : $$\begin{aligned} \| \p_{k} u L^\alpha p_\alpha(\cdot-w,\cdot-x) \|_{\bL^{r'}([t',T],{\mathbb{B}}_{p',q'}^{1-\gamma})} &\le& C(t'-w)^{\frac 1{r'}-\big(\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+1\big)} =C(t'-w)^{\frac \theta \alpha-1}.\label{PREAL_BD_CTR_TEMPS_BAS}\end{aligned}$$ Therefore, from and , we derive: $$\begin{aligned} \int_{t'}^T ds|L^\alpha P_{\textcolor{black}{s-w}}^\alpha [\langle F(s,\cdot) Du(s,\cdot)\rangle] (x) | &\leq & C\sum_{k=1}^d \| F_k \|_{\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})} (t'-w)^{\frac \theta \alpha-1},\end{aligned}$$ which in turn, plugged into , gives: $$\begin{aligned} &&|\int_{t'}^T ds \big(P_{\textcolor{black}{s-t'}}^{\alpha}-P_{\textcolor{black}{s-t}}^{\alpha} \big) [\langle F(s,\cdot) Du(s,\cdot)\rangle] (x)|\le \int_{0}^1 d\lambda \int_{t'}^T ds \Big| L^\alpha P_{\textcolor{black}{s-w}}^\alpha [\langle F(s,\cdot) Du(s,\cdot)\rangle] (x)\Big| \Bigg|_{w=t+\lambda (t'-t)}(t'-t)\notag\\ &\le & C\sum_{k=1}^d \| F_k \|_{\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})} \int_0^1 d\lambda (t'-(t+\lambda(t'-t)))^{\frac \theta \alpha -1}(t'-t)\notag\\ &\le & C\sum_{k=1}^d \| F_k \|_{\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})} (t'-t)^{\frac \theta \alpha}.\label{CTR_SENSI_EN_TEMPS} 
\end{aligned}$$ From , and we thus obtain: $$\label{CTR_DIFF_R} \big|\mathfrak r(t',x)- \mathfrak r(t,x)\big|\le C \| F \|_{\bL^r({\mathbb{B}}_{p,q}^{-1+\gamma})} (t'-t)^{\frac { \theta} \alpha}.$$ The Hölder control of the Green kernel $G^\alpha f$ follows from similar arguments. Indeed, repeating the above proof it is plain to check that there exists $C\ge 1$ s.t. for all $0\le t<t'\le T$, $x\in {\mathbb{R}}^d $: $$\label{SMOOTH_TIME_GREEN_KERNEL} \Big|\big(G^{\alpha}f(t',x)- G^{\alpha}f(t,x)\big) \Big|\le C \|f\|_{\bL^{\infty}({\mathbb{B}}_{\infty,\infty}^{\theta-\alpha})} (t'-t)^{\frac{\theta}{\alpha}}.$$ The final control of concerning the smoothness in time then follows by plugging , and into . The control concerning the time sensitivity of the spatial gradient would be obtained following the same lines.\ **(iv) Conclusion: proof of Proposition \[PROP\_PDE\_MOLL\], Corollary \[COR\_ZVON\_THEO\] and Theorem \[THE\_PDE\].** Points **(i)** to **(iii)** conclude the proof of Proposition \[PROP\_PDE\_MOLL\]. Let us eventually notice that Corollary \[COR\_ZVON\_THEO\] is a direct consequence of the above computations. Indeed, replacing the source term $f$ by the $k^{\rm th}$ coordinate of $F$ in the Green kernel, the proof follows from the control obtained for the remainder term in the Duhamel representation . Eventually, the proof of Theorem \[THE\_PDE\] follows from a compactness argument together with the Schauder-like control of Proposition \[PROP\_PDE\_MOLL\]. \[REM\_COEFF\_DIFF\]Let us first explain how, in the diffusive setting $\alpha=2$, the diffusion coefficient can be handled.
Namely, this would lead to consider for the PDE with mollified coefficients an additional term in the Duhamel formulation that would write: $$\label{DUHAM_PERT} u_m(t,x) = P_{\textcolor{black}{s-t}}^{\alpha,\xi,m}[g](x) + \int_t^T ds P_{\textcolor{black}{s-t}}^{\alpha,\xi,m}[\left\{ f (s,\cdot)+ F_m \cdot D u_m (s,\cdot)+ \frac 12 {\rm Tr }\big((a_m(s,\cdot)-a_m(s,\xi)) D^2 u_m(s,\cdot)\big)\right\}](x),$$ for an auxiliary parameter $\xi$ which will be taken equal to $x$ after potential differentiations in . Here, $P_{\textcolor{black}{s-t}}^{\alpha,\xi,m}$ denotes the two-parameter semi-group associated with $\big(\frac 12 {\rm Tr} \big(a_m(v,\xi) D^2\big)\big)_{v\in [s,t]} $ (mollified diffusion coefficient frozen at point $\xi$). Let us focus on the second order term. Recall from the above proof of Proposition \[PROP\_PDE\_MOLL\] that we aim at bounding the gradient pointwise, deriving as well some Hölder continuity for it. Hence, focusing on the additional term, we write for the gradient part: $$\begin{aligned} &&D_x \int_t^T ds P_{\textcolor{black}{s-t}}^{\alpha,\xi,m}[ \frac 12 {\rm Tr }\big((a_m(s,\cdot)-a_m(s,\xi)) D^2 u_m(s,\cdot)\big)](x)\\ &=&\int_t^T ds \int_{{\mathbb{R}}^d}D_x p_\alpha^{\xi,m}(t,s,x,y)\frac 12 {\rm Tr }\big((a_m(s,y)-a_m(s,\xi)) D^2 u_m(s,y)\big) dy\\ &=&\frac 12 \sum_{i,j=1}^d\int_t^T ds \int_{{\mathbb{R}}^d}\Big( D_x p_\alpha^{\xi,m}(t,s,x,y) \big((a_{m,i,j}(s,y)-a_{m,i,j}(s,\xi))\Big)D_{y_iy_j}u_m(s,y) dy.\end{aligned}$$ From the previous Proposition \[PROP\_PDE\_MOLL\], we aim at establishing that $Du_m $ has Hölder index $\theta-1-\varepsilon=\gamma-2+\alpha- d/p-\alpha /r-\varepsilon$ and therefore $D_{y_iy_j}u_m\in {\mathbb{B}}_{\infty,\infty}^{ \theta-2-\varepsilon} $. Assume for a while that $ p=q=r=+\infty$. The goal is now to bound the above term through Besov duality. Namely, taking $\xi=x $ after having taken the gradient w.r.t.
$x$ for the heat kernel, we get: $$\begin{aligned} &&|D_x \int_t^T ds P_{\textcolor{black}{s-t}}^{\alpha,\xi,m}[ \frac 12 {\rm Tr }\big((a_m(s,\cdot)-a_m(s,\xi)) D^2 u_m(s,\cdot)\big)](x)| \Big|_{\xi=x}\\ &\le &\textcolor{black}{\sum_{i,j=1}^d}\int_t^T ds \|\Big( D_x p_\alpha^{\xi,m}(t,s,x,\cdot) \big((a_{m,i,j}(s,\cdot)-a_{m,i,j}(s,\xi))\Big)\|_{{\mathbb{B}}_{1,1}^{2+\varepsilon-\theta}} \Big|_{\xi=x} \|\textcolor{black}{\partial_{i,j}^2} u_m(s,\cdot)\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-2-\varepsilon}}.\end{aligned}$$ Now, in the considered case $\theta-2-\varepsilon=\gamma-1-\varepsilon$. Recalling that $D_x p_\alpha^{\xi,m}(t,s,x,\cdot)\in {\mathbb{B}}_{1,1}^{1/2-\tilde \varepsilon} $ for any $\tilde \varepsilon>0 $ for $\gamma> 1/2=(3-\alpha)/2 $ and $\varepsilon$ small enough, we will indeed have that $D_x p_\alpha^{\xi,m}(t,s,x,\cdot) \big((a_{m,i,j}(s,\cdot)-a_{m,i,j}(s,\xi)) \in {\mathbb{B}}_{1,1}^{2+\varepsilon-\theta}$ provided the bounded function $a$ itself has the same regularity. Since $ \|\textcolor{black}{\partial_{i,j}^2} u_m(s,\cdot)\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-2-\varepsilon}}\le C \|D u_m(s,\cdot)\|_{{\mathbb{B}}_{\infty,\infty}^{\theta-1-\varepsilon}}$, see e.g. Triebel [@trie:83], this roughly means that the same Schauder estimate should hold with a diffusion coefficient $a\in \bL^\infty([0,T],{\mathbb{B}}_{\infty,\infty}^{2+\varepsilon-\theta})$. The general diffusive case for $p,q,r \ge 1$ and $\gamma $ satisfying the conditions of Theorem \[THEO\_WELL\_POSED\] can be handled similarly through duality arguments.\ For the pure jump case, we illustrate for simplicity what happens if the diffusion coefficient is scalar. Namely, when $L^{\alpha,\sigma}\varphi(x)={\rm p.v.} \int_{{\mathbb{R}}^d} \big(\varphi(x+\sigma(x)z)- \varphi(x)\big)\nu(dz)=-\sigma^\alpha(x )(-\Delta)^{\alpha /2} \varphi(x) $, where $\sigma $ is a non-degenerate diffusion coefficient.
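The identity $L^{\alpha,\sigma}\varphi(x)=-\sigma^\alpha(x)(-\Delta)^{\alpha/2}\varphi(x)$ for scalar $\sigma$ is just the scaling $z\mapsto\sigma z$ in the jump integral. A numeric sanity check (assumptions ours: $d=1$, Gaussian test function, normalization of $\nu$ dropped, cutoff and grid chosen by us), written with symmetric second differences so that no principal value is needed:

```python
import math

def sym_jump_integral(x, sigma, alpha, U=60.0, n=20000):
    """int_0^{U/sigma} (phi(x+sigma z)+phi(x-sigma z)-2 phi(x)) z^{-1-alpha} dz,
    computed after the substitution z = v**2, which removes the z^{1-alpha}
    singularity of the integrand at the origin."""
    phi = lambda y: math.exp(-y * y)
    V = (U / sigma) ** 0.5
    h = V / n
    tot = 0.0
    for k in range(n):
        v = (k + 0.5) * h          # midpoint rule in the v variable
        z = v * v
        tot += (phi(x + sigma * z) + phi(x - sigma * z) - 2.0 * phi(x)) \
               * z ** (-1.0 - alpha) * 2.0 * v * h
    return tot

x, sigma, alpha = 0.3, 1.7, 1.5
lhs = sym_jump_integral(x, sigma, alpha)                  # jumps rescaled by sigma
rhs = sigma ** alpha * sym_jump_integral(x, 1.0, alpha)   # sigma^alpha * unit-jump integral
rel_err = abs(lhs - rhs) / abs(rhs)
```

With the cutoff $U$ kept fixed in the $u=\sigma z$ variable, the two quadratures approximate the same integral, so `rel_err` only measures discretization error.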
Introducing $L^{\alpha,\sigma,\xi}\varphi(x)={\rm p.v.} \int_{{\mathbb{R}}^d} \big(\varphi(x+\sigma(\xi)z)- \varphi(x)\big)\nu(dz)=-\sigma^\alpha(\xi)(-\Delta)^{ \alpha/ 2} \varphi(x) $, we rewrite for the Duhamel formula, similarly to : $$\label{DUHAM_PERT_JUMP} u_m(t,x) = P_{\textcolor{black}{s-t}}^{\alpha,\xi,m}[g](x) + \int_t^T ds P_{\textcolor{black}{s-t}}^{\alpha,\xi,m}[\left\{ f (s,\cdot)+ F_m \cdot D u_m (s,\cdot)+ (L^{\alpha,\sigma_m}-L^{\alpha,\sigma_m,\xi}) u_m(s,\cdot)\big)\right\}](x).$$ Focusing again on the non-local term, we write for the gradient part: $$\begin{aligned} &&D_x \int_t^T ds P_{\textcolor{black}{s-t}}^{\alpha,\xi,m}[\big(\sigma_m^\alpha(s,\cdot)-\sigma_m^\alpha(s,\xi)) \Delta^{\frac \alpha 2} u_m(s,\cdot)\big)](x)\\ &=&-\int_t^T ds \int_{{\mathbb{R}}^d}D_x p_\alpha^{\xi,m}(t,s,x,y)\big(\sigma_m^\alpha(s,y)-\sigma_m^\alpha(s,\xi)\big)(- \Delta)^{\frac \alpha 2} u_m(s,y) dy.\end{aligned}$$ Consider again the case $ p=q=r=\infty$. Since $Du_m\in \bL^\infty([0,T],{\mathbb{B}}_{\infty,\infty}^{\theta-1-\varepsilon})$, $-(-\Delta)^{ \alpha/ 2}u_m \in \bL^\infty([0,T],{\mathbb{B}}_{\infty,\infty}^{\theta-\alpha-\varepsilon}) $, where $\theta-\alpha-\varepsilon =-1+\gamma-\varepsilon$. Still by duality one has to control $ D_x p_\alpha^{\xi,m}(t,s,x,y)\big(\sigma_m^\alpha(s,y)-\sigma_m^\alpha(s,\xi)\big)$ in the Besov space ${\mathbb{B}}_{1,1}^{1-\gamma+\varepsilon} $. Since $\gamma> (3-\alpha)/2 $ and $D_x p_\alpha^{\xi,m}(t,s,x,y) \in {\mathbb{B}}_{1,1}^{1- 1/\alpha}$, this will be the case provided $\sigma \in \bL^\infty([0,T],{\mathbb{B}}_{\infty,\infty}^{1-\gamma+\varepsilon})$ for $\varepsilon $ small enough observing that $1-\gamma+\varepsilon\textcolor{black}{<}(\alpha-1)/2$. Note that, in comparison with the result obtained in [@ling:zhao:19], the above threshold is precisely the one appearing in [@ling:zhao:19] in this specific case. The general matrix case for $\sigma$ is more involved. 
It requires, in [@ling:zhao:19], the Bony decomposition. We believe it could also be treated through the duality approach considered here but postpone its discussion to further research. In the scalar case, the analysis for general $ p,q,r,\gamma$ as in Theorem \[THEO\_WELL\_POSED\] could be performed similarly. Building the dynamics {#SEC\_RECON\_DYN} ===================== In this part, we aim at proving Theorem \[THEO\_DYN\] and Corollary \[INTEG\_STO\]. We proceed as follows: we first recover the noise through the martingale problem, then recover a drift as the difference between the weak solution and the noise obtained before and estimate its contribution. This is the purpose of Proposition \[PROP\_REG\_PARTIELLE\] below. Having such tools at hand, we recover the dynamics of the weak solution of the *formal* SDE by giving a meaning to each of the above quantities as $\bL^\ell$ stochastic-Young integrals (). More precisely, the $\bL^\ell$ stochastic-Young integrals are defined for a suitable class of integrands consisting of the predictable processes $(\psi_s)_{0\leq s \leq T}$ defined in Corollary \[INTEG\_STO\], leading e.g. to the application of Itô’s formula for the dynamics . \[PROP\_REG\_PARTIELLE\] Let $\alpha \in (1,2)$. For any initial point $x\in {\mathbb{R}}^d$, one can find a probability measure on $\mathcal D([0,T], {\mathbb{R}}^{2d}) $ (still denoted by ${\mathbb{P}}^\alpha$) s.t. the canonical process $(X_t, {\mathcal{W}}_t)_{t\in [0,T]} $ satisfies the following properties: Under ${\mathbb{P}}^\alpha$, the law of $(X_t)_{t\ge 0}$ is a solution of the martingale problem associated with data ($L^\alpha,F,x)$, $x \in {\mathbb{R}}^d$ and the law of $({\mathcal{W}}_t)_{t\ge 0} $ corresponds to the one of a $d$-dimensional stable process with generator $L^\alpha$. For any $1 \leq q < \alpha $, there exists a constant $ C:=C(\alpha,p,q,r,\gamma)$ s.t.
for any $0\le v<s\le T$: $$\label{REG_DRIFT_FOR_DYN} {\mathbb{E}}^{{\mathbb{P}}^\alpha}[| X_{s}- X_v-( {\mathcal{W}}_{s}- {\mathcal{W}}_v)|^q]^{\frac 1q}\le C (s-v)^{\frac 1\alpha+\frac{\theta-1}{\alpha}},$$ Let $({\mathcal{F}}_v)_{v\ge 0}:=\big(\sigma ( (X_w,{\mathcal{W}}_ w)_{0\le w \le v} ) \big)_{v\ge 0} $ denote the filtration generated by the couple $(X,{\mathcal{W}})$. For any $0\le v<s\le T $, it holds that: $${\mathbb{E}}^{{\mathbb{P}}^\alpha}[ X_{s}- X_v|{\mathcal{F}}_v] =\mathfrak f(v,X_v,s-v)={\mathbb{E}}^{{\mathbb{P}}^\alpha}[u^s(v,X_v)-u^s(s,X_v)|{\mathcal{F}}_v],$$ with $\mathfrak f(v,X_v,s-v):=u^{s}(v,X_v)-X_v $, . Furthermore, the following decomposition holds: $$\begin{aligned} {\mathfrak f}(v,X_v,s-v)&=&\mathscr F(v,X_v,s-v)+{\mathscr R}(v,X_v,s-v),\notag \\ |\mathscr F(v,X_v,s-v)|&=&\Big|\int_v^s d\textcolor{black}{w} \int_{{\mathbb{R}}^d}dy F(w,y) p_\alpha(\textcolor{black}{w}-s,y-X_v)\Big|\notag\\ &\le& C\|F\|_{\bL^r([0,T],{\mathbb{B}}_{p,q}^{-1+\gamma})} (s-v)^{\frac 12+\chi},\ \chi\in (0,1/2],\notag \\ |{\mathscr R}(v,X_v,s-v)|&\le & C(s-v)^{1+\varepsilon'},\ \varepsilon'>0.\label{THE_CONTROLS_FOR_THE_DRIFT}\end{aligned}$$ Coming back to point **(i)** in Section \[SDE\_2\_PDE\] we have that the couple $\big((X_t^m, {\mathcal{W}}_t^m)_{t\in [0,T]}\big)_{m \ge 0}$ is tight (pay attention that the stable noise ${\mathcal{W}}^m$ feels the mollifying procedure as it is obtained through solvability of the martingale problem) so that it converges, along a subsequence, to the couple $(X_t, {\mathcal{W}}_t)_{t\in [0,T]}$. Let $0\le v<s $. With the notations of , where each $u_m^i$, $i$ in $\{1,\ldots,d\}$ is chosen as the solution of with terminal condition $x_i$ (i.e. 
the $i^{{\rm th}}$ coordinate of $x=(x_1,\ldots,x_d)\in {\mathbb{R}}^d$) at time $s$ and source term $f\equiv 0$, we obtain from Itô’s formula $$\begin{aligned} &&X_s^m-X_v^m\notag\\ &=&M_{v,s}^{s,m}(\alpha,u_m,X^m)+[u_m (v,X_v^m)-u_m(s,X^m_v)]\label{Del_trans}\\ &=& \int_v^s \int_{ {\mathbb{R}}^d \backslash\{0\} } \{u_m(w,X^m_{w^-}+x) - u_m(w,X_{w^-}^m) \}\tilde N^m(dw,dx)+[u_m(v,X_v^m)-u_m(s,X_v^m)]\notag\\ &=&{\mathcal{W}}_s^m-{\mathcal{W}}_v^m +[u_m(v,X_v^m)-u_m(s,X_v^m)] + \int_v^s \int_{ |x|\le 1} \{u_m(w,X_{w^-}^m+x) - u_m(w,X_{w^-}^m) -x\}\tilde N^m(dw,dx)\notag\\ &&+\int_v^{\textcolor{black}{s}}\int_{|x|\ge 1}\{u_m(w,X_{w^-}^m+x) - u_m(w,X_{w^-}^m) -x\}\tilde N^m(dw,dx)\notag\\ &:=& {\mathcal{W}}_s^m-{\mathcal{W}}_v^m +[u_m(v,X_v^m)-u_m(s,X_v^m)]+ \mathcal M_S^m(v,s)+ \mathcal M_L^m(v,s).\end{aligned}$$ From the smoothness properties of $u_m$ established in (in particular $|u^s_m(v,X_v^m)-u^s_m(s,X_v^m)|\leq C(s-v)^{\theta/\alpha}$ and the gradient is uniformly bounded) we have $$\begin{aligned} \textcolor{black}{|}\mathcal U(w,X_{w^-}^m,x) \textcolor{black}{|}:= \big|u_m(w,X_{w^-}^m+x) - u_m(w,X_{w^-}^m) -x \big|&=& \Big|\int_0^1 d\lambda (D u_m(w,X_{w^-}^m+\lambda x)-I) \cdot x\Big|\notag\\ &\leq& C(s-w)^{\frac{\theta-1}{\alpha}} |x|, \label{ESTI_COUP_BASSE_POISSON}\end{aligned}$$ recalling that for all $z$ in ${\mathbb{R}}^d$, $u_m(s,z) = z$. Note that $\big(\mathcal M_S^m(v,s)\big)_{0\leq v < s \leq T}$ and $\big(\mathcal M_L^m(v,s)\big)_{0\leq v < s \leq T}$ are respectively $\bL^2$ and $\bL^q$ martingales, associated with the “small” and “large” jumps. Let us first handle the “large” jumps.
We have by the Burkholder--Davis--Gundy (BDG) inequality that $${\mathbb{E}}\big[|\mathcal M_L^m(v,s)|^{q}\big] \leq C_q {\mathbb{E}}\big[[\mathcal M_L^m]_{(v,s)}^{\frac q2}\big],$$ where $[\mathcal M_L^m]_{(v,s)}$ denotes the corresponding bracket, given by $\sum_{v\leq w \leq s} |\mathcal U(w,X_{w^-}^m,\Delta {\mathcal{W}}_w^m)|^2\mathbf{1}_{|\Delta {\mathcal{W}}_w^m|\ge 1}$. Using the linear growth of $\mathcal U$ w.r.t. its third variable (uniformly w.r.t. the second one) from , together with the fact that $q/2\leq 1$, we obtain $$\begin{aligned} \Big(\sum_{v\leq w \leq s} | \mathcal U(w,X_{w^-}^m,\Delta {\mathcal{W}}_w^m)|^2\mathbf{1}_{|\Delta {\mathcal{W}}_w^m|\ge 1}\Big)^{q/2} &\leq& C (s-v)^{q\frac{\theta-1}{\alpha}} \Big(\sum_{v\leq w \leq s} | \Delta {\mathcal{W}}_w^m|^2\mathbf{1}_{|\Delta {\mathcal{W}}_w^m|\ge 1}\Big)^{q/2}\\ & \leq& C(s-v)^{q\frac{\theta-1}{\alpha}} \sum_{v\leq w \leq s} |\Delta {\mathcal{W}}_w^m|^q\mathbf{1}_{|\Delta {\mathcal{W}}_w^m|\ge 1}.\end{aligned}$$ We then readily get from the compensation formula that $${\mathbb{E}}\big[|\mathcal M_L^m(v,s)|^q\big] \leq C (s-v)^{1 + q\frac{\theta-1}{\alpha}}\int |x|^q \mathbf{1}_{|x|\ge 1} \nu(dx) \leq C'(s-v)^{1 + q\frac{\theta-1}{\alpha}}\le C'(s-v)^{\frac{q}{\alpha} + q\frac{\theta-1}{\alpha}}.$$ We now deal with the “small” jumps and split them w.r.t. their characteristic scale, writing $$\begin{aligned} \mathcal M_S^m(v,s) &=& \mathcal M_{S,1}^m(v,s)+\mathcal M_{S,2}^m(v,s) \\ &=: &\int_v^s\int_{|x| > (s-v)^{\frac 1\alpha}}\mathcal U(w,X_{w^-}^m,x) \tilde N^m(dw,dx) + \int_v^s\int_{ |x| \le (s-v)^{\frac 1\alpha}}\mathcal U(w,X_{w^-}^m,x) \tilde N^m(dw,dx).\end{aligned}$$ In the off-diagonal regime (namely for $\mathcal M_{S,1}^m(v,s)$), we do not face any integrability problem w.r.t. the Lévy measure.
The main idea then consists in using first the BDG inequality, then the compensation formula and , and eventually again the compensation formula to obtain $$\begin{aligned} {\mathbb{E}}[|\mathcal M_{S,1}^m(v,s)|^q] &=& {\mathbb{E}}\left[\left| \int_v^s\int_{|x| > (s-v)^{\frac 1\alpha}} \mathcal U(w,X_{w^-}^m,x)\tilde N^m(dw,dx)\right|^q \right]\\ &\leq& C_q {\mathbb{E}}\left[ \left(\sum_{v\leq w \leq s} |\mathcal U(w,X_{w^-}^m,\Delta {\mathcal{W}}_w^m)|^2\mathbf{1}_{|\Delta {\mathcal{W}}_w^m| > (s-v)^{\frac 1\alpha}}\right)^{\frac q2}\right]\notag\\ &\leq& C_q (s-v)^{1+q\frac{\theta-1}{\alpha}} \int_{|x| > (s-v)^{\frac 1\alpha}} \big|x\big|^q \nu(dx)\notag\\ &\leq& C_q (s-v)^{\frac q\alpha +q\frac{\theta-1}{\alpha}}.\end{aligned}$$ In the diagonal regime (i.e. for $\mathcal M_{S,2}^m(v,s)$), we use the BDG inequality and to recover integrability w.r.t. the Lévy measure, and then use this integrability to obtain a better estimate: $$\begin{aligned} {\mathbb{E}}[|\mathcal M_{S,2}^m(v,s)|^q] &=& {\mathbb{E}}\left[ \left| \int_v^s \int_{|x|\le (s-v)^{\frac 1\alpha}} \mathcal U(w,X_{w^-}^m,x) \tilde N^m(dw,dx)\right|^q\right]\notag\\ &\leq& C_q \left( \int_v^s \int_{|x|\le (s-v)^{\frac 1\alpha}} \big|\mathcal U(w,X_{w^-}^m,x)\big|^2 dw\, \nu(dx)\right)^{\frac q 2}\notag\\ &\leq& C_q \left((s-v)^{1+2\frac{\theta-1}{\alpha}} \int_{|x|\le (s-v)^{\frac 1\alpha}} \big|x\big|^2 \nu(dx)\right)^{\frac q 2}\notag\\ &\le &C_q (s-v)^{\frac q\alpha +q\frac {\theta-1}\alpha}.\notag\end{aligned}$$ Using the above estimates on the $q$-moments of $\mathcal M_{L}^m(v,s)$, $\mathcal M_{S,1}^m(v,s)$ and $\mathcal M_{S,2}^m(v,s)$, the statement follows, passing to the limit in $m$. Letting $({\mathcal{F}}_v^m)_{v\ge 0}:=\big(\sigma ( (X_w^m,{\mathcal{W}}_ w^m)_{0\le w \le v} ) \big)_{v\ge 0} $, restarting from and taking the conditional expectation w.r.t.
${\mathcal{F}}^m$ yields $$\begin{aligned} {\mathbb{E}}[ X_s^m- X_v^m |{\mathcal{F}}_v^m]&=&{\mathbb{E}}[u_m(v,X_v^m)-u_m(s,X_v^m)|{\mathcal{F}}_v^m]=u_m^s(v,X_v^m) - X_v^m.\end{aligned}$$ Passing to the limit in $m$, it can be deduced that $${\mathbb{E}}[ X_{s}- X_v|{\mathcal{F}}_v] =u^{s}(v,X_v)-X_v = : \mathfrak f(v,X_v,s-v),$$ where $u$ is the mild solution of with terminal condition $x$ at time $s$ and source term $f\equiv 0$. From the mild definition of $u$ in Theorem \[THE\_PDE\], integrating by parts to derive the last equality below, we thus get: $$\begin{aligned} {\mathbb{E}}[ X_s- X_v |{\mathcal{F}}_v]&=&u(v,X_v)-u(s,X_v)=\int_v^s dw \int_{{\mathbb{R}}^d}dy Du(w,y)F(w,y) p_\alpha(w-v,y-X_v)\notag\\ &=&\int_v^s dw \int_{{\mathbb{R}}^d}dy F(w,y) p_\alpha(w-v,y-X_v)\notag\\ && + \int_v^s dw \int_{{\mathbb{R}}^d}dy \int_w^s dw' \int_{{\mathbb{R}}^d}dy' \big[[Du(w',y')F(w',y')] \otimes D_y p_\alpha(w'-w,y'-y)\big] F(w,y)\notag\\ &&\quad \times p_\alpha(w-v,y-X_v),\label{dvp}\end{aligned}$$ where we have again plugged in the mild formulation of $Du$. Let us first prove that the first term in the above has the right order. Thanks to Lemma \[LEM\_BES\_NORM\] (with $\eta=0$ and $\Psi =\rm{Id}$ therein), we obtain: $$\begin{aligned} &&\big|{\mathscr F}(v,X_v,s-v)\big|\notag\\ &=&\Big|\int_v^s dw \int_{{\mathbb{R}}^d}dy F(w,y) p_\alpha(w-v,y-X_v)\Big|\notag\\ &\le& C\|F\|_{\bL^r([0,T],{\mathbb{B}}_{p,q}^{-1+\gamma})} (s-v)^{1-(\frac 1r +\frac d{p\alpha}+\frac{1-\gamma}\alpha)}\notag\\ &\le & C\|F\|_{\bL^r([0,T],{\mathbb{B}}_{p,q}^{-1+\gamma})} (s-v)^{\frac 12+\big[\frac 12-(\frac 1r +\frac d{p\alpha}+\frac{1-\gamma}\alpha)\big]}.\notag\\ \label{PREAL_CTR_GOOD_CONTROL_FOR_YOUNG}\end{aligned}$$ Let us now prove that $\chi:=\frac 12-(\frac 1r +\frac d{p\alpha}+\frac{1-\gamma}\alpha)>0 $.
Recall that we have assumed in Theorem \[THEO\_WELL\_POSED\] that $\gamma>[3-\alpha(1-\frac 1r)+\frac dp]/2 $. Note carefully that, for $\alpha >(1-\frac dp)/(1-\frac 1r) $, it also holds that $\gamma>[3-\alpha(1-\frac 1r)+\frac dp]/2>2-\alpha+ \alpha / r+ d/p $, which was the natural condition appearing in the analysis of the Green kernel to give a pointwise meaning to the underlying gradient. This eventually gives that $\chi >0$. Let us now prove that the second term in the r.h.s. of is a negligible perturbation. Setting, with the notations of Section \[SEC\_PERTURB\]: $$\begin{aligned} \psi_{v,w,s}(y) &:=& p_\alpha(w-v,y-X_v)\int_{w}^s dw' \int_{{\mathbb{R}}^d}dy' [Du(w',y')F(w',y')] \otimes D_y p_\alpha(w'-w,y'-y)\\ &=&p_\alpha(w-v,y-X_v) D\mathfrak r(w,y),\end{aligned}$$ we write: $$\begin{aligned} {\mathscr R}(v,X_v,s-v):=\int_v^s dw \int_{{\mathbb{R}}^d}dy \psi_{v,w,s}(y) F(w,y).\end{aligned}$$ We thus have the following estimate: $$\label{DEF_REMAINDER_DRIFT} |{\mathscr R}(v,X_v,s-v)|\le \|F\|_{\bL^r([0,T],{\mathbb{B}}_{p,q}^{-1+\gamma})}\|\psi_{v,\cdot,s}(\cdot)\|_{\bL^{r'}([0,T],{\mathbb{B}}_{p',q'}^{1-\gamma})}.$$ Let us now consider the thermic part of $ \|\psi_{v,\cdot,s}(\cdot)\|_{\bL^{r'}([0,T],{\mathbb{B}}_{p',q'}^{1-\gamma})}$.
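For the reader's convenience, the positivity of $\chi$ unwinds through the following elementary rearrangement (pure algebra, not part of the original argument; multiply through by $\alpha>0$):

```latex
\chi=\frac 12-\Big(\frac 1r +\frac d{p\alpha}+\frac{1-\gamma}\alpha\Big)>0
\iff \frac \alpha 2 > \frac \alpha r+\frac dp +1-\gamma
\iff \gamma > 1-\frac \alpha 2+\frac \alpha r+\frac dp ,
```

and one checks directly that $[3-\alpha(1-\frac 1r)+\frac dp]/2-\big(1-\frac \alpha 2+\frac \alpha r+\frac dp\big)=\frac 12\big(1-\frac \alpha r-\frac dp\big)$, so the assumed lower bound on $\gamma$ dominates this threshold as soon as $\frac \alpha r+\frac dp\le 1$.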
With the same notations as previously[^3]: $$\begin{aligned} \Big({\mathcal T}_{p',q'}^{1-\gamma}(\psi_{v,w,s}(\cdot))\Big|_{[(w-v),1]}\Big)^{q'}&\le& C(w-v)^{-\frac{1-\gamma}{\alpha}q'}\|D\mathfrak r(w,\cdot)\|_{\infty}^{q'}\|p_\alpha(w-v,\cdot-X_v)\|_{\bL^{p'}}^{q'}\notag\\ &\le & C(w-v)^{-\frac{1-\gamma}{\alpha}q'}(s-w)^{\frac{(\theta-1)}\alpha q'}(w-v)^{-\frac{d}{\alpha p}q'},\end{aligned}$$ using and for the last inequality. Hence, $$\begin{aligned} \Big( \int_v^s dw \Big({\mathcal T}_{p',q'}^{1-\gamma}(\psi_{v,w,s}(\cdot))\Big|_{[(w-v),1]}\Big)^{r'}\Big)^{1/r'}& \le& C(s-v)^{\frac 1{r'}+\frac{\theta-1}{\alpha}-\frac{d}{\alpha p}-\frac{1-\gamma}{\alpha}}.\label{BD_REMAINDER_COUPURE_HAUTE}\end{aligned}$$ Observe that, for this term to be a remainder on small time intervals, we need: $$\frac 1{r'}+\frac{\theta-1}{\alpha}-\frac{d}{\alpha p}-\frac{1-\gamma}{\alpha}>1 \iff \gamma-1+\theta-1-\frac{d}{ p}-\frac \alpha r >0.$$ Recalling the definition of $\theta $ in , we obtain the condition: $$\label{COND_TO_BE_A_REMAINDER} \gamma>\frac{3-\alpha+\frac {2d}p+\frac{2\alpha}r}{2} .$$ This stronger condition only appears when one wants to make the dynamics fully explicit, in terms of a drift which is written as the mollification of the initial one by the density of the driving noise (a regularizing kernel). Note that, if one chooses to work in a bounded setting, i.e. for $p=r=\infty $, again corresponds to the condition appearing in Theorem \[THEO\_WELL\_POSED\]. Let us now deal with the second term from the thermic characterization.
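Spelling out the equivalence displayed above (recall $\frac 1{r'}=1-\frac 1r$, and multiply through by $\alpha$):

```latex
\frac 1{r'}+\frac{\theta-1}{\alpha}-\frac{d}{\alpha p}-\frac{1-\gamma}{\alpha}>1
\iff \frac{\theta-1}{\alpha}-\frac{d}{\alpha p}-\frac{1-\gamma}{\alpha}>\frac 1r
\iff \theta-1-\frac dp-1+\gamma>\frac \alpha r
\iff \gamma-1+\theta-1-\frac{d}{p}-\frac{\alpha}{r}>0.
```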
Restarting from , we get for $\beta=\theta-1-\varepsilon $: $$\begin{aligned} \label{Holder_prod_AGAIN} && \left|D \mathfrak r(w,y)p_\alpha (w-v,y-x) - D \mathfrak r(w,z)p_\alpha(w-v,z-x)\right|\\ &\leq & C\left[ \left(\|D \mathfrak r(w,\cdot)\|_{ \dot {\mathbb{B}}^\beta_{\infty,\infty}} + \frac{\|D \mathfrak r(w,\cdot)\|_{\bL^{\infty}}}{(w-v)^{\frac{ \beta}\alpha}}\right)\left(q_\alpha(w-v,y-x) +q_\alpha(w-v,z-x) \right)\right] |y-z|^\beta\notag\\ &\leq & C\Big( (s-w)^{\frac \varepsilon \alpha}+\frac{(s-w)^{\frac{\theta-1}\alpha}}{(w-v)^{\frac \beta\alpha}}\Big) \left(q_\alpha(w-v,y-x) +q_\alpha(w-v,z-x) \right) |y-z|^\beta,\notag\end{aligned}$$ recalling also for the last inequality. Hence: $$\begin{aligned} \Big({\mathcal T}_{p',q'}^{1-\gamma}(\psi_{v,w,s}(\cdot))\Big|_{[0,(w-v)]}\Big)^{q'}&\le&\frac{C}{(w-v)^{(\frac{d}{p\alpha})q'}}\int_0^{w-v} \frac{d\bar v}{\bar v}\bar v^{(\frac{\gamma-1+\beta}{\alpha})q'}\Big( (s-w)^{\frac \varepsilon \alpha}+\frac{(s-w)^{\frac{\theta-1}\alpha}}{(w-v)^{\frac \beta\alpha}}\Big)^{q'},\notag\\ \bigg(\int_v^s dw \Big({\mathcal T}_{p',q'}^{1-\gamma}(\psi_{v,w,s}(\cdot))\Big|_{[0,(w-v)]}\Big)^{r'}\bigg)^{1/r'}&\le&\Big(\int_v^s dw (w-v)^{(\frac{\gamma-1+\beta}{\alpha}-\frac{d}{p\alpha})r'}\Big( (s-w)^{\frac \varepsilon \alpha}+\frac{(s-w)^{\frac{\theta-1}\alpha}}{(w-v)^{\frac \beta\alpha}}\Big)^{r'} \Big)^{1/r'} \notag\\ &\le&
C(s-v)^{\frac 1{r'}+(\frac{\gamma-1+\beta}{\alpha}-\frac{d}{p\alpha})+\frac{\varepsilon}{\alpha}}= C(s-v)^{\frac 1{r'}+(\frac{\gamma-1+\theta-1}{\alpha}-\frac{d}{p\alpha})},\label{BD_REMAINDER_COUPURE_BASEE}\end{aligned}$$ which precisely gives a contribution homogeneous to the one of . We eventually derive that, under the condition , the remainder in is s.t. there exists $\varepsilon':=-\frac 1{r}+(\frac{\gamma-1+\theta-1}{\alpha}-\frac{d}{p\alpha})>0$ with $$|{\mathscr R}(v,X_v,s-v)|\le C (s-v)^{1+\varepsilon'},\quad C:=C(\|F\|_{\bL^r([0,T],{\mathbb{B}}_{p,q}^{-1+\gamma})}).$$ Having this result at hand, one can now appeal to the construction implemented in Section 4.4 of [@dela:diel:16] in order to conclude the proof of Theorem \[THEO\_DYN\]. Let us try to sum up how such a construction can be adapted to our setting. As in Section 4.4.1 of [@dela:diel:16], we introduce in a generic way the process $(A(s,t))_{0\leq s \leq t \leq T}$ as $(i)$ $A(t,t+h) = X_{t+h}-X_t$, or $(ii)$ $A(t,t+h) = {\mathcal{W}}_{t+h}-{\mathcal{W}}_t$, or $(iii)$ $A(t,t+h) = \mathfrak f(t,X_t,h)$.
We then claim that the following estimates hold: for any $1\leq q <\alpha$ there exist $\varepsilon_0 \in (0,1-1/\alpha]$, $\varepsilon_1,\varepsilon_1' >0$ and a constant $C:=C(p,q,r,\gamma,T)>0$ such that $$\begin{aligned} \label{Esti_Inter} {\mathbb{E}}[|{\mathbb{E}}[A(t,t+h)|\mathcal F_t]|^q]^{\frac{1}{q}} &\leq& C h^{\frac 1\alpha + \varepsilon_0},\notag\\ {\mathbb{E}}[|A(t,t+h)|^q]^{\frac{1}{q}} &\leq& C h^{\frac 1\alpha},\notag\\ {\mathbb{E}}[|{\mathbb{E}}[A(t,t+h) + A(t+h,t+h') - A(t,t+h')|\mathcal F_t]|^q]^{\frac 1q}&\leq& C(h')^{1+\varepsilon_1},\notag\\ {\mathbb{E}}[|A(t,t+h) + A(t+h,t+h') - A(t,t+h')|^q]^{\frac 1q}&\leq& C(h')^{\frac 1\alpha+\varepsilon_1'}.\end{aligned}$$ Then, we aim at defining, for any $T>0$, the stochastic integral $\int_0^T \psi_s A(s,s+ds)$, for the class of predictable processes $(\psi_s)_{s\in [0,T]} $ which are $((1-1/\alpha)-\varepsilon_2) $-Hölder continuous in $\bL^{q'}$ with $q'\ge 1$ such that $1/q'+1/q=1/\ell$, $\ell<\alpha$ and $0<\varepsilon_2<\varepsilon_0$, as the $\bL^\ell$ limit of the associated Riemann sums: for $\Delta=\{0=t_0<t_1<\ldots<t_N=T\}$, $$S(\Delta) := \sum_{i=0}^{N-1}\psi_{t_i}A(t_i,t_{i+1}) \to \int_0^T \psi_s A(s,s+ds),\quad \text{in } \bL^\ell,$$ which justifies the fact that such an integral is called an $\bL^\ell$ stochastic-Young integral by the authors of [@dela:diel:16]. To do so, the main idea in [@dela:diel:16] consists in splitting the process $A$ as the sum of a drift and a martingale: $$\label{DECOMP_A} A(t,t+h) = A(t,t+h)-{\mathbb{E}}[A(t,t+h)|\mathcal F_t] + {\mathbb{E}}[A(t,t+h)|\mathcal F_t] =: M(t,t+h)+ R(t,t+h),$$ and to define the $\bL^\ell$-stochastic-Young integral w.r.t. each of these terms.
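As a toy illustration of this splitting (and of the Riemann sums $S(\Delta)$), consider a discrete-time Euler-type chain driven by Gaussian increments, a stand-in for the stable noise; the functions `b` and `psi` below are illustrative choices, not taken from the text. The point is that the one-step drift $R(t_i,t_{i+1})=b(X_{t_i})h$ is then the exact conditional expectation of the increment, so the Riemann sum splits pathwise into a drift part and a centered (martingale) part:

```python
import numpy as np

# Toy discrete-time version of the splitting A = M + R:
# X_{i+1} = X_i + b(X_i) h + xi_i with xi_i centered and independent, so that
# R(t_i, t_{i+1}) = E[X_{i+1} - X_i | F_{t_i}] = b(X_i) h exactly, and
# M(t_i, t_{i+1}) = X_{i+1} - X_i - b(X_i) h = xi_i is a martingale increment.
rng = np.random.default_rng(0)
N, T = 1000, 1.0
h = T / N
b = lambda x: np.sin(x)        # bounded "drift" (illustrative)
psi = lambda x: np.cos(x)      # predictable integrand (illustrative)

X = np.empty(N + 1)
X[0] = 0.0
xi = np.sqrt(h) * rng.standard_normal(N)
for i in range(N):
    X[i + 1] = X[i] + b(X[i]) * h + xi[i]

A = np.diff(X)                 # A(t_i, t_{i+1}) = X_{t_{i+1}} - X_{t_i}
R = b(X[:-1]) * h              # exact conditional expectation of A given F_{t_i}
M = A - R                      # centered (martingale) part

S  = np.sum(psi(X[:-1]) * A)   # Riemann sum S(Delta) against A
SR = np.sum(psi(X[:-1]) * R)   # ... against the drift part
SM = np.sum(psi(X[:-1]) * M)   # ... against the martingale part

assert np.isclose(S, SR + SM)  # the splitting is exact pathwise
```

The decomposition is tautological pathwise; what the estimates in the text add is that each part converges separately in $\bL^\ell$ along refining subdivisions.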
We then have \[THEO\_DEL\_DIEL\] There exists $C=C(q,q',p,q,r,\gamma)>0$ such that, given two subdivisions $\Delta \subset \Delta '$ of $[0,T]$ such that $\pi(\Delta) < 1$, $$\| S(\Delta)-S(\Delta')\|_{\bL^\ell} \leq C\max\{T^{1/\alpha},T\} (\pi(\Delta))^\eta,$$ where $\pi(\Delta)$ denotes the step size of the subdivision $\Delta$ and with $\eta = \min\{\varepsilon_0-\varepsilon_2,\varepsilon_1,\varepsilon_1' \}$. The main point consists in noticing that the proof in [@dela:diel:16] remains valid in our setting (for the corresponding parameters therein) and that the only difference is the possible presence of jumps. To handle it, we split the martingale part (which in our current framework may have jumps) into two parts: an $\bL^2$-martingale (which includes the compensated small jumps) and an $\bL^{\ell}$-martingale (which includes the compensated large jumps). The first part can be handled using the BDG inequality (and this is what is done in [@dela:diel:16]) and the other part by using the compensation formula (such a strategy is somehow classical in the pure-jump setting and has been implemented to prove point *(ii)* in Proposition \[PROP\_REG\_PARTIELLE\] above). Thus, we obtain that for any fixed $t$ in $[0,T]$ we are able to define an additive (on $[0,T]$) integral $\int_0^t \psi_s A(s,s+ds)$. The main point now consists in giving a meaning to this quantity as a process (i.e. in defining all the time integrals simultaneously). In the current pure-jump setting, we rely on the Aldous criterion, whereas in the diffusive framework of [@dela:diel:16], the Kolmogorov continuity criterion was used.
Thanks to Theorem \[THEO\_DEL\_DIEL\], one has $$\Big\|\int_t^{t+h} \psi_s A(s,s+ds) - \psi_t A(t,t+h) \Big\|_{\bL^\ell} \leq C h^{\frac 1\alpha + \eta},$$ so that one can apply Proposition 34.9 in Bass [@bass:11] and Proposition 4.8.2 in Kolokoltsov [@kolo:11] to the sequence $\Big(\int_0^{t} \psi_s A(s,s+ds)\Big)_{0 \leq t \leq T}$ and deduce that the limit is stochastically continuous.\ Eventually, following Section 4.6 of [@dela:diel:16], we can thus define the processes $\Big(\int_0^t \psi_s dX_s\Big)_{0 \leq t \leq T}$ and $\Big(\int_0^t \psi_s \mathfrak f(s,X_s,ds)\Big)_{0 \leq t \leq T}$ for any $(\psi_s)_{0 \leq s \leq T}$ with $\varepsilon_2<(\theta-1)/\alpha$. Let us conclude by emphasizing the following fact from [@dela:diel:16]. When building the $\bL^\ell$ stochastic-Young version of the drift, one has from that $$R(t,t+h) = {\mathbb{E}}[X_{t+h}-X_t|\mathcal F_t],\quad M(t,t+h) =X_{t+h}-X_t- {\mathbb{E}}[X_{t+h}-X_t|\mathcal F_t].$$ Thanks to Proposition \[PROP\_REG\_PARTIELLE\], we have that $ \Big(\int_0^t \psi_s R(s,s+ds )\Big)_{0 \leq t \leq T} = \Big(\int_0^t \psi_s \mathfrak f(s,X_s,ds)\Big)_{0 \leq t \leq T},$ so that the l.h.s. is well defined. Also, we have that $\Big(\int_0^t \psi_s (R(s,s+ds ) - \mathscr F(s,X_s,ds))\Big)_{0 \leq t \leq T} = \Big(\int_0^t \psi_s \mathfrak r(s,X_s,ds)\Big)_{0 \leq t \leq T}$ is well defined and is null, since the exponent appearing in the bound on the increments of the l.h.s. is greater than one.
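The nullity assertions of this kind all rest on the same elementary observation: if a generic increment $B(t,t+h)$ (hypothetical notation) satisfies $\|B(t,t+h)\|_{\bL^\ell}\le C h^{1+\varepsilon}$ for some $\varepsilon>0$, then, for a bounded integrand $\psi$, the Riemann sums defining $\int_0^t\psi_s B(s,s+ds)$ vanish with the mesh; schematically,

```latex
\Big\|\sum_{i=0}^{N-1}\psi_{t_i}\,B(t_i,t_{i+1})\Big\|_{\bL^\ell}
\le \|\psi\|_\infty \sum_{i=0}^{N-1} C\,(t_{i+1}-t_i)^{1+\varepsilon}
\le C\,\|\psi\|_\infty\, T\,\pi(\Delta)^{\varepsilon}
\underset{\pi(\Delta)\to 0}{\longrightarrow} 0.
```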
Hence, $$\Big(\int_0^t \psi_s \mathfrak f(s,X_s,ds)\Big)_{0 \leq t \leq T} = \Big(\int_0^t \psi_s \mathscr F(s,X_s,ds)\Big)_{0 \leq t \leq T}.$$ On the other hand, we have that $\Big(\int_0^t \psi_s M(s,s+ds)\Big)_{0 \leq t \leq T}$ is well defined as well and that $\Big(\int_0^t \psi_s (M(s,s+ds) - d{\mathcal{W}}_s)\Big)_{0 \leq t \leq T} =\Big(\int_0^t \psi_s \hat M(s,s+ds)\Big)_{0 \leq t \leq T} $, where $$\hat M(t,t+h) = X_{t+h}-X_t - ({\mathcal{W}}_{t+h}-{\mathcal{W}}_t) - {\mathbb{E}}[X_{t+h}-X_t - ({\mathcal{W}}_{t+h}-{\mathcal{W}}_t)|\mathcal F_t],$$ is an $\bL^q$ martingale with $q$-th moment bounded by $C_q h^{q[1+ (\theta-1)/\alpha]}$, so that it is null as well, meaning that, when reconstructing the drift as above, only the “original” noise part in the dynamics matters. The proof follows from Proposition \[PROP\_REG\_PARTIELLE\] and Theorem \[THE\_PDE\]. Note that the two last estimates are equal to $0$ in cases $(i)$-$(ii)$ since the process $A$ is additive. We eventually conclude this part with the following Lemma.
\[LE\_LEMME\_DE\_REG\] Under the previous assumptions, we have, for any smooth functions $(F_m)_{m \in \mathbb N}$ satisfying $$\lim_{m\to \infty} \| F-F_m\|_{\bL^r([0,T],{\mathbb{B}}_{p,q}^{-1+\gamma}({\mathbb{R}}^d))} =0,$$ that for all $t$ in $[0,T]$, $$\lim_{m \to \infty} \left\| \int_0^t \psi_{s} \mathscr F(s,X_s,ds) - \int_0^t \psi_{s} F_m(s,X_s)ds \right\|_{\bL^\ell} = 0.$$ We want to investigate: $$\lim_{m\to \infty} {\mathbb{E}}\left|\int_0^t \psi_s \mathscr F(s,X_s,ds) - \int_0^{t} \psi_s F_m(s,X_s) ds\right|^\ell.$$ Coming back to the definition of such integrals, this means that we want to control $$\begin{aligned} \lim_{m\to \infty} {\mathbb{E}}\left|\lim_{N \to \infty} \sum_{i=0}^{N-1}\psi_{t_i} \int_{t_i}^{t_{i+1}}ds\left\{ \int dy F(s,y)p_\alpha(s-t_i,y-X_{t_i}) - F_m(t_i,X_{t_i})\right\}\right|^\ell.\end{aligned}$$ We have the following decomposition: $$\begin{aligned} &&\lim_{m\to \infty} {\mathbb{E}}\left|\lim_{N \to \infty} \sum_{i=0}^{N-1}\psi_{t_i} \int_{t_i}^{t_{i+1}}ds\left\{ \int dy F(s,y)p_\alpha(s-t_i,y-X_{t_i}) - F_m(t_i,X_{t_i})\right\}\right|^\ell\\ &\leq &\lim_{m\to \infty} {\mathbb{E}}\Bigg|\lim_{N \to \infty} \sum_{i=0}^{N-1}\psi_{t_i} \int_{t_i}^{t_{i+1}}ds\Bigg\{ \int dy [F(s,y)-F_m(s,y)]p_\alpha(s-t_i,y-X_{t_i})\Bigg\}\Bigg|^\ell\\ &&+\lim_{m\to \infty} {\mathbb{E}}\Bigg|\lim_{N \to \infty} \sum_{i=0}^{N-1}\psi_{t_i} \int_{t_i}^{t_{i+1}}ds \int dy [F_m(s,y)-F_m(t_i,X_{t_i})]p_\alpha(s-t_i,y-X_{t_i})\Bigg|^\ell\\ &&=: \lim_{m\to \infty} \|\lim_{\pi(\Delta)\to 0}S_m^1(\Delta)\|_{\bL^\ell} + \lim_{m\to \infty} \|\lim_{\pi(\Delta)\to 0} S_m^2(\Delta)\|_{\bL^\ell},\end{aligned}$$ with the previous notations. Note that $\lim_{m \to \infty}\|S_m^1(\Delta)\|_{\bL^\ell} = 0$, uniformly in $\Delta$, and that for each $m$, $\|S_m^1(\Delta)\|_{\bL^\ell}$ tends to some $\| S_m^1\|_{\bL^\ell}$ as $ \pi(\Delta) \to 0$.
One can hence invert both limits and deduce that $$\lim_{m\to \infty} \lim_{\pi(\Delta)\to 0}\|S_m^1(\Delta)\|_{\bL^\ell} = \lim_{\pi(\Delta)\to 0} \lim_{m\to \infty} \|S_m^1(\Delta)\|_{\bL^\ell} = 0.$$ For the second term, we note that, due to the regularity of $F_m$ (using e.g. its $\bL^r({\mathbb{B}}_{p,q}^1)$ norm), $${\mathbb{E}}\left|\int_{t_i}^{t_{i+1}}ds \int dy [F_m(s,y)-F_m(t_i,X_{t_i})]p_\alpha(s-t_i,y-X_{t_i})\right|^\ell \leq C_m (t_{i+1}-t_{i})^{\ell(\frac 12 + \frac 1\alpha + \chi)},$$ so that $\lim_{m\to \infty} \lim_{\pi(\Delta)\to 0}\|S_m^2(\Delta)\|_{\bL^\ell} =0$. This concludes the proof. Pathwise uniqueness in dimension one {#SEC_PATH_UNIQUE} ==================================== The aim of this part is to prove Theorem \[THEO\_STRONG\], adapting to this end the proof of Proposition 2.9 in [@athr:butk:mytn:18] to our current inhomogeneous framework (the auxiliary PDE concerned being parabolic). Let us consider $(X^1,{\mathcal{W}}) $ and $(X^2,{\mathcal{W}})$ two weak solutions of . With the notations of , we consider the two corresponding Itô-Zvonkin transforms $X_t^{Z,m,i} := X_t^i-u_m(t,X_t^i) = x-u_m(0,x) + \mathcal{W}_t- M_{0,t}(\alpha,u_m,X^i) + R_{0,t}(\alpha,F_m,\mathscr F, X^i), \ i\in\{1,2\} $. We point out that we use here the mollified PDE, therefore keeping the remainder term and the dependence on $m$ for the martingale part. This is mainly to avoid passing to the limit for the martingale term (as Athreya *et al.* [@athr:butk:mytn:18] do, at the cost of many additional technical lemmas therein). Of course, we will have to control the remainders. From now on, we assume that $\alpha<2$.\ As a starting point, we now expand, for a smooth approximation of the absolute value $$V_n(x)=\begin{cases} |x|,\ |x|\ge \frac 1n,\\ \frac 3{8n}+\frac 34 nx^2-\frac 18 n^3x^4,\ |x|\le \frac 1n, \end{cases}$$ the quantity $ V_n(X_t^{Z,m,1}-X_t^{Z,m,2})$ approximating $|X_t^{Z,m,1}-X_t^{Z,m,2}| $.
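The features of $V_n$ used below (a $C^2$-fit of $|\cdot|$ at $\pm 1/n$, a uniformly bounded first derivative, and a second derivative of order $n$) follow from elementary calculus; a quick numerical sanity check of the matching at $x=1/n$, with an illustrative value of $n$:

```python
# Sanity check of the smooth approximation V_n of the absolute value:
# V_n(x) = 3/(8n) + (3/4) n x^2 - (1/8) n^3 x^4 for |x| <= 1/n, V_n(x) = |x| otherwise.
# At x = 1/n the polynomial piece matches |x| together with its first two
# derivatives, so V_n is C^2 on the whole line, with sup |V_n''| = (3/2) n at 0.
n = 10  # illustrative choice

poly   = lambda x: 3 / (8 * n) + 0.75 * n * x**2 - 0.125 * n**3 * x**4
dpoly  = lambda x: 1.5 * n * x - 0.5 * n**3 * x**3   # (polynomial piece)'
d2poly = lambda x: 1.5 * n - 1.5 * n**3 * x**2        # (polynomial piece)''

x = 1.0 / n
assert abs(poly(x) - abs(x)) < 1e-12   # value matches |x| at the gluing point
assert abs(dpoly(x) - 1.0) < 1e-12     # slope matches (|x|)' = 1 there
assert abs(d2poly(x)) < 1e-12          # curvature vanishes: V_n is C^2
assert d2poly(0.0) == 1.5 * n          # sup |V_n''| is of order n
```

In particular $|V_n'|\le 1$ on the polynomial piece (it increases from $0$ to $1$ there), consistent with the bound $|V_n'|\le 2$ invoked in the proof.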
For fixed $m,n$, we can apply Itô’s formula to obtain: $$\begin{aligned} &&V_n(X_t^{Z,m,1}-X_t^{Z,m,2})\notag\\ &=&V_n(0)+\int_0^t V_n'(X_s^{Z,m,1}-X_s^{Z,m,2}) \big[\mathscr F(s,X_s^1,ds) - F_m(s,X_s^1)ds-(\mathscr F(s,X_s^2,ds) - F_m(s,X_s^2)ds)\big]\notag\\ &&+\int_0^t [V_n(X_s^{Z,m,1}-X_s^{Z,m,2}+h_m(X_s^{1},X_s^{2},r))-V_n(X_s^{Z,m,1}-X_s^{Z,m,2})] \tilde N(ds,dr)\notag\\ &&+\int_{0}^t\int_{|r|\ge 1} \psi_n(X_s^{Z,m,1}-X_s^{Z,m,2},h_m(X_s^{1},X_s^{2},r))\nu (dr) ds\notag\\ &&+\int_{0}^t\int_{|r|\le 1} \psi_n(X_s^{Z,m,1}-X_s^{Z,m,2},h_m(X_s^{1},X_s^{2},r))\nu (dr) ds\notag\\ &=:&\frac 3{8n}+\Delta R_{0,t}^{m,n}+\Delta M_{0,t}^{m,n}+\Delta C_{0,t,L}^{m,n}+\Delta C_{0,t,S}^{m,n},\label{ITO_FINAL}\end{aligned}$$ recalling that $X_0^{Z,m,1}=X_0^{Z,m,2} $, using the definition of $V_n$ and denoting, for all $(x_1,x_2,r)\in {\mathbb{R}}^3 $: $$\begin{aligned} h_m(x_1,x_2,r)&=&u_m(x_1+r)-u_m(x_1)-[u_m(x_2+r)-u_m(x_2)],\label{DEF_HM}\\ \psi_n(x_1,r)&=&V_n(x_1+r)-V_n(x_1)-V_n'(x_1)r.\notag\end{aligned}$$ The point is now to take expectations in . Since $\Delta M_{0,t}^{m,n}$ is a martingale, we readily get ${\mathbb{E}}[\Delta M_{0,t}^{m,n}]=0 $. On the other hand, since $|V_n'(x)|\le 2 $, we also have from Lemma \[LE\_LEMME\_DE\_REG\] that: $$\label{L1_LIMITE_RESTE} {\mathbb{E}}[|\Delta R_{0,t}^{m,n} |] \underset{m}{\to} 0.$$ It now remains to handle the compensator terms. For the *large* jumps, we readily write: $$\label{PATHWISE_BIG_JUMPS} {\mathbb{E}}[| \Delta C_{0,t,L}^{m,n}|]\le C\|V_n'\|_\infty\|Du_m\|_{\bL^\infty(\bL^{\infty})}\int_0^t{\mathbb{E}}[|X_s^{1}-X_s^{2}|]ds\le C\int_0^t{\mathbb{E}}[|X_s^{1}-X_s^{2}|]ds,$$ since, from Corollary \[COR\_ZVON\_THEO\], $\|Du_m\|_{\bL^\infty(\bL^{\infty})} \le C_T\underset{T\rightarrow 0}{\longrightarrow} 0$ uniformly in $m$ (as the terminal condition of the PDE is 0).
In particular, for $T$ small enough, one has $\|Du_m\|_{\bL^\infty(\bL^{\infty})}\le 1/4 $ and $$|x_1-u_m(t,x_1)-(x_2-u_m(t,x_2))|\ge |x_1-x_2|-|u_m(t,x_1)-u_m(t,x_2)|\ge |x_1-x_2|(1-\|Du_m\|_{\bL^\infty(\bL^{\infty})})\ge \frac 34 |x_1-x_2|.\label{DOM_1}$$ Hence, $$\label{FROM_X_TO_XZ} |h_m(X_s^{1},X_s^{2},r)|\le 2\|Du_m\|_{\bL^\infty(\bL^{\infty})} |X_s^{1}-X_s^{2}|\le \frac 23 |X_s^{Z,m,1}-X_s^{Z,m,2}|.$$ Therefore, if $ |X_s^{Z,m,1}-X_s^{Z,m,2}|\ge 3/n$, it is readily seen that $\psi_n(X_s^{Z,m,1}-X_s^{Z,m,2},h_m(X_s^{1},X_s^{2},r))=0 $. We thus have: $$\begin{aligned} |{\mathbb{E}}[C_{0,t,S}^{m,n}]|&=&\Big|{\mathbb{E}}[\int_{0}^t\int_{|r|\le 1} {\mathbb{I}}_{|X_s^{Z,m,1}-X_s^{Z,m,2}|\le \frac 3n}\psi_n(X_s^{Z,m,1}-X_s^{Z,m,2},h_m(X_s^{1},X_s^{2},r))\nu (dr) ds]\Big|\notag\\ &\le& Cn {\mathbb{E}}[\int_{0}^t\int_{|r|\le 1} {\mathbb{I}}_{|X_s^{Z,m,1}-X_s^{Z,m,2}|\le \frac 3n}|h_m(X_s^1,X_s^2,r)|^2\nu(dr)ds],\label{BD_EXPLO}\end{aligned}$$ using for the last inequality the definition of $V_n$, which gives that there exists $C$ s.t. for all $ y\in {\mathbb{R}}$, $|V_n''(y)| \le Cn$, and hence $|\psi_n(x_1,r)|\le Cn|r|^2$. We now use the definition of $h_m$ and the smoothness of $u_m$ in order to balance the explosive contribution in $n$ and to keep an exponent of $r$ which allows us to integrate the small jumps. From and usual interpolation techniques (see e.g. Lemma 5.5 in [@athr:butk:mytn:18] or Lemma 4.1 in [@prio:12]) we get: $$|h_m(X_s^{1},X_s^{2},r)|\le \|u_m\|_{\bL^\infty({\mathbb{B}}_{\infty,\infty}^{\gamma})}|X_s^1-X_s^2|^{\eta_1} |r|^{\eta_2},\ (\eta_1,\eta_2)\in (0,1)^2,\ \eta_1+\eta_2=\eta<\theta-\varepsilon.$$ The point is now to apply the above inequality with $\eta_1$ large enough in order to get rid of the explosive term in (i.e. $\eta_1>1/2$) and with $\eta_2$ sufficiently large in order to guarantee the integrability of the Lévy measure (i.e. $\eta_2>\alpha/2$).
This suggests to choose $\eta_1 = 1/2 + \tilde \varepsilon/2$ and $\eta_2 = \alpha/2 + \tilde \varepsilon/2$, with $\tilde \varepsilon>0 $ meant to be small. In order to satisfy such constraints, we obtain that $\gamma$ must satisfy $\gamma>[3-\alpha+2d/p+2\alpha/r]/2$, which is precisely the threshold appearing when reconstructing the dynamics (see condition in Theorem \[THEO\_DYN\] and the computations leading to in the proof of Proposition \[PROP\_REG\_PARTIELLE\]). Hence, $$\begin{aligned} |{\mathbb{E}}[C_{0,t,S}^{m,n}]|&\le& Cn {\mathbb{E}}\left[\int_{0}^t\int_{|r|\le 1} {\mathbb{I}}_{|X_s^{Z,m,1}-X_s^{Z,m,2}|\le \frac 3n}|X_s^1-X_s^2|^{1+\tilde \varepsilon}|r|^{ \alpha+\tilde \varepsilon}\frac{dr}{|r|^{1+\alpha}}ds\right]\notag\\ &\le& Cn {\mathbb{E}}\left[\int_{0}^t {\mathbb{I}}_{|X_s^{Z,m,1}-X_s^{Z,m,2}|\le \frac 3n}|X_s^{Z,m,1}-X_s^{Z,m,2}|^{1+\tilde \varepsilon}ds\right]\le Cn^{-\tilde \varepsilon},\label{CTR_SMALL_JUMPS_PTHW}\end{aligned}$$ using for the last-but-one inequality.
Plugging , into (taking expectations therein) and recalling that ${\mathbb{E}}[\Delta M_{0,t}^{m,n}] =0 $ eventually yields: $$\begin{aligned} {\mathbb{E}}[V_n(X_t^{Z,m,1}-X_t^{Z,m,2})]\le \frac{3}{8n}+{\mathbb{E}}[|\Delta R_{0,t}^{m,n}|]+C\int_0^t {\mathbb{E}}[|X_s^1-X_s^2|] ds+\frac{C}{n^{\tilde \varepsilon}}.\end{aligned}$$ Passing to the limit, first in $m$, recalling that ${\mathbb{E}}[|\Delta R_{0,t}^{m,n}|]\underset{m}{\rightarrow }0$ uniformly in $n$, gives (from the smoothness properties of $ (u_m)_{m\ge 1}$ in Proposition \[PROP\_PDE\_MOLL\], see also point **(ii)** in Section \[SDE\_2\_PDE\]): $$\begin{aligned} {\mathbb{E}}[V_n(X_t^{Z,1}-X_t^{Z,2})]\le \frac{3}{8n}+C\int_0^t {\mathbb{E}}[|X_s^1-X_s^2|] ds+\frac{C}{n^{\tilde \varepsilon}}, \ X_t^{Z,i}:=X_t^i-u(t,X_t^i),\ i\in \{1,2\}.\end{aligned}$$ Taking now the limit in $n$, we write from (which also holds with $u_m$ replaced by $u$): $$\begin{aligned} \frac 34{\mathbb{E}}[|X_t^{1}-X_t^{2}|]&\le& {\mathbb{E}}[|X_t^{Z,1}-X_t^{Z,2}|]\le C\int_0^t {\mathbb{E}}[|X_s^1-X_s^2|] ds,\end{aligned}$$ which, from the Gronwall Lemma, readily gives ${\mathbb{E}}[|X_t^{1}-X_t^{2}|]=0. $ Proof of Lemma \[LEM\_BES\_NORM\] {#SEC_APP_TEC} ================================= We start with the proof of estimate . Having in mind the thermic characterization of the Besov norm , the main point consists in establishing suitable controls on the thermic part of (i.e. the second term in the r.h.s.
therein) viewed as the map $$s \mapsto \mathcal{T}_{p',q'}^{1-\gamma}[\Psi(s,\cdot ) \mathscr D^\eta p_\alpha(s-t,\cdot-x)].$$ We split it into low and high cut-offs: $$\begin{aligned} \label{DEF_HIGH_LOW_CO} &&\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Psi(s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)]\Big)^{q'}\notag\\ &=&\int_0^1 \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \p_v \tilde p_\alpha(v,\cdot) \star \big(\Psi(s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)\big)\|_{\bL^{p'}}^{q'}\notag\\ &=& \int_0^{(s-t)} \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \p_v\tilde p_\alpha(v,\cdot) \star \big(\Psi(s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)\big)\|_{\bL^{p'}}^{q'}\notag\\ &&+ \int_{(s-t)}^1 \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \p_v\tilde p_\alpha(v,\cdot) \star \big(\Psi (s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)\big)\|_{\bL^{p'}}^{q'}\notag\\ &=:&\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Psi(s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)]|_{[0,(s-t)]}\Big)^{q'}+\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Psi(s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)]|_{[(s-t),1]}\Big)^{q'}. \end{aligned}$$ For the high cut-off, the singularity induced by the differentiation of the heat kernel in the thermic part is always integrable. Hence, using $\bL^1-\bL^{p'}$ convolution inequalities, we have $$\begin{aligned} &&\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Psi (s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)]|_{[(s-t),1]}\Big)^{q'}\\ &\leq&\int_{(s-t)}^1 \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \p_v \tilde p_\alpha(v,\cdot) \|_{\bL^1}^{q'} \| \Psi(s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)\|_{\bL^{p'}}^{q'}.\end{aligned}$$ Moreover, recall that $$\begin{aligned} \|\mathscr D^\eta p_\alpha(s-t,\cdot-x)\|_{\bL^{p'}} &\le& \frac{\bar C_{p'}}{(s-t)^{\frac{d}{\alpha p}+\frac{|\eta|}{\alpha}}}.\end{aligned}
$$ We thus obtain $$\begin{aligned} &&\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Psi (s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)]|_{[(s-t),1]}\Big)^{q'}\notag\\ &\leq&\|\Psi (s,\cdot)\|_{\bL^{\infty}}^{q'}\frac{C}{(s-t)^{(\frac d{p\alpha}+\frac {\eta}{\alpha})q'}} \int_{(s-t)}^1 \frac{dv}{v} \frac{1}{v^{\frac{1-\gamma}{\alpha}q'}}\notag \\ &\leq & \frac{C\|\Psi\|_{\bL^\infty(\bL^\infty)}^{q'}}{(s-t)^{\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+\frac {\eta}{\alpha}\right]q'}}.\label{ESTI_COUP_HAUTE}\end{aligned}$$ To deal with the low cut-off of the thermic part, we need to smooth the singularity induced by the differentiation of the heat kernel in the thermic characterization. Coming back to the very definition of this term, we note that $$\begin{aligned} \label{centering_BESOV_COUPURE_BASSE} &&\| \partial_v \tilde p_\alpha(v,\cdot) \star \big(\Psi(s,\cdot) \mathscr D^\eta p_\alpha(s-t,\cdot-x)\big)\|_{\bL^{p'}}\\ &=&\Big(\int_{{\mathbb{R}}^d } dz \Big|\int_{{\mathbb{R}}^d}dy \partial_v \tilde p_\alpha(v,z-y) \Psi(s,y)\mathscr D^\eta p_\alpha(s-t,y-x)\Big|^{p'} \Big)^{1/p'}\notag\\ &=&\Big(\int_{{\mathbb{R}}^d } dz \Big|\int_{{\mathbb{R}}^d}dy \partial_v \tilde p_\alpha(v,z-y)\Big[\Psi(s,y)\mathscr D^\eta p_\alpha(s-t,y-x)-\Psi(s,z)\mathscr D^\eta p_\alpha(s-t,z-x)\Big]\Big|^{p'} \Big)^{1/p'}\notag,\end{aligned}$$ using for the last equality that $\partial_v \tilde p_\alpha(v,\cdot)$ has zero spatial mean. To smooth the singularity, one then needs to establish a suitable control on the Hölder moduli of the product $\Psi(s,\cdot) \mathscr D^{\eta}p_\alpha (s-t,\cdot-x)$.
We claim that for all $(t<s,x)$ in $[0,T]^2 \times {\mathbb{R}}^d$ and all $(y,z)$ in $({\mathbb{R}}^{d})^2$: $$\begin{aligned} \label{Holder_prod} && \left|\Psi(s,y) \mathscr D^\eta p_\alpha (s-t,y-x) - \Psi(s,z) \mathscr D^\eta p_\alpha(s-t,z-x)\right|\\ &\leq & C\left[ \left(\frac{\|\Psi(s,\cdot)\|_{\dot{\mathbb{B}}^\beta_{\infty,\infty}}}{(s-t)^{\frac{\eta}\alpha}} + \frac{\|\Psi(s,\cdot)\|_{\bL^{\infty}}}{(s-t)^{\frac{\eta + \beta}\alpha}}\right)\left(q_\alpha(s-t,y-x) +q_\alpha(s-t,z-x) \right)\right] |y-z|^\beta\notag\\ &\leq & \frac{C}{(s-t)^{\frac{\eta + \beta}\alpha}}\|\Psi(s,\cdot)\|_{{\mathbb{B}}^\beta_{\infty,\infty}} \left(q_\alpha(s-t,y-x) +q_\alpha(s-t,z-x) \right) |y-z|^\beta.\notag\end{aligned}$$ This readily gives, using $\bL^1-\bL^{p'}$ convolution estimates and , that $$\begin{aligned} \Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Psi(s,\cdot) \mathscr D^\eta p(s-t,\cdot-x)]|_{[0,(s-t)]}\Big)^{q'} &\leq&\frac{C\|\Psi(s,\cdot)\|_{{\mathbb{B}}^\beta_{\infty,\infty} }^{q'} }{(s-t)^{\left[\frac{d}{p\alpha}+\frac{\eta}{\alpha}+\frac{\beta}{\alpha}\right]q'}}\int_{0}^{s-t} \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha}-1+\frac{\beta}{\alpha})q'} \notag\\ &\leq& \frac{C\|\Psi(s,\cdot)\|_{{\mathbb{B}}^\beta_{\infty,\infty}}^{q'} } {(s-t)^{\left[\frac{d}{p\alpha}+\frac{\eta}{\alpha}+\frac{\beta}{\alpha} + \frac{1-\gamma-\beta}{\alpha}\right]q'}}.\label{ESTI_COUP_BASSE}\end{aligned}$$ \[GESTION\_BESOV\_FIRST\] The non-thermic term is easily handled through the $\bL^{p'}$ norm of the product $\Psi(s,\cdot ) \mathscr D^\eta p_\alpha(s-t,\cdot-x)$, and hence through the $\bL^{p'}$ norm of $\mathscr D^\eta p_\alpha$ times the $\bL^\infty$ norm of $\Psi$. This, in view of , clearly brings only a negligible contribution in comparison with the one of the thermic part. To conclude with , it remains to prove . From (see again the proof of Lemma 4.3 in [@huan:meno:prio:19] for details), we claim that there exists $C$ s.t.
for all $\beta'\in (0,1] $ and all $(x,y,z)\in ({\mathbb{R}}^d)^3 $, $$\begin{aligned} |\mathscr D^{\eta} p_\alpha(s-t,z-x)- \mathscr D^{\eta} p_\alpha(s-t,y-x)|\le \frac{C}{(s-t)^{\frac{\beta'+\eta}{\alpha}}} |z-y|^{\beta'} \Big( q_\alpha(s-t,z-x)+q_\alpha(s-t,y-x)\Big).\label{CTR_BETA}\end{aligned}$$ Indeed, is direct if $|z-y|\ge [1/2] (s-t)^{1/\alpha} $ (off-diagonal regime). It suffices to exploit the bound for $\mathscr D^{\eta} p_\alpha(s-t,y-x) $ and $\mathscr D^{\eta} p_\alpha(s-t,z-x) $ and to observe that $\big(|z-y|/(s-t)^{1/\alpha}\big)^{\beta'}\ge 1 $. If now $|z-y|\le [1/2] (s-t)^{1/\alpha} $ (diagonal regime), it suffices to observe from that, with the notations of the proof of Lemma \[SENS\_SING\_STAB\] (see in particular ), for all $\lambda\in [0,1] $: $$\begin{aligned} |\mathscr D^{\eta} \textcolor{black}{D} p_M(s-t,y-x+\lambda(y-z))|&\le& \frac{C_m}{(s-t)^{\frac{\eta+1}\alpha}}p_{\bar M}(s-t,y-x+\lambda(y-z))\notag\\ &\le& \frac{C_m}{(s-t)^{\frac{\eta+1+d}\alpha}}\frac{1}{\Big( 1+\frac{|y-x+\lambda(y-z)|}{(s-t)^{\frac 1\alpha}} \Big)^{m}} \notag\\ &\le& \frac{C_m}{(s-t)^{\frac{\eta+1+d}\alpha}}\frac{1}{\Big( \frac 12+\frac{|y-x|}{(s-t)^{\frac 1\alpha}} \Big)^{m}}\le 2\frac{C_m}{(s-t)^{\frac {\eta+1}\alpha}} p_{\bar M}(s-t,y-x). \label{MIN_JUMP}\end{aligned}$$ Therefore, in the diagonal case follows from and writing $|\mathscr D^{\eta}p_\alpha(s-t,z-x)- \mathscr D^{\eta} p_\alpha(s-t,y-x)|\le \int_0^1 d\lambda |\mathscr D^{\eta} D p_\alpha(s-t,y-x+\lambda(y-z)) \cdot (y-z)| \le 2C_m(s-t)^{-[(\textcolor{black}{\eta+1})/\alpha]} q_{\alpha}(s-t,y-x)|z-y|\le \tilde C_m (s-t)^{-[ (\textcolor{black}{\eta+\beta'})/\alpha]} q_{\alpha}(s-t,y-x)|z-y|^{\beta'}$ for all $\beta' \in [0,1] $ (exploiting again that $|z-y|\le [1/2] (s-t)^{1/\alpha} $ for the last inequality).
We conclude noticing that for all $s$ in $(0,T]$ the map ${\mathbb{R}}^d \ni y \mapsto \Psi (s,y)$ is $\beta$-Hölder continuous and choosing $\beta'=\beta$ in the above estimate.\ We now prove . Splitting again the thermic part of the Besov norm into two parts (high and low cut-off) we write $$\begin{aligned} &&\Big(\mathcal{T}_{p',q'}^{1-\gamma}[ \Big(\Psi (s,\cdot)\big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) - \mathscr D^\eta p_\alpha(s-t,\cdot-x')\big)]\Big)^{q'}\\ &=&\int_0^1 \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \p_v\tilde p_\alpha(v,\cdot) \star \Big(\Psi(s,\cdot)\big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) - \mathscr D^\eta p_\alpha(s-t,\cdot-x')\big) \Big)\|_{\bL^{p'}}^{q'}\\ &=& \int_0^{(s-t)^{}} \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \p_v\tilde p_\alpha(v,\cdot) \star \Big(\Psi (s,\cdot)\big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) - \mathscr D^\eta p_\alpha(s-t,\cdot-x')\big) \Big)\|_{\bL^{p'}}^{q'}\\ &&+ \int_{(s-t)^{}}^1 \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \| \p_v\tilde p_\alpha(v,\cdot) \star \Big(\Psi(s,\cdot)\big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) - \mathscr D^\eta p_\alpha(s-t,\cdot-x')\big) \Big)\|_{\bL^{p'}}^{q'}\\ &=:&\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Big(\Psi(s,\cdot)\big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) - \mathscr D^\eta p_\alpha(s-t,\cdot-x')\big) \Big)]|_{[0,(s-t)^{}]}\Big)^{q'}\\ &&+\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Big(\Psi (s,\cdot)\big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) - \mathscr D^\eta p_\alpha(s-t,\cdot-x')\big) \Big)]|_{[(s-t)^{},1]}\Big)^{q'}.\notag \end{aligned}$$ Proceeding as we did before for the high cut-off and using , we have for any $\beta'$ in $[0,1]$: $$\begin{aligned} &&\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Big(\Psi (s,\cdot)\big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) - \mathscr D^\eta p_\alpha(s-t,\cdot-x')\big) \Big)]|_{[(s-t)^{},1]}\Big)^{q'}\\ &\leq&\int_{(s-t)^{}}^1 \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha})q'} \|\p_v \tilde p_\alpha(v,\cdot) \|_{\bL^1}^{q'} \| \Big(\Psi 
(s,\cdot)\big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) -\mathscr D^\eta p_\alpha(s-t,\cdot-x')\big) \Big)\|_{\bL^{p'}}^{q'}\\ &\leq&\frac{C\|\Psi (s,\cdot)\|_{\bL^\infty}^{\textcolor{black}{q'}}}{(s-t)^{(\frac d{p\alpha}+\frac {\eta+\beta'}{\alpha})q'}} \int_{(s-t)^{}}^1 \frac{dv}{v} \frac{1}{v^{\frac{1-\gamma}{\alpha}q'}} |x-x'|^{\beta' \textcolor{black}{q'}}\\ &\leq & \frac{C\textcolor{black}{\|\Psi (s,\cdot)\|_{\bL^\infty}^{q'}}}{(s-t)^{\left[\frac{1-\gamma}{\alpha}+\frac d{p\alpha}+\frac {\eta+\beta'}{\alpha}\right]q'}}|x-x'|^{\beta' \textcolor{black}{q'}}.\end{aligned}$$ To deal with the low cut-off, we proceed as we did for in order to smooth the singularity induced by the differentiation of the thermic kernel. We hence need to control the Hölder moduli of $\Psi(s,\cdot)\Big(\mathscr D^\eta {p}_\alpha(s-t,\cdot-x)-\mathscr D^\eta {p}_\alpha(s-t,\cdot-x')\Big)$. We claim that for any $\beta'$ in $(0,1]$ and all $(t<s,x,x')$ in $[0,T]^2 \times ({\mathbb{R}}^d)^2$, we have that for all $(y,z)$ in $({\mathbb{R}}^d)^2$: $$\begin{aligned} && \bigg|\Psi (s,y)\Big(\mathscr D^\eta {p}_\alpha(s-t,y-x)-\mathscr D^\eta {p}_\alpha(s-t,y-x')\Big) - \Psi(s,z)\Big(\mathscr D^\eta {p}_\alpha(s-t,z-x)-\mathscr D^\eta {p}_\alpha(s-t,z-x')\Big)\bigg|\notag\\ &\leq & \frac{C}{(s-t)^{\frac{\eta + \beta+\beta'}\alpha}}\|\Psi(s,\cdot)\|_{{\mathbb{B}}^\beta_{\infty,\infty}}\Big(q_\alpha(s-t,y-x) +q_\alpha(s-t,z-x)+q_\alpha(s-t,y-x') +q_\alpha(s-t,z-x') \Big) \notag\\ && \quad \times |y-z|^\beta|x-x'|^{\beta'}.\label{Holder_prod-2}\end{aligned}$$ Repeating the computations in and using the above estimate, we obtain that: $$\begin{aligned} &&\Big(\mathcal{T}_{p',q'}^{1-\gamma}[\Big(\Psi(s,\cdot)\big(\mathscr D^\eta p_\alpha(s-t,\cdot-x) - \mathscr D^\eta p_\alpha(s-t,\cdot-x')\big) \Big)]|_{[0,(s-t)^{}]}\Big)^{q'}\\ &\leq&\frac{C
\|\Psi(s,\cdot)\|_{{\mathbb{B}}_{\infty,\infty}^\beta}^{\textcolor{black}{q'}}}{(s-t)^{\left[\frac{d}{p\alpha}+\frac{\eta+\beta'}{\alpha}+\frac{\beta}{\alpha}\right]q'}}\int_0^{(s-t)^{}} \frac{dv}{v} v^{(1-\frac{1-\gamma}{\alpha}-1+\frac{\beta}{\alpha})q'} |x-x'|^{\beta'\textcolor{black}{q'}} \leq \frac{C \|\textcolor{black}{\Psi(s,\cdot)}\|_{{\mathbb{B}}_{\infty,\infty}^\beta}^{\textcolor{black}{q'}}}{(s-t)^{\left[\frac{d}{p\alpha}+\frac{\eta+\beta'}{\alpha} + \frac{1-\gamma}{\alpha}\right]q'}}|x-x'|^{\textcolor{black}{\beta' q'}},\end{aligned}$$ provided $$\label{cond_beta_gamma} \beta+\gamma>1.$$ It thus remains to prove . It directly follows from that: $$\begin{aligned} && \bigg|\Psi (s,y)\Big(\mathscr D^\eta {p}_\alpha(s-t,y-x)-\mathscr D^\eta {p}_\alpha(s-t,y-x')\Big) - \Psi (s,z)\Big(\mathscr D^\eta {p}_\alpha(s-t,z-x)-\mathscr D^\eta {p}_\alpha(s-t,z-x')\Big)\bigg|\notag\\ &\le& \|\Psi (s,\cdot)\|_{\dot {\mathbb{B}}_{\infty,\infty}^\beta} |z-y |^{\beta} \frac{C}{(s-t)^{\frac{\eta+\beta'}{\alpha}}}|x-x'|^{\beta'} \big(q_\alpha(s-t,y-x)+q_\alpha(s-t,y-x')\big)\label{CTR_INTERMEDIAIRE_POUR_Holder_prod-2}\\ &&+\|\Psi(s,\cdot)\|_{\bL^\infty}\Big|\big( \mathscr D^\eta {p}_\alpha(s-t,y-x)-\textcolor{black}{\mathscr D^\eta} {p}_\alpha(s-t,y-x')\big)-\big(\mathscr D^\eta {p}_\alpha(s-t,z-x)-\mathscr D^\eta {p}_\alpha(s-t,z-x')\big) \Big|.\notag\end{aligned}$$ Setting: $$\Delta(s-t,x,x',y,z):=\Big|\big( \mathscr D^\eta {p}_\alpha(s-t,y-x)-\mathscr D^\eta {p}_\alpha(s-t,y-x')\big)-\big(\mathscr D^\eta {p}_\alpha(s-t,z-x)-\mathscr D^\eta {p}_\alpha(s-t,z-x')\big) \Big|,$$ it now remains to control this term.
Precisely, if $|x-x'|\ge (s-t)^{1/\alpha}/4 $, we write: $$\begin{aligned} &&\Delta(s-t,x,x',y,z) \label{HD_1}\\ &\le& \big|\mathscr D^\eta {p}_\alpha(s-t,y-x)-\mathscr D^\eta {p}_\alpha(s-t,z-x) \big|+\big|\mathscr D^\eta {p}_\alpha(s-t,y-x')-\mathscr D^\eta {p}_\alpha(s-t,z-x') \big|\notag\\ &\underset{\eqref{CTR_BETA}}{\le}& \frac{C}{(s-t)^{\frac {\eta+\beta}\alpha}} |y-z|^{\beta}\big( q_\alpha(s-t,y-x)+q_\alpha(s-t,y-x')+q_\alpha(s-t,z-x)+q_\alpha(s-t,z-x')\big)\notag\\ &\le& \frac{4C}{(s-t)^{\frac {\eta+\beta+\beta'}\alpha}} |y-z|^{\beta}|x-x'|^{\beta'}\big( q_\alpha(s-t,y-x)+q_\alpha(s-t,y-x')+q_\alpha(s-t,z-x)+q_\alpha(s-t,z-x')\big).\notag\end{aligned}$$ If $|z-y|\ge (s-t)^{1/\alpha}/4 $, we write symmetrically: $$\begin{aligned} &&\Delta(s-t,x,x',y,z) \label{HD_2}\\ &\le& \big|\mathscr D^\eta {p}_\alpha(s-t,y-x)-\mathscr D^\eta {p}_\alpha(s-t,y-x') \big|+\big|\mathscr D^\eta {p}_\alpha(s-t,z-x)-\mathscr D^\eta {p}_\alpha(s-t,z-x') \big|\notag\\ &\underset{\eqref{CTR_BETA}}{\le}& \frac{C}{(s-t)^{\frac {\eta+\beta'}\alpha}} |x-x'|^{\beta'}\big( q_\alpha(s-t,y-x)+q_\alpha(s-t,y-x')+q_\alpha(s-t,z-x)+q_\alpha(s-t,z-x')\big)\notag\\ &\le& \frac{4C}{(s-t)^{\frac {\eta+\beta+\beta'}\alpha}} |y-z|^{\beta}|x-x'|^{\beta'}\big( q_\alpha(s-t,y-x)+q_\alpha(s-t,y-x')+q_\alpha(s-t,z-x)+q_\alpha(s-t,z-x')\big).\notag\end{aligned}$$ If $|z-y|\le (s-t)^{1/\alpha}/4 $ and $|x-x'|\le (s-t)^{1/\alpha}/4 $, we get: $$\begin{aligned} &&\Delta(s-t,x,x',y,z)\label{D}\\ &\le& \int_0^1 d\lambda\int_0^1d\mu |\textcolor{black}{D}_x^{\textcolor{black}{2}} \mathscr D^\eta p_\alpha(s-t,z-x'+\mu(y-z)-\lambda(x-x'))| |x-x'||z-y|\notag\\ &\le& \frac{C}{(s-t)^{\frac{\eta+\beta+\beta'}{\alpha}}} |y-z|^\beta |x-x'|^{\beta'} \big( q_\alpha(s-t,y-x)+q_\alpha(s-t,y-x')+q_\alpha(s-t,z-x)+q_\alpha(s-t,z-x')\big)\notag\end{aligned}$$ proceeding as in and exploiting for the last inequality. Plugging , and into eventually yields the control .

Acknowledgments {#acknowledgments .unnumbered}
==============================================

For the first author, this work has been partially supported by the ANR project ANR-15-IDEX-02. For the second author, the article was prepared within the framework of a subsidy granted to the HSE by the Government of the Russian Federation for the implementation of the Global Competitiveness Program.

[^1]: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chambéry, France. pe.deraynal@univ-smb.fr

[^2]: Laboratoire de Modélisation Mathématique d’Evry (LaMME), CNRS UMR 8071, Université d’Evry Val d’Essonne, Paris Saclay, 23 Boulevard de France 91037 Evry, France and Laboratory of Stochastic Analysis, HSE, Shabolovka 31, Moscow, Russian Federation. stephane.menozzi@univ-evry.fr
--- abstract: 'We report the detection of massive star formation along a bar in the peculiar starburst galaxy Mkn 439. We present optical $B$, $R$ and $H\alpha$+$[NII]$ emission line images as well as $H$ band images to show that the signature of the bar becomes progressively weak at longer wavelengths. Moreover, this bar is misaligned with the main body of the galaxy. The peak $H\alpha$ emission does not coincide with the bluest regions seen in the colour maps. We infer that the starburst is young since the stars in the burst have not started influencing the light in the near infrared. There are indications of dust in the inner regions of this galaxy.' author: - 'Aparna Chitre, U.C. Joshi and S.Ganesh' date: 'Received ; accepted' title: Star formation along a misaligned bar in the peculiar starburst galaxy Mkn 439 --- Introduction ============ Mkn 439 (NGC 4369, UGC 7489, IRAS 12221+3939) is a nearby early type starburst galaxy (z=0.0035). It has been classified as a starburst by Balzano ([@balz]) based on the equivalent width of $H\alpha$ and also belongs to the $\it IRAS$ Bright Galaxy Sample (Soifer et al. [@soif]). On the basis of multiaperture near infrared photometry and optical spectroscopy, Devereux ([@dev]) describes this galaxy as a M82 type starburst galaxy. Rudnick & Rix ([@rudrix]) report an azimuthal asymmetry in the stellar mass distribution of Mkn 439 based on the $R$ band surface brightness. The peculiar morphology of Mkn 439 attracted our attention during the course of an optical imaging study of a sample of starburst galaxies derived from the Markarian lists (Chitre [@chitre]). The galaxy image was nearly circular and appeared featureless in long exposure images. The outer isophotes were smooth and nearly circular in $B$ and $R$ bands. However, the isophotal contours show highly complex features in the inner parts. Moreover, the strength of these features is wavelength dependent. 
Wiklind & Henkel ([@wik]) report the detection of a molecular bar in the central region of this galaxy based on the results of CO mapping. No detailed surface photometric studies of this galaxy have been reported. Usui, Saito & Tomita ([@usui]) report the detection of two regions bright in $H\alpha$ that are displaced from the nucleus and faint emission from the nucleus. However, their data were obtained at a seeing of 5$^{\arcsec}$. In order to study the spatial distribution of various stellar populations in Mkn 439, we imaged this galaxy in $B$, $R$, $H\alpha$ and $H$ bands. The $B$ and $R$ band continuum trace the intermediate age populations while $H\alpha$ traces the young, massive stellar populations. The infrared continuum of galaxies is dominated by evolved stellar populations. Hence, the $H$ band and the line emission images can be used along with the optical continuum images to separate the young and old stellar populations spatially. Observations and data reduction =============================== Optical ($B$,$R$) and $H\alpha$ imaging --------------------------------------- The $B$, $R$ and $H\alpha$ images were obtained under photometric conditions from the 1.2m telescope at Gurushikhar, Mt. Abu. The images were taken at the Cassegrain focus employing a thinned back illuminated Tektronix 1K $\times$ 1K CCD. Binning of 2 $\times$ 2 was employed before recording the images to increase the signal-to-noise ratio of the measurements and keeping in mind the data storage requirements. The final resolution was 0.634$^\prime$$^\prime$/pixel which is sufficient to sample the point spread function (PSF) appropriately. Typical seeing (full width at half maximum (FWHM) of the stellar images) was $\sim$ 1.8$^\prime$$^\prime$ for the images. For the $H\alpha$ images a filter having FWHM of 80 Å was used. Another off-band filter of the same FWHM was used to measure the red continuum of the galaxy. About 3-4 exposures were taken in each of the photometric bands.
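The on-band/off-band filter pair described above is normally combined by scaling the off-band frame and subtracting it from the on-band frame. The paper does not spell out its own scheme, so the star-flux scaling below is a generic, assumed recipe, and all names are ours:

```python
import numpy as np

def continuum_subtract(on_band, off_band, star_mask):
    """Generic narrow-band continuum subtraction: scale the off-band
    frame so that pure-continuum sources (field stars, selected by
    `star_mask`) cancel, then subtract. This is a standard recipe
    assumed here -- not the authors' stated procedure."""
    scale = on_band[star_mask].sum() / off_band[star_mask].sum()
    return on_band - scale * off_band
```

The result is a net emission-line image such as the continuum-subtracted $H\alpha$ frame discussed below.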
The total exposure times were 510 sec, 360 sec and 1600 sec in $B$, $R$ and $H\alpha$ respectively. Standard stars from Landolt ([@land]) were observed to calibrate the broad band data. Twilight flats were taken and median filtered to construct the master flats. The data were reduced using IRAF [^1] on the IBM-6000 RISC at PRL, Ahmedabad. A detailed reduction procedure can be found in Chitre & Joshi ([@cucj]). $H$ band images --------------- The $H$ band images were recorded with a 256$\times$256 NICMOS array at the 1.2 m Gurushikhar telescope. An exposure of 30 sec ensured that the background and the galaxy signal were in the linear portion of the detector response curve (Joshi et al. [@jetal]). Observations were made by alternating between the galaxy and positions 4-5$^{\arcmin}$ to the north and south till a total integration time of 600 seconds on the galaxy was achieved. Several dark frames having the same time sequences as that of galaxy or sky were taken and median filtered master dark frames were constructed. The median filtered master sky frames were constructed using several sky frames with integration times equal to those given for the galaxy. All the source frames were corrected for the sky background by subtracting the master sky frame from the source frames. As the program galaxy does not occupy the whole detector array, the residual sky was determined from the image corners and the images were then corrected for residual sky. The dark subtracted sky frame was used to construct the master flat. The sky corrected galaxy frames were corrected for flat field response of the detector by dividing the galaxy frames by the master flat.\ Finally, the galaxy images were aligned by finding the center of the galaxy nucleus using the IMCNTR task in IRAF and co-added to improve the S/N ratio. The plate scale was selected to be 0$^\prime$$^\prime$.5 per pixel. Faint standard stars from the UKIRT lists were observed for calibration.
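The near-infrared reduction sequence just described (median-combined master dark and sky frames, sky subtraction, corner-based residual-sky removal, a flat built from the dark-subtracted sky, and co-addition) can be sketched as follows. Array names and the corner size `edge` are illustrative choices of ours, not values from the paper:

```python
import numpy as np

def reduce_nir_frames(source_frames, dark_frames, sky_frames, edge=16):
    """Sketch of the H-band reduction steps described in the text."""
    master_dark = np.median(dark_frames, axis=0)
    master_sky = np.median(sky_frames, axis=0)

    # Flat field from the dark-subtracted sky, normalised to unit mean.
    flat = master_sky - master_dark
    flat = flat / flat.mean()

    reduced = []
    for frame in source_frames:
        img = frame - master_sky                    # remove sky background
        corners = np.concatenate([img[:edge, :edge].ravel(),
                                  img[:edge, -edge:].ravel(),
                                  img[-edge:, :edge].ravel(),
                                  img[-edge:, -edge:].ravel()])
        img = img - np.median(corners)              # residual sky from corners
        reduced.append(img / flat)                  # flat-field correction

    # Frames are assumed to be already registered on the nucleus
    # (cf. the IMCNTR step); co-add to raise the signal-to-noise ratio.
    return np.sum(reduced, axis=0)
```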
Parameter Value
--------------------------------- ---------------------------------------
$\alpha$(2000) 12$^h$22$^m$0.8$^s$.4
$\delta$(2000) 39$\degr$39$^{\arcmin}$41$^{\arcsec}$
RC3 type RSAT1
UGC type S0/Sa
$^{\mathrm{b}}$Adopted distance 18 Mpc
$^{\mathrm{a}}$$B^{^0}_T$ 12.27
$^{\mathrm{a}}$$(U-B)_T$ -0.02
$^{\mathrm{a}}$$(B-V)_T$ 0.65
L$_{FIR}$ 4$\times$$10^9$ $L_\odot$

: Global properties of Mkn 439

$^{\mathrm{a}}$ RC3; $^{\mathrm{b}}$ Deutsch & Willner ([@deut])

Morphology of Mkn 439 ===================== Fig. \[cont\] illustrates the isophotal contours of the inner 25$^{\arcsec}$ of Mkn 439 in $B$, $R$, continuum subtracted $H\alpha$ and $H$ band. A comparison of the various panels in Fig. \[cont\] shows that the morphological structures vary at different wavelengths. The morphology of Mkn 439 in the $B$ band is characterized by smooth outer isophotes and a very complex light distribution in the inner region. The central region is elliptical and is elongated in the NS direction. Faint indications of a spiral arm in the NE direction are seen in the isophotal maps in $B$ and $R$. The contour maps show two projections - one along the NW and the other along the SE from the nuclear region. These projections are most prominent in the $B$ continuum, getting progressively fainter at longer wavelengths and nearly disappearing in $H$. The $B$ band image shows another condensation to the SW of the nuclear region. Similar to the projections, this feature also gets progressively fainter at longer wavelengths. The $H$ band image shows smoother isophotes. The signature of the projections is absent at this wavelength. As seen in the $R$ band, the outer isophotes are nearly circular. However, unlike other optical bands, there are no spurs or bar-like features apparent in the $H$ band image. The continuum subtracted $H\alpha$ image shows an elongated bar-like structure corresponding to the projections seen in the contour maps. $H\alpha$ emission is seen along the bar in the form of clumps.
Emission is most intense at the ends of the bar, though it is found to extend throughout the body of the galaxy. Emission from the nucleus is much fainter as compared to that from the clumps in the bar ends. $H\alpha$ emission is maximum in Spot 1. The bright blobs of emission in $H\alpha$ have no counterparts in the $H$ band. This indicates that the HII regions are young and have not yet evolved enough to form a considerable number of red giants and supergiants to start influencing the light in the $H$ band. It is also seen that the latest episode of star formation is misaligned with the isophotal contours of the near infrared continuum. The ($B$-$H$) colour map (Fig. \[bhcol\]) was constructed by scaling the images, rotating and aligning them. It shows interesting features. A bar-like structure made up of blue clumps is seen in the central part of the galaxy. A spiral arm starts from the nuclear region and curves towards the eastern side. A distinct blue clump is present at either end of the bar, marked as Spot 1 and Spot 2 in Fig. \[bhcol\]. These correspond to the ends of the two projections seen in the isophotal contours in $B$. Another blue region (Spot 3) is seen about 8$^{\arcsec}$ to the south of Spot 1. The clump of $H\alpha$ emission seen to the E of the extended emission has no counterpart in the continuum colour map. The ($B$-$R$) and ($B$-$H$) colours of these regions are listed in Table 2. The isophotal contours of Mkn 439 appear different in the optical, near infrared and the line emission, indicating the spatial separation of the distribution of these various populations. The gaseous component in the galaxy appears to be under the influence of a potential which has distributed it in the form of a gaseous bar. Compression of the gas in the bar has led to the formation of young, massive stars which are seen as clumpy HII regions along the bar.
We infer that the latest dynamical episode experienced by the galaxy has given rise to the formation of young, massive stars along the bar as a result of the response of the gas to the perturbing potential. A comparison of the $H\alpha$ contours in Fig. \[cont\] and Fig. \[bhcol\] reveals that no HII regions are seen in the blue spiral arm-like feature emerging from the nucleus, indicating that the blue spiral arm is made up of an intermediate age stellar population. Wiklind & Henkel ([@wik]) report the detection of a molecular bar in the central region of this galaxy based on CO mapping. They observed Mkn 439 in both the J=1-0 and J=2-1 lines of CO and found that the ratio of the J=2-1 to the J=1-0 intensity varies with position, and inferred that this was due to changing physical conditions in the molecular cloud population. The contour maps of these two transitions can be found in Wiklind ([@wikthes]). Many galaxies with weak stellar bars have been found to contain pronounced bar-like gas distributions similar to the one found in Mkn 439. For example, the center of the nearby Scd galaxy IC 342 harbors a bar-like molecular gas structure and a modest nuclear starburst (Lo et al. [@lo]; Ishizuki et al. [@ish]). Other examples of galaxies having a molecular bar at their centers are NGC 253 and M83. Simulations by Combes ([@combes]) describe the formation of a gas bar which is phase shifted from the stellar component in the innermost regions of a galaxy due to the existence of perpendicular orbits. However, her models describe the situation for nuclear bars in the innermost 1 kpc region. An alternative explanation could be that two unequal mass spirals have merged to form the S0 galaxy.
Bekki Kenji ([@bek]) suggests that S0 galaxies are formed by the merging of spirals; when the two spirals are of unequal mass, the S0 galaxy thus formed has an outer diffuse stellar envelope or a diffuse disk like component and a central thin stellar bar composed mainly of new stars.

region $B$-$R$ $B$-$H$
--------- --------- ---------
nucleus 0.4 2.8
spot 1 0.7 2.9
spot 2 0.6 2.6
spot 3 0.8 3.2

: Colours of clumpy regions

Isophotal analysis ================== In order to provide a quantitative description of the morphological aspects at various wavelengths, we explored Mkn 439 using ellipse fitting techniques. The procedure consists of fitting elliptical isophotes to the galaxy images and deriving 1-dimensional azimuthally averaged radial profiles for the surface brightness, ellipticity and the position angle based on the algorithm given by Jedrejewski ([@jedr]). This technique has been used successfully in studying various structures in galaxies like bars, rings, shells, etc. and in searching for dust in them (Bender & Möllenhoff [@bend]; Wozniak et al. [@woz] and Jungweirt, Combes & Axon [@jung]). Multiband isophotal analysis can also be used to indicate whether the reddening seen in colour maps is due to a redder stellar population or due to the presence of dust (Prieto et al. [@prietoa], [@prietob]). The surface brightness distribution and the variation of the position angle and ellipticity of the isophotes in each filter (Fig. \[fit\]) were obtained by fitting ellipses to the images in each filter using the ISOPHOTE package within STSDAS[^2]. The detailed fitting procedure used is outlined in Chitre ([@chitre]). The radial distribution of the colour indices (Fig. \[brh\]) was derived from the surface brightness profiles. Fitting isophotes to the images reveals changing ellipticity and position angle throughout the body of the galaxy (refer Fig. \[fit\]). The luminosity profile is smooth except for small features at 5$^{\arcsec}$ and 10$^{\arcsec}$ in the optical bands.
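As a rough illustration of the profile extraction described above, one can azimuthally average an image in circular annuli. This is a deliberate simplification of the Jedrejewski ellipse fits actually used in the paper (which also solve for the ellipticity and position angle of each isophote), and all names here are ours:

```python
import numpy as np

def radial_profile(image, center, r_max, dr=1.0):
    """Azimuthally averaged radial profile in circular annuli -- a
    simplified stand-in for full isophote (ellipse) fitting."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    edges = np.arange(0.0, r_max + dr, dr)
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = image[(r >= lo) & (r < hi)]
        means.append(ring.mean() if ring.size else np.nan)
    return edges[:-1] + dr / 2.0, np.array(means)
```

Applying such a profile to each band, and differencing the magnitude-scale profiles, gives radial colour-index distributions of the kind shown in Fig. \[brh\].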
An inspection of Fig. \[brh\] shows that the galaxy is bluest near the center and gets redder outwards. The ellipticity of the elliptical feature is maximum at the center and goes on decreasing outwards, unlike a bar in which the ellipticity increases accompanied by a constant position angle. The ellipticity profile shows a double peaked structure in the inner region. The first peak is seen between 2$^{\arcsec}$-3$^{\arcsec}$ and the second peak at 5$^{\arcsec}$. The ellipticity of the first peak is wavelength dependent, the isophotes at shorter wavelengths being rounder. The colour map also shows a small local redder region between 3$^{\arcsec}$ and 4$^{\arcsec}$. The surface brightness profiles also show a small dip in the intensity at shorter wavelengths at 4$^{\arcsec}$. All these features indicate the presence of dust in the inner 4$^{\arcsec}$ of this galaxy. van den Bergh & Pierce ([@van]) do not find any trace of dust in Mkn 439 from a direct inspection of the $B$ band images on a CCD frame. However, ellipse fitting analysis has been successfully employed in the present study to infer the presence of dust in the inner regions of this galaxy based on multiband observations. The other peak occurs at 5$^{\arcsec}$, which corresponds to the brightest region seen in H$\alpha$. The depth of the dip between the two peaks reduces at longer wavelengths. The first peak and the dip are probably due to dust while the second one corresponds to the blue region at the end of the bar. Both these factors, namely dust and star forming regions, contribute the maximum at shorter wavelengths. At longer wavelengths, the effects of both dust and the star forming regions are reduced; hence we see the underlying old stellar population. As a result the depth of the dip is reduced at longer wavelengths. Beyond 5$^{\arcsec}$, the ellipticity starts dropping and reaches a value ($\sim$0.05) at 15$^{\arcsec}$ and remains at a low value beyond that in all filters.
Between 5$^{\arcsec}$ and 15$^{\arcsec}$, the isophotes at shorter wavelengths are rounder than the corresponding isophotes at longer wavelengths, indicating the presence of dust in this region of Mkn 439. The position angle is nearly constant in the inner 10$^{\arcsec}$. The luminosity profiles show an inner steeply rising part and an outer exponential disk. We derived the scale lengths of Mkn 439 in each of the filter bands. This was done by marking the disk and fitting an exponential to the surface brightness profile in this region. The range of fit was taken to be from 18$^{\arcsec}$ to the region where the signal falls to 2$\sigma$ of the background. The fit to the $H$ band is shown in Fig. \[fit\]. The scale lengths derived were 0.97$\pm 0.14$ kpc in $B$, 0.84$\pm 0.02$ kpc in $R$ and 0.61$\pm 0.03$ kpc in $H$ band.

Conclusions
===========

1. Mkn 439 is a peculiar galaxy made up of three distinct components: an elliptical structure in the inner regions, a smooth outer envelope in which this structure is embedded and a bar. We detect massive star formation along the bar in Mkn 439. This bar is misaligned with the main body of the galaxy.

2. The signature of the bar gets progressively fainter at longer wavelengths.

3. The stars in the bar are young and have not yet started influencing the light in the near infrared region. This indicates that the galaxy has undergone some perturbation which triggered the bar formation and the starburst along the bar in recent times.

4. There are indications for the presence of dust in the inner 15$^{\arcsec}$ of the galaxy.

We are grateful to the anonymous referee for useful suggestions. One of the authors (A. Chitre) wishes to thank Tommy Wiklind for useful discussions. The authors are thankful to Dr. K.S. Baliyan for helping with observations. This work was supported by the Department of Space, Government of India.
Balzano V.A., 1983, ApJ 268, 602 Bekki Kenji, 1998, ApJ 502, L133 Bender R., Möllenhoff C., 1987, A&A 177, 71 Chitre A., 1999, Ph.D. thesis, Gujarat University Chitre A., Joshi U.C., 1999, A&AS [*in press*]{} Combes F., 1994, in: [*The Formation and Evolution of Galaxies*]{}, V Canary Islands Winter School of Astrophysics, eds. C. Muñoz-Tuñon & F. Sánchez, Cambridge Univ. Press, p.359 Deutsch L.K., Willner S.P., 1987, ApJS, 63, 803 Devereux N.A., 1989, ApJ 346, 126 Ishizuki S., Kawabe R., Ishiguro M., et al., 1990, Nature 344, 224 Jedrejewski R.I., 1987, MNRAS 226, 747 Joshi U.C., et al., 1999, [*in preparation*]{} Jungweirt B., Combes F., Axon D.J., 1997, A&AS 125, 497 Landolt A.U., 1992, AJ 104, 340 Lo K.Y., Berge G.L., Claussen M.J., et al., 1984, ApJL 282, 59 Prieto M., Beckman J.E., Cepa J., et al., 1992a, A&A 257, 85 Prieto M., Longley D.P.T., Perez E., et al., 1992b, A&AS 93, 557 Rudnick G., Rix H., 1998, AJ, 116, 1163 Soifer B.T., Sanders D.B., Madore B.F., et al., 1987, ApJ 320, 238 Usui T., Saito M., Tomita A., 1998, AJ 116, 2166 van den Bergh S., Pierce M.J., 1990, ApJ 364, 444 Wiklind T., 1990, Ph.D. thesis, Chalmers University of Technology, Sweden Wiklind T., Henkel C., 1989, A&A 225, 1 Wozniak H., Friedli D., Martinet L., et al., 1995, A&AS 111, 115 [^1]: IRAF is distributed by National Optical Astronomy Observatories, which is operated by the Association of Universities Inc. (AURA) under cooperative agreement with the National Science Foundation, USA. [^2]: The Space Telescope Science Data Analysis System STSDAS is distributed by the Space Telescope Science Institute.
--- abstract: 'Algebraic spin liquids, which are exotic gapless spin states preserving all microscopic symmetries, have been widely studied due to potential realizations in frustrated quantum magnets and the cuprates. At low energies, such putative phases are described by quantum electrodynamics in $2+1$ dimensions. While significant progress has been made in understanding this nontrivial interacting field theory and the associated spin physics, one important issue which has proved elusive is the quantum numbers carried by so-called monopole operators. Here we address this issue in the “staggered-flux” spin liquid which may be relevant to the pseudogap regime in high-$T_c$. Employing general analytical arguments supported by simple numerics, we argue that proximate phases encoded in the monopole operators include the familiar Neel and valence bond solid orders, as well as other symmetry-breaking orders closely related to those previously explored in the monopole-free sector of the theory. Surprisingly, we also find that one monopole operator carries trivial quantum numbers, and briefly discuss its possible implications.' author: - Jason Alicea title: Monopole Quantum Numbers in the Staggered Flux Spin Liquid --- Introduction ============ When frustration or doping drives quantum fluctuations sufficiently strong to destroy symmetry-breaking order even at zero temperature, exotic ground states known as spin liquids emerge. “Algebraic spin liquids” comprise one class in which the spins appear “critical”, exhibiting gapless excitations and power-law correlations which, remarkably, can be unified for symmetry-unrelated observables such as magnetic and valence bond solid fluctuations. This unification of naively unrelated correlations is a particularly intriguing feature, in part because it constitutes a “smoking gun” prediction for the detection of such phases. 
While the unambiguous experimental observation of a quantum spin liquid (either gapless, or the related topological variety) remains to be fulfilled, there are a number of candidate materials which may host such exotic ground states. Recently the spin-1/2 kagome antiferromagnet *herbertsmithite* has emerged as a prominent example,[@kagomeExpt1; @kagomeExpt2; @kagomeExpt3; @kagomeDisorder1; @kagomeDisorder2; @kagomeNMR] and several gapless spin liquid proposals[@kagomeASLshort; @kagomeASLlong; @kagomeASLdisorder; @kagomeAVL; @kagomeFermiSurface], as well as a more conventional valence bond solid phase[@kagomeVBS1; @kagomeVBS2; @kagomeVBSseries1; @kagomeVBSseries2], have been put forth for this material. Furthermore, the cuprates have long been speculated to harbor physics connected to an algebraic spin liquid—the so-called “staggered-flux” state which we will focus on here—in the pseudogap regime of the phase diagram (for a recent comprehensive review, see Ref. ). On the theoretical end, our understanding of algebraic spin liquids has grown dramatically over the past several years. Such states are conventionally formulated in terms of fermionic, charge-neutral “spinon” fields coupled to a U(1) gauge field, whose low-energy dynamics is described by compact quantum electrodynamics in $2+1$ dimensions (QED3). Much effort has been focused on addressing two basic questions concerning these states. First, can they be stable? In more formal terms, is criticality in QED3 protected, or are there relevant perturbations allowed by symmetry which generically drive the system away from the critical fixed point? And second, if algebraic spin liquids are stable, what are the measurable consequences for the spin system? Both are nontrivial questions that require consideration of two classes of operators in QED3—those that conserve gauge flux such as spinon bilinears, and “monopole operators” that increment the gauge flux by discrete units of $2\pi$. 
While QED3 is known to be a strongly interacting field theory which lacks a free quasi-particle description, the theory can nevertheless be controlled by generalizing to a large number $N$ of spinon fields and performing an analysis in powers of $1/N$. Within such a large-$N$ approach, the answer to the first question has been rigorously shown to be ‘yes’—such phases can in principle be stable.[@U1stability] In particular, despite some controversy concerning the relevance of monopoles, it has now been established that such operators are strongly irrelevant in the large-$N$ limit, their scaling dimension growing linearly with $N$.[@BKW; @U1stability] Significant progress has also been made in addressing the second question, particularly in the monopole-free sector. The effective low-energy QED3 theory for algebraic spin liquids is known to possess much higher symmetry than that of the underlying microscopic spin Hamiltonian, leading to the remarkable unification of naively unrelated competing orders noted above. Furthermore, the machinery of the projective symmetry group[@QuantumOrder] allows one to establish how correlations of flux-conserving operators in QED3 relate to physical observables such as Neel or valence bond solid correlations[@MikeSF], and the large-$N$ analysis additionally provides quantitative predictions for the corresponding scaling dimensions[@RantnerWen]. The physical content of monopole operators in QED3, however, is much less understood. Essentially, the difficulty here is that, due to gauge invariance, determining monopole quantum numbers requires examination of full many-body spinon wavefunctions, rather than just a few low-energy single-particle states as suffices, say, for the spinon bilinears. 
Although the monopoles are highly irrelevant in the large-$N$ limit, their scaling dimensions may become of order unity for realistic values of $N$ (*e.g.*, $N = 4$ for the staggered-flux state), so understanding the competing orders encoded in these operators becomes an important and physically relevant issue. Moreover, since monopoles are allowed perturbations in compact QED3 which can in principle destroy criticality for small enough $N$, one would like to identify the leading symmetry-allowed monopole operators. Some progress on these issues has been made for gapless spin liquids on the triangular and kagome lattices[@FermVortSpin1; @AVLlong; @AVLonethird; @kagomeAVL; @kagomeASLlong], though in the important staggered-flux state the physics encoded in the monopoles remains a mystery[@MikeSF; @LeonSubir]. The goal of this paper is to generalize the techniques employed earlier in the former cases to deduce the monopole quantum numbers for the staggered-flux state and reveal the competing orders encoded in this sector of the theory. Assumptions and Strategy ------------------------ Let us at the outset discuss the core assumptions on which our quantum-number analysis will be based. First, we will assume that it is sufficient to study monopoles at the mean-field level. That is, we will treat the flux added by a monopole operator as a static background “felt” by the spinons. This is reasonable coming from the large-$N$ limit, where gauge fluctuations are strongly suppressed, and is in fact the standard approach adopted when discussing such flux insertions (see, *e.g.*, Ref. ). The second, and more crucial, assumption we employ is that the quantum numbers for the leading monopoles (those with the slowest-decaying correlations) can be obtained from the difference in quantum numbers between the mean-field ground states with and without the flux insertion. 
Put more physically, the leading monopole quantum numbers are taken to be the momentum, angular momentum, *etc.*, imparted to the spinon ground states upon flux insertion. The latter is equivalent to assuming that 1.) the flux insertion is “adiabatic” in the sense that the fermionic spinons remain in their relative ground state everywhere between the initial and final state and 2.) no Berry phases are accumulated during this evolution. The first point follows because if the fermions remain in their relative ground state before and after the flux insertion, then this ought to be true everywhere in between as well. Such an assumption is quite delicate given that the mean-field states we will study are gapless in the thermodynamic limit. We will not attempt to justify this point rigorously, but we note that treating the problem in this way is in the same spirit as the conventional mean-field treatment of flux insertions mentioned above. If invalid, then treating flux insertions as a static background in the first place may not be a very useful starting point for addressing this problem. Assuming no Berry phases is equally delicate. It is worth mentioning that this assumption is known to break down in certain cases. As an illustration, consider the following gauge theory on the square lattice, $$\begin{aligned} H &=& H_f + H_{G}, \\ H_{f} &=& v\sum_{\bf r}(-1)^{r_x+r_y}c^\dagger_{{\bf r}\alpha} c_{{\bf r}\alpha} \nonumber \\ &-& t\sum_{\langle{\bf r r'}\rangle}[c^\dagger_{{\bf r}\alpha} c_{{\bf r'}\alpha}e^{-i A_{\bf r r'}} + h.c.], \\ H_{G} &=& -K \sum_{\square}\cos(\Delta \times A) + \frac{h}{2}\sum_{\langle{\bf r r'}\rangle} E_{\bf r r'}^2, \label{HG}\end{aligned}$$ where $c_{{\bf r}\uparrow/\downarrow}$ are spinful fermionic operators, the first sum in Eq. 
(\[HG\]) represents a lattice curl summed over all plaquettes, and the divergence of the electric field $E_{\bf r r'}$ is constrained such that $$(\Delta\cdot E)_{\bf r} = 1-c^\dagger_{{\bf r}\alpha} c_{{\bf r}\alpha}.$$ The standard electric-magnetic duality can be applied in the limit $v/t\rightarrow \infty$,[@LeonMonopoles] in which case one obtains a pure gauge theory with $(\Delta\cdot E)_{\bf r} = (-1)^{r_x+r_y}$. Such an analysis reveals that the leading monopole operators carry nontrivial quantum numbers as a consequence of Berry phase effects,[@LeonMonopoles] even though the quantum numbers of the fermions clearly can not change in this limit. The root of these nontrivial quantum numbers can be traced to the fact that the electric field divergence changes sign between neighboring sites. If one alternatively considered a pure gauge theory with vanishing electric field divergence, then no such Berry phases would arise. Since in the staggered-flux state of interest the physical Hilbert space has exactly one fermion per site and thus a vanishing electric field divergence, we believe it is reasonable to suspect that Berry phases do not play a role there either. Given these assumptions, we will adopt the following strategy below. First, we will give a quick overview of the $\pi$-flux and staggered flux states, deriving a low-energy mean-field Hamiltonian for these states as well as the symmetry properties for the continuum fields. We will then consider $\pm 2\pi$ flux insertions, and in particular obtain the transformation properties for the four quasi-localized zero-modes which appear. Armed with this information, we will follow closely the monopole studies of Refs.  and , and constrain the monopole quantum numbers as much as possible using various symmetry relations which must generically hold on physical states, such as two reflections yielding the identity. 
The ambiguities that remain will be sorted out by appealing to general quantum number conservation and simple numerical diagonalization for systems with convenient geometries and gauge choices. This will allow us to unambiguously determine the monopole quantum numbers, subject to the above assumptions. We will then explore the competing orders encoded in the monopole operators, and close with a brief discussion of some outstanding questions. Preliminaries ============= Overview of $\pi$-flux and staggered-flux states ------------------------------------------------ Although we will ultimately be interested in exploring monopole quantum numbers in the staggered-flux state, we will use proximity to the $\pi$-flux state in our analysis and thus discuss both states here. Consider, then, a square-lattice antiferromagnet with Hamiltonian $$H = J \sum_{\langle {\bf r r'}\rangle} {\bf S}_{\bf r}\cdot {\bf S}_{\bf r'}. \label{spinH}$$ Mean field descriptions of the $\pi$-flux and staggered-flux states can be obtained from Eq. (\[spinH\]) by first decomposing the spin operators in terms of slave fermions via $${\bf S}_{\bf r} = \frac{1}{2}f^\dagger_{{\bf r}\alpha}{\bm \sigma}_{\alpha \beta} f_{{\bf r}\beta}, \label{S}$$ where ${\bm \sigma}$ is a vector of Pauli spin matrices and the fermions are constrained such that there is exactly one per site. As discussed in Refs. , there is an SU(2) gauge redundancy in this rewriting. The resulting bi-quadratic fermion Hamiltonian can then be decoupled using a Hubbard-Stratonovich transformation, giving rise to a simple free-fermion Hamiltonian at the mean-field level of the form $$H_{MF} = -t \sum_{\langle {\bf r r'}\rangle} [f_{{\bf r}\alpha}^\dagger f_{{\bf r'}\alpha} e^{-i a_{\bf r r'}} + \text{h.c.}].$$ The $\pi$-flux state corresponds to an ansatz where the fermions hop in a background of $\pi$ flux per plaquette; *i.e.*, $a_{\bf r r'}$ is chosen such that $(\Delta\times a) = \pi$ around each square. 
This state retains the full SU(2) gauge redundancy inherent in Eq. (\[S\]). As the name suggests, the staggered-flux state corresponds to an ansatz in which the fermions hop in flux which alternates in sign between adjacent plaquettes; *i.e.*, $(\Delta\times a) = \pm \Phi$, where $\Phi$ is the flux magnitude. Note that this ansatz reduces to the $\pi$-flux ansatz when $\Phi = \pi$ since $\pi$ flux and $-\pi$ flux are equivalent on the lattice. In contrast to the $\pi$-flux state, there is only a U(1) gauge redundancy remaining here. Note also that, despite appearances, staggering the flux does not break translation symmetry. Rather, translation and the other symmetries are realized nontrivially as a result of gauge redundancy—the operators transform under a projective symmetry group[@QuantumOrder]. Both ansatzes in fact preserve all microscopic symmetries of the original spin Hamiltonian, namely, $x$ and $y$ translations $T_{x,y}$, $\pi/2$ rotations about plaquette centers $R_{\pi/2}$, $x$-reflection about square lattice sites $R_x$, time reversal $\mathcal{T}$, and SU(2) spin symmetry. Notably, there is no symmetry leading to conservation of gauge flux, which is why monopole operators are in principle allowed perturbations. Continuum Hamiltonian and symmetry transformations -------------------------------------------------- To derive a continuum Hamiltonian and deduce how the fields transform under the microscopic symmetries, we will now choose a gauge and set $e^{i a_{\bf r r'}} = 1$ on vertical links and $e^{i a_{\bf r r'}} = (-1)^{y}$ on horizontal links. Although this corresponds to $\pi$ flux, the transformations for the staggered-flux state can still be readily obtained from this choice. 
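As a quick numerical sanity check on this gauge choice (our own illustrative sketch, not part of the original analysis; the hopping $t = 1$ and the two-site unit cell stacked along $y$ are conventions chosen here), one can verify that every plaquette carries $\pi$ flux and that the resulting two-band Bloch Hamiltonian is gapless at $\pm(\pi/2, \pi/2)$ with a linear Dirac dispersion:

```python
import numpy as np

t = 1.0  # hopping amplitude, set to unity for illustration

def plaquette_flux(x, y):
    """Lattice curl of the gauge field around the square at (x, y):
    vertical links carry no phase, horizontal links carry pi on odd rows."""
    a_h = lambda row: np.pi * (row % 2)       # (-1)^y written as e^{i pi y}
    return (a_h(y) - a_h(y + 1)) % (2 * np.pi)

def bloch_h(kx, ky):
    """2x2 Bloch Hamiltonian in this gauge, with a two-site unit cell
    stacked along y (sublattice 1 on even rows, sublattice 2 on odd rows)."""
    h11 = -2 * t * np.cos(kx)                 # horizontal hopping, phase +1
    h22 = +2 * t * np.cos(kx)                 # horizontal hopping, phase -1
    h12 = -t * (1 + np.exp(-2j * ky))         # the two vertical bonds
    return np.array([[h11, h12], [np.conj(h12), h22]])

gap = lambda kx, ky: np.min(np.abs(np.linalg.eigvalsh(bloch_h(kx, ky))))

print([plaquette_flux(x, y) for x in range(2) for y in range(2)])  # all pi
print(gap(np.pi / 2, np.pi / 2), gap(-np.pi / 2, -np.pi / 2))      # ~0, ~0
print(gap(np.pi / 2 + 1e-3, np.pi / 2) / 1e-3)                     # ~2t: Dirac slope
```

The spectrum $E({\bf k}) = \pm 2t\sqrt{\cos^2 k_x + \cos^2 k_y}$ obtained this way vanishes only at the two nodes, consistent with the continuum Dirac description.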
Furthermore, adopting this starting point yields the same continuum Hamiltonian as if we had chosen a staggered-flux pattern, up to irrelevant perturbations.[@MikeSF] To obtain the spectrum we take a two-site unit cell and label unit cells by vectors ${\bf R} = n_x {\bf \hat{x}}+ 2n_y {\bf \hat{y}}$ ($n_{x,y}$ are integers) which point to sites on sublattice 1; sublattice 2 is located at ${\bf R} + {\bf \hat{y}}$. We denote the spinon operators on the two sublattices by $f_{{\bf R}\alpha 1,2}$, where $\alpha$ labels spin. The band structure is straightforward to evaluate, and at the Fermi level one finds two Dirac points at momenta $\pm {\bf Q}$, with ${\bf Q} = (\pi/2,\pi/2)$. Focusing on low-energy excitations in the vicinity of these Dirac points, a continuum theory can be derived by expanding the lattice fermion operators as follows, $$\begin{aligned} f_{{\bf R}\alpha 1} &\sim& e^{i ({\bf Q}\cdot {\bf R}+\pi/4)}[\psi_{\alpha R 1} + \psi_{\alpha R2}] \nonumber \\ &+& e^{-i ({\bf Q}\cdot {\bf R}+\pi/4)}[\psi_{\alpha L 1} - \psi_{\alpha L 2}] \label{f1} \\ f_{{\bf R}\alpha 2} &\sim& e^{i ({\bf Q}\cdot {\bf R}+\pi/4)}[-\psi_{\alpha R 1} +\psi_{\alpha R2}] \nonumber \\ &+& e^{-i ({\bf Q}\cdot {\bf R}+\pi/4)}[\psi_{\alpha L 1} + \psi_{\alpha L 2}]. \label{f2}\end{aligned}$$ Here we have introduced four flavors of two-component Dirac fermions $\psi_{\alpha A}$, where $\alpha$ labels the spin and $A = R/L$ labels the node. We then obtain the continuum mean-field Hamiltonian $${\mathcal H}_{MF} \sim \int_{\bf x} -i v\psi^\dagger [\partial_x\tau^x + \partial_y \tau^y] \psi,$$ where $v \sim t$ is the Fermi velocity and $\tau^{a}_{jk}$ are Pauli matrices that contract with the Dirac indices. It is a straightforward exercise to deduce the transformation properties of continuum fields from Eqs. (\[f1\]) and (\[f2\]). 
For either the $\pi$-flux or staggered-flux states, these can be realized as follows: $$\begin{aligned} T_x &:& \psi \rightarrow -i \tau^x \sigma^y \mu^z [\psi^\dagger]^t \label{Tx} \\ T_y &:& \psi \rightarrow i \tau^x \sigma^y \mu^x [\psi^\dagger]^t \\ R_x &:& \psi \rightarrow -\mu^x \tau^y \psi \\ R_{\pi/2} &:& \psi \rightarrow e^{-i \frac{\pi}{4}\tau^z} e^{i \frac{\pi}{4}\mu^y}i \mu^x \psi \\ \mathcal{T} &:& \psi \rightarrow -i \mu^y \tau^z [\psi^\dagger]^t, \label{T}\end{aligned}$$ where in addition to the spin and Dirac matrices we have introduced Pauli matrices $\mu^a_{AB}$ that contract with the node indices. In the $\pi$-flux and staggered-flux cases, these transformations can be followed by an arbitrary SU(2) and U(1) gauge transformation, respectively. For the former, it will prove useful to consider a particle-hole gauge transformation $\mathcal{C}_G$ which is an element of the SU(2) gauge group and transforms the lattice fermion operators as $$\begin{aligned} f_{{\bf R}\alpha 1} &\rightarrow& e^{i \pi R_x} i\sigma^y_{\alpha\beta} f_{{\bf R}\beta 1}^\dagger \\ f_{{\bf R}\alpha 2} &\rightarrow& -e^{i \pi R_x} i\sigma^y_{\alpha\beta} f_{{\bf R}\beta 2}^\dagger .\end{aligned}$$ It follows that for the continuum fields we have $$\begin{aligned} \mathcal{C}_G &:& \psi \rightarrow \tau^x \sigma^y [\psi^\dagger]^t. \label{CG}\end{aligned}$$ We stress that in the staggered-flux state $\mathcal{C}_G$ reverses the sign of the flux microscopically and therefore does not represent a valid gauge transformation there. Flux insertion and zero-modes ----------------------------- Next we discuss the sector of the theory with $\pm2\pi$ flux inserted over a large area compared to the lattice unit cell. 
Treating the flux as a static background, the mean-field Hamiltonian then becomes $${\mathcal H}_{MF,q} = \int_{\bf x} -i v\psi^\dagger [(\partial_x-ia^q_x)\tau^x + (\partial_y -ia^q_y)\tau^y] \psi.$$ The vector potential is chosen such that $\nabla\times a^q = 2\pi q$, where $q = \pm 1$ is the monopole charge. It is well known that the above Hamiltonian admits one quasi-localized zero-mode for each fermion flavor,[@Jackiw] four in this case. These zero-modes can be obtained by replacing $\psi_{\alpha A}({\bf x}) \rightarrow \phi_{\alpha A,q}({\bf x}) d_{\alpha A,q}$, where $\phi_{\alpha A,q}({\bf x})$ is the quasi-localized wavefunction and $d_{\alpha A,q}$ annihilates the corresponding state. Employing the Coulomb gauge, the wave functions are simply $$\begin{aligned} \phi_{\alpha A,+} &\sim& \frac{1}{|{\bf x}|}\binom{1}{0} \\ \phi_{\alpha A,-} &\sim& \frac{1}{|{\bf x}|} \binom{0}{1}.\end{aligned}$$ It follows that the zero-mode operators $d_{\alpha A,q}$ transform in exactly the same way as $\psi_{\alpha A j}$, so the transformations can be read off from Eqs. (\[Tx\]) through (\[T\]) and (\[CG\]). For example, under reflections, we have $d_{\alpha R/L,q} \rightarrow i q d_{\alpha L/R,-q}$. Since gauge-invariant states are half-filled, two of the four zero-modes must be filled in the ground states here. Thus, it will be convenient to introduce the following short-hand notation: $$\begin{aligned} D_{1,q} &=& d_{\uparrow R,q}d_{\downarrow R,q} + d_{\uparrow L,q} d_{\downarrow L,q} \label{D1} \\ D_{2,q} &=& d_{\uparrow R,q}d_{\downarrow R,q} - d_{\uparrow L,q} d_{\downarrow L,q} \\ D_{3,q} &=& d_{\uparrow R,q}d_{\downarrow L,q} - d_{\downarrow R,q} d_{\uparrow L,q} \\ D_{4,q} &=& d_{\uparrow R,q} d_{\uparrow L,q} \\ D_{5,q} &=& d_{\uparrow R,q}d_{\downarrow L,q} + d_{\downarrow R,q} d_{\uparrow L,q} \\ D_{6,q} &=& -d_{\downarrow R,q} d_{\downarrow L,q}. \label{D6}\end{aligned}$$ Of these, $D_{1,2,3}$ are spin-singlets, while $D_{4,5,6}$ are spin triplets. 
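These singlet and triplet assignments can be checked mechanically in the 16-dimensional Fock space of the four zero-modes. The sketch below is our own verification (the Jordan-Wigner encoding and the mode ordering are implementation choices); it confirms that $D_{1,2,3}$ commute with the total spin operators, while $D_{4,5,6}$ carry $S^z = -1, 0, +1$ under the adjoint action (so that, e.g., $D_{4,q}^\dagger$ adds $S^z = +1$ by filling two spin-up modes):

```python
import numpy as np
from functools import reduce

I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])         # single-mode annihilator

def ann(i, nmodes=4):
    """Jordan-Wigner annihilator for mode i, ordering (uR, dR, uL, dL)."""
    return reduce(np.kron, [Z] * i + [a] + [I2] * (nmodes - i - 1))

uR, dR, uL, dL = (ann(i) for i in range(4))
num = lambda c: c.conj().T @ c
Sz = 0.5 * (num(uR) - num(dR) + num(uL) - num(dL))   # total S^z
Sp = uR.conj().T @ dR + uL.conj().T @ dL             # total S^+
comm = lambda A, B: A @ B - B @ A

D1 = uR @ dR + uL @ dL
D2 = uR @ dR - uL @ dL
D3 = uR @ dL - dR @ uL
D4 = uR @ uL
D5 = uR @ dL + dR @ uL
D6 = -dR @ dL

for D in (D1, D2, D3):       # singlets: commute with S^z, S^+, S^-
    assert np.allclose(comm(Sz, D), 0)
    assert np.allclose(comm(Sp, D), 0)
    assert np.allclose(comm(Sp.conj().T, D), 0)

# triplet: adjoint S^z charges are -1, 0, +1
print(np.allclose(comm(Sz, D4), -D4),
      np.allclose(comm(Sz, D5), 0 * D5),
      np.allclose(comm(Sz, D6), D6))      # True True True
```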
The transformation properties of these operators under the microscopic symmetries, as well as the gauge transformation $\mathcal{C}_G$ in the case of the $\pi$-flux state, are given in Table \[Dtable\]. Note that $\mathcal{C}_G$ changes the sign of the monopole charge $q$, indicating that the states with $+2\pi$ flux and $-2\pi$ flux are not physically distinct in the $\pi$-flux case. We will use this fact to infer which of the leading monopole operators have dominant amplitudes in the neighboring staggered-flux state in Sec. \[CompetingOrders\]. $T_x$ $T_y$ $R_x$ $R_{\pi/2}$ $\mathcal{T}$ ${\mathcal C}_G$ ------------------------ --------------------- --------------------- ------------- ---------------- -------------------- --------------------- $ D_{1,q} \rightarrow$ $-D_{1,-q}^\dagger$ $-D_{1,-q}^\dagger$ $-D_{1,-q}$ $i q D_{1,q}$ $-D_{1,q}^\dagger$ $D_{1,-q}^\dagger$ $ D_{2,q} \rightarrow$ $-D_{2,-q}^\dagger$ $D_{2,-q}^\dagger$ $D_{2,-q}$ $i q D_{3,q}$ $D_{2,q}^\dagger$ $D_{2,-q}^\dagger$ $ D_{3,q} \rightarrow$ $D_{3,-q}^\dagger$ $-D_{3,-q}^\dagger$ $-D_{3,-q}$ $i q D_{2,q}$ $D_{3,q}^\dagger$ $D_{3,-q}^\dagger$ $ D_{4,q} \rightarrow$ $-D_{6,-q}^\dagger$ $-D_{6,-q}^\dagger$ $D_{4,-q}$ $-i q D_{4,q}$ $-D_{4,q}^\dagger$ $-D_{6,-q}^\dagger$ $ D_{5,q} \rightarrow$ $-D_{5,-q}^\dagger$ $-D_{5,-q}^\dagger$ $D_{5,-q}$ $-i q D_{5,q}$ $-D_{5,q}^\dagger$ $-D_{5,-q}^\dagger$ $ D_{6,q} \rightarrow$ $-D_{4,-q}^\dagger$ $-D_{4,-q}^\dagger$ $D_{6,-q}$ $-i q D_{6,q}$ $-D_{6,q}^\dagger$ $-D_{4,-q}^\dagger$ : \[Dtable\] Transformation properties of the operators $D_{j,q}$ defined in Eqs. (\[D1\]) through (\[D6\]) which fill two of the four zero-modes in the presence of a $2\pi q$ flux insertion. The gauge transformation $\mathcal{C}_G$ applies only in the $\pi$-flux state. We pause now to comment in greater detail on the subtlety with determining the staggered-flux monopole quantum numbers. 
Naively, one might suspect that these can be inferred from the transformation properties of the zero-modes, which we have at hand. Realizing the microscopic symmetries, however, generically requires gauge transformations, which leads to inherent ambiguities in how the fields transform. In particular, for the staggered-flux case, there is an arbitrary overall U(1) phase in the transformations quoted in Table \[Dtable\], and a still greater ambiguity in the $\pi$-flux state due to its larger SU(2) gauge group. But the monopole operators are gauge-invariant, so one must instead examine the symmetries of the full many-body wavefunctions, which are gauge invariant, rather than of the gauge-dependent single-particle states. In what follows we will first deduce the transformation properties of flux insertion operators $\Phi^\dagger_{j,q}$ which add $2\pi q$ flux to the ground state and fill two of the zero modes, $$\Phi^\dagger_{j,q} = D_{j,q}^\dagger|q \rangle\langle0|. \label{halfmonopole}$$ Here $|q\rangle$ represents the filled Dirac sea in the presence of $2\pi q$ flux with all four zero-modes empty and $|0\rangle$ is the ground state in the absence of a flux insertion. The monopoles we will ultimately be interested in will be simply related to these objects. Once we know the transformation properties of $\Phi^\dagger_{j,q}$ it will be trivial to read off the monopole quantum numbers. Quantum Number Determination ============================ Symmetry relations ------------------ As a first step, we will now constrain the quantum numbers of the operators $\Phi_{j,q}^\dagger$ defined above using various symmetry relations which must hold when acting on gauge-invariant states. 
In particular, we will utilize the following, $$\begin{aligned} (R_x)^2 &=& 1 \label{2reflections} \\ T_xT_y &=& T_y T_x \label{TxTy} \\ R_x T_y &=& T_y R_x \label{RxTy} \\ T_y R_{\pi/2} &=& R_{\pi/2} T_x \label{TxyRelation}\end{aligned}$$ Furthermore, all lattice symmetries must commute with time-reversal (when acting on gauge-invariant states). Quite generally, we expect the following transformations to hold, $$\begin{aligned} T_{x,y} &:& |q\rangle\langle0| \rightarrow e^{i \varphi^{q}_{x,y}}[\prod_{\alpha A}d_{\alpha A,-q}^\dagger]|-q\rangle\langle0| \label{Txy} \\ R_{x} &:& |q\rangle\langle0| \rightarrow e^{i \theta^{q}_{x}}|-q\rangle\langle0| \label{Rx} \\ R_{\pi/2} &:& |q\rangle\langle0| \rightarrow e^{i \theta^{q}_{\pi/2}}|q\rangle\langle0| \label{rot} \\ \mathcal{T} &:& |q\rangle\langle0| \rightarrow [\prod_{\alpha A}d_{\alpha A,q}^\dagger]|q\rangle\langle0| \\ \mathcal{C}_G &:& |q\rangle\langle0| \rightarrow e^{i \theta^{q}_{G}}[\prod_{\alpha A}d_{\alpha A,-q}^\dagger]|-q\rangle\langle0|. \label{CG2}\end{aligned}$$ The last transformation holds only for the $\pi$-flux state. All phases introduced above are arbitrary at this point, but will be constrained once we impose symmetry relations on gauge-invariant states which have two of the zero-modes filled. Moreover, since time-reversal is anti-unitary, we have chosen the phases of $|q\rangle$ such that no additional phase factor appears under this symmetry. Consider reflections first. Equation (\[2reflections\]) and commutation with time-reversal imply that $e^{i\theta^q_x} = s$, for some $q$-independent sign $s$. The value of $s$ is insignificant, however, since we can always remove it by sending $|+\rangle \rightarrow s |+\rangle$. Hence we will take $$e^{i\theta^q_x} = 1.$$ For translations, Eqs. (\[TxTy\]) and (\[RxTy\]), as well as commutation with time-reversal, yield $$e^{i \varphi^q_{x,y}} = s_{x,y},$$ for some unknown signs $s_{x,y}$. Similarly, Eq. 
(\[TxyRelation\]) and commutation with time-reversal allow us to determine $\theta^q_{\pi/2}$ up to signs $s^q_{\pi/2}$: $$\begin{aligned} e^{i \theta^q_{\pi/2}} &=& i s^q_{\pi/2}, \\ s^+_{\pi/2}s^-_{\pi/2} &=& -s_x s_y.\end{aligned}$$ Let us turn now to the $\pi$-flux state, where the mean-field Hamiltonian is invariant under the particle-hole transformation $\mathcal{C}_G$ as well. For the moment we will treat this operation like the other physical symmetries, which is merely a convenient trick for backing out the quantum numbers of interest for the staggered-flux state. In particular, we will assert that $\mathcal{C}_G^2 = 1$ and that this particle-hole transformation commutes with the physical symmetries when acting on half-filled states. This yields $$e^{i \theta^q_G} = s_G,$$ for an undetermined sign $s_G$, and also gives the useful constraint $$s^+_{\pi/2} = - s^-_{\pi/2}.$$ It follows from the last equation that $$s_x = s_y.$$ Since the staggered-flux mean-field continuously connects to the $\pi$-flux ansatz, we will assume that the latter two constraints hold in the staggered-flux case as well. (We could alternatively obtain this result using the numerics from the next section, without appealing to the $\pi$-flux state.) To recap, in our study of the flux-insertion operators $\Phi^\dagger_{j,q}$ thus far, we have shown that symmetry relations highly constrain how these objects transform, and proximity to the $\pi$-flux state constrained these transformations even further. All that remains to be determined are the signs $s_x$ and $s^+_{\pi/2}$ which appear under $x$-translations and $\pi/2$ rotations. In the following section we argue that these can be obtained by employing general quantum number conservation arguments supported by simple numerical diagonalization. 
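As an algebraic aside (our own consistency check, not part of the original analysis), the matrix parts of the continuum transformations in Eqs. (\[Tx\]) through (\[T\]) can be multiplied out explicitly, with the node ($\mu$) and Dirac ($\tau$) Pauli matrices acting on independent tensor factors and spin suppressed. The reflection matrix squares to the identity, consistent with Eq. (\[2reflections\]), while four successive $\pi/2$ rotations give $-1$ on the gauge-dependent fermion fields, illustrating their projective realization:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I4 = np.eye(4)

mu = lambda s: np.kron(s, np.eye(2))      # node (mu) tensor factor
tau = lambda s: np.kron(np.eye(2), s)     # Dirac (tau) tensor factor

def pexp(theta, P):
    """exp(i*theta*P) for any P with P^2 = identity."""
    return np.cos(theta) * np.eye(len(P)) + 1j * np.sin(theta) * P

Rx = -mu(sx) @ tau(sy)                    # R_x: psi -> -mu^x tau^y psi
Rrot = pexp(-np.pi / 4, tau(sz)) @ pexp(np.pi / 4, mu(sy)) @ (1j * mu(sx))

print(np.allclose(Rx @ Rx, I4))                           # True: (R_x)^2 = 1
print(np.allclose(np.linalg.matrix_power(Rrot, 4), -I4))  # True: -1 on psi
```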
Numerical Diagonalization ------------------------- To determine the remaining signs $s_x$ and $s^+_{\pi/2}$, we will now present our numerical diagonalization study of the mean-field Hamiltonian with and without a flux insertion, and describe a more intuitive quantum number conservation argument which is consistent with these numerics. The basic idea behind our numerics is that we will judiciously choose the system geometry and gauge such that the symmetry under consideration can be realized without implementing a gauge transformation. This is a crucial point, as only in this case can we avoid overall phase ambiguities that would otherwise appear in such a mean-field treatment. Once the single-particle wavefunctions with and without a flux insertion are at hand, one can proceed to deduce the transformation properties of the corresponding many-body wavefunctions and, in turn, the flux-insertion operators $\Phi^\dagger_{j,q}$ by using the results of the previous section. Consider first $\pi/2$ rotations. Here we diagonalize the mean-field Hamiltonian in a square $L$ by $L$ system with open boundary conditions and $L$ odd so that the system is invariant under $\pi/2$ rotations about the central plaquette’s midpoint. For all flux configurations we choose a rotationally symmetric gauge so that $\pi/2$ rotations are realized trivially. We work in the $\pi$-flux ansatz for simplicity, though staggering the flux can easily be done and clearly does not change any of the results. Flux is inserted over the few innermost “rings” of the system, and the “zero-modes” that appear quasi-localized around the $2\pi$ flux can be unambiguously identified by examining the spread of their wave functions. (The “zero-modes” here are pushed away from zero energy due to finite-size effects; for each spin, one is pushed to higher energy while the other to lower energy.) 
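A bare-bones version of such a diagonalization is sketched below (our own illustration, not the authors' code; the lattice size, smearing radius, spinless treatment, and the midpoint rule for the link phases are all choices made here, and spin would simply double every level). It threads $2\pi$ flux smeared over a disk at the center of an open $\pi$-flux lattice, checks that the inserted flux indeed sums to $2\pi$, and prints the low-lying spectrum, in which two quasi-localized modes split symmetrically about zero energy appear inside the no-flux finite-size gap:

```python
import numpy as np

L, t, R = 14, 1.0, 3.0            # lattice size, hopping, flux-smearing radius
c = (L - 1) / 2.0                 # flux tube centered on the middle plaquette
idx = lambda x, y: x * L + y

def a_link(x1, y1, x2, y2):
    """Midpoint-rule phase on link (x1,y1)->(x2,y2) from a 2*pi flux tube
    of radius R centered at (c, c)."""
    mx, my = (x1 + x2) / 2 - c, (y1 + y2) / 2 - c
    r = np.hypot(mx, my)
    a_theta = r / R**2 if r < R else 1.0 / r   # azimuthal |A|, total flux 2*pi
    return a_theta / r * (-my * (x2 - x1) + mx * (y2 - y1))

def hamiltonian(flux):
    """Spinless pi-flux hopping problem with open boundaries;
    flux=True threads the extra smeared 2*pi flux."""
    H = np.zeros((L * L, L * L), complex)
    for x in range(L):
        for y in range(L):
            if x + 1 < L:   # horizontal bond carries the (-1)^y pi-flux phase
                H[idx(x, y), idx(x + 1, y)] = \
                    -t * (-1) ** y * np.exp(1j * flux * a_link(x, y, x + 1, y))
            if y + 1 < L:   # vertical bond
                H[idx(x, y), idx(x, y + 1)] = \
                    -t * np.exp(1j * flux * a_link(x, y, x, y + 1))
    return H + H.conj().T

# inserted flux = sum of plaquette curls: close to 2*pi
total = sum(a_link(x, y, x + 1, y) + a_link(x + 1, y, x + 1, y + 1)
            - a_link(x, y + 1, x + 1, y + 1) - a_link(x, y, x, y + 1)
            for x in range(L - 1) for y in range(L - 1))
print(total / (2 * np.pi))        # ~1

e0 = np.linalg.eigvalsh(hamiltonian(False))
e1 = np.linalg.eigvalsh(hamiltonian(True))
mid = L * L // 2
print(e0[mid - 2: mid + 2])       # no flux: finite-size gap around E = 0
print(e1[mid - 2: mid + 2])       # with flux: two modes pulled toward E = 0
```

The bipartite structure of the hopping problem guarantees a spectrum symmetric about zero with or without the flux, mirroring the statement that one quasi-zero mode is pushed up and one down.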
We consider a variety of system sizes, with up to roughly 1000 lattice sites, and obtain consistent results in all cases examined. (More details on these numerics can be found in Ref. , which carried out a similar study on the triangular lattice.) In particular, by considering the six ways of filling the zero-modes, we find numerically that there are four $-1$ and two $+1$ rotation eigenvalues for the operators $\Phi^\dagger_{j,+}$. To then back out the sign $s^+_{\pi/2}$, we use Eq. (\[rot\]) and Table \[Dtable\] to show that these operators must have four $-s^+_{\pi/2}$ and two $+s^+_{\pi/2}$ rotation eigenvalues. It immediately follows that $$s^+_{\pi/2} = 1.$$ Actually, one can recover this result without resorting to numerics using the following argument. Note first that the quantum numbers for each single-particle state must be identical for the two spin species. Assume that as flux is inserted, no single-particle levels cross zero energy, as is typically the case in our observations. The quantum numbers for the states below zero-energy are then conserved under flux insertion. For simplicity, let us assume that the half-filled state $|0\rangle$ with no added flux carries trivial quantum numbers (which is by no means essential). This implies that if for each spin the lower zero-mode (*i.e.*, the one pushed downward in energy due to finite-size effects) has eigenvalue $e^{i \alpha_{\pi/2}}$ under rotation, then, for each spin, the remaining negative-energy states must combine to give eigenvalue $e^{-i \alpha_{\pi/2}}$. Denote the upper zero-mode eigenvalue for each spin by $e^{i\beta_{\pi/2}}$. One can then easily show that under rotation, the operators $\Phi_{j,+}^\dagger$ must have one trivial eigenvalue, one eigenvalue $e^{2i(\beta_{\pi/2}-\alpha_{\pi/2})}$, and four eigenvalues $e^{i(\beta_{\pi/2}-\alpha_{\pi/2})}$. The only consistent possibility is for $e^{i(\beta_{\pi/2}-\alpha_{\pi/2})} = -1$, which yields $s^+_{\pi/2} = 1$ as deduced from numerics. 
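The counting in this argument is easy to reproduce mechanically. In the sketch below (our own; the overall phase $\alpha$ is arbitrary, and $e^{i(\beta-\alpha)} = -1$ is put in by hand as the consistency requirement), the six ways of filling two of the four zero-modes yield exactly the four $-1$ and two $+1$ rotation eigenvalues found numerically:

```python
import numpy as np
from itertools import combinations

alpha = 0.3                       # arbitrary phase of the lower zero-modes
beta = alpha + np.pi              # consistency forces e^{i(beta-alpha)} = -1

# rotation phases of the four zero-modes: lower/upper for each spin
modes = {"l_up": alpha, "u_up": beta, "l_dn": alpha, "u_dn": beta}

# the remaining filled sea contributes e^{-2 i alpha} (e^{-i alpha} per spin),
# fixed by demanding that filling both lower modes stays trivial
sea = np.exp(-2j * alpha)

eigs = [sea * np.exp(1j * (modes[m1] + modes[m2]))
        for m1, m2 in combinations(modes, 2)]
print(sorted(round(e.real, 8) for e in eigs))   # four -1's and two +1's
```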
Deducing the sign $s_x$ is more delicate. To this end we consider the composite operation $R_x T_x$, which is convenient since it does not change the sign of the flux inserted. This combination does, however, require a particle-hole transformation, so we can not simply read off the eigenvalues of the half-filled states from numerics as we did for the rotations. An argument similar to the one raised in the previous paragraph does nevertheless allow us to make progress. As before, we consider a finite-size system where $R_x T_x$ is a well-defined symmetry. A system with periodic boundary conditions along the $x$-direction and hard-wall along the $y$-direction is particularly convenient since one can then insert $2\pi$ flux without any difficulty. To make the eigenvalues well-defined here, we must imagine this flux being inserted slowly so that we can monitor the wavefunction continuously during the evolution. Assuming no zero-energy level crossings (this has been verified in most cases; see below), then there must be at least one half-filled state with two zero-modes filled that carries the same quantum numbers as the original half-filled ground state before the flux insertion. In particular, both states must be spin singlets. Now, using Eqs. (\[Txy\]) and (\[Rx\]) along with Table \[Dtable\], one can readily show that the spin singlet operators $\Phi_{1,2,3;q}^\dagger$ all have eigenvalue $s_x$ under $R_x T_x$. So we conclude that $$s_x = 1.$$ Although we have now fully determined the transformation properties of the flux insertion operators $\Phi_{j,q}^\dagger$, it will be useful to specialize to the $\pi$-flux state and deduce the sign $s_G$ that appears under the particle-hole transformation $\mathcal{C}_G$. For this purpose we consider the combination $T_x \mathcal{C}_G$, which is a simple translation whose eigenvalues are easy to determine numerically. 
As above, we consider an $L_x$ by $L_y$ system with periodic boundary conditions along the $x$-direction and hard-wall along the $y$-direction, and choose the Landau gauge for all flux configurations so that $T_x \mathcal{C}_G$ can be realized without a gauge transformation. Flux insertions are placed uniformly over several consecutive rows midway between the hard walls. We restrict ourselves to the case where $L_x/2$ is odd, since the “zero-modes” that appear in the presence of $2\pi$ flux can be unambiguously identified for such systems. As in our analysis of rotations, we examine the six ways of filling the two zero-modes, and find numerically that there are four $-1$ and two $+1$ eigenvalues under $T_x \mathcal{C}_G$ for the operators $\Phi_{j,q}^\dagger$. Using Eqs. (\[Txy\]) and (\[CG2\]) and Table \[Dtable\], one can also deduce from our earlier results that these operators must have four $s_G$ and two $-s_G$ eigenvalues under $T_x\mathcal{C}_G$, implying that $$s_G = -1.$$ Note that we have confirmed here that typically there are indeed no zero-energy level crossings during flux insertion. Moreover, the sign $s_G$ can be recovered without numerics using the same logic as we outlined for rotations, though we will not repeat the argument here. The transformation properties for the flux-insertion operators $\Phi^\dagger_{j,q}$ under all symmetries are summarized in Table \[Ftable\]. Definition of monopole operators -------------------------------- We will now define the monopole operators as follows, $$\begin{aligned} M_1^\dagger &=& \Phi^\dagger_{1,+} + \Phi_{1,-} \label{M1} \\ M_2^\dagger &=& \Phi^\dagger_{2,+} - \Phi_{2,-} \\ M_3^\dagger &=& \Phi^\dagger_{3,+} - \Phi_{3,-} \\ M_4^\dagger &=& \Phi^\dagger_{4,+} + \Phi_{6,-} \\ M_5^\dagger &=& \Phi^\dagger_{5,+} + \Phi_{5,-} \\ M_6^\dagger &=& \Phi^\dagger_{6,+} + \Phi_{4,-}. 
\label{M6}\end{aligned}$$ We have organized these “ladder” operators such that the monopoles add the same quantum numbers when acting on ground states within the $q = 0,\pm 1$ monopole charge sectors. For instance, $M_4^\dagger$ adds $S^z = 1$ by filling two spin-up zero-modes when acting on $|0\rangle$ and by annihilating two spin-down zero-modes when acting on $D^\dagger_{6,-}|-\rangle$. Furthermore, these operators have been defined so that they transform into one another under the emergent SU(4) symmetry enjoyed by the critical theory[@MikeSF], implying that all six have the same scaling dimension. Thus the various competing orders captured by the monopoles are unified, just as is the case for those encoded in the spinon bilinears whose correlations are enhanced by gauge fluctuations[@MikeSF]. Again, this constitutes a highly nontrivial, and in principle verifiable, experimental prediction which we will elucidate further below. Before exploring the competing orders, we note that there is another important set of related operators that one should consider, which are the following composites involving the monopole charge operator $Q$, $$\mathcal{M}_j^\dagger = \{M_j^\dagger,Q\}.$$ \[Such operators effectively send $\Phi_{j,-} \rightarrow -\Phi_{j,-}$ in Eqs. (\[M1\]) through (\[M6\]).\] Our analysis thus far does not enable us to distinguish which of these two sets of operators dominates at the staggered-flux fixed point. The following argument, however, suggests that both sets have the same scaling dimension. Consider the current $J^\mu = \frac{1}{4\pi}\epsilon^{\mu\nu\rho}F_{\nu\rho}$, where $F_{\nu\rho}$ is the field-strength tensor. The monopole charge operator is given by an integral over $J^0$: $$Q = \int dx dy J^0,$$ which clearly yields an integer $q$ if there is $2\pi q$ flux present. To all orders in $1/N$, $J^\mu$ scales like an inverse length squared,[@BKW] implying that $Q$ has zero scaling dimension. 
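The integer quantization of $Q$ is also easy to verify on the lattice (a minimal sketch of our own; the smeared flux-tube profile, grid size, and midpoint rule are arbitrary discretization choices): summing the lattice curl of the inserted gauge field over all plaquettes returns $q$ to good accuracy.

```python
import numpy as np

N, R, c = 20, 3.0, 9.5        # plaquettes per side, smearing radius, tube center

def a_link(q, x1, y1, x2, y2):
    """Midpoint-rule phase on link (x1,y1)->(x2,y2) for a flux tube
    carrying total flux 2*pi*q, smeared over radius R around (c, c)."""
    mx, my = (x1 + x2) / 2 - c, (y1 + y2) / 2 - c
    r = np.hypot(mx, my)
    a_theta = q * (r / R**2 if r < R else 1.0 / r)
    return a_theta / r * (-my * (x2 - x1) + mx * (y2 - y1))

def Q(q):
    """(1/2pi) x (sum of plaquette curls): a lattice stand-in for the
    integral of J^0 over the plane."""
    return sum(a_link(q, x, y, x + 1, y) + a_link(q, x + 1, y, x + 1, y + 1)
               - a_link(q, x, y + 1, x + 1, y + 1) - a_link(q, x, y, x, y + 1)
               for x in range(N) for y in range(N)) / (2 * np.pi)

print(Q(+1), Q(-1))           # ~ +1 and ~ -1
```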
Typically knowing the scaling dimension of two operators is not sufficient to determine the scaling dimension of the composite. However, since $Q$ is not a local operator, but rather an integral of a charge density, the scaling dimensions for the composites $M_j^\dagger Q$ are additive. Thus the scaling dimensions for $M_j$ and $\mathcal{M}_j$ should be equal. $T_x$ $T_y$ $R_x$ $R_{\pi/2}$ $\mathcal{T}$ ${\mathcal C}_G$ ----------------------------------- ------------------------ ------------------------ ------------------------ ----------------------- ----------------------- ------------------------ $ \Phi_{1,q}^\dagger \rightarrow$ $-\Phi_{1,-q}^\dagger$ $-\Phi_{1,-q}^\dagger$ $-\Phi_{1,-q}^\dagger$ $\Phi_{1,q}^\dagger$ $-\Phi_{1,q}^\dagger$ $-\Phi_{1,-q}^\dagger$ $ \Phi_{2,q}^\dagger \rightarrow$ $\Phi_{2,-q}^\dagger$ $-\Phi_{2,-q}^\dagger$ $\Phi_{2,-q}^\dagger$ $\Phi_{3,q}^\dagger$ $-\Phi_{2,q}^\dagger$ $\Phi_{2,-q}^\dagger$ $ \Phi_{3,q}^\dagger \rightarrow$ $-\Phi_{3,-q}^\dagger$ $\Phi_{3,-q}^\dagger$ $-\Phi_{3,-q}^\dagger$ $\Phi_{2,q}^\dagger$ $-\Phi_{3,q}^\dagger$ $\Phi_{3,-q}^\dagger$ $ \Phi_{4,q}^\dagger \rightarrow$ $-\Phi_{4,-q}^\dagger$ $-\Phi_{4,-q}^\dagger$ $\Phi_{4,-q}^\dagger$ $-\Phi_{4,q}^\dagger$ $-\Phi_{6,q}^\dagger$ $\Phi_{4,-q}^\dagger$ $ \Phi_{5,q}^\dagger \rightarrow$ $-\Phi_{5,-q}^\dagger$ $-\Phi_{5,-q}^\dagger$ $\Phi_{5,-q}^\dagger$ $-\Phi_{5,q}^\dagger$ $-\Phi_{5,q}^\dagger$ $\Phi_{5,-q}^\dagger$ $ \Phi_{6,q}^\dagger \rightarrow$ $-\Phi_{6,-q}^\dagger$ $-\Phi_{6,-q}^\dagger$ $\Phi_{6,-q}^\dagger$ $-\Phi_{6,q}^\dagger$ $-\Phi_{4,q}^\dagger$ $\Phi_{6,-q}^\dagger$ : \[Ftable\] Transformation properties of the flux-insertion operators $\Phi^\dagger_{j,q}$. The gauge transformation $\mathcal{C}_G$ applies only in the $\pi$-flux state. 
Competing Orders Encoded in Monopoles {#CompetingOrders} ===================================== Now that we have all transformation properties for the flux-insertion operators $\Phi^\dagger_{j,q}$, we can finally deduce the quantum numbers of the six monopole operators defined in Eqs. (\[M1\]) through (\[M6\]) and explore the competing orders encoded in this sector of the theory. To this end, we will examine in detail the quantum numbers carried by the 12 Hermitian operators $M_{j}^\dagger + M_j$ and $i (M_j^\dagger-M_j)$. These are summarized in Table \[HermitianMtable\], which is the main result of this paper. (The quantum numbers carried by the Hermitian operators constructed from $\mathcal{M}_j$ can be trivially obtained from these, and we will only comment on such operators briefly at the end.) In contrast to the monopole scaling dimensions, the amplitudes for their correlations are non-universal and will only be related where required by symmetry. We can gain some intuition for which operators have the dominant amplitudes, at least for weak staggering of the flux, by examining their quantum numbers under the particle-hole gauge transformation $\mathcal{C}_G$ in the $\pi$-flux ansatz. Those which are even under this operation will survive projection into the physical Hilbert space, and are thus expected to have the largest amplitudes in the staggered-flux case as well. Those which are odd vanish upon projection and should have suppressed amplitudes. In passing we note that a similar analysis may provide useful, though non-universal, information for the flux-conserving operators as well. The first six Hermitian monopole operators listed in Table \[HermitianMtable\] are expected to have dominant amplitudes by the above logic, while the latter six should be suppressed. We proceed now to discuss the results, comparing with previous results for the well-studied monopole-free sector[@MikeSF] where appropriate. 
Momentum $(k_x,k_y)$ $R_x$ $R_{\pi/2}$ $\mathcal{T}$ Spin Meaning -------------------------- ---------------------- ------- ------------------------------- --------------- --------- ------------------------------------------------ $ i {M}_{1} + h.c.$ $(0,0)$ $1$ $1$ $1$ Singlet Allowed perturbation $ i{M}_{2} + h.c.$ $(0,\pi)$ $1$ $\rightarrow i{M}_{3} + h.c.$ $1$ Singlet VBS $ i{M}_{3} + h.c.$ $(\pi,0)$ $-1$ $\rightarrow iM_2 + h.c.$ $1$ Singlet VBS $ ({M}_{4}-M_6) + h.c.$ $(\pi,\pi)$ $1$ $-1$ $-1$ Triplet Neel $ {M}_{5} + h.c.$ $(\pi,\pi)$ $1$ $-1$ $-1$ Triplet Neel $ i(M_4+{M}_{6}) + h.c.$ $(\pi,\pi)$ $1$ $-1$ $-1$ Triplet Neel $ M_{1} + h.c.$ $(\pi,\pi)$ $-1$ $1$ $-1$ Singlet $(\pi,\pi)$ component of scalar spin chirality $ M_{2} + h.c.$ $(\pi,0)$ $-1$ $\rightarrow M_3 + h.c.$ $-1$ Singlet $(0,\pi)$ component of skyrmion density $ {M}_{3} + h.c.$ $(0,\pi)$ $1$ $\rightarrow M_2 + h.c.$ $-1$ Singlet $(\pi,0)$ component of skyrmion density $ i({M}_{4}-M_6) + h.c.$ $(0,0)$ $-1$ $-1$ $1$ Triplet uniform vector spin chirality $ i {M}_{5} + h.c.$ $(0,0)$ $-1$ $-1$ $1$ Triplet uniform vector spin chirality $ (M_4+{M}_{6}) + h.c.$ $(0,0)$ $-1$ $-1$ $1$ Triplet uniform vector spin chirality The first operator in Table \[HermitianMtable\], interestingly, is a singlet that carries *no* nontrivial quantum numbers, and thus constitutes an allowed perturbation to the Hamiltonian; we discuss possible implications of this in the next section. Note that there is no symmetry-equivalent operator in the set of fermionic spinon bilinears, all of which carry nontrivial quantum numbers[@MikeSF]. As an aside we comment that naively it may appear, given our quantum-number-conservation argument employed earlier, that having one singlet monopole operator carrying no quantum numbers is generic. We stress that this is not the case. We applied this argument in different geometries, which were designed so that the symmetry under consideration was realized in a particularly simple way. 
Within each geometry, there must be one singlet flux insertion which transforms trivially as claimed. But there are three such singlet operators, so the same one need not transform trivially in all cases. Indeed, similar arguments applied to monopoles on the triangular lattice yield no such operators carrying trivial quantum numbers.[@AVLlong] Remarkably, the next five operators encode perhaps the most natural phases for the square-lattice antiferromagnet—valence bond solid (VBS) and Neel orders. We find it quite encouraging that these appear as the dominant nearby orders in our analysis. Both VBS and Neel fluctuations are also captured by enhanced fermion bilinears, which are labeled $N_{C}^{1,2}$ and ${\bf N}_A^3$, respectively, in Ref. . It is intriguing to note that a recent study that neglected monopoles but took into account short-range fermion interactions found that the staggered-flux spin liquid may be unstable towards an SO(5)-symmetric fixed point, at which Neel and VBS correlations were unified.[@CenkeSO5] In light of our results, it would be interesting to revisit that work with the inclusion of monopoles, which for the physical value $N = 4$ may also play an important role. The remaining six operators in the table are expected to have suppressed amplitudes compared to the operators discussed above. The first of these transforms microscopically like $$\begin{aligned} M_1 + h.c. &\sim& (-1)^{r_x+r_y}[{\bf S}_{a}\cdot({\bf S}_{b}\times {\bf S}_c) - {\bf S}_{b}\cdot({\bf S}_{c}\times {\bf S}_d) \nonumber \\ &+& {\bf S}_{c}\cdot({\bf S}_{d}\times {\bf S}_a) - {\bf S}_{d}\cdot({\bf S}_{a}\times {\bf S}_b) ],\end{aligned}$$ where ${\bf S}_{a} = {\bf S}_{\bf r-\hat{y}}$, ${\bf S}_{b} = {\bf S}_{\bf r + \hat{x}}$, ${\bf S}_{c} = {\bf S}_{\bf r + \hat{y}}$, and ${\bf S}_{d} = {\bf S}_{\bf r - \hat{x}}$. This operator represents the $(\pi,\pi)$ component of the scalar spin chirality. 
Apart from the finite momentum carried, $M_1 + h.c.$ carries the same quantum numbers as the enhanced fermion bilinear denoted $M$ in Ref.  that when added to the Hamiltonian drives the system into the Kalmeyer-Laughlin spin liquid[@KLshort; @KLlong] which breaks time-reversal and reflection symmetry. The next two singlet operators in the table transform like the following microscopic spin operators, $$\begin{aligned} {M}_2 + h.c. &\sim& (-1)^{r_x}[{\bf S}_{1}\cdot({\bf S}_{2}\times {\bf S}_3) - {\bf S}_{2}\cdot({\bf S}_{3}\times {\bf S}_4) \nonumber \\ &+& {\bf S}_{3}\cdot({\bf S}_{4}\times {\bf S}_1) - {\bf S}_{4}\cdot({\bf S}_{1}\times {\bf S}_2) ] \\ {M}_3 + h.c. &\sim& -(-1)^{r_y}[{\bf S}_{1}\cdot({\bf S}_{2}\times {\bf S}_3) - {\bf S}_{2}\cdot({\bf S}_{3}\times {\bf S}_4) \nonumber \\ &+& {\bf S}_{3}\cdot({\bf S}_{4}\times {\bf S}_1) - {\bf S}_{4}\cdot({\bf S}_{1}\times {\bf S}_2) ],\end{aligned}$$ where we have used abbreviated notation with ${\bf S}_{1} = {\bf S}_{\bf r}$, ${\bf S}_{2} = {\bf S}_{\bf r + \hat{x}}$, ${\bf S}_{3} = {\bf S}_{\bf r + \hat{x} + \hat{y}}$, and ${\bf S}_{4} = {\bf S}_{\bf r + \hat{y}}$. These monopole operators are closely related to an enhanced fermion bilinear, dubbed $N_C^3$ in Ref. , that transforms like $$\begin{aligned} N_C^3 &\sim& {\bf S}_{1}\cdot({\bf S}_{2}\times {\bf S}_3) - {\bf S}_{2}\cdot({\bf S}_{3}\times {\bf S}_4) \nonumber \\ &+& {\bf S}_{3}\cdot({\bf S}_{4}\times {\bf S}_1) - {\bf S}_{4}\cdot({\bf S}_{1}\times {\bf S}_2) .\end{aligned}$$ Furthermore, Ref.  observed that $N_C^3$ also possesses the same symmetry as the $(\pi,\pi)$ component of the skyrmion density $\rho_S$, $$\rho_S = \frac{1}{4\pi} {\bf n}\cdot (\partial_x {\bf n}\times \partial_y{\bf n}),$$ where ${\bf n}$ is a unit vector encoding slow variations in the Neel order parameter. Consequently, ${M}_{2,3} + h.c.$ are symmetry equivalent to the $(0,\pi)$ and $(\pi,0)$ components of the skyrmion density. 
Finally, the last three triplets in the table transform like components of the spin operator $${\bf S}_{1} \times {\bf S}_{3} - {\bf S}_{2} \times {\bf S}_{4}.$$ Thus, these operators represent the uniform part of the vector spin chirality. Enhanced fermion bilinears (${\bf N}_{A}^{1,2}$ in Ref. ) also represent vector spin chirality fluctuations, though at momenta $(0,\pi)$ and $(\pi,0)$. What about Hermitian operators constructed from the composites $\mathcal{M}_j$? Their quantum numbers can be easily deduced from those listed in Table \[HermitianMtable\] by noting that the monopole charge is odd under translations, reflection, and $\mathcal{C}_G$ (in the $\pi$-flux state), but even under rotations and time-reversal. Consequently, Hermitian $\mathcal{M}_j$ operators have relative momentum $(\pi,\pi)$ and opposite parity under reflection compared with the corresponding $M_j$ operators. One can repeat the analysis given above for the latter, but we choose not to do so here. Discussion ========== In this paper we have attempted to help resolve an outstanding issue in the study of algebraic spin liquids—namely, the quantum numbers carried by monopole operators—by considering the well-studied case of the staggered-flux state. Our study builds on previous work[@FermVortSpin1; @AVLlong] in the slightly different context of “algebraic vortex liquids”, and can be generalized to other settings as well. Essentially, our analysis was predicated on the assumption that the leading monopole quantum numbers can be deduced from the symmetry properties of the mean-field ground states with and without a flux insertion, with no additional Berry phase effects. While we believe this is reasonable, and find the end results to be quite natural, such issues can be delicate since we are dealing with a gapless state. Thus, we encourage further scrutiny of the conclusions reach in this paper. Projected wavefunction studies of the type described in Ref.  
provide one distinct approach which can shed further light on the problem and may help to support our findings[@YingPC]. A more dynamical treatment of monopoles, however, may ultimately be required. Assuming we have succeeded in finding the quantum numbers of the leading monopole operators, one issue is worth discussing further. Specifically, our analysis showed that there is one Hermitian monopole operator which carries no quantum numbers and thus represents a symmetry-allowed perturbation to the Hamiltonian. An important issue is whether this perturbation destabilizes the staggered-flux state for the physical number of fermion flavors, which is $N = 4$. The single-monopole scaling dimension computed in the large-$N$ limit in Ref.  is $\Delta_m \approx 0.265N$. Extrapolating to $N = 4$ yields $\Delta_m \approx 1.06$, substantially lower than 3, suggesting that the symmetry-allowed monopole operator may constitute a relevant perturbation. However, caution is warranted here (more so than usual in such extrapolations), since the subleading correction to the scaling dimension is generically an $N$-independent, possibly $O(1)$ number. Given the obvious importance of this question for the high-$T_c$ problem, further studies of these scaling dimensions are certainly worthwhile. And if the operator turns out to be relevant, what are the properties of the phase to which the system eventually flows? An interesting possibility is that the system may flow off to a distinct spin liquid state, but it is also possible that dangerously irrelevant operators lead to broken symmetries. This question is left for future work.
--- abstract: | Recent works have explained the principle of using ultrasonic transmissions to jam nearby microphones. These signals are inaudible to nearby users, but leverage “hardware nonlinearity” to induce a jamming signal inside microphones that disrupts voice recordings. This has great implications for audio privacy protection. In this work, we gain a deeper understanding of the effectiveness of ultrasonic jammers under [*practical scenarios*]{}, with the goal of disabling both visible and hidden microphones in the surrounding area. We first experiment with existing jammer designs (both commercial products and those proposed by recent papers), and find that they all offer limited angular coverage, and can only target microphones in a particular direction. We overcome this limitation by building a circular transducer array as a wearable bracelet. It emits ultrasonic signals simultaneously from many directions, targeting surrounding microphones without needing to point at any. More importantly, as the bracelet moves with the wearer, its motion increases jamming coverage and diminishes blind spots (the fundamental problem facing any transducer array). We evaluate the jammer bracelet under practical scenarios, confirming that it can effectively disrupt visible and hidden microphones in the surrounding areas, preventing recognition of recorded speech. We also identify limitations and areas for improvement. author: - Yuxin Chen - Huiying Li - Steven Nagels - Zhijing Li - Pedro Lopes - 'Ben Y. 
Zhao' - Haitao Zheng bibliography: - 'ultra.bib' title: Understanding the Effectiveness of Ultrasonic Microphone Jammer ---
--- abstract: | In the past few years there has been a growing interest in proving the security of cryptographic protocols, such as key distribution protocols, from the sole assumption that the systems of Alice and Bob cannot signal to each other. This can be achieved by making sure that Alice and Bob perform their measurements in a space-like separated way (and therefore signalling is impossible according to the non-signalling postulate of relativity theory) or even by shielding their apparatus. Unfortunately, it was proven in [@hanggi2010impossibility] that, no matter what hash function we use, privacy amplification is impossible if we only impose non-signalling conditions between Alice and Bob and not within their systems. In this letter we reduce the gap between the assumptions of [@hanggi2010impossibility] and the physically relevant assumptions, from an experimental point of view, which say that the systems can only signal forward in time within the systems of Alice and Bob. We consider a set of assumptions which is very close to the conditions above and prove that the impossibility result of [@hanggi2010impossibility] still holds. author: - Rotem Arnon Friedman - 'Esther H[ä]{}nggi' - 'Amnon Ta-Shma' bibliography: - 'biblopropos.bib' title: 'Towards the Impossibility of Non-Signalling Privacy Amplification from Time-Like Ordering Constraints' --- Introduction and Contribution ============================== Non-signalling cryptography --------------------------- In the past few years there has been a growing interest in proving the security of cryptographic protocols, such as quantum key distribution (QKD) protocols, from the sole assumption that the system on which the protocol is being executed does not allow for signalling between Alice and Bob. One way to make sure that this assumption holds is for Alice and Bob to have secured shielded laboratories, such that information cannot leak outside. 
It could also be ensured by performing Alice’s and Bob’s measurements in a space-like separated way; this way, relativity theory predicts the impossibility of signalling between them. For this reason, such cryptographic protocols are sometimes called “relativistic protocols”. Since the condition that information cannot leak outside is a necessary condition in any cryptographic protocol (otherwise the key could just leak out to the adversary, Eve), basing the security proof on this condition alone will mean that the protocol has minimal assumptions. We consider families of protocols which have two special properties. First, the security of the protocols is based only on the observed correlations of Alice’s and Bob’s measurements outcomes and not on the physical apparatus they use. I.e., the protocols are device-independent [@mayers1998quantum; @pironio2009device]. In device-independent protocols, we assume that the system of Alice and Bob was prepared by the adversary Eve. Note that although the system was created by Eve, Alice and Bob have to be able to make sure that information does not leak outside by shielding the systems. Alice and Bob therefore perform some (unknown) measurements on their system and privacy should be concluded only from the correlations of the outcomes. Second, in the protocols that we consider, the adversary is limited only by the non-signalling principle and not by quantum physics (i.e., super-quantum adversary). By combining these two properties together we can say that quantum physics guarantees the protocol to work, but the security is completely independent of quantum physics. Systems and correlations ------------------------ For two correlated random variables $X,U$ over $\varLambda_{1}\times\varLambda_{2}$, we denote the conditional probability distribution of $X$ given $U$ by $P_{X|U}(x|u)=Pr(X=x|U=u)$. A bipartite system is defined by the joint input-output behavior $P_{XY|UV}$ (see Figure \[fig:A-two-partite-system\]). 
[Figure \[fig:A-two-partite-system\]: a bipartite system $P_{XY|UV}$, with input $U$ and output $X$ on Alice's side and input $V$ and output $Y$ on Bob's side.] In a system $P_{XY|UV}$, $U$ and $X$ are usually Alice's input and output respectively, while $V$ and $Y$ are Bob's input and output. We denote Alice's interface of the system by $X(U)$ and Bob's interface by $Y(V)$. In a similar way, when considering a tripartite system $P_{XYZ|UVW}$, Eve's interface of the system is denoted by $Z(W)$. We are interested in non-local systems - systems which cannot be described by shared randomness of the parties. Bell proved in [@bell64] that entangled quantum states can display non-local correlations under measurements. Bell's work was an answer to Einstein, Podolsky, and Rosen's claim in [@EPR] that quantum physics is incomplete and should be augmented by classical variables determining the behavior of every system under any possible measurement. In this letter we deal with a specific type of Bell inequality, called the CHSH inequality after [@CHSH]. We can think about the CHSH inequality as a game. In the CHSH game Alice and Bob share a bipartite system $P_{XY|UV}$. Alice gets a random input $U$, Bob gets a random input $V$, and the goal is that the outputs of Alice and Bob, $X$ and $Y$ respectively, will satisfy $X\oplus Y=U\cdot V$. For all local systems the probability of winning the game satisfies $\Pr[X\oplus Y=U\cdot V]\leq0.75$. This can be easily seen from the fact that only three out of the four conditions represented by $\Pr[X\oplus Y=U\cdot V]=1$ can be satisfied together. If a system violates the inequality then it is non-local. (CHSH non-locality). A system $P_{XY|UV}$ is non-local if $\underset{u,v}{\sum}\frac{1}{4}\Pr[X\oplus Y=u\cdot v]>0.75$. When measuring entangled quantum states, one can achieve a winning probability of roughly 85%; this is a Bell inequality violation. 
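The local bound of $0.75$ can be verified by exhaustive enumeration: shared randomness is a convex combination of deterministic strategies, so it suffices to check the sixteen deterministic local strategies, in which Alice's output is a fixed function of her input and likewise for Bob. A minimal sketch (our illustration; all names are ours, not the letter's):

```python
from itertools import product

def chsh_win_prob(fa, fb):
    """Winning probability of the CHSH game for the deterministic local
    strategy in which Alice outputs fa[u] on input u and Bob outputs fb[v]."""
    wins = sum((fa[u] ^ fb[v]) == (u & v) for u, v in product((0, 1), repeat=2))
    return wins / 4

# Enumerate all 4 x 4 deterministic local strategies.
best = max(chsh_win_prob(fa, fb)
           for fa in product((0, 1), repeat=2)
           for fb in product((0, 1), repeat=2))
print(best)  # 0.75: no local strategy satisfies all four input pairs at once
```

Since every local system is a probabilistic mixture of these sixteen strategies, its winning probability is a convex combination of values that are each at most $0.75$.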
The maximal violation of the CHSH inequality, i.e. $\Pr[X\oplus Y=u\cdot v]=1$ for any input pair $(u,v)$, is achieved by the following system, called a Popescu-Rohrlich box, or a PR-box [@PR-box]. (PR-box). A PR-box is the following bipartite system $P_{XY|UV}$: For each input pair $(u,v)$, the random variables $X$ and $Y$ are uniform bits and we have $\underset{u,v}{\sum}\frac{1}{4}\Pr[X\oplus Y=u\cdot v]=1$ (see Figure \[fig:PR-box\]). [Figure \[fig:PR-box\]: the conditional distribution of the PR-box; for every input pair $(u,v)$, $P_{XY|UV}(x,y|u,v)=\frac{1}{2}$ if $x\oplus y=u\cdot v$ and $0$ otherwise.] As seen from Figure \[fig:PR-box\] the outputs are perfectly random, and since the correlations are non-local, they cannot be described by pre-shared randomness. I.e., PR-boxes correspond to perfect secrecy. This implies that PR-boxes could have been a good resource for cryptographic protocols. Unfortunately, perfect PR-boxes do not exist in nature; as was proven by Tsirelson [@Tsirelson], quantum physics is non-local, but not maximally. Therefore, for a protocol which can be implemented using quantum systems, we should consider approximations of PR-boxes, or PR-boxes with some error. For example, an 85% approximation can be achieved with maximally entangled qubits. 
For a more general treatment we can define the following. \[PR-box-error\](Unbiased PR-box with error $\varepsilon$). An unbiased PR-box with error $\varepsilon$ is the following bipartite system $P_{XY|UV}$: For each input pair $(u,v)$, the random variables $X$ and $Y$ are uniform bits and we have $\Pr[X\oplus Y=u\cdot v]=1-\varepsilon$ (see Figure \[fig:PR-box-error\]). Note that the error here is the same error for all inputs. In a similar way we can define different errors for different inputs. [Figure \[fig:PR-box-error\]: the conditional distribution of an unbiased PR-box with error $\varepsilon$; for every input pair $(u,v)$, $P_{XY|UV}(x,y|u,v)=\frac{1}{2}-\frac{\varepsilon}{2}$ if $x\oplus y=u\cdot v$ and $\frac{\varepsilon}{2}$ otherwise.] Using this notation, systems $P_{XY|UV}$ which approximate the PR-box with error $\varepsilon\in[0,0.25)$ are non-local. For a proof that any unbiased PR-box with error $\varepsilon<0.25$ “holds” some secrecy, see for example Lemma 5 in [@hanggi2009quantum]. 
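The statistics of Definition \[PR-box-error\] are easy to simulate classically if one allows a referee who sees both inputs (this reproduces the distribution, not a non-signalling resource). The sketch below, with names of our own choosing, samples an unbiased PR-box with error $\varepsilon$ and estimates its CHSH winning probability; $\varepsilon=0$ recovers the perfect PR-box, and $\varepsilon\approx 0.15$ roughly the 85% quantum value.

```python
import random

def pr_box(u, v, eps):
    """Sample (x, y) from an unbiased PR-box with error eps: x is a
    uniform bit and x XOR y = u*v holds with probability 1 - eps."""
    x = random.randrange(2)
    y = x ^ (u & v)
    if random.random() < eps:  # break the relation with probability eps
        y ^= 1
    return x, y

def chsh_win(eps, trials=100_000):
    """Estimate the CHSH winning probability over uniform inputs."""
    wins = 0
    for _ in range(trials):
        u, v = random.randrange(2), random.randrange(2)
        x, y = pr_box(u, v, eps)
        wins += (x ^ y) == (u & v)
    return wins / trials

print(chsh_win(0.0))             # 1.0: the perfect PR-box wins always
print(round(chsh_win(0.15), 2))  # close to 0.85, roughly the quantum value
```

Note that $y$ is obtained by XOR-ing a uniform bit $x$, so both marginals are uniform for every input pair, exactly as the definition requires.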
While PR-Boxes correspond to perfect secrecy, PR-boxes with error correspond to partial secrecy. The problem is that the amount of secrecy (defined formally in Section \[sub:Distance-measures\]) which can be achieved from a quantum system is not enough for our purposes. Therefore we must have some privacy amplification protocol in order for such systems to be useful. Privacy amplification --------------------- In the privacy amplification problem we consider the following scenario. Alice and Bob share information that is only partially secret with respect to an adversary Eve. Their goal is to distill this information to a shorter string, the key, that is completely secret. The problem was introduced in [@bennett1985privacy; @bennett1995generalized] for classical adversaries and in [@konigQuantumPA] for quantum adversaries. In our case, Alice and Bob want to create a secret key using a system $P_{XY|UV}$ while Eve, who is only limited by the non-signalling principle, tries to get some information about it. Assume that Alice and Bob share a system from which they can create a partially secret bit string $X$. Information theoretically, if there is some entropy in one system, we can hope that by using several systems we will have enough entropy to create a more secure key. The idea behind privacy amplification is to consider Alice’s and Bob’s system as a black box, take several such systems which will produce several partially secret bit strings $X_{1},...,X_{n}$ and then apply some hash function $f$ (which might take a short random seed as an additional input) to $X_{1},...,X_{n}$, in order to receive a shorter but more secret bit string $K$, which will act as the key. The amount of secrecy, as will be defined formally in Section \[sub:Distance-measures\], is usually measured by the distance of the actual system of Alice, Bob and Eve from an ideal system, in which the key is uniformly distributed and not correlated to the information held by Eve. 
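For intuition, in the purely classical case even the simplest hash function, the XOR of all $n$ bits, amplifies secrecy: by the piling-up lemma, if each bit independently equals $0$ with probability $\frac{1}{2}+\delta$, then the bias of their XOR is $\frac{1}{2}(2\delta)^{n}$, decaying exponentially in $n$. The sketch below (our illustration, not part of the letter) computes this bias by direct summation; the impossibility results discussed in this letter say that no analogous decay can be enforced against non-signalling adversaries.

```python
from itertools import product

def xor_bias(n, delta):
    """Bias |Pr[X1 XOR ... XOR Xn = 0] - 1/2| of the XOR of n independent
    bits, each equal to 0 with probability 1/2 + delta."""
    p0 = 0.0
    for bits in product((0, 1), repeat=n):
        prob = 1.0
        for b in bits:
            prob *= (0.5 + delta) if b == 0 else (0.5 - delta)
        if sum(bits) % 2 == 0:
            p0 += prob
    return abs(p0 - 0.5)

for n in (1, 2, 4, 8):
    print(n, xor_bias(n, 0.1))  # matches (2 * 0.1)**n / 2: exponential decay
```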
We will denote this distance by $d(K|E)$, where $E$ is Eve’s system. We say that a system generating a key is $\epsilon$-indistinguishable from an ideal system if $d(K|E)\leq\epsilon$ for some small $\epsilon>0$. Therefore, the problem of privacy amplification is actually the problem of finding such a ‘good’ function $f$. Privacy amplification is said to be possible when $\epsilon$ is a decreasing function of $n$, the number of systems held by Alice and Bob. In order to prove an impossibility result it is enough to give a specific system, in which each of the subsystems holds some secrecy, but this secrecy cannot be amplified by using any hash function - the distance from uniform remains high, no matter what function Alice and Bob apply to their output bits and how many systems they share. In the classical scenario, this problem can be solved almost optimally by extractors [@nisan1999extracting; @shaltiel2011introduction]. Although not all classical extractors work against quantum adversaries [@gavinsky2006exponential], some very good extractors do, for example, [@de2009trevisan]. Since we consider a super-quantum adversary, we cannot assume that protocols which work for the classical and quantum case, will stay secure against a more powerful adversary. Therefore a different treatment is needed when considering non-signalling adversaries. Related work ------------- Barrett, Hardy, and Kent have shown in [@barrett2005no] a protocol for QKD which is based only on the assumption that Alice and Bob cannot signal to each other. Unfortunately, the suggested protocol cannot tolerate any errors caused by noise in the quantum channel and is inefficient in the number of quantum systems used in order to produce one secure bit. This problem could have been solved by using a privacy amplification protocol, which works even when the adversary is limited only by the non-signalling principle. 
Unfortunately, it was proven in [@hanggi2010impossibility] that such a privacy amplification protocol does not exist if signalling is possible within the laboratories of Alice and Bob. In contrast, in [@hanggi2009quantum], [@masanes2009universally] and [@masanes2011secure] it was proven that if we assume full non-signalling conditions, i.e., that any subset of systems cannot signal to any other subset of systems, QKD which is based only on the non-locality of the correlations is possible. In particular, the step of privacy amplification is possible. In the gap between these two extreme cases little was known. There is one particular set of assumptions of special interest from an experimental point of view: the set of assumptions which says that the systems can only signal forward in time within the systems of Alice and Bob. For this setting it was only known that privacy amplification using the XOR or the AND function is impossible [@MasanesXORimpossibility]. Contribution ------------ In this letter we reduce the gap between the assumptions of [@hanggi2010impossibility], in which signalling is impossible only between Alice and Bob, and the physically relevant assumptions, which say that the systems can only signal forward in time within the systems of Alice and Bob. We consider a set of assumptions which is very close to the conditions which only allow signalling forward in time, and prove that the impossibility result of [@hanggi2010impossibility] still holds. Since our set of assumptions differs only slightly from the assumptions of signalling only forward in time, called “backward non-signalling”, we can highlight the specific assumptions which might make the difference between possibility and impossibility results. If the adversary does not necessarily need to exploit these specific assumptions, then privacy amplification will be impossible also under the physical assumptions of “backward non-signalling” systems. 
On the other hand, if privacy amplification is proved to be possible, we will know that the power of the adversary arises from these assumptions. The proof given here is an extension of the proof in [@hanggi2010impossibility]. We prove that the adversarial strategy suggested in [@hanggi2010impossibility] is still valid under stricter non-signalling assumptions (Theorem \[thm:main\]), and as a consequence also under the assumption of an “almost backward non-signalling” system (Corollary \[cor:seq-cor\]). This will imply that privacy amplification against non-signalling adversaries is impossible under our stricter assumptions (which include many more non-signalling conditions than in [@hanggi2010impossibility]), as stated formally in Theorem \[thm:main\]. Outline -------- The rest of this letter is organized as follows. In Section \[sec:Preliminaries\] we describe several different non-signalling conditions and explain the model of non-signalling adversaries. In Section \[sec:The-Underlying-System\] we define a specific system which respects many non-signalling conditions and yet we cannot use privacy amplification in order to create an arbitrarily secure bit from it. In addition, we prove that an impossibility result for our set of assumptions implies an impossibility result for “almost backward non-signalling” systems (Corollary ). In Section \[sec:Privacy-Amplification-Against\] we prove our main theorem, Theorem \[thm:main\]. We conclude in Section \[sec:Concluding-Remarks\]. Preliminaries \[sec:Preliminaries\] =================================== Notations --------- We denote the set $\{1,...,n\}$ by $[n]$. For any string $x\in\{0,1\}^{n}$ and any subset $I\subseteq[n]$, $x_{i}$ stands for the i'th bit of $x$ and $x_{I}\in\{0,1\}^{|I|}$ stands for the string formed by the bits of $x$ at the positions given by the elements of $I$. $\overline{I}$ is the complementary set of $I$, i.e., $\overline{I}=[n]\setminus I$. 
$x_{\overline{i}}$ is the string formed by all the bits of $x$ except for the i’th bit. For two correlated random variables $X,U$ over $\varLambda_{1}\times\varLambda_{2}$, we denote the conditional probability distribution of $X$ given $U$ as $P_{X|U}(x|u)=Pr(X=x|U=u)$. Non-signalling systems\[sub:Non-signaling-systems\] \[sub:Different-non-signaling\] ----------------------------------------------------------------------------------- We start by formally defining the different types of non-signalling systems and conditions which will be relevant in this letter. \[n.s.-def\](Fully non-signalling system). An n-party conditional probability distribution $P_{X|U}$ over $X,U\in\{0,1\}^{n}$ is called a fully non-signalling system if for any set $I\subseteq[n]$, $$\forall x_{\overline{I}},u_{I},u'_{I},u_{\overline{I}}\underset{x_{I}\in\{0,1\}^{|I|}}{\sum}P_{X|U}(x_{I},x_{\overline{I}}|u_{I},u_{\overline{I}})=\underset{x_{I}\in\{0,1\}^{|I|}}{\sum}P_{X|U}(x_{I},x_{\overline{I}}|u'_{I},u_{\overline{I}})\,.$$ This definition implies that any group of parties cannot infer from their part of the system which inputs were given by the other parties. A measurement of a subset $I$ of the parties does not change the statistics of the outcomes of parties $\overline{I}$; the marginal system they see is the same for all inputs of the other parties. This means that different parties cannot signal to other parties using only the system. Note that this type of condition is not symmetric. The fact that parties $I$ cannot signal to parties $\overline{I}$ does not imply that parties $\overline{I}$ cannot signal to parties $I$. The fully non-signalling conditions can also be written in the following way. \[lem:n.s.-equiv-def\](Lemma 2.7 in [@hanggi2010device]). 
An n-party system $P_{X|U}$ over $X,U\in\{0,1\}^{n}$ is a fully non-signalling system if and only if for all $i\in[n]$, $$\forall x_{\overline{i}},u_{i},u'_{i},u_{\overline{i}}\underset{x_{i}\in\{0,1\}}{\sum}P_{X|U}(x_{i},x_{\overline{i}}|u_{i},u_{\overline{i}})=\underset{x_{i}\in\{0,1\}}{\sum}P_{X|U}(x_{i},x_{\overline{i}}|u'_{i},u_{\overline{i}})\,.$$ In order to make sure that the fully non-signalling conditions as in Definition \[n.s.-def\] hold one will have to create the system such that each of the $2n$ subsystems is space-like separated from all the others, or shielded, to exclude signalling. This is of course impractical from an experimental point of view. Therefore, we need to consider more practical, weaker, conditions. A minimal requirement needed for any useful system is that Alice cannot signal to Bob and vice versa[^1]. We stress that this is an assumption, since the non-signalling condition cannot be tested (not even with some small error) using a parameter estimation protocol as a previous step. This assumption can be justified physically by shielding the systems or by performing the measurements in a space-like separated way. \[Alice-&-Bob-n.s.\](Non-signalling between Alice and Bob). A $2n$-party conditional probability distribution $P_{XY|UV}$ over $X,Y,U,V\in\{0,1\}^{n}$ does not allow for signalling from Alice to Bob if $$\forall y,u,u',v\quad\underset{x}{\sum}P_{XY|UV}(x,y|u,v)=\underset{x}{\sum}P_{XY|UV}(x,y|u',v)$$ and does not allow for signalling from Bob to Alice if $$\forall x,v,v',u\quad\underset{y}{\sum}P_{XY|UV}(x,y|u,v)=\underset{y}{\sum}P_{XY|UV}(x,y|u,v')\,.$$ On top of the assumption that Alice and Bob cannot signal to each other, we can now add different types of non-signalling conditions. In a more mathematical way, we can think about it as follows. The full non-signalling conditions are a set of linear equations as in Definition \[n.s.-def\] and Lemma \[lem:n.s.-equiv-def\]. 
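To make the non-signalling equations above concrete, here is a minimal sketch (our own illustration; the function names are not from the letter) that checks the condition of Definition \[Alice-&-Bob-n.s.\] for a two-party Popescu-Rohrlich box:

```python
from itertools import product

def pr_box(x, y, u, v):
    # Perfect PR box: outputs satisfy x XOR y = u AND v, with uniform marginals.
    return 0.5 if (x ^ y) == (u & v) else 0.0

def non_signalling_a_to_b(p):
    # Summing out Alice's output x must leave Bob's marginal on (y | v)
    # independent of Alice's input u.
    for y, v in product((0, 1), repeat=2):
        m = [sum(p(x, y, u, v) for x in (0, 1)) for u in (0, 1)]
        if abs(m[0] - m[1]) > 1e-12:
            return False
    return True

def non_signalling_b_to_a(p):
    # The symmetric condition, obtained by swapping the two parties.
    return non_signalling_a_to_b(lambda x, y, u, v: p(y, x, v, u))

assert non_signalling_a_to_b(pr_box) and non_signalling_b_to_a(pr_box)
```

As the text notes, the condition is not symmetric: a box in which Bob's output copies Alice's input passes the Bob-to-Alice check but fails the Alice-to-Bob one.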
We can assume that all of these equations hold (this is the full non-signalling scenario) or we can use just a subset (which does not span the whole set) of these equations. One physically interesting type of system is one which can only signal forward in time (messages cannot be sent to the past). This can be easily achieved by measuring several quantum systems one after another, and therefore these are the non-signalling conditions that one “gets for free” when performing a QKD experiment. For example, an entanglement-based protocol in which Alice and Bob receive entangled photons and measure them one after another using the same apparatus will lead to the conditions of Definition \[alternative-seq-n.s.\]. If the apparatus has memory, signalling is possible from $A_{i}$ to $A_{i+1}$, for example. However, signals cannot leave Alice’s laboratory for Bob’s laboratory. Formally, we use the following definition for backward non-signalling systems. \[alternative-seq-n.s.\](Backward non-signalling system). For any $i\in\{2,...,n-1\}$ denote the set $\{1,...,i-1\}$ by $I_{1}$ and the set $\{i,...,n\}$ by $I_{2}$.
A $2n$-party conditional probability distribution $P_{XY|UV}$ over $X,Y,U,V\in\{0,1\}^{n}$ is a backward non-signalling system (does not allow for signalling backward in time) if for any $i\in[n]$, $$\begin{aligned} \forall x_{I_{1}},y,u_{I_{1}},u_{I_{2}},u'_{I_{2}},v\quad\underset{x_{I_{2}}}{\sum}P_{XY|UV}(x_{I_{1}},x_{I_{2}},y|u_{I_{1}},u_{I_{2}},v) & = & \underset{x_{I_{2}}}{\sum}P_{XY|UV}(x_{I_{1}},x_{I_{2}},y|u_{I_{1}},u'_{I_{2}},v)\\ \forall x,y_{I_{1}},u,v_{I_{1}},v_{I_{2}},v'_{I_{2}}\quad\underset{y_{I_{2}}}{\sum}P_{XY|UV}(x,y_{I_{1}},y_{I_{2}}|u,v_{I_{1}},v_{I_{2}}) & = & \underset{y_{I_{2}}}{\sum}P_{XY|UV}(x,y_{I_{1}},y_{I_{2}}|u,v_{I_{1}},v'_{I_{2}}).\end{aligned}$$ In order to understand why these are the conditions that we choose to call “backward non-signalling” note that in these conditions Alice’s (and analogously Bob’s) systems $A_{I_{2}}$ cannot signal not only to $A_{I_{1}}$, but even to $A_{I_{1}}$ and all of Bob’s systems together. I.e., $A_{I_{2}}$ cannot change the statistics of $A_{I_{1}}$ and $B$, even if they are collaborating with one another. Another way to see why these conditions make sense, is to consider a scenario in which Bob, for example, performs all of his measurements first. This of course should not change the results of the experiment since Alice and Bob are separated and cannot send signals to each other. Therefore when Alice performs her measurements on the systems $A_{I_{2}}$, her outcomes cannot impact the statistics of both $A_{I_{1}}$ and $B$ together. In this letter we consider a different set of conditions, which does not allow for most types of signalling to the past. \[sequential-signaling\](Almost backward non-signalling system). For any $i\in\{2,...,n-1\}$ denote the set $\{1,...,i-1\}$ by $I_{1}$ and the set $\{i,...,n\}$ by $I_{2}$. 
A $2n$-party conditional probability distribution $P_{XY|UV}$ over $X,Y,U,V\in\{0,1\}^{n}$ is an almost backward non-signalling system if for any $i\in[n]$, $$\begin{aligned} \forall x_{I_{1}},y_{I_{1}},u_{I_{1}},u_{I_{2}},u'_{I_{2}},v_{I_{1}},v_{I_{2}},v'_{I_{2}}\hphantom{------}\\ \underset{x_{I_{2}},y_{I_{2}}}{\sum}P_{XY|UV}(x_{I_{1}},x_{I_{2}},y_{I_{1}},y_{I_{2}}|u_{I_{1}},u_{I_{2}},v_{I_{1}},v_{I_{2}}) & =\underset{x_{I_{2}},y_{I_{2}}}{\sum}P_{XY|UV}(x_{I_{1}},x_{I_{2}},y_{I_{1}},y_{I_{2}}|u_{I_{1}},u'_{I_{2}},v_{I_{1}},v'_{I_{2}}).\end{aligned}$$ Figure \[fig:Different-non-signaling\] visualizes the difference between all of these non-signalling conditions.

[Figure \[fig:Different-non-signaling\]: panels (a)–(d) depict the subsystems $A_{1},\ldots,A_{n}$ and $B_{1},\ldots,B_{n}$ and the signalling barriers imposed by each of the non-signalling conditions above.]

The difference between the conditions of Definition \[alternative-seq-n.s.\] and Definition \[sequential-signaling\] is that, when assuming the conditions of an almost backward non-signalling system, signalling is not forbidden from $A_{i}$ to $B_{i}$ and $A_{j}$ together, for any $i$ and $j<i$. I.e., if $A_{i}$ wants to signal to some system in the past, $A_{j}$, then $B_{i}$ has to cooperate with $A_{j}$. To see this, consider the following system for example. Alice and Bob share a system $P_{XY|UV}$ for $X,Y,U,V\in\{0,1\}^{2}$. We define the system such that each of the outputs is a perfectly random bit and independent of any input, except for $X_{1}$, which is equal to $Y_{2}\oplus U_{2}$. Obviously, the outputs on Bob’s side look completely random and independent of any input, i.e., the system is non-signalling from Alice to Bob. Now note that whenever we do not have access to $Y_{2}$, $X_{1}$ also looks like a perfectly random bit, independent of the input. Therefore, the system is also non-signalling from Bob to Alice, and almost backward non-signalling. However, the conditions of Definition \[alternative-seq-n.s.\] do not hold, since the input $U_{2}$ can be perfectly known from $X_{1}$ and $Y_{2}$ (i.e. $A_{2}$ can signal to $A_{1}$ and $B_{2}$ together).
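The example above can be verified mechanically. The following sketch (our own, not from the letter) encodes the system with $X_{1}=Y_{2}\oplus U_{2}$ and checks that it is non-signalling between Alice and Bob, yet violates the backward non-signalling condition of Definition \[alternative-seq-n.s.\] at $i=2$:

```python
from itertools import product

B = (0, 1)
PAIRS = list(product(B, B))

def p(x, y, u, v):
    # x, y, u, v are bit pairs.  All outputs are uniform and independent
    # of every input, except X1, which equals Y2 XOR U2.
    return 1 / 8 if x[0] == (y[1] ^ u[1]) else 0.0

# Non-signalling from Alice to Bob: Bob's marginal ignores u.
for y, v in product(PAIRS, PAIRS):
    assert len({sum(p(x, y, u, v) for x in PAIRS) for u in PAIRS}) == 1

# Non-signalling from Bob to Alice: Alice's marginal ignores v.
for x, u in product(PAIRS, PAIRS):
    assert len({sum(p(x, y, u, v) for y in PAIRS) for v in PAIRS}) == 1

# Backward non-signalling fails at i = 2: summing out x2 (with all of y
# fixed) still depends on u2, so A2 signals to A1 and B2 together.
def summed(x1, y, u, v):
    return sum(p((x1, x2), y, u, v) for x2 in B)

assert any(
    abs(summed(x1, y, (u1, 0), v) - summed(x1, y, (u1, 1), v)) > 1e-12
    for x1 in B for y in PAIRS for u1 in B for v in PAIRS
)
```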
For every system $P_{XY|UV}$ which fulfills some arbitrary non-signalling conditions we can define marginal systems and extensions to the system in the following way. (Marginal system). A system $P_{X|U}$ is called a marginal system of the system $P_{XZ|UW}$ if $\forall x,u,w\quad P_{X|U}(x|u)=\underset{z}{\sum}P_{XZ|UW}(x,z|u,w)$. Note that for the marginal system $P_{X|U}$ of $P_{XZ|UW}$ to be defined properly, all we need is a non-signalling condition between the parties holding $X(U)$ and the parties holding $Z(W)$. (Extension system). A system $P_{XZ|UW}$ is called an extension to the system $P_{X|U}$, which fulfills some arbitrary set of non-signalling conditions $\mathcal{C}$, if: 1. $P_{XZ|UW}$ does not allow for signalling between the parties holding $X(U)$ and the parties holding $Z(W)$. 2. The marginal system of $P_{XZ|UW}$ is $P_{X|U}$. 3. For any $z$ the system $P_{X|U}^{Z=z}$ fulfills the same non-signalling conditions $\mathcal{C}$. Note that for every system $P_{X|U}$ there are many different extensions. Next, in an analogous way to the definition of a classical-quantum state, $\rho_{XE}=\underset{x}{\sum}P_{X}(x)|x\rangle\langle x|\otimes\rho_{E}^{x}$, we would like to define a classical-non-signalling system. (Classical - non-signalling system). A classical - non-signalling (c-n.s.) system is a system $P_{XZ|UW}$ such that $|U|=1$. We can think about it as a system in which some of the parties cannot choose or change the input on their side of the system. When it is clear from the context which side of the system is classical and which side is not we drop the index which indicates the trivial choice for $U$ and just write $P_{XZ|W}$. Notice that for a general system with some $U$, after choosing an input $u_{i}\in U$ we get the c-n.s. system $P_{XZ|U=u_{i},W}$. 
Distance measures\[sub:Distance-measures\] ------------------------------------------ In general, the distance between any two systems $P_{X|U}$ and $Q_{X|U}$ can be measured by introducing another entity - the distinguisher. Suppose $P_{X|U}$ and $Q_{X|U}$ are two known systems. The distinguisher gets one of these systems, $S$, and has to guess which system he was given. In the case of our non-signalling systems, the distinguisher can choose which measurements to make (which inputs to insert to the system) and to see all the outputs. He then outputs a bit $B$ with his guess. The distinguishing advantage between systems $P_{X|U}$ and $Q_{X|U}$ is the maximum guessing advantage the best distinguisher can have. (Distinguishing advantage). The distinguishing advantage between two systems $P_{X|U}$ and $Q_{X|U}$ is $$\delta(P_{X|U},Q_{X|U})=\underset{D}{max}[P(B=1|S=P_{X|U})-P(B=1|S=Q_{X|U})]$$ where the maximum is over all distinguishers $D$, $S$ is the system which is given to the distinguisher and $B$ is its output bit. Two systems $P_{X|U}$ and $Q_{X|U}$ are called $\epsilon$-indistinguishable if $\delta(P_{X|U},Q_{X|U})\leq\epsilon$. If the distinguisher is given an n-party system for $n>1$ he can choose not only the n inputs but also the order in which he will insert them. The distinguisher can be adaptive, i.e., after choosing an input and seeing an output he can base his later decisions for the following inputs on the results seen so far. Therefore the maximization in this case will be on the order of the measurements and their values. If the distinguisher is asked to distinguish between two c-n.s. systems we can equivalently write the distinguishing advantage as in the following lemma. \[lem:distance c-n.s\](Distinguishing advantage between two c-n.s. systems). 
The distinguishing advantage between two c-n.s. systems $P_{KZ|W}$ and $Q_{KZ|W}$ is $$\delta(P_{KZ|W},Q_{KZ|W})=\underset{k}{\sum}\underset{w}{max}\underset{z}{\sum}\biggm|P_{KZ|W=w}(k,z)-Q_{KZ|W=w}(k,z)\biggm|.$$ In order to distinguish between two c-n.s. systems, $P_{KZ|W}$ and $Q_{KZ|W}$, the distinguisher has only one input to choose, $W$. In addition, because the distinguisher has no choice for the input of the classical part, the distinguishing advantage can only increase if the distinguisher first reads the classical part of the system and then chooses $W$ according to the value of $K$. Therefore, for two c-n.s. systems, the best strategy will be to read $K$ and then to choose the best $W$, as indicated in the expression above. The distance (in norm 1) between two systems is defined to be half of the distinguishing advantage between these systems. (Distance between two c-n.s. systems). The distance between two c-n.s. systems $P_{KZ|W}$ and $Q_{KZ|W}$ in norm 1 is $$\biggm|P_{KZ|W}-Q_{KZ|W}\biggm|_{1}\equiv\frac{1}{2}\underset{k}{\sum}\underset{w}{max}\underset{z}{\sum}\biggm|P_{KZ|W=w}(k,z)-Q_{KZ|W=w}(k,z)\biggm|.$$ In a cryptographic setting, we mostly consider the distance between the real system, in which the key is calculated from the output of the system held by the parties, and an ideal system. The ideal system in our case is a system in which the key is uniformly distributed and independent of the adversary’s system. For a c-n.s. system $P_{KZ|W}$ where $K$ is over $\{0,1\}^{n}$, let $U_{n}$ denote the uniform distribution over $\{0,1\}^{n}$ and let $P_{Z|W}$ be the marginal system held by the adversary. The distance from uniform is defined as follows. \[Distance-from-uniform\](Distance from uniform). The distance from uniform of the c-n.s.
system $P_{KZ|W}$ is $$d(K|Z(W))\equiv\biggm|P_{KZ|W}-U_{n}\times P_{Z|W}\biggm|_{1}$$ where the system $U_{n}\times P_{Z|W}$ is defined such that $U_{n}\times P_{Z|W}(k,z|w)=U_{n}(k)\cdot P_{Z|W}(z|w)$. In the following sections we consider the distance from uniform given a specific input (measurement) of the adversary, $W=w$. In this case, according to Definition \[Distance-from-uniform\], we get $$\begin{aligned} d(K|Z(w)) & = & \frac{1}{2}\underset{k,z}{\sum}\biggm|P_{KZ|W=w}(k,z)-U_{n}(k)\cdot P_{Z|W=w}(z)\biggm|\nonumber \\ & = & \frac{1}{2}\underset{k,z}{\sum}P_{Z|W=w}(z)\biggm|P_{K|Z=z}(k)-\frac{1}{2^{n}}\biggm|.\label{eq:dist}\end{aligned}$$ Modeling non-signalling adversaries ----------------------------------- When modeling a non-signalling adversary, the question in mind is: given a system $P_{XY|UV}$ shared by Alice and Bob, for which some arbitrary non-signalling conditions hold, which extensions to a system $P_{XYZ|UVW}$, including the adversary Eve, are possible? The only principle which limits Eve is the non-signalling principle, which means that the conditional system $P_{XY|UV}^{Z=z}$, for any $z\in Z$, must fulfill all of the non-signalling conditions that $P_{XY|UV}$ fulfills, and in addition $P_{XYZ|UVW}$ does not allow signalling between Alice and Bob together and Eve. Since any non-signalling assumptions about the system of Alice and Bob are ensured physically (by shielding the systems for example), they must still hold even if Eve’s output $z$ is given to some other party. Therefore the conditional system $P_{XY|UV}^{Z=z}$ must also fulfill all the non-signalling conditions of $P_{XY|UV}$, which justifies our assumptions about the power of the adversary in this setting.
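As a numerical sanity check of the distance from uniform (Equation (\[eq:dist\])), here is a toy sketch of ours, taking the uniform probability $2^{-n}$ of each $n$-bit key value:

```python
def dist_from_uniform(p_z, p_k_given_z, n=1):
    # d(K|Z(w)) = 1/2 * sum_z P(z) * sum_k |P(k|z) - 2^-n|
    u = 2 ** (-n)
    return 0.5 * sum(
        pz * sum(abs(pk - u) for pk in pks)
        for pz, pks in zip(p_z, p_k_given_z)
    )

# One-bit key (n = 1): with probability 1/2 Eve's outcome z = 0 reveals
# the key bit exactly; otherwise (z = 1) the bit looks uniform to her.
d = dist_from_uniform([0.5, 0.5], [[1.0, 0.0], [0.5, 0.5]])
assert abs(d - 0.25) < 1e-12

# A key that is uniform for every z has distance 0.
assert dist_from_uniform([1.0], [[0.5, 0.5]]) == 0.0
```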
[Figure: Alice, Bob and Eve interact with a shared system $P_{XYZ|UVW}$; Alice holds the interface $(U,X)$, Bob holds $(V,Y)$ and Eve holds $(W,Z)$.]

We adopt here the model given in [@hanggi2010impossibility; @hanggi2010device; @hanggi2009quantum] of non-signalling adversaries. We reduce the scenario in which Alice, Bob and Eve share a system $P_{XYZ|UVW}$ to the scenario considering only Alice and Bob in the following way. Because Eve cannot signal to Alice and Bob (even together) by her choice of input, we must have, for all $x,y,u,v,w,w'$, $$\sum_{z}P_{XYZ|UVW}(x,y,z|u,v,w)=\sum_{z}P_{XYZ|UVW}(x,y,z|u,v,w')=P_{XY|UV}(x,y|u,v).$$ Moreover, as said before, since any non-signalling condition must still hold even if Eve’s output $z$ is given to some other party, the system conditioned on Eve’s outcome, $P_{XY|UV}^{Z=z}$, must also fulfill all the non-signalling conditions of $P_{XY|UV}$. We can therefore see Eve’s input as a choice of a convex decomposition of Alice’s and Bob’s system and her output as indicating one part of this decomposition. Formally, (Partition of the system). A partition of a given multipartite system $P_{XY|UV}$, which fulfills a certain set of non-signalling conditions $\mathcal{C}$, is a family of pairs $(p^{z},P_{XY|UV}^{z})$, where: 1. $p^{z}$ is a classical distribution (i.e., for all $z$, $p^{z}\geq0$ and $\underset{z}{\sum}\, p^{z}=1$). 2. For all $z$, $P_{XY|UV}^{z}$ is a system that fulfills $\mathcal{C}$. 3. $P_{XY|UV}=\underset{z}{\sum}\, p^{z}\cdot P_{XY|UV}^{z}\,.$ We can use the same proof as in Lemmas 2 and 3 in [@hanggi2009quantum] to prove that this is indeed a legitimate model, i.e., that the set of all partitions covers exactly all the possible strategies of a non-signalling adversary in our case.
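To make the partition notion concrete, here is a small sketch (our own construction, not from the letter) decomposing an unbiased PR box with error $\varepsilon$ into a perfect PR box of weight $1-2\varepsilon$ and a uniform local box of weight $2\varepsilon$:

```python
from itertools import product

EPS = 0.1

def noisy_pr(x, y, u, v):
    # Unbiased PR box with error EPS: outcomes with x XOR y = u AND v
    # have weight (1 - EPS)/2, the wrong outcomes have weight EPS/2.
    return (1 - EPS) / 2 if (x ^ y) == (u & v) else EPS / 2

def perfect_pr(x, y, u, v):
    return 0.5 if (x ^ y) == (u & v) else 0.0

def uniform_box(x, y, u, v):
    return 0.25

partition = [(1 - 2 * EPS, perfect_pr), (2 * EPS, uniform_box)]

# Item 3 of the definition: the weighted parts recombine to the observed
# system.  The pointwise bound p^z * P^z <= P also holds, matching the
# existence condition of Lemma 6 stated below.
for x, y, u, v in product((0, 1), repeat=4):
    mix = sum(pz * box(x, y, u, v) for pz, box in partition)
    assert abs(mix - noisy_pr(x, y, u, v)) < 1e-12
    for pz, box in partition:
        assert pz * box(x, y, u, v) <= noisy_pr(x, y, u, v) + 1e-12
```

Both parts are fully non-signalling boxes, so this family satisfies any of the condition sets $\mathcal{C}$ discussed above.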
It is further proven in [@hanggi2010impossibility] that for showing an impossibility result, we can assume that Eve’s information $Z$ is a binary random variable: (Lemma 5 in [@hanggi2010impossibility]). If $(p^{z=0},P_{XY|UV}^{z=0})$ is an element of a partition with $m$ elements, then it is also possible to define a new partition with only two elements, in which one of the elements is $(p^{z=0},P_{XY|UV}^{z=0})$. Moreover, it is not necessary to determine both parts of the partition ($(p^{z=0},P_{XY|UV}^{z=0})$ and $(p^{z=1},P_{XY|UV}^{z=1})$) explicitly. Instead, one imposes a condition on the element given outcome $z=0$ which ensures that there exists a second part, complementing it to a partition: \[lem:one-part-enough\](Lemma 6 in [@hanggi2010impossibility]). Given a non-signalling distribution $P_{XY|UV}$, there exists a partition with element $(p^{z=0},P_{XY|UV}^{z=0})$ if and only if for all inputs and outputs $x,y,u,v$ it holds that $p^{z=0}\cdot P_{XY|UV}^{z=0}(x,y|u,v)\le P_{XY|UV}(x,y|u,v)$. For the formal proofs of these lemmas, note that since the non-signalling conditions are linear, the same proofs as in Lemma 5 and Lemma 6 in [@hanggi2010impossibility] hold here as well, no matter which non-signalling conditions are imposed for $P_{XY|UV}$. Defining a partition is equivalent to choosing a measurement $W=w$; therefore, we can also write the distance from uniform of a key, as in Equation (\[eq:dist\]), using the partition itself. Since we will only need to consider the case where Alice and Bob try to output one secret bit, we can further simplify the expression, as in the following lemma. (Lemma 5.1 in [@hanggi2010device]).
For the case $K=f(X)$, where $f:\{0,1\}^{|X|}\rightarrow\{0,1\}$, $U=u$, $V=v$, and where the strategy $W=w$ is defined by the partition $\left\{ (p^{z_{w}},P_{XY|UV}^{z_{w}})\right\} _{z_{w}\in\{0,1\}}$, $$d\left(K|Z(w)\right)=\frac{1}{2}\sum_{z_{w}}p^{z_{w}}\cdot\biggm|\sum_{x,y}(-1)^{f(x)}P_{XY|UV}^{z_{w}}(x,y|u,v)\biggm|.$$ For a proof see Lemma 5.1 in [@hanggi2010device]. The Non-signalling Assumptions \[sec:The-Underlying-System\] ============================================================ The basic assumptions --------------------- It was proven in [@hanggi2009quantum] (Lemma 5) that any unbiased PR-box with error $\varepsilon<0.25$ contains some secrecy. With the goal of amplifying the privacy of the secret in mind, Alice and Bob now share $n$ such systems. The underlying system of Alice and Bob that we consider is a product of $n$ independent PR-boxes with errors (Definition \[PR-box-error\]), as seen from Alice’s and Bob’s point of view. This is stated formally in the following definition: \[A-product-system\]\[basic-system\](Product system). A product system of $n$ copies of PR-boxes with error $\varepsilon$ is the system $P_{XY|UV}=\underset{i\in[n]}{\prod}P_{X_{i}Y_{i}|U_{i}V_{i}}$, where for each $i$, the system $P_{X_{i}Y_{i}|U_{i}V_{i}}$ is an unbiased PR-box with error $\varepsilon$ as in Definition \[PR-box-error\]. In addition, as explained in Section \[sub:Different-non-signaling\], in order for any system to be useful, we will always make sure that Alice and Bob cannot signal to each other (otherwise any non-local violation would be meaningless; it could have also been achieved by signalling between the systems). Mathematically, this means that for any outcome $z$ of any adversary, Alice and Bob cannot signal to each other using the system $P_{XY|UV}^{z}$. I.e., $P_{XY|UV}^{z}$ fulfills the conditions of Definition \[Alice-&-Bob-n.s.\]. On top of this assumption we can now add more non-signalling assumptions of different types.
For example, in [@hanggi2009quantum], [@masanes2009universally] and [@masanes2011secure] it was proven that if we assume full non-signalling conditions then privacy amplification is possible. On the contrary, in [@hanggi2010impossibility] it was proven that if we do not add more non-signalling assumptions (and use only the assumption that Alice and Bob cannot signal to each other) then privacy amplification is impossible. An interesting question is therefore: what happens in the middle? Is privacy amplification possible when we use some additional assumptions but not all of them? The goal of this letter is to consider the conditions of almost backward non-signalling systems, given in Definition \[sequential-signaling\]. We will do so by considering a larger set of equations, defined formally in Section \[sub:Our-assumptions\]. Our additional assumptions \[sub:Our-assumptions\] -------------------------------------------------- Consider the following system. \[our-system\] Alice, Bob and Eve share a system $P_{XYZ|UVW}$ such that: 1. The marginal system of Alice and Bob, $P_{XY|UV}$, is a product system as in Definition \[basic-system\]. 2. For any $z$, $P_{XY|UV}^{z}$ fulfills the conditions of Definition \[Alice-&-Bob-n.s.\] (Alice and Bob cannot signal to each other). 3.
For all $i\in[n]$ and for any $z$ $$\begin{aligned} \forall x_{\overline{i}},y_{\overline{i}},u_{i},u'_{i},u_{\overline{i}},v\qquad\underset{x_{i},y_{i}}{\sum}P_{XY|UV}^{z}(x,y|u,v) & = & \underset{x_{i},y_{i}}{\sum}P_{XY|UV}^{z}(x,y|u',v)\\ \forall x_{\overline{i}},y_{\overline{i}},u,v_{i},v'_{i},v_{\overline{i}}\qquad\underset{x_{i},y_{i}}{\sum}P_{XY|UV}^{z}(x,y|u,v) & = & \underset{x_{i},y_{i}}{\sum}P_{XY|UV}^{z}(x,y|u,v').\end{aligned}$$ Note that the set of these conditions is equivalent to $$\forall x_{\overline{i}},y_{\overline{i}},u_{i},u'_{i},u_{\overline{i}},v_{i},v'_{i},v_{\overline{i}}\qquad\underset{x_{i},y_{i}}{\sum}P_{XY|UV}^{z}(x,y|u,v)=\underset{x_{i},y_{i}}{\sum}P_{XY|UV}^{z}(x,y|u',v').\label{eq:equiv}$$ To see this first note that the conditions of Definition \[our-system\] are a special case of Equation (\[eq:equiv\]). For the second direction: $\forall x_{\overline{i}},y_{\overline{i}},u_{i},u'_{i},u_{\overline{i}},v_{i},v'_{i},v_{\overline{i}}$, $$\underset{x_{i},y_{i}}{\sum}P_{XY|UV}^{z}(x,y|u,v)=\underset{x_{i},y_{i}}{\sum}P_{XY|UV}^{z}(x,y|u',v)=\underset{x_{i},y_{i}}{\sum}P_{XY|UV}^{z}(x,y|u',v').$$ Therefore, the equations of Definition \[our-system\] mean that for all $i$, parties $A_{i}$ and $B_{i}$ together cannot signal to the other parties (see Figure \[fig:our-conditions\]).

[Figure \[fig:our-conditions\]: among the subsystems $A_{1},\ldots,A_{n}$ and $B_{1},\ldots,B_{n}$, the pair $(A_{i},B_{i})$ is enclosed in a box; together they cannot signal to the parties outside the box.]

Adding these assumptions to the non-signalling assumption between Alice and Bob (Definition \[Alice-&-Bob-n.s.\]) does not imply the full non-signalling conditions. To see this consider the following example.
Alice and Bob share a system $P_{XY|UV}$ such that $X,Y,U,V\in\{0,1\}^{2}$. We define the system such that each of the outputs is a perfectly random bit and independent of any input, except for $X_{2}$, which is equal to $Y_{1}\oplus U_{1}$. The outputs on Bob’s side look completely random and independent of any input, i.e., the system is non-signalling from Alice to Bob. Now note that whenever we do not have access to $Y_{1}$, then $X_{2}$ also looks like a perfectly random bit and independent of the input. Therefore, the system is also non-signalling from Bob to Alice, and the conditions of Definition \[our-system\] hold as well. However, this system is not fully non-signalling, since the input $U_{1}$ can be perfectly known from $X_{2}$ and $Y_{1}$ (i.e. $A_{1}$ can signal to $A_{2}$ and $B_{1}$ together). Adding this set of equations as assumptions means adding many more assumptions about the system (on top of the basic system described before). Intuitively, such a system is close to being a fully non-signalling system. We will prove that even in this case, Theorem 15 in [@hanggi2010impossibility] still holds and privacy amplification is impossible: \[thm:main\]There exists a system as in Definition \[our-system\] such that for any hash function $f$, there exists a partition $w$ for which the distance from uniform of $f(X)$ given $w$ is at least $c(\varepsilon)$, i.e., $d(f(X)|Z(w))\geq c(\varepsilon)$, where $c(\varepsilon)$ is some constant which depends only on the error of a single box, $\varepsilon$ (as in Definition \[A-product-system\]). Note that although our set of equations might seem unusual, proving an impossibility result for this set implies the same impossibility result for all sets of linear equations that are determined by it. The set of equations of an almost backward non-signalling system, as in Definition \[sequential-signaling\], is one interesting example of such a set.
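This example, too, can be checked mechanically. The sketch below (our own illustration) verifies that in the system with $X_{2}=Y_{1}\oplus U_{1}$, each pair $(A_{i},B_{i})$ cannot signal to the rest (the conditions of Definition \[our-system\]; here the pair marginal is in fact constant), while full non-signalling fails:

```python
from itertools import product

B = (0, 1)

def p(x, y, u, v):
    # Bit pairs, 0-indexed: x[1] is X2, y[0] is Y1, u[0] is U1.
    # All outputs uniform and input-independent, except X2 = Y1 XOR U1.
    return 1 / 8 if x[1] == (y[0] ^ u[0]) else 0.0

def pair_marginal(i, xo, yo, u, v):
    # Sum out the outputs of pair (A_i, B_i); xo, yo are the outputs
    # of the other pair.
    total = 0.0
    for xi, yi in product(B, B):
        x = [0, 0]; y = [0, 0]
        x[i], x[1 - i] = xi, xo
        y[i], y[1 - i] = yi, yo
        total += p(tuple(x), tuple(y), u, v)
    return total

# Definition [our-system]: the remaining pair's marginal must not depend
# on the inputs (u_i, v_i); here it is the constant 1/4 for both i.
for i in (0, 1):
    for xo, yo in product(B, B):
        vals = {round(pair_marginal(i, xo, yo, u, v), 12)
                for u in product(B, B) for v in product(B, B)}
        assert len(vals) == 1

# Full non-signalling fails: summing out A1's output alone still leaves
# a dependence on U1 (= u[0]).
s = lambda u0: sum(p((x0, 0), (0, 0), (u0, 0), (0, 0)) for x0 in B)
assert s(0) != s(1)
```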
\[lem:our-imply-sequentialy-signaling\]The almost backward non-signalling conditions, as in Definition \[sequential-signaling\], are implied by the non-signalling conditions of Definition \[our-system\]. Consider the set of equations in Definition \[sequential-signaling\]. We will now prove them using the equations in Definition \[our-system\]; this will imply that if the assumptions of Definition \[our-system\] hold then so do the assumptions of almost backward non-signalling. For every $i\in[n]$ we can write $$\begin{gathered} \underset{x_{I_{2}},y_{I_{2}}}{\sum}P_{XY|UV}(x,y|u_{I_{1}},u_{I_{2}},v_{I_{1}},v_{I_{2}})=\shoveright{\underset{\begin{array}{c} x_{I_{2}/\{i\}}\\ y_{I_{2}/\{i\}} \end{array}}{\sum}\underset{\begin{array}{c} x_{i}\\ y_{i} \end{array}}{\sum}P_{XY|UV}(x,y|u_{I_{1}},u_{i},u_{I_{2}/\{i\}},v_{I_{1}},v_{i},v_{I_{2}/\{i\}})}\\ =\shoveright{\underset{\begin{array}{c} x_{I_{2}/\{i\}}\\ y_{I_{2}/\{i\}} \end{array}}{\sum}\underset{\begin{array}{c} x_{i}\\ y_{i} \end{array}}{\sum}P_{XY|UV}(x,y|u_{I_{1}},u'_{i},u_{I_{2}/\{i\}},v_{I_{1}},v'_{i},v_{I_{2}/\{i\}})}\\ =\underset{\begin{array}{c} x_{I_{2}/\{i+1\}}\\ y_{I_{2}/\{i+1\}} \end{array}}{\sum}\underset{\begin{array}{c} x_{i+1}\\ y_{i+1} \end{array}}{\sum}P_{XY|UV}(x,y|u_{I_{1}},u'_{i},u_{i+1},u_{I_{2}/\{i,i+1\}},v_{I_{1}},v_{i}',v_{i+1},v_{I_{2}/\{i,i+1\}})\end{gathered}$$ $$\begin{gathered} =\shoveright{\underset{\begin{array}{c} x_{I_{2}/\{i+1\}}\\ y_{I_{2}/\{i+1\}} \end{array}}{\sum}\underset{\begin{array}{c} x_{i+1}\\ y_{i+1} \end{array}}{\sum}P_{XY|UV}(x,y|u_{I_{1}},u'_{\{i,i+1\}},u_{I_{2}/\{i,i+1\}},v_{I_{1}},v_{\{i,i+1\}}',v_{I_{2}/\{i,i+1\}})}\\ =...=\underset{x_{I_{2}},y_{I_{2}}}{\sum}P_{XY|UV}(x,y|u_{I_{1}},u'_{I_{2}},v_{I_{1}},v'_{I_{2}}).\tag*{\qedhere}\end{gathered}$$ Combining Lemma \[lem:our-imply-sequentialy-signaling\] with Theorem \[thm:main\] implies the following.
\[cor:seq-cor\]There exists an almost backward non-signalling system as in Definition \[sequential-signaling\] such that for any hash function $f$, there exists a partition $w$ for which the distance from uniform of $f(X)$ given $w$ is at least $c(\varepsilon)$, i.e., $d(f(X)|Z(w))\geq c(\varepsilon)$, where $c(\varepsilon)$ is some constant which depends only on the error of a single box, $\varepsilon$ (as in Definition \[A-product-system\]). Another interesting example is the set of equations which includes non-signalling conditions between all of Alice’s systems alone and non-signalling conditions between all of Bob’s systems alone, together with the condition of non-signalling between Alice and Bob. \[completly-Alice-completly-Bob\]A $2n$-party conditional probability distribution $P_{XY|UV}$ over $X,Y,U,V\in\{0,1\}^{n}$ is completely non-signalling on Alice’s side and completely non-signalling on Bob’s side, if for any $i\in[n]$, $$\begin{aligned} \forall x_{\overline{i}},u_{i},u'_{i},u_{\overline{i}}\quad\underset{x_{i}}{\sum}P_{X|U}(x_{i},x_{\overline{i}}|u_{i},u_{\overline{i}}) & = & \underset{x_{i}}{\sum}P_{X|U}(x_{i},x_{\overline{i}}|u'_{i},u_{\overline{i}})\\ \forall y_{\overline{i}},v_{i},v'_{i},v_{\overline{i}}\quad\underset{y_{i}}{\sum}P_{Y|V}(y_{i},y_{\overline{i}}|v_{i},v_{\overline{i}}) & = & \underset{y_{i}}{\sum}P_{Y|V}(y_{i},y_{\overline{i}}|v'_{i},v_{\overline{i}})\end{aligned}$$ where $P_{X|U}$ is the marginal system of $P_{XY|UV}$, held by Alice, and $P_{Y|V}$ is the marginal system of $P_{XY|UV}$, held by Bob. \[lem:our-imply-completly-Alice\]The non-signalling conditions of Definition \[completly-Alice-completly-Bob\] are implied by the non-signalling conditions of Definition \[our-system\]. We show that this is true for Alice’s side. The proof for Bob’s side is analogous.
First, for any $i\in[n]$, we can write the equation $$\forall x_{\overline{i}},u_{i},u'_{i},u_{\overline{i}}\qquad\underset{x_{i}}{\sum}P_{X|U}(x_{i},x_{\overline{i}}|u_{i},u_{\overline{i}})=\underset{x_{i}}{\sum}P_{X|U}(x_{i},x_{\overline{i}}|u'_{i},u_{\overline{i}})$$ using the original system $P_{XY|UV}$ and the definition of a marginal system: $$\forall x_{\overline{i}},u_{i},u'_{i},u_{\overline{i}},v\qquad\underset{\begin{array}{c} x_{i},y\end{array}}{\sum}P_{XY|UV}(x,y|u_{i},u_{\overline{i}},v)=\underset{\begin{array}{c} x_{i},y\end{array}}{\sum}P_{XY|UV}(x,y|u'_{i},u_{\overline{i}},v).$$ Now, as in the proof of Lemma \[lem:our-imply-sequentialy-signaling\], $$\begin{aligned} \underset{\begin{array}{c} x_{i},y\end{array}}{\sum}P_{XY|UV}(x,y|u_{i},u_{\overline{i}},v) & =\underset{y/\{y_{i}\}}{\sum}\underset{\begin{array}{c} x_{i},y_{i}\end{array}}{\sum}P_{XY|UV}(x,y|u_{i},u_{\overline{i}},v)\\ & =\underset{y/\{y_{i}\}}{\sum}\underset{\begin{array}{c} x_{i},y_{i}\end{array}}{\sum}P_{XY|UV}(x,y|u'_{i},u_{\overline{i}},v)\\ & =\underset{\begin{array}{c} x_{i},y\end{array}}{\sum}P_{XY|UV}(x,y|u'_{i},u_{\overline{i}},v).\tag*{\qedhere}\end{aligned}$$ Combining Lemma \[lem:our-imply-completly-Alice\] together with Theorem \[thm:main\] implies the following. There exists a system as in Definition \[completly-Alice-completly-Bob\] such that for any hash function $f$, there exists a partition $w$ for which the distance from uniform of $f(X)$ given $w$ is at least $c(\varepsilon)$, i.e., $d(f(X)|Z(w))\geq c(\varepsilon)$, where $c(\varepsilon)$ is some constant which depends only on the error of a single box, $\varepsilon$ (as in Definition \[A-product-system\]). 
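The marginalization argument above (summing over a single $x_{i}$ together with all of Bob's outputs $y$) is easy to check by brute force. The following sketch is not from the paper: the array layout, the function names, and the reading of the product box of Definition \[A-product-system\] (each single box satisfies $x_l \oplus y_l = u_l \wedge v_l$ with probability $1-\varepsilon$, outputs otherwise uniform) are our own assumptions.

```python
import itertools

def box(x, y, u, v, eps):
    """Product of n isotropic boxes (our reading of Definition
    [A-product-system]): each factor is (1 - eps)/2 when
    x_l XOR y_l == u_l AND v_l, and eps/2 otherwise."""
    p = 1.0
    for xl, yl, ul, vl in zip(x, y, u, v):
        p *= (1 - eps) / 2 if (xl ^ yl) == (ul & vl) else eps / 2
    return p

def alice_side_nonsignalling(P, n, i):
    """Check the displayed condition: the sum over x_i and over all
    of y of P(x, y | u, v) does not depend on u_i."""
    bits = list(itertools.product((0, 1), repeat=n))
    for x_rest in itertools.product((0, 1), repeat=n - 1):
        for u in bits:
            for v in bits:
                u_flip = u[:i] + (1 - u[i],) + u[i + 1:]

                def total(uu):
                    s = 0.0
                    for xi in (0, 1):
                        x = x_rest[:i] + (xi,) + x_rest[i:]
                        for y in bits:
                            s += P(x, y, uu, v)
                    return s

                if abs(total(u) - total(u_flip)) > 1e-12:
                    return False
    return True

P = lambda x, y, u, v: box(x, y, u, v, eps=0.2)
print(alice_side_nonsignalling(P, n=2, i=0))  # True for the product box
```

A distribution in which some other output bit of Alice depends on $u_i$ (e.g. $x_2 = u_1$) fails the same check, as expected.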
Privacy Amplification Against Non-signalling Adversaries \[sec:Privacy-Amplification-Against\] ============================================================================================== \[sub:The-impossibility-without-n.s.\]The impossibility of privacy amplification under the basic non-signalling assumptions ---------------------------------------------------------------------------------------------------------------------------- We use the same adversarial strategy as presented in [@hanggi2010impossibility] and therefore describe it only briefly here for completeness. For additional intuitive explanations and complete formal proofs please see [@hanggi2010impossibility]. As explained before, Alice's and Bob's goal is to create a highly secure key using a system, $P_{XY|UV}$, shared by both of them, while Eve's goal is to obtain some information about the key. It is therefore natural to model this situation in the following way: Alice, Bob and Eve together share a system $P_{XYZ|UVW}$, an extension of the system $P_{XY|UV}$ held by Alice and Bob, which fulfills some known non-signalling conditions. Each party can perform measurements on its part of the system (i.e., insert inputs and read the outputs of its interfaces of the system) and communicate over a public authenticated channel. Alice then applies some public hash function $f$ to the outcome she holds, $X$; in the end Alice should have a key $K=f(X)$ which is $\epsilon$-indistinguishable from an ideal, uniformly distributed key, even conditioned on Eve's information, i.e., $d(K|Z(W))\leq\epsilon$. The distance from uniform of the key $K$ is lower-bounded by the distance from uniform of a single bit of the key, and therefore, for an impossibility result, it is enough to assume that $f$ outputs just one bit. Note that since the adversarial strategy can be chosen after all public communication is over, it can also depend on a random seed for the hash function.
Therefore it is enough to consider deterministic functions in this case. We consider a partition with only two outputs, $z=0$ and $z=1$, each occurring with probability $\frac{1}{2}$, such that given $z=0$, $f(X)$ is maximally biased towards 0. According to Lemma \[lem:one-part-enough\] it is enough to explicitly construct the conditional system given measurement outcome $z=0$. In order to do so we start from the unbiased system as seen by Alice and Bob and “shift around” probabilities such that $f(X)$ is maximally biased towards 0 and the marginal system remains valid. By valid we mean that:

1. All entries must remain probabilities between 0 and 1.

2. The normalization of the probability distribution must be preserved.

3. The non-signalling condition between Alice and Bob must be satisfied.

4. There must exist a second measurement outcome $z=1$, occurring with probability $\frac{1}{2}$, such that the conditional system given outcome $z=1$ is also a valid probability distribution. This second system must be able to compensate for the shifts in probabilities. According to Lemma \[lem:one-part-enough\] this means that the entry in every cell must be smaller than or equal to twice the original entry.

The system $P_{XY|UV}^{z=0}$ which describes this strategy is defined formally in the following way. For simplicity we will drop the subscript of $P_{XY|UV}(x,y|u,v)$ and write only $P(x,y|u,v)$.
We use the same notation as in [@hanggi2010impossibility; @hanggi2010device] and define the following sets: $$\begin{aligned} y_{<} & = & \left\{ y\biggm|\sum_{x|f(x)=0}P(x,y|u,v)<\sum_{x|f(x)=1}P(x,y|u,v)\right\} \\ y_{>} & = & \left\{ y\biggm|\sum_{x|f(x)=0}P(x,y|u,v)>\sum_{x|f(x)=1}P(x,y|u,v)\right\} \\ x_{0} & = & \left\{ x\biggm|f(x)=0\right\} \\ x_{1} & = & \left\{ x\biggm|f(x)=1\right\} \end{aligned}$$ and a factor $c(x,y|u,v)$ as: $$\begin{aligned} \forall x & \in & x_{0},y\in y_{<}\qquad c(x,y|u,v)=2\\ \forall x & \in & x_{1},y\in y_{<}\qquad c(x,y|u,v)=\frac{\underset{x'}{\sum}(-1)^{\left(f(x')+1\right)}P(x',y|u,v)}{\underset{x'|f(x')=1}{\sum}P(x',y|u,v)}\\ \forall x & \in & x_{0},y\in y_{>}\qquad c(x,y|u,v)=\frac{\underset{x'}{\sum}P(x',y|u,v)}{\underset{x'|f(x')=0}{\sum}P(x',y|u,v)}\\ \forall x & \in & x_{1},y\in y_{>}\qquad c(x,y|u,v)=0\end{aligned}$$ The system $P^{z=0}$ is then defined as $P^{z=0}(x,y|u,v)=c(x,y|u,v)\cdot P(x,y|u,v)$. Intuitively, this definition of the strategy means that for each $u,v$ and within each row, Eve shifts as much probability as possible out of the cells $P(x,y|u,v)$ for which $f(x)=1$ and into the cells $P(x',y|u,v)$ for which $f(x')=0$ (she wants $P^{z=0}$ to be biased towards 0). The factor $c(x,y|u,v)$ is defined in such a way that as much probability as possible is shifted, while still keeping the system $P^{z=0}$ a valid element of a partition. Although Eve shifts probabilities for each $u,v$ separately, $P^{z=0}$ will still fulfill the required non-signalling conditions, which connect the inputs $u,v$ to other inputs $u',v'$; this is due to the high symmetry in the original marginal box of Alice and Bob (Definition \[A-product-system\]). For example, it is easy to see that since Eve only shifts probabilities within the same row (i.e.
cells with the same value of $y$) Bob cannot signal to Alice using $P^{z=0}$; the sum of the probabilities in one row stays the same as it was in $P$, and since $P$ did not allow for signalling from Bob to Alice, neither does $P^{z=0}$. The other non-signalling conditions follow from slightly more complex symmetries. It was proven in [@hanggi2010impossibility] that for this strategy[^2] $d(K|Z(w))\geq\frac{-1+\sqrt{1+64\varepsilon^{2}}}{32\varepsilon}$. Proof of the theorem - a more general impossibility result ---------------------------------------------------------- In order to prove Theorem \[thm:main\] it suffices to prove that the adversarial strategy presented in [@hanggi2010impossibility] still works. Formally, this means that we need to prove that the element $\left(p^{z=0}=\frac{1}{2},P^{z=0}(x,y|u,v)\right)$ in the partition is still valid, even when we add the assumptions of Definition \[our-system\], and that $d(K|Z(w))$ is high. Since we do not change the strategy, the same bound on $d(K|Z(w))$ still holds. Moreover, it was already proven in [@hanggi2010impossibility] that $P^{z=0}(x,y|u,v)$ does not allow signalling between Alice and Bob; therefore we only need to prove that the additional non-signalling assumptions of Definition \[our-system\] hold in the system $P^{z=0}(x,y|u,v)$, i.e., that the system satisfies our assumptions even conditioned on Eve's result. The first three lemmas deal with the impossibility of signalling from Alice's side and the next three lemmas deal with Bob's side. All the lemmas use the high symmetry of the marginal box (Definition \[A-product-system\]). What these lemmas show is that most of this symmetry still exists in $P^{z=0}$, because we only shift probabilities within the same row. We use the following notation: for all $i\in[n]$ let $u^{i'}=u_{1}...u_{i-1},\overline{u_{i}},u_{i+1}...u_{n}$ (i.e., only the $i$-th bit is flipped), and the same for $x^{i'}$, $y^{i'}$ and $v^{i'}$.
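As a sanity check, the row-wise shifting strategy can be sketched for a single box ($n=1$) with $f$ the identity. This is our own minimal sketch, not code from the paper; rows with equal weight on $f(x)=0$ and $f(x)=1$, which the case analysis above does not cover, are left unshifted here.

```python
import numpy as np

def eve_conditional_system(P, f):
    """Build P^{z=0}(x, y) = c(x, y) * P(x, y) for one fixed input
    pair (u, v).  P[x, y] is the joint output distribution and f
    maps Alice's outcome to the key bit."""
    Pz = np.zeros_like(P)
    for y in range(P.shape[1]):
        s0 = sum(P[x, y] for x in range(P.shape[0]) if f(x) == 0)
        s1 = sum(P[x, y] for x in range(P.shape[0]) if f(x) == 1)
        for x in range(P.shape[0]):
            if s0 < s1:            # row in y_<
                c = 2.0 if f(x) == 0 else (s1 - s0) / s1
            elif s0 > s1:          # row in y_>
                c = (s0 + s1) / s0 if f(x) == 0 else 0.0
            else:                  # balanced row: leave untouched
                c = 1.0
            Pz[x, y] = c * P[x, y]
    return Pz

# Single isotropic box with inputs u = v = 1, so x XOR y = 1 is favored.
eps = 0.1
P = np.array([[eps / 2, (1 - eps) / 2],
              [(1 - eps) / 2, eps / 2]])   # indexed as P[x, y]
Pz = eve_conditional_system(P, f=lambda x: x)
print(Pz.sum())            # still normalized
print(Pz[0].sum() - 0.5)   # bias of f(X) = X towards 0
```

For this toy case the bias of $f(X)$ towards 0 comes out as $\varepsilon$, every entry stays below twice its original value, and each row sum is unchanged, illustrating the validity conditions listed above.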
\[lem:Alice-equiv-cell\]For all $i\in[n]$ and for all $x$,$y$,$u$,$v$ such that $v_{i}=1$, $P(x,y^{i'}|u,v)=P(x,y|u^{i'},v)$. For every single box, $P_{X_{i}Y_{i}|U_{i}V_{i}}(x_{i},y_{i}|u_{i},v_{i})=P_{X_{i}Y_{i}|U_{i}V_{i}}(\overline{x_{i}},\overline{y_{i}}|u_{i},v_{i})$. Therefore it also holds that $P(x,y|u,v)=P(x^{i'},y^{i'}|u,v)$. Moreover, $$\begin{aligned} P(x,y|u^{i'},v) & = & \left(\frac{1}{2}-\frac{\varepsilon}{2}\right)^{\underset{l}{\sum}1\oplus x_{l}\oplus y_{l}\oplus u_{l}^{i'}\cdot v_{l}}\cdot\left(\frac{\varepsilon}{2}\right)^{\underset{l}{\sum}x_{l}\oplus y_{l}\oplus u_{l}^{i'}\cdot v_{l}}=\\ & = & \left(\frac{1}{2}-\frac{\varepsilon}{2}\right)^{\underset{l}{\sum}1\oplus x_{l}^{i'}\oplus y_{l}\oplus u_{l}\cdot v_{l}}\cdot\left(\frac{\varepsilon}{2}\right)^{\underset{l}{\sum}x_{l}^{i'}\oplus y_{l}\oplus u_{l}\cdot v_{l}}=\\ & = & P(x^{i'},y|u,v)\end{aligned}$$ Combining these two properties we get $P(x,y|u^{i'},v)=P(x^{i'},y|u,v)=P(x,y^{i'}|u,v)$. \[lem:Alice-equiv-type\]For all $i\in[n]$ and for all $x$,$y$,$u$,$v$ such that $v_{i}=1$, $c(x,y^{i'}|u,v)=c(x,y|u^{i'},v)$. I.e., the cells $P(x,y^{i'}|u,v)$ and $P(x,y|u^{i'},v)$ are of the same type ($x_{0}/x_{1},\: y_{>}/y_{<}$). First, it is clear that if $P(x,y^{i'}|u,v)$ was an $x_{0}$ ($x_{1}$) cell, so is $P(x,y|u^{i'},v)$, because this only depends on $x$. Now note that Lemma \[lem:Alice-equiv-cell\] holds for every $x$; therefore the entire row $P(\bullet,y^{i'}|u,v)$ is equivalent to the row $P(\bullet,y|u^{i'},v)$. This means that if we change $y^{i'}$ to $y$ and $u$ to $u^{i'}$ together, we will get the same row, and therefore if $P(x,y^{i'}|u,v)$ was a $y_{<}$ ($y_{>}$) cell, so is $P(x,y|u^{i'},v)$. Altogether we get $c(x,y^{i'}|u,v)=c(x,y|u^{i'},v)$. The properties of the marginal system $P_{XY|UV}$ which are being used in Lemma \[lem:Alice-equiv-cell\] and Lemma \[lem:Alice-equiv-type\] can be easily seen, for example, in Table \[tab:11\] and Table \[tab:10\].
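Lemma \[lem:Alice-equiv-cell\] can also be verified by brute force for a small product box. The sketch below is ours, using the same reading of Definition \[A-product-system\] as the displayed exponent formula (each factor is $(1-\varepsilon)/2$ when $x_l\oplus y_l = u_l\cdot v_l$ and $\varepsilon/2$ otherwise):

```python
import itertools

def box(x, y, u, v, eps):
    """Product of n isotropic boxes: each factor equals
    (1 - eps)/2 when x_l XOR y_l == u_l AND v_l, eps/2 otherwise."""
    p = 1.0
    for xl, yl, ul, vl in zip(x, y, u, v):
        p *= (1 - eps) / 2 if (xl ^ yl) == (ul & vl) else eps / 2
    return p

def flip(bits, i):
    """Flip the i-th bit of a tuple of bits."""
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

eps, n, i = 0.2, 2, 0
tuples = list(itertools.product((0, 1), repeat=n))
ok = all(
    abs(box(x, flip(y, i), u, v, eps) - box(x, y, flip(u, i), v, eps)) < 1e-12
    for x in tuples for y in tuples for u in tuples for v in tuples
    if v[i] == 1
)
print(ok)  # the identity holds for every cell with v_i = 1
```

The restriction $v_i = 1$ matters: for $v_i = 0$ flipping $u_i$ does nothing to the box while flipping $y_i$ does, so the identity fails there, which is consistent with the lemma's hypothesis.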
For simplicity we consider a product of only 2 systems. When changing Alice's input from $u=11$ to $u=10$ while $v=11$, the rows interchange as Lemma \[lem:Alice-equiv-cell\] suggests.

Table \[tab:11\] (the marginal system for $u=11$, $v=11$; rows are labelled by $y$, columns by $x$):
$$\begin{array}{c|cccc}
 & x=00 & x=01 & x=10 & x=11\\
\hline
y=00 & \left(\frac{\varepsilon}{2}\right)^{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \left(\frac{1-\varepsilon}{2}\right)^{2}\\
y=01 & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \left(\frac{\varepsilon}{2}\right)^{2} & \left(\frac{1-\varepsilon}{2}\right)^{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2}\\
y=10 & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \left(\frac{1-\varepsilon}{2}\right)^{2} & \left(\frac{\varepsilon}{2}\right)^{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2}\\
y=11 & \left(\frac{1-\varepsilon}{2}\right)^{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \left(\frac{\varepsilon}{2}\right)^{2}
\end{array}$$

Table \[tab:10\] (the same system for $u=10$, $v=11$):
$$\begin{array}{c|cccc}
 & x=00 & x=01 & x=10 & x=11\\
\hline
y=00 & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \left(\frac{\varepsilon}{2}\right)^{2} & \left(\frac{1-\varepsilon}{2}\right)^{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2}\\
y=01 & \left(\frac{\varepsilon}{2}\right)^{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \left(\frac{1-\varepsilon}{2}\right)^{2}\\
y=10 & \left(\frac{1-\varepsilon}{2}\right)^{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \left(\frac{\varepsilon}{2}\right)^{2}\\
y=11 & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2} & \left(\frac{1-\varepsilon}{2}\right)^{2} & \left(\frac{\varepsilon}{2}\right)^{2} & \frac{\varepsilon}{2}\cdot\frac{1-\varepsilon}{2}
\end{array}$$

\[lem:proof-Alice-side\]In the conditional system $P^{z=0}$, for any $i\in[n]$ $$\forall x_{\overline{i}},y_{\overline{i}},u_{i},u_{\overline{i}},v\qquad\underset{x_{i},y_{i}}{\sum}P^{z=0}(x,y|u,v)=\underset{x_{i},y_{i}}{\sum}P^{z=0}(x,y|u^{i'},v).$$ First note that for any $u$ and $v$ such that $v_{i}=0$ the
probability distribution $P_{XY|U=u,V=v}$ is identical to $P_{XY|U=u^{i'},V=v}$ (because of the properties of a single box, see Figure \[fig:PR-box-error\]). Therefore Eve will shift the probabilities in these two systems in the same way, which implies that $P_{XY|U=u,V=v}^{z=0}$ is identical to $P_{XY|U=u^{i'},V=v}^{z=0}$, and in particular, any non-signalling conditions will hold in this case. Assume $v_{i}=1$. We will prove something a bit stronger than needed: we prove that for all $x,y_{\overline{i}},u_{i},u_{\overline{i}},v$, $\underset{y_{i}}{\sum}P^{z=0}(x,y|u,v)=\underset{y_{i}}{\sum}P^{z=0}(x,y|u^{i'},v)$. This in particular implies that $\underset{x_{i},y_{i}}{\sum}P^{z=0}(x,y|u,v)=\underset{x_{i},y_{i}}{\sum}P^{z=0}(x,y|u^{i'},v)$ also holds. $$\begin{aligned} \underset{y_{i}}{\sum}P^{z=0}(x,y|u^{i'},v) & =\underset{y_{i}}{\sum}c(x,y|u^{i'},v)\cdot P(x,y|u^{i'},v)\\ & =\underset{y_{i}}{\sum}c(x,y^{i'}|u,v)\cdot P(x,y^{i'}|u,v)\\ & =\underset{y_{i}}{\sum}P^{z=0}(x,y^{i'}|u,v)\\ & =\underset{y_{i}}{\sum}P^{z=0}(x,y|u,v).\end{aligned}$$ The first and third equalities are by the definition of $P^{z=0}$, the second equality is due to Lemma \[lem:Alice-equiv-cell\] and Lemma \[lem:Alice-equiv-type\], and the last equality is due to the fact that the sum is over $y_{i}$. \[lem:Bob-equiv-cell\]For all $i\in[n]$ and for all $x$,$y$,$u$,$v$, $P(x,y^{i'}|u,v)=P(x,y|u,v^{i'})$.
$$\begin{aligned} P(x,y|u,v^{i'}) & =\left(\frac{1}{2}-\frac{\varepsilon}{2}\right)^{\underset{l}{\sum}1\oplus x_{l}\oplus y_{l}\oplus u_{l}\cdot v_{l}^{i'}}\cdot\left(\frac{\varepsilon}{2}\right)^{\underset{l}{\sum}x_{l}\oplus y_{l}\oplus u_{l}\cdot v_{l}^{i'}}=\\ & =\left(\frac{1}{2}-\frac{\varepsilon}{2}\right)^{\underset{l}{\sum}1\oplus x_{l}\oplus y_{l}^{i'}\oplus u_{l}\cdot v_{l}}\cdot\left(\frac{\varepsilon}{2}\right)^{\underset{l}{\sum}x_{l}\oplus y_{l}^{i'}\oplus u_{l}\cdot v_{l}}=\\ & =P(x,y^{i'}|u,v).\tag*{\qedhere}\end{aligned}$$ \[lem:Bob-equiv-type\]For all $i\in[n]$ and for all $x$,$y$,$u$,$v$ such that $v_{i}=1$, $c(x,y^{i'}|u,v)=c(x,y|u,v^{i'})$. I.e., the cells $P(x,y^{i'}|u,v)$ and $P(x,y|u,v^{i'})$ are of the same type ($x_{0}/x_{1},\: y_{>}/y_{<}$). As in Lemma \[lem:Alice-equiv-type\], it is clear that if $P(x,y^{i'}|u,v)$ was an $x_{0}$ ($x_{1}$) cell, so is $P(x,y|u,v^{i'})$, because this only depends on $x$. Lemma \[lem:Bob-equiv-cell\] holds for every $x$; therefore the entire row $P(\bullet,y^{i'}|u,v)$ is equivalent to the row $P(\bullet,y|u,v^{i'})$, and therefore if $P(x,y^{i'}|u,v)$ was a $y_{<}$ ($y_{>}$) cell, so is $P(x,y|u,v^{i'})$. Altogether we get $c(x,y^{i'}|u,v)=c(x,y|u,v^{i'})$. \[lem:proof-Bob-side\]In the conditional system $P^{z=0}$, for any $i\in[n]$ $$\forall x_{\overline{i}},y_{\overline{i}},u,v_{i},v_{\overline{i}}\qquad\underset{x_{i},y_{i}}{\sum}P^{z=0}(x,y|u,v)=\underset{x_{i},y_{i}}{\sum}P^{z=0}(x,y|u,v^{i'}).$$ In an analogous way to the proof of Lemma \[lem:proof-Alice-side\], if $u_{i}=0$ the proof is trivial. Assume $u_{i}=1$. We prove that for all $x,y_{\overline{i}},u,v_{i},v_{\overline{i}}$, $\underset{y_{i}}{\sum}P^{z=0}(x,y|u,v)=\underset{y_{i}}{\sum}P^{z=0}(x,y|u,v^{i'})$. This in particular implies that $\underset{x_{i},y_{i}}{\sum}P^{z=0}(x,y|u,v)=\underset{x_{i},y_{i}}{\sum}P^{z=0}(x,y|u,v^{i'})$ also holds.
$$\begin{aligned} \underset{y_{i}}{\sum}P^{z=0}(x,y|u,v^{i'}) & =\underset{y_{i}}{\sum}c(x,y|u,v^{i'})\cdot P(x,y|u,v^{i'})=\\ & =\underset{y_{i}}{\sum}c(x,y^{i'}|u,v)\cdot P(x,y^{i'}|u,v)=\\ & =\underset{y_{i}}{\sum}P^{z=0}(x,y^{i'}|u,v)=\\ & =\underset{y_{i}}{\sum}P^{z=0}(x,y|u,v).\tag*{\qedhere}\end{aligned}$$ Note that the only difference between the full non-signalling conditions and what we have proved here is that in Lemma \[lem:proof-Alice-side\] we have to keep the summation over $y_{i}$. Moreover, it is interesting to see that, at least on Bob's side, the “full” non-signalling conditions also hold in $P^{z=0}$: since Eve's strategy is defined to work on each row separately, the symmetry on Bob's side does not break at all. Lemmas \[lem:proof-Alice-side\] and \[lem:proof-Bob-side\] together prove that the assumptions of Definition \[our-system\] hold even conditioned on Eve's result. Adding this to the rest of the proof of [@hanggi2010impossibility] proves Theorem \[thm:main\]. Concluding Remarks and Open Questions \[sec:Concluding-Remarks\] ================================================================ In this letter we proved that privacy amplification is impossible even when many more non-signalling conditions are added on top of the assumptions of [@hanggi2010impossibility]. This also implies that privacy amplification is impossible under the assumptions of an almost backward non-signalling system. An interesting question which arises from our theorem is whether the non-signalling conditions by which backward non-signalling systems and almost backward non-signalling systems differ are the ones which give Eve the tremendous power that makes privacy amplification impossible. If so, then it might be the case that privacy amplification is possible in the relevant setting of backward non-signalling systems. On the other hand, if the answer to this question is no, then privacy amplification is also impossible for backward non-signalling systems.
If this is indeed the case, then it seems that the security proof of any practical QKD protocol will have to be based somehow on quantum physics, and not on the non-signalling postulate alone. Another interesting question is whether we can extend our result to the case where Alice and Bob use a more interactive protocol to amplify the secrecy of their key; instead of just applying some hash function to Alice's output $X$ and getting a key $K=f(X)$, perhaps they can use Bob's output $Y$ as well and create a key $K=g(X,Y)$. #### Acknowledgments: {#acknowledgments .unnumbered} Rotem Arnon Friedman thanks Renato Renner for helpful discussions. Amnon Ta-Shma and Rotem Arnon Friedman acknowledge support from the FP7 FET-Open project QCS. Esther Hänggi acknowledges support from the National Research Foundation (Singapore) and the Ministry of Education (Singapore). [^1]: If this condition is not ensured, say by making sure that the parties are in space-like separated regions or by shielding their systems, the measured Bell violation will have no meaning and any protocol based on some kind of non-locality will fail. [^2]: Actually, this strategy is used only when Alice uses a hash function which does not allow Bob to generate, from his output $Y$ of the system, a bit which is highly correlated with the key. If Alice uses a function which does allow Bob to get a highly correlated key, then this function has to be biased, and therefore Eve can just use the trivial strategy of doing nothing. For more details please see [@hanggi2010impossibility].
--- abstract: | We simulate the inner 100pc of the Milky-Way Galaxy to study the formation and evolution of the population of star clusters and intermediate mass black holes. For this study we perform extensive direct $N$-body simulations of the star clusters which reside in the bulge, and of the inner few tenths of a parsec around the super massive black hole in the Galactic center. In our $N$-body simulations the dynamical friction of the star cluster in the tidal field of the bulge is taken into account via (semi)analytic solutions. The $N$-body calculations are used to calibrate a (semi)analytic model of the formation and evolution of the bulge. We find that $\sim 10$% of the clusters born within $\sim100$pc of the Galactic center undergo core collapse during their inward migration and form intermediate-mass black holes (IMBHs) via runaway stellar merging. After the clusters dissolve, these IMBHs continue their inward drift, carrying a few of the most massive stars with them. We predict that the region within $\sim10$ parsec of the SMBH is populated by $\sim 50$ IMBHs of $\sim 1000\,{\mbox{${\rm M}_\odot$}}$. Several of these are expected to still be accompanied by some of the most massive stars from the star cluster. We also find that within a few milliparsec of the SMBH there is a steady population of several IMBHs. This population drives mergers between IMBHs and the SMBH at a rate of about one per 10Myr, sufficient to accumulate the majority of the mass of the SMBH. Mergers of IMBHs with SMBHs throughout the universe are detectable by LISA, at a rate of about two per week. author: - 'Simon F. Portegies Zwart, Holger Baumgardt, Stephen L. W. McMillan, Junichiro Makino, Piet Hut' - Toshi Ebisuzaki date: 'Received 2005 August 1; in original form 1687 October 3.6; Accepted xxxx xxx xx.'
title: The ecology of star clusters and intermediate mass black holes in the Galactic bulge --- \[firstpage\] Introduction ============ In recent years the Galactic center has been explored extensively over most of the electromagnetic spectrum, revealing complex structures and a multitude of intriguing physical phenomena. At the center lies a $\sim3.7\times10^6$ solar mass black hole [@1997MNRAS.284..576E; @1998ApJ...509..678G; @2000Natur.407..349G]. The presence of a water-rich dust ring at about one parsec from SgrA\* further underscores the complexity of this region, as does the presence within the central parsec of a few million year old population of very massive Ofpe/WN9 [@1993ApJ...414..573T] and luminous blue variable stars. These young stars may indicate recent star formation in the central region [@1993ApJ...408..496M; @2005astro.ph..7687N], or they may have migrated inward from larger distances to their current locations [@2001ApJ...546L..39G]. In addition, the [*Chandra*]{} X-ray Observatory has detected an unusually large number ($\apgt 2000$) of hard X-ray (2–10 keV) point sources within 23pc of the Galactic center [@2003ApJ...589..225M]. Seven of these sources are transients, and are conjectured to contain stellar-mass black holes [@2004astro.ph.12492M]; some may even harbor IMBHs [@2001ApJ...558..535M]. The Galactic center is a dynamic environment, where young stars and star clusters form in molecular clouds or thick dusty rings [@2004astro.ph..9541N; @2005astro.ph..7687N], and interact with their environment.
Several star clusters are known to exist in this region [@1999ApJ...525..750F], and the star formation rate in the inner bulge is estimated to be comparable to that in the solar neighborhood [@2001ApJ...546L.101P], enough to grow the entire bulge over the age of the Galaxy. Of particular interest here are the several star clusters discovered within $\sim 100$pc of the Galactic center, 11 of which have reliable mass estimates. Most interesting of these are the two dense and young ($\aplt 10$Myr) star clusters Arches [@2002ApJ...581..258F] and the Quintuplet [@1999ApJ...514..202F], and the recently discovered groups IRS13E and IRS16SW [@2005astro.ph..4276L]. In this paper we study the relation between the star clusters in the inner $\sim 100$pc of the Galactic center and, to some extent, the partial formation of the central supermassive black hole. In particular we simulate the evolution of star clusters born over a range of distances from the Galactic center. While we follow their internal dynamical evolution, we allow the star clusters to spiral inwards towards the Galactic center until they dissolve in the background. During this process a runaway collision may occur in the cluster, and we follow the continuing spiral-in of the resulting intermediate mass black hole. Our prescription for building an intermediate mass black hole has been well established in numerous papers concerning stellar collision runaways in dense star clusters. We build on these earlier results for our description of the collision runaway and the way in which it leads to the formation of a black hole of intermediate mass. Eventually the IMBHs merge with the supermassive black hole, building the SMBH in the process. This model was initially proposed by [@2001ApJ...562L..19E], and here we validate the model by detailed simulations of the dynamical evolution of individual star clusters and the final spiral-in of the IMBH toward the SMBH.
Using the results of the direct N-body simulations we calibrate a semi-analytic model to simulate a population of star clusters born within $\sim 100$pc over the age of the Galaxy. Collision Runaways and Cluster Inspiral {#Sect:picture} ======================================= A substantial fraction of stars are born in clusters, and these have power-law stellar mass functions fairly well described by a “Salpeter” exponent of -2.35, with stellar masses ranging from the hydrogen burning limit ($\sim 0.08\,{\mbox{${\rm M}_\odot$}}$) or a bit above [@2005ApJ...628L.113S] to an upper limit of $\sim100\,{\mbox{${\rm M}_\odot$}}$ or possibly as high as $150\,{\mbox{${\rm M}_\odot$}}$ [@2005Natur.434..192F]. The massive stars start to sink to the cluster center immediately after birth, driving the cluster into a state of core collapse on a time scale ${{\mbox{$t_{\rm cc}$}}}\simeq 0.2{\mbox{${t_{\rm rh}}$}}$ [@2002ApJ...576..899P; @2004ApJ...604..632G], where [@1971ApJ...166..483S] $${\mbox{${t_{\rm rh}}$}}\simeq 2\,{\rm Myr} \left( {r \over [{\rm pc}]} \right)^{3/2} \left( {m \over [{\mbox{${\rm M}_\odot$}}]} \right)^{-1/2} {n \over {\mbox{${\log \lambda}$}}}.$$ Here $m$ is the cluster mass, $r$ is its half-mass radius, $n$ is the number of stars, and ${\mbox{${\log \lambda}$}}\simeq \log(0.1n)\sim10$. In sufficiently compact clusters the formation of a dense central subsystem of massive stars may lead to a “collision runaway,” where multiple stellar mergers result in the formation of an unusually massive object. If the mass of this runaway grows beyond $\sim300\,{\mbox{${\rm M}_\odot$}}$ it collapses to an IMBH without losing significant mass in a supernova explosion [@2003ApJ...591..288H]. Recently, this model has been applied successfully to explain the ultraluminous X-ray source associated with the star cluster MGG-11 in the starburst galaxy M82 [@2004Natur.428..724P].
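The relaxation-time formula above is straightforward to evaluate. The sketch below is ours, not the paper's: the cluster parameters are illustrative, and we take the Coulomb logarithm $\log\lambda=\ln(0.1n)$ as a natural logarithm (consistent with the stated $\log(0.1n)\sim10$ for clusters of this size).

```python
import math

def t_rh_myr(m_msun, r_pc, n):
    """Half-mass relaxation time from the formula above:
    t_rh ~ 2 Myr (r/pc)^(3/2) (m/Msun)^(-1/2) n / log(0.1 n)."""
    log_lam = math.log(0.1 * n)  # natural log, so log(0.1 n) ~ 10
    return 2.0 * r_pc**1.5 * m_msun**-0.5 * n / log_lam

# A 65536-star cluster with mean stellar mass ~0.6 Msun and a
# half-mass radius of 0.5 pc (illustrative numbers only).
n = 65536
m = 0.6 * n
t_rh = t_rh_myr(m, 0.5, n)
t_cc = 0.2 * t_rh  # core-collapse time scale, t_cc ~ 0.2 t_rh
print(round(t_rh, 1), "Myr;", round(t_cc, 1), "Myr")
```

For these numbers the core-collapse time comes out at a few Myr, i.e. of the same order as the $\sim3$ Myr supernova deadline discussed in the next section, illustrating why only sufficiently compact clusters produce a runaway.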
This model for creating an intermediate mass black hole in a dense star cluster was adopted by [@2005ApJ...628..236G], who continued by studying the evolution of massive $\apgt 10^6\,{\mbox{${\rm M}_\odot$}}$ star clusters within about 60pc from the Galactic center. Their conclusions are consistent with the earlier $N$-body models [@2000ApJ...545..301K; @2003ApJ...593..352P; @2003ApJ...596..314M; @2004ApJ...607L.123K] and analytic calculations [@2001ApJ...546L..39G] in that massive clusters can reach the Galactic center, but in doing so they populate the inner few parsecs with a disproportionately large number of massive stars. The main requirement for a successful collision runaway is that the star cluster must experience core collapse (i) before the most massive stars explode as supernovae ($\sim3$Myr) and (ii) before the cluster dissolves in the Galactic tidal field. The collisional growth rate slows dramatically once the runaway collapses to an IMBH. We estimate the maximum runaway mass achievable by this process as follows. For compact clusters (${\mbox{${t_{\rm rh}}$}}\aplt100$ Myr), essentially all the massive stars reach the cluster core during the course of the runaway, and the runaway mass scales with the cluster mass: ${\mbox{${m_{\rm r}}$}}\simeq8\times10^{-4}m\,{\mbox{${\log \lambda}$}}$ [@2002ApJ...576..899P]. For systems with longer relaxation times, only a fraction of the massive stars reach the core in time, and the runaway mass scales as $m{\mbox{${t_{\rm rh}}$}}^{-1/2}$ [@2004astro.ph.12622M] (see their Eq. 11). The relaxation-based argument may result in higher-mass runaways in star clusters with a very small relaxation time compared to the regime studied in Monte Carlo N-body simulations [@2004astro.ph.12622M].
A convenient fitting formula combining these scalings, calibrated by N-body simulations for Salpeter-like mass functions, is [@2002ApJ...576..899P; @2004astro.ph.12622M] $${\mbox{${m_{\rm r}}$}}\sim 0.01 m \left( 1 + \frac{{\mbox{${t_{\rm rh}}$}}}{\rm 100 Myr} \right)^{-1/2}\,.$$ Early dissolution of the cluster reduces the runaway mass by prematurely terminating the collision process. As core collapse proceeds, the orbit of the cluster decays by dynamical friction with the stars comprising the nuclear bulge. The decay of a circular cluster orbit of radius $R$ is described by (see \[Eq. 7-25\] in [@1987gady.book.....B], or [@2003ApJ...596..314M] for the more general case): $$\frac{dR}{dt} = -0.43 {Gm {\mbox{${\log \Lambda}$}}\over R^{(\alpha + 1)/2} v_c}\,, \label{Eq:df}$$ where $v_c^2 = GM(R)/R$, $\alpha = 1.2$, $M(R)$ is the mass within a distance $R$ from the Galactic center, and we take ${\mbox{${\log \Lambda}$}}\sim8$ [@2003MNRAS.344...22S]. Numerical solution of this equation is required due to the complicating effects of stellar mass loss, which drives an adiabatic expansion of the cluster, and of tidal stripping, whereby the cluster mass tends to decrease with time according to $m(t) = m_0(1 - \tau/{\mbox{${t_{\rm dis}}$}})$ [@2002ApJ...576..899P]. Here $m_0$ is the initial mass of the cluster, $\tau$ is the cluster age in terms of the instantaneous relaxation time (${\mbox{${t_{\rm rJ}}$}}$) within the Jacobi radius, and ${\mbox{${t_{\rm dis}}$}}$ is the time scale for cluster disruption: ${\mbox{${t_{\rm dis}}$}}\simeq 0.29{\mbox{${t_{\rm rJ}}$}}$ [^1]. Even after the bulk of the cluster has dissolved, a dense stellar cusp remains surrounding the newborn IMBH, and accompanies it on its descent toward the Galactic center. The total mass of stars in the cusp is typically comparable to that of the IMBH itself [@2004ApJ...613.1143B], and it is composed predominantly of massive stars, survivors of the population that initiated the core collapse during which the IMBH formed.
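The two ingredients above (the runaway-mass fit and the orbital-decay equation) can be sketched numerically. This is our own toy version, in made-up $G=1$ units with an arbitrary enclosed-mass normalization, and it deliberately ignores the stellar mass loss and tidal stripping that the text says make a full numerical treatment necessary.

```python
def runaway_mass(m_msun, t_rh_myr):
    """Fitting formula above: m_r ~ 0.01 m (1 + t_rh/100 Myr)^(-1/2)."""
    return 0.01 * m_msun * (1.0 + t_rh_myr / 100.0) ** -0.5

def inspiral_time(m, R0, Rf, M_of_R, log_Lambda=8.0, alpha=1.2, dt=1e-3):
    """Forward-Euler integration of
    dR/dt = -0.43 G m logLambda / (R^((alpha+1)/2) v_c),
    with v_c = sqrt(G M(R)/R), in G = 1 units.  M_of_R is the
    enclosed-mass profile (a toy choice below)."""
    G, R, t = 1.0, R0, 0.0
    while R > Rf:
        vc = (G * M_of_R(R) / R) ** 0.5
        R += -0.43 * G * m * log_Lambda / (R ** ((alpha + 1.0) / 2.0) * vc) * dt
        t += dt
    return t

# Toy enclosed mass M(R) = 1000 R^1.2, consistent with alpha = 1.2.
t = inspiral_time(m=1.0, R0=2.0, Rf=0.1, M_of_R=lambda R: 1000.0 * R**1.2)
print(round(t, 1))  # inspiral time in code units
```

Because $|dR/dt|$ grows as $R$ shrinks, the decay accelerates toward the center, so the total inspiral time stays finite; this is the qualitative behavior the semi-analytic model relies on.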
Eventually even that cusp slowly decays by two-body relaxation [@2003ApJ...593L..77H], depositing a disproportionately large number of massive stars and the orphaned IMBH close to the Galactic center [@2005ApJ...628..236G]. Ultimately, the IMBH merges with the SMBH. Simulating star clusters within $\sim 100$pc from the Galactic center {#Sect:Nbody} ===================================================================== We have performed extensive direct N-body calculations to test the validity of the general scenario presented above, and to calibrate the semi-analytic model. Our analysis combines several complementary numerical, analytical and theoretical techniques in a qualitative model for the formation and evolution of the nuclear bulge of the Milky Way Galaxy. The semi-analytic model outlined in Sect.\[Sect:picture\], which builds on equation \[Eq:df\] and [@2003ApJ...596..314M], relies on simple characterizations of physical processes, which we calibrate using large-scale N-body simulations. The initial conditions for these simulations are selected to test key areas of the parameter space for producing IMBHs in the inner $\sim 100$pc of the Galactic center. The N-body calculations employ direct integration of Newton's equations of motion, while accounting for complications such as dynamical friction and tidal effects due to the Galactic field, stellar and binary evolution, physical stellar sizes and the possibility of collisions, and the presence of a supermassive black hole in the Galactic center. Two independent but conceptually similar programs are used: (1) the “kira” integrator, part of the Starlab software environment (see [http://www.manybody.org/$\sim$manybody/starlab.html]{}, [@2001MNRAS.321..199P]), and (2) NBODY4 (see [http://www.sverre.org]{}) [@Aarseth2003].
Both codes achieve their greatest speed, as in the simulations reported here, when run in conjunction with the special-purpose GRAPE-6 hardware accelerator (see [http://www.astrogrape.org]{}) [@2003PASJ...55.1163M]. Both kira and NBODY4 incorporate parametrized treatments of stellar evolution and allow for the possibility of direct physical collisions between stars, thus including the two key physical elements of the runaway scenario described here (see also [-@2004Natur.428..724P]). A collision occurs if the distance between two stars becomes smaller than the sum of the stellar radii, except that, for collisions involving black holes, we use the tidal radius instead. During a collision, mass and momentum are conserved. These are reasonable assumptions, since the relative velocity of any two colliding stars is typically much smaller than the escape speed from either stellar surface [@2003MNRAS.345..762L; @2005MNRAS.358.1133F]. We performed N-body simulations of star clusters containing up to 131,072 stars and starting at $R=1$, 2, 4, 10 and 100pc from the Galactic center, with various initial concentrations ($W_0=6$ and 9) and with lower limits to the initial mass function of 0.1 and 1${\mbox{${\rm M}_\odot$}}$. These simulations were carried out as part of the calibration of the semi-analytic model presented in Sect.\[Sect:analytic\]. One such comparison is presented in Figure\[fig:N-body\], which shows the orbital evolution and runaway growth in a star cluster born with 65,536 stars in a circular orbit at a distance of 2pc from the Galactic center. The solid lines in the figure result from the semi-analytic model (based on equation \[Eq:df\] and [@2003ApJ...596..314M]), while the high-precision N-body calculations are represented by dotted lines. The two match quite well, indicating that the simple analytic model satisfactorily reproduces the general features and physical scales of the evolution.
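The collision prescription just described (merge two bodies when their separation falls below the sum of their effective radii, using the tidal radius for black holes, while conserving mass and momentum) can be sketched as follows. The particle tuple layout and the merged-radius choice are our illustrative assumptions, not the actual kira or NBODY4 data structures.

```python
import math

def effective_radius(radius, tidal_radius, is_bh):
    """Stars contribute their physical radius; black holes their tidal radius."""
    return tidal_radius if is_bh else radius

def try_merge(p1, p2):
    """p = (mass, position, velocity, radius, tidal_radius, is_bh).
    Return the merged particle if the two collide, else None."""
    m1, x1, v1, r1, rt1, bh1 = p1
    m2, x2, v2, r2, rt2, bh2 = p2
    sep = math.dist(x1, x2)
    if sep >= effective_radius(r1, rt1, bh1) + effective_radius(r2, rt2, bh2):
        return None                                      # no collision
    m = m1 + m2                                          # mass conserved
    v = tuple((m1 * a + m2 * b) / m for a, b in zip(v1, v2))  # momentum conserved
    x = tuple((m1 * a + m2 * b) / m for a, b in zip(x1, x2))  # center of mass
    # Merged radius as the larger of the two is a placeholder assumption.
    return (m, x, v, max(r1, r2), max(rt1, rt2), bh1 or bh2)
```

Because the relative velocity of colliding stars is much smaller than either escape speed, as noted above, treating mergers as fully inelastic with no mass loss is a reasonable first approximation.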
As the cluster in Figure\[fig:N-body\] sinks toward the Galactic center, it produces one massive star through the collision runaway process. In Figure\[fig:image\] we show a snapshot of this simulation projected in three different planes at an age of 0.35Myr. By this time $\sim 30\%$ of the cluster has already dispersed and its stars have spread out into the shape of a disk spanning the inward-spiraling orbit. By the time of Figure \[fig:image\], a $\sim 1100\,{\mbox{${\rm M}_\odot$}}$ collision runaway star has formed in the cluster center; this object subsequently continues to grow by repeated stellar collisions. The growth of the collision runaway is indicated by the dotted line in Figure\[fig:N-body\] running from bottom left to top right (scale on the right vertical axis). By an age of about 0.7Myr the cluster is almost completely disrupted and the runaway process terminates. After the cluster dissolves, the IMBH continues to sink toward the Galactic center, still accompanied by 10–100 stars which initially were among the most massive in the cluster. Near the end of its lifetime, the runaway star loses about 200${\mbox{${\rm M}_\odot$}}$ in a stellar wind and subsequently collapses to a $\sim 1000\,{\mbox{${\rm M}_\odot$}}$ IMBH at about 2.4Myr. The IMBH and its remaining stellar companions continue to sink toward the Galactic center. The continuing “noise” in the dotted curve in Figure \[fig:N-body\] reflects the substantial eccentricity of the IMBH orbit. At an age of 2.5–3Myr, the remnant star cluster, consisting of an IMBH orbited by a few of the most massive stars and quite similar to the observed star cluster IRS13, arrives in the inner 0.1pc of the Galaxy (see Sect.\[Sect:Observations\]).
Merger with the central black hole {#Sect:finalparsec}
==================================

When the IMBH arrives within about 0.1pc of the Galactic center, the standard formula for dynamical friction [@1987gady.book.....B] becomes unreliable, as the background velocity dispersion increases and the effects of individual encounters become more significant. It is important, however, to ascertain whether the IMBH spirals all the way into the SMBH, or stalls in the last tenth of a parsec, as higher-mass black holes may tend to do [@2005ApJ...621L.101M]. To determine the time required for the IMBH to reach the central SMBH, we have performed additional N-body calculations, beginning with a 1000${\mbox{${\rm M}_\odot$}}$ and a 3000${\mbox{${\rm M}_\odot$}}$ IMBH in circular orbits at a distance of 0.1pc from the Galactic center. Both IMBHs are assumed to have shed their parent cluster by the start of the simulation. The inner parsec of the Galaxy is modeled by 131,071 identical stars with a total stellar mass of $4\times 10^6\,{\mbox{${\rm M}_\odot$}}$, distributed according to an $R^{-1.75}$ density profile; a black hole of $3\times 10^6\,{\mbox{${\rm M}_\odot$}}$ resides at the center. The region within a milliparsec of the central SMBH is depleted of stars in our initial conditions. This is supported by the fact that the total Galactic mass inside that radius, excluding the central SMBH, is probably less than $10^3\,{\mbox{${\rm M}_\odot$}}$ [@1998ApJ...509..678G; @2003ApJ...594..812G]. We stop the calculations as soon as the IMBH reaches this distance. Figure\[fig:final\_parsec\] (see also the dotted line in Fig.\[fig:N-body\]) shows the orbital evolution of the 1000${\mbox{${\rm M}_\odot$}}$ and 3000${\mbox{${\rm M}_\odot$}}$ IMBHs in our simulations. Although the black-hole orbits are initially circular, eccentricities on the order of $\aplt 0.6$ are induced quite quickly by close encounters with field stars.
The rate of spiral-in near the SMBH is smaller than farther out, because the increasing velocity dispersion tends to reduce the effect of dynamical friction and because the IMBH reaches the inner depleted area. The central milliparsec was initially empty in our simulations, and there was insufficient time to replenish it during our calculations. It is unlikely that sufficient stellar mass exists within this region for dynamical friction to drive the IMBH much closer to the SMBH. (Interestingly, this distance is comparable to the orbital semi-major axis of the star S0-2, which is observed in a 15 year orbit around the Galactic center [@2003ApJ...586L.127G].) The time scale for a 1 mpc orbit to decay by gravitational radiation exceeds the age of the Galaxy for circular motion, so unless the IMBH orbit is already significantly eccentric, or is later perturbed to higher eccentricity ($\apgt 0.9$ to reduce the merger time to $\aplt 10^9$years) by encounters with field stars or another IMBH, the orbital decay effectively stops near the central SMBH. While the IMBH stalls, another star cluster may form, sink toward the Galactic center, and give rise to a new IMBH which subsequently arrives in the inner milliparsec (see Sec. 5). This process will be repeated for each new IMBH formed, until interactions become frequent enough to drive a flux of IMBHs into the loss cone where gravitational radiation can complete the merger process. We can estimate the number of IMBHs in a steady state in the inner few milliparsecs of the SMBH. The time scale for a close (90 degree deflection) encounter in a system of $n_{\rm IMBH}$ IMBHs is $$t_{\rm close} \sim \left(\frac{M_{\rm SMBH}}{m_{\rm IMBH}}\right)^2 \frac{t_{\rm orb}}{n_{\rm IMBH}}\,,$$ where $M_{\rm SMBH}$ and $m_{\rm IMBH}$ are the masses of the SMBH and the IMBH, respectively, and $t_{\rm orb} \sim 1-10$ years is the typical orbital period at a distance of 1 mpc from the SMBH. 
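The gravitational-radiation time scales invoked above can be checked against the standard Peters (1964) quadrupole formula. A minimal sketch, assuming a $10^3\,{\mbox{${\rm M}_\odot$}}$ IMBH and a $3\times10^6\,{\mbox{${\rm M}_\odot$}}$ SMBH separated by 1 mpc, with the leading-order $(1-e^2)^{7/2}$ eccentricity correction:

```python
# Sketch: Peters (1964) gravitational-wave decay time for the stalled
# IMBH--SMBH pair, checking the time scales quoted in the text.
G, C, MSUN, PC, YR = 6.674e-11, 2.998e8, 1.989e30, 3.086e16, 3.156e7

def t_gw_years(a_pc, m1_msun, m2_msun, e=0.0):
    """GW merger time (years) for a binary of semi-major axis a_pc (pc)."""
    a = a_pc * PC
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    t_circ = (5.0 / 256.0) * C**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))
    return (1.0 - e**2)**3.5 * t_circ / YR

# t_gw_years(1e-3, 3e6, 1e3)        -> ~6e10 yr, exceeding the Galaxy's age
# t_gw_years(1e-3, 3e6, 1e3, e=0.9) -> ~2e8 yr, below 1e9 yr
```

The circular-orbit value exceeds the age of the Galaxy, while an eccentricity of 0.9 brings the merger time below $10^9$ years, consistent with the criterion stated above.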
For $M_{\rm SMBH} \sim 10^6{\mbox{${\rm M}_\odot$}}$ and $m_{\rm IMBH} \sim 10^3{\mbox{${\rm M}_\odot$}}$, we find $t_{\rm close} \sim 1-10 \times 10^6/n_{\rm IMBH}$ years, comparable to the in-fall time scale unless $n_{\rm IMBH}$ is large. Close encounters are unlikely to eject IMBHs from the vicinity of the Galactic center, but they do drive the merger rate by replenishing the loss cone around the SMBH [@2004ApJ...606..788M; @2004ApJ...616..221G]. As IMBHs accumulate, the cusp around the SMBH eventually reaches a steady state in which the merger rate equals the rate of in-fall, with a roughly constant population of a few IMBHs within about a milliparsec of the SMBH. A comparable analysis was performed by [@2004ApJ...606L..21A] for stellar-mass black holes around the SMBH, and if we scale their results to IMBHs we arrive at a similar steady-state population.

The evolution of a population of star clusters {#Sect:analytic}
==============================================

We now turn to the overall evolution of the population of clusters which gave rise to the nuclear bulge. We have performed a Monte-Carlo study of the cluster population, adopting a star formation rate that declines as $1/t$ over the past 10 Gyr [@2004Natur.428..625H]. Cluster formation times are selected randomly following this star formation history, and masses are assigned as described below, until the total mass equals the current mass of the nuclear bulge within 100 pc of the Galactic center—about $10^9\,{\mbox{${\rm M}_\odot$}}$. The total number of clusters thus formed is $\sim 10^5$ over the 10 Gyr period. For each cluster, we select a mass ($m$) randomly from a cluster initial mass function which is assumed to follow the mass distribution observed in starburst galaxies—a power-law of the form $N(m) \propto m^{-2}$ between $\sim10^3$ and $\sim10^7\,{\mbox{${\rm M}_\odot$}}$ [@1999ApJ...527L..81Z].
The distance to the Galactic center ($R$) is again selected randomly, assuming that the radial distribution of clusters follows the current stellar density profile in the bulge between 1pc and 100pc [@1997MNRAS.284..576E; @1998ApJ...509..678G]. The current distribution of stars should largely reflect the distribution at formation, because the orbits of individual stars do not evolve significantly; only the orbits of the more massive star clusters do. The initial density profiles of the clusters are assumed to be $W_0=6$–9 King models. This choice of highly concentrated King models is supported by the recent theoretical understanding by [@2004ApJ...608L..25M] of the relation between age and core radius for young star clusters in the Large Magellanic Cloud observed by [@2003MNRAS.338...85M]. We establish a cluster mass-radius relation by further assuming that clusters are born precisely filling their Jacobi surfaces in the Galactic tidal field. This provides a lower limit to the fraction of clusters that produce an IMBH and sink to the Galactic center. The evolution of each cluster, including specifically the moment at which it undergoes core collapse, the mass of the collision runaway (if any) produced, and the distance from the Galactic center at which the cluster dissolves, is then calculated deterministically using our semi-analytic model. After cluster disruption, the IMBH continues to sink by dynamical friction, eventually reaching the Galactic center.

Results of the cluster population model {#Sect:Results}
---------------------------------------

Figure\[fig:MC\] summarizes how the fates of the star clusters in our simulation depend on $m$ and $R$. Open and filled circles represent initial conditions that result in an IMBH reaching the central parsec by the present day. The various lines define the region of parameter space expected to contribute to the population of IMBHs within the central parsec, as described in the caption.
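The sampling scheme described above (formation times following the $1/t$ star formation history, masses from the $m^{-2}$ distribution, accumulated until the bulge mass is reached) can be sketched with inverse-transform sampling. The 0.1 Gyr lower cutoff on formation times is our assumption, needed only to normalize the $1/t$ rate.

```python
import random

# Sketch of the population synthesis: formation times drawn from a star
# formation rate declining as 1/t over 10 Gyr, and cluster masses from
# N(m) ~ m^-2 between 1e3 and 1e7 Msun, accumulated until the nuclear
# bulge mass (~1e9 Msun) is reached. Cutoffs as quoted in the text,
# except the assumed 0.1 Gyr lower time bound.

def sample_population(total_mass=1e9, m_lo=1e3, m_hi=1e7,
                      t_lo=0.1, t_hi=10.0, seed=42):
    rng = random.Random(seed)
    clusters, mass = [], 0.0
    while mass < total_mass:
        # inverse-transform sampling of SFR(t) ~ 1/t on [t_lo, t_hi] Gyr
        t = t_lo * (t_hi / t_lo) ** rng.random()
        # inverse-transform sampling of N(m) ~ m^-2 on [m_lo, m_hi]
        u = rng.random()
        m = 1.0 / (1.0 / m_lo - u * (1.0 / m_lo - 1.0 / m_hi))
        clusters.append((t, m))
        mass += m
    return clusters
```

With these bounds the mean cluster mass is $\ln(m_{\rm hi}/m_{\rm lo})/(1/m_{\rm lo}-1/m_{\rm hi}) \approx 9\times10^3\,{\mbox{${\rm M}_\odot$}}$, so accumulating $10^9\,{\mbox{${\rm M}_\odot$}}$ yields $\sim10^5$ clusters, matching the number quoted above.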
Here we emphasize that our results depend linearly on the fraction of stars in the bulge that form in star clusters. The number of star clusters and IMBHs is proportional to this factor, which is not necessarily constant with time. Bear in mind also that, though theoretical uncertainties are about a factor of two, the systematic uncertainties can be much larger and depend critically on various assumptions in the models, like the amount of mass loss in the stellar wind of the collision product and the fate of the stellar remnant in the supernova explosion. The results of our calculations may be summarized as follows:

1. 5%–10% of star clusters born within 100pc of the Galactic center produce an IMBH.

2. The mean mass of IMBHs now found in the inner 10pc is $\sim 1000\,{\mbox{${\rm M}_\odot$}}$, whereas IMBHs between 90 and 100pc average $\sim 500\,{\mbox{${\rm M}_\odot$}}$.

3. Over the age of the Galaxy ($\sim10^{10}$ years) a total of 1000–3000 IMBHs have reached the Galactic center, carrying a total mass of $\sim 1\times10^6{\mbox{${\rm M}_\odot$}}$. Here the range stems from variations in the adopted stellar mass function.

4. At any instant, approximately 50 IMBHs reside in the inner 10pc, about ten times that number lie within the nuclear star cluster (inner 30pc), and several lie within the innermost few tenths of a parsec.

5. One in every $\sim 30$ IMBHs is still accompanied by a remnant of its young (turn-off mass $\apgt 10\,{\mbox{${\rm M}_\odot$}}$) star cluster when it reaches the inner parsec, resulting in a few IMBHs at any time in the inner few parsecs with young stars still bound to them, much like IRS13E or IRS16SW.

On the basis of our N-body simulations of the central 0.1 pc in Sect.\[Sect:finalparsec\], we expect that the majority of IMBHs which arrive in the Galactic center eventually merge with the SMBH on a time scale of a few Myr, driven by the emission of gravitational radiation and interactions with local field stars and other IMBHs.
In our simulations the in-fall rate has increased over the lifetime of the Galaxy (following our assumed star formation rate), from one arrival per $\sim 20$Myr to the current value of one every $\sim 5$Myr, with a time-averaged IMBH in-fall rate of roughly one per $\sim 7$Myr. (A lower minimum mass in the initial mass function produces higher in-fall rates.) Some of the field stars near the SMBH may be ejected from the Galactic center with velocities of up to $\sim 2000$km/s following encounters with the hard binary system formed by the IMBH and the central SMBH [@1988Natur.331..687H; @2003ApJ...599.1129Y; @2005MNRAS.363..223G]. Support for this possibility comes from the recent discovery of SDSS J090745.0+024507, a B9 star with a measured velocity of 709km/s directed away from the Galactic center [@2005ApJ...622L..33B]. IMBHs are potentially important sources of gravitational wave radiation. A merger between a 1000${\mbox{${\rm M}_\odot$}}$ IMBH and a $\sim 3 \times 10^6\,{\mbox{${\rm M}_\odot$}}$ SMBH would be detectable by the LISA gravitational wave detector to a distance of several billion parsecs. Assuming that the processes just described operate in most spiral galaxies, which have a density of roughly 0.002 Mpc$^{-3}$ [@2004MNRAS.353..713K], we estimate a detectable IMBH merger rate of around two per week, with a signal-to-noise ratio of $\sim 10^3$.

The current cluster population {#Sect:currentclusters}
------------------------------

Our semi-analytic model for the evolution of star clusters in the inner $\sim 100$pc of the Galaxy yields a steady-state distribution of cluster masses which we can compare with observed star clusters in the vicinity of the Galactic center. Figure\[fig:Borissova\] compares the observed mass distribution of young star clusters in the bulge with our steady-state solution. The data include the Arches cluster [@2002ApJ...581..258F], the Quintuplet [@1999ApJ...514..202F], IRS13E, IRS16SW [@2005astro.ph..4276L], and 7 recently discovered star clusters with reliable mass estimates.
For comparison we show a realization of the present-day population of star cluster masses generated by our semi-analytic model. Using the adopted declining star-formation rate from Sect.\[Sect:analytic\] (see [@2004Natur.428..625H]), we find about $\sim 50$ star clusters within the central 100 pc at any given time, consistent with the earlier prediction of [@2001ApJ...546L.101P]. Assuming a flat (i.e. uniform) star formation rate, we predict $\sim 400$ clusters in the same region, about an order of magnitude more than currently observed. In our semi-analytic model, about 15% of all present-day star clusters host an IMBH or are in the process of producing one. Between 1% and 8% of star clusters with a present-day mass less than $10^4\,{\mbox{${\rm M}_\odot$}}$ contain an IMBH, whereas more than 80% of clusters with masses between 30,000 and $2\times 10^5\,{\mbox{${\rm M}_\odot$}}$ host an IMBH. For more massive clusters the probability of forming an IMBH drops sharply. Finally, we note that we are rather unlikely to find an orphaned very massive ($\apgt 200\,{\mbox{${\rm M}_\odot$}}$) star. During the last 1 Gyr, only about 10–40 such objects have formed in the inner 100pc of the Galaxy. Lower-mass merger products, however, are quite common. The Pistol star [@1999ApJ...514..202F] may be one observational example.

Discussion
==========

Evolution of the merger product {#Sect:massloss}
-------------------------------

One of the main uncertainties in our calculations is whether or not mass gain by stellar collisions exceeds mass loss by stellar winds. Although the accretion rate in our models is very high, mass loss rates in massive stars are uncertain, and it is conceivable that sufficiently high mass loss rates might prevent the merger product from achieving a mass of more than a few hundred ${\mbox{${\rm M}_\odot$}}$. Mass loss in massive ($\apgt 100\,{\mbox{${\rm M}_\odot$}}$) stars may be radiatively driven by optically thin lines. In this case it is possible to derive upper limits to the mass loss.
Such calculations, including the von Zeipel [@1924MNRAS..84..665V] effect for stars close to the Eddington-Gamma limit, indicate that stellar wind mass loss rates may approach $10^{-3}\,{\mbox{${\rm M}_\odot$}}$yr$^{-1}$. If the star is rotating near the critical rate, the mass loss rate may be even larger. Outflow velocities, however, may be so small that part of the material falls back onto the equatorial zone, where the mass loss is least. These calculations match the observed mass loss rates for $\eta$ Carinae, which has a peak mass loss rate of $1.6\pm 0.3\times10^{-3}\,{\mbox{${\rm M}_\odot$}}$yr$^{-1}$ (assuming spherical symmetry) during normal outbursts, falling to $10^{-5}\,{\mbox{${\rm M}_\odot$}}$yr$^{-1}$ during the intervening 5.5 years. For young ($\aplt4$Myr) O stars in the Small Magellanic Cloud, low ($\aplt 10^{-8}\,{\mbox{${\rm M}_\odot$}}$/yr) mass loss rates were observed, indicating that massive stars may have much lower mass loss rates until they approach the end of their main-sequence lifetimes. Thus it remains unclear whether the periods of high mass loss persist for long enough to seriously undermine the runaway scenario adopted here. We note that the collision runaways in our simulations are initiated by the arrival of a massive star in the cluster core. If such a star grows to exceed $\sim 300\,{\mbox{${\rm M}_\odot$}}$, most collisions occur within the first 1.5Myr of the cluster evolution. The collision rate during the period of rapid growth typically exceeds one per $\sim 10^4$ years, sustained over about 1 Myr, resulting in an average mass accretion rate exceeding $10^{-3}\,{\mbox{${\rm M}_\odot$}}$/yr, comparable to, and possibly exceeding, the maximum mass loss rates derived for massive stars. Furthermore, in our N-body simulations (and in the semi-analytic model), the stellar mass loss rate increases with time, with little mass loss at the zero-age main sequence and substantially more near the end of the main-sequence stellar lifetime ($\dot{m}_{\rm wind} \propto L^{2.2}$). In other words, mass loss rates are relatively low while most of the accretion is occurring.
This prescription for the mass loss rate matches that of evolutionary calculations for massive Wolf-Rayet stars. We also emphasize that a large mass loss rate in the merger product [*cannot*]{} prevent the basic mass segregation and collision process, even though it might significantly reduce the final growth rate. These findings are consistent with recent N-body simulations of small clusters in which the assumed mass loss rate from massive ($>120\,{\mbox{${\rm M}_\odot$}}$) stars exceeded $10^{-3}\,{\mbox{${\rm M}_\odot$}}$/yr [@2004astro.ph.10510B]. The stellar evolution of a runaway merger product has never been calculated in detail, and is poorly understood. However, it is worth mentioning that its thermal time scale significantly exceeds the mean time between collisions. Even if the star grows to $\apgt 10^3\,{\mbox{${\rm M}_\odot$}}$, the thermal time scale will be 1–$4\times 10^4$ years, still comparable to the mean time between collisions. The accreting object will therefore be unable to reestablish thermal equilibrium before the next collision occurs. We note in passing that the supermassive star produced in the runaway collision may be hard to identify by photometry if the cluster containing it cannot be resolved: the runaway is mainly driven by collisions between massive stars, which themselves have luminosities close to the Eddington-Gamma limit. Since the Eddington luminosity scales linearly with mass, a collection of luminous blue variables at the Eddington luminosity is comparable in brightness to an equally massive single star. Spectroscopically, however, the collision runaway may be very different. Mass loss in the form of a dense stellar wind before the supernova can dramatically reduce the mass of the final black hole, or could even prevent black hole formation altogether [@2003ApJ...591..288H]. The runaway merger in Fig.\[fig:N-body\] develops a strong stellar wind near the end of its lifetime before collapsing to a $\sim1000\,{\mbox{${\rm M}_\odot$}}$ IMBH at $\sim2.4$Myr.
It is difficult to quantify the effect of stellar winds on the final IMBH mass, because the mass loss rate of such a massive star remains uncertain. However, it is important to underscore here the qualitative result that stellar winds are unable to prevent the occurrence of repeated collisions, and significantly limit the outcome only if the mass loss rate is very high—more than $\sim10^{-3}\,{\mbox{${\rm M}_\odot$}}/{\rm yr}$—and sustained over the lifetime of the star.

The star clusters IRS13E and IRS16SW {#Sect:Observations}
====================================

The best IMBH candidate in the Milky Way Galaxy was recently identified in the young association IRS13E in the Galactic center region. IRS13E is a small cluster of stars containing three stars of spectral type O5I to O5III and four Wolf-Rayet stars, totaling at most $\sim300\,{\mbox{${\rm M}_\odot$}}$. (The recently discovered cluster IRS16SW [@2005astro.ph..4276L] also lies near the Galactic center and reveals similarly interesting stellar properties.) Both clusters are part of the population of helium-rich bright stars in the inner parsec of the Galactic center. With a “normal” stellar mass function, as found elsewhere in the Galaxy, stars as massive as those in IRS13E are extremely rare, occurring only once in every $\sim 2000$ stars. However, in the Galactic center, a “top-heavy” mass function may be common [@2004astro.ph..9415F; @2005ApJ...628L.113S]. The mean proper motion of five stars in IRS13E is $\langle v\rangle_{\rm 2D} = 245$km/s [@2005ApJ...625L.111S]; an independent measurement of four of these stars yields $270$km/s. If IRS13E were part of the rotating central stellar disk [@2003ApJ...594..812G], this would place the cluster $\sim 0.12$pc behind the plane on the sky containing the SMBH, increasing its galactocentric distance to about 0.18pc, consistent with a circular orbit around the SMBH at the observed velocity.
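The consistency check above can be made explicit. A small sketch, assuming a point-mass SMBH of $3.5\times10^6\,{\mbox{${\rm M}_\odot$}}$ (our assumed value within the commonly quoted range) and neglecting the enclosed stellar mass:

```python
# Sketch: circular orbital speed around the SMBH, to check that the
# measured proper motions of IRS13E (~245-270 km/s at ~0.18 pc) and
# IRS16SW (~205 km/s at ~0.4 pc) are consistent with circular orbits.
# The SMBH mass below is an assumption; enclosed stars are neglected.
G_PC = 4.301e-3   # G in units of pc (km/s)^2 / Msun

def circular_velocity(m_msun, r_pc):
    """Circular speed (km/s) at radius r_pc around a point mass m_msun."""
    return (G_PC * m_msun / r_pc) ** 0.5

# circular_velocity(3.5e6, 0.18) -> ~290 km/s
# circular_velocity(3.5e6, 0.40) -> ~195 km/s
```

Both values fall close to the measured mean proper motions, supporting the interpretation of the two associations as clusters on near-circular orbits about the SMBH.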
The five IRS16SW stars have $\langle v\rangle_{\rm 2D} \simeq 205$km/s [@2005astro.ph..4276L], corresponding to the circular orbit speed at a somewhat larger distance ($\sim 0.4$pc). The greatest distance between any two of the five stars in IRS13E with known velocities is $\sim 0.5$ seconds of arc (0.02pc at 8.5kpc), providing a lower limit on the Jacobi radius: ${{r_{\rm J}}}\apgt 0.01$pc. It then follows from the Hills equation (${{r_{\rm J}}}^3 \simeq R^3 m/M$) that the minimum mass required to keep the stars in IRS13E bound is about 1300${\mbox{${\rm M}_\odot$}}$. A more realistic estimate is obtained by using the measured velocities of the observed stars in the expression $m = \langle v^2 \rangle R/G$. The velocity dispersion of all stars, E1, E2, E3, E4, and E6, is about $\langle v \rangle \simeq 68$–84km/s, which results in an estimated mass of about 11,000–16,000${\mbox{${\rm M}_\odot$}}$. Such a high mass would be hard to explain with the collision runaway scenario. However, the stars E1 and E6 may not be members. The extinction of the latter star is smaller than that of the other stars, indicating that it may be closer to the Sun than the rest of the cluster and therefore not a member [@2005ApJ...625L.111S]. One could also argue that star E1 should be excluded from the sample: with a high velocity directed opposite to that of the other stars, it is as curious as star E6 in both velocity space and the projected cluster image, where it lies somewhat off the main cluster position. Without star E1, the velocity dispersion of the cluster becomes $\langle v \rangle \simeq 47$–$50$km/s, which results in an estimated mass of about 5100–5800${\mbox{${\rm M}_\odot$}}$. These estimates for the total cluster mass are upper limits for the mass of the dark point mass in the cluster center. If the cluster potential is dominated by a point-mass object with a total mass exceeding the stellar mass by a sizeable fraction, the stars are in orbit around this mass point.
In that case some of the stars may be near the pericenter of their orbit. Since the velocity of a star at pericenter is a factor of $\sim \sqrt{2}$ larger than the velocity in a circular orbit, the estimated black hole mass may therefore be smaller by up to a factor of 2. We stress that the IMBH mass will be smaller than the total mass derived above, since the cluster is made up of the visible stars, unseen lower-mass stars, possible stellar remnants, and the potential IMBH. With 300${\mbox{${\rm M}_\odot$}}$ seen, but possibly up to $\sim 1000\,{\mbox{${\rm M}_\odot$}}$ of luminous material, the mass of the IMBH is then reduced to 2000–5000${\mbox{${\rm M}_\odot$}}$. Even this is much more than the observed mass of the association, providing a lower limit on its dark mass component.

Simulating IRS13E
-----------------

With a present density of $\sim4 \times 10^8\,{\mbox{${\rm M}_\odot$}}$pc$^{-3}$, a collision runaway in IRS13E is inevitable, regardless of the nature of the dark material in the cluster. Therefore, even if the cluster currently does not contain an IMBH, a collision runaway cannot be prevented if the stars are bound. We have tested this using N-body simulations of small clusters of 256 and 1024 stars, with masses drawn from a Salpeter mass function between 1 and 100${\mbox{${\rm M}_\odot$}}$. These clusters, with $W_0=6$–9 King model initial density profiles, exactly filled their Jacobi surfaces, and moved in circular orbits at 0.18pc from the Galactic center. We continued the calculations until the clusters dissolved. The simulated clusters lost mass linearly in time, with a half-mass lifetime of a few $\times\,10^4$ years, irrespective of the initial density profile. This is consistent with the results of independent symplectic N-body simulations [@2005astro.ph..2143L]. In each of these simulations a minor runaway merger occurred among roughly a dozen stars, creating runaways of $\aplt 250\,{\mbox{${\rm M}_\odot$}}$. In another set of larger simulations with 1024–16386 stars, the runaway mergers were more extreme, with collision rates exceeding one per century!
We draw two conclusions from these simulations. If the unseen material in IRS13E consists of normal stars, then (i) the cluster cannot survive for more than a few $\times10^4$ years, and (ii) runaway merging is overwhelmingly likely. If IRS13E is bound, a cluster of normal stars cannot be hidden within it, and the dark material must ultimately take the form of an IMBH of about 2000–5000${\mbox{${\rm M}_\odot$}}$. Thus we argue that the dark-mass problem in IRS13E could be solved by the presence of a single IMBH of mass $\sim1000$–5000${\mbox{${\rm M}_\odot$}}$, consistent with earlier discussions. The seven observed stars may in that case be the remnant of a larger star cluster which has undergone runaway merging, forming the IMBH during core collapse while sinking toward the Galactic center [@2001ApJ...562L..19E; @2003ApJ...593..352P; @2005ApJ...628..236G]. According to this scenario, the stars we see are the survivors which have avoided collision and remained in tight orbits around the IMBH. Extensive position determinations of Sgr A\* with the National Radio Astronomy Observatory’s Very Long Baseline Array (VLBA) over an $\sim 8$ year baseline have revealed that the motion of the SMBH in the Galactic center (assuming $4\times10^6\,{\mbox{${\rm M}_\odot$}}$ and a distance of 8.0kpc) is about $7.6\pm0.7$kms$^{-1}$ [@2004ApJ...616..872R]. An IMBH of 2000–5000${\mbox{${\rm M}_\odot$}}$ orbiting at a distance of $\sim0.18$pc would induce a reflex velocity of about 0.15–0.39kms$^{-1}$ in Sgr A\*, since the orbital velocity of the IMBH is $\sim 310$km/s and its mass is $\aplt 1/800$ of the central BH, assuming a circular orbit. If observations with the VLBA continue with the same accuracy for the next decade, the IMBH in IRS13E could be detected by measuring the motion of Sgr A\*.

X-ray and Radio observations of the Galactic center
---------------------------------------------------

X-ray observations may offer a better chance of observing an individual IMBH near the Galactic center than the VLBA radio observations discussed in the previous section.
Among the $\sim 2000$ X-ray point sources within 23pc of the Galactic center [@2003ApJ...589..225M], the source CXOGC J174540.0-290031 [@2004astro.ph.12492M], with $L_{2-8{\rm keV}} \simeq 8.5\times 10^{34}$ erg/s at a projected galactocentric distance of 0.11pc, is of particular interest. The peak radio intensity of this source is 0.1Jansky at 1GHz [@2005astro.ph..7221B], which corresponds to $L_r \sim 8\times 10^{30}$ erg/s at the distance of the Galactic center. Using the recently proposed empirical relation between X-ray luminosity, radio luminosity, and the mass of the accreting black hole [@2003MNRAS.345.1057M], $$\log L_r = 7.3 + 0.6 \log L_X + 0.8 \log M_{\rm bh}, \label{Eq:Merloni}$$ we derive a black hole mass of about 2000${\mbox{${\rm M}_\odot$}}$. Interestingly, this source has a 7.8-hour periodicity [@2004astro.ph.12492M], which, if it reflects the orbital period, would indicate a semi-major axis of $\sim 25\,{\mbox{${\rm R}_\odot$}}$. The companion to the IMBH would then have a Roche radius of $\sim 1\,{\mbox{${\rm R}_\odot$}}$, consistent with a 1${\mbox{${\rm M}_\odot$}}$ main-sequence star. Mass transfer in such a binary would be driven mainly by the emission of gravitational waves, at a rate of $\sim 0.01\,{\mbox{${\rm M}_\odot$}}$/Myr [@2004MNRAS.355..413P], which is sufficient to power an X-ray transient with the observed X-ray luminosity and a duty cycle on the order of a few percent [@2004MNRAS.355..413P]. It is a pleasure to thank Drs. Clovis Hopman, Tom Maccarone and Mike Muno for interesting discussions, and Prof. Ninomiya for the kind hospitality at the Yukawa Institute at Kyoto University, through the Grants-in-Aid for Scientific Research on Priority Areas, number 763, “Dynamics of Strings and Fields,” from the Ministry of Education, Culture, Sports, Science and Technology, Japan.
This work was made possible by financial support from the NASA Astrophysics Theory Program under grant NNG04GL50G, the Netherlands Organization for Scientific Research (NWO) under grant 630.000.001, The Royal Netherlands Academy of Arts and Sciences (KNAW), and the Netherlands Advanced School for Astronomy (NOVA). Part of the calculations in this paper were performed on the GRAPE-6 system in Tokyo and the MoDeStA platform in Amsterdam.

, S. A. 2003, , Cambridge University Press

, C., Lamers, H. J. G. L. M., Molenberghs, G. 2004, , 418, 639

, T., Livio, M. 2004, , 606, L21

, H., Makino, J. 2003, , 340, 227

, H., Makino, J., Ebisuzaki, T. 2004, , 613, 1143

, H., Van Bever, J., Vanbeveren, D. 2004, ArXiv Astrophysics e-prints

, J., Tremaine, S. 1987, Galactic Dynamics, Princeton, NJ, Princeton University Press, 1987, 747 p.

, J., Ivanov, V. D., Minniti, D., Geisler, D., Stephens, A. W. 2005, , 435, 95

, G. C., Roberts, D. A., Yusef-Zadeh, F., Backer, D. C., Cotton, W. D., Goss, W. M., Lang, C. C., Lithwick, Y. 2005, ArXiv Astrophysics e-prints

, W. R., Geller, M. J., Kenyon, S. J., Kurtz, M. J. 2005, , 622, L33

, T., Makino, J., Tsuru, T. G., Funato, Y., Portegies Zwart, S., Hut, P., McMillan, S., Matsushita, S., Matsumoto, H., Kawabe, R. 2001, , 562, L19

, A., Genzel, R. 1997, , 284, 576

, D. F. 2004, ArXiv Astrophysics e-prints

, D. F. 2005, , 434, 192

, D. F., Kim, S. S. 2002, in ASP Conf. Ser. 263: Stellar Collisions, Mergers and their Consequences, p. 287

, D. F., Kim, S. S., Morris, M., Serabyn, E., Rich, R. M., McLean, I. S. 1999a, , 525, 750

, D. F., McLean, I. S., Morris, M. 1999b, , 514, 202

, D. F., Najarro, F., Gilmore, D., Morris, M., Kim, S. S., Serabyn, E., McLean, I. S., Gilbert, A. M., Graham, J. R., Larkin, J. E., Levenson, N.
A., [Teplitz]{}, H. I. 2002, , 581, 258 , M., [Atakan G[" u]{}rkan]{}, M., [Rasio]{}, F. A. 2005a, ArXiv Astrophysics e-prints , M., [Benz]{}, W. 2005, , 358, 1133 , M., [Rasio]{}, F. A., [Baumgardt]{}, H. 2005b, ArXiv Astrophysics e-prints , K., [Miller]{}, M. C., [Hamilton]{}, D. P. 2004, , 616, 221 , M. A., [Freitag]{}, M., [Rasio]{}, F. A. 2004, , 604, 632 , R., [Sch[" o]{}del]{}, R., [Ott]{}, T., [Eisenhauer]{}, F., [Hofmann]{}, R., [Lehnert]{}, M., [Eckart]{}, A., [Alexander]{}, T., [Sternberg]{}, A., [Lenzen]{}, R., [Cl[' e]{}net]{}, Y., [Lacombe]{}, F., [Rouan]{}, D., [Renzini]{}, A., [Tacconi-Garman]{}, L. E. 2003, , 594, 812 , O. 2001, , 546, L39 , A. M., [Duch[\^ e]{}ne]{}, G., [Matthews]{}, K., [Hornstein]{}, S. D., [Tanner]{}, A., [Larkin]{}, J., [Morris]{}, M., [Becklin]{}, E. E., [Salim]{}, S., [Kremenek]{}, T., [Thompson]{}, D., [Soifer]{}, B. T., [Neugebauer]{}, G., [McLean]{}, I. 2003, , 586, L127 , A. M., [Klein]{}, B. L., [Morris]{}, M., [Becklin]{}, E. E. 1998, , 509, 678 , A. M., [Morris]{}, M., [Becklin]{}, E. E., [Tanner]{}, A., [Kremenek]{}, T. 2000, , 407, 349 , A., [Zwart]{}, S. P., [Sipior]{}, M. S. 2005, , 363, 223 , M. A., [Rasio]{}, F. A. 2005, , 628, 236 , B. M. S., [Milosavljevi[' c]{}]{}, M. 2003, , 593, L77 , A., [Panter]{}, B., [Jimenez]{}, R., [Dunlop]{}, J. 2004, , 428, 625 , A., [Fryer]{}, C. L., [Woosley]{}, S. E., [Langer]{}, N., [Hartmann]{}, D. H. 2003, , 591, 288 Hills, J. G. 1988, , 331, 687 , G., [White]{}, S. D. M., [Heckman]{}, T. M., [M[' e]{}nard]{}, B., [Brinchmann]{}, J., [Charlot]{}, S., [Tremonti]{}, C., [Brinkmann]{}, J. 2004, , 353, 713 , S. S., [Figer]{}, D. F., [Lee]{}, H. M., [Morris]{}, M. 2000, , 545, 301 , S. S., [Figer]{}, D. F., [Morris]{}, M. 2004, , 607, L123 , R. P. 2002, , 577, 389 , C. J., [Lada]{}, E. A. 2003, , 41, 57 , N., [Hamann]{}, W.-R., [Lennon]{}, M., [Najarro]{}, F., [Pauldrach]{}, A. W. A., [Puls]{}, J. 1994, , 290, 819 , Y., [Wu]{}, A. S. P., [Thommes]{}, E. W. 
2005, ArXiv Astrophysics e-prints , J. C., [Thrall]{}, A. P., [Deneva]{}, J. S., [Fleming]{}, S. W., [Grabowski]{}, P. E. 2003, , 345, 762 , J. R., [Ghez]{}, A. M., [Hornstein]{}, S. D., [Morris]{}, M., [Becklin]{}, E. E. 2005, ArXiv Astrophysics e-prints , A. D., [Gilmore]{}, G. F. 2003, , 338, 85 , J. P., [Paumard]{}, T., [Stolovy]{}, S. R., [Rigaut]{}, F. 2004, , 423, 155 , J., [Fukushige]{}, T., [Koga]{}, M., [Namura]{}, K. 2003, , 55, 1163 , F., [Schaerer]{}, D., [Hillier]{}, D. J., [Heydari-Malayeri]{}, M. 2004, , 420, 1087 , S., [Portegies Zwart]{}, S. 2004, ArXiv Astrophysics e-prints astro-ph/0412622 , S. L. W., [Portegies Zwart]{}, S. F. 2003, , 596, 314 , K., [Haiman]{}, Z., [Narayanan]{}, V. K. 2001, , 558, 535 , A., [Heinz]{}, S., [di Matteo]{}, T. 2003, , 345, 1057 , D., [Piatek]{}, S., [Zwart]{}, S. P., [Hemsendorf]{}, M. 2004, , 608, L25 , D., [Poon]{}, M. Y. 2004, , 606, 788 , D., [Wang]{}, J. 2005, , 621, L101 , G., [Maeder]{}, A. 2003, , 404, 975 , G., [Maeder]{}, A. 2005, , 429, 581 , M. 1993, , 408, 496 , M. P., [Baganoff]{}, F. K., [Bautz]{}, M. W., [Brandt]{}, W. N., [Broos]{}, P. S., [Feigelson]{}, E. D., [Garmire]{}, G. P., [Morris]{}, M. R., [Ricker]{}, G. R., [Townsley]{}, L. K. 2003, , 589, 225 , M. P., [Pfahl]{}, E., [Baganoff]{}, F. K., [Brandt]{}, W. N., [Ghez]{}, A., [Lu]{}, J., [Morris]{}, M. R. 2004, ArXiv Astrophysics e-prints , F., [Krabbe]{}, A., [Genzel]{}, R., [Lutz]{}, D., [Kudritzki]{}, R. P., [Hillier]{}, D. J. 1997, , 325, 700 , S., [Cuadra]{}, J. 2004, ArXiv Astrophysics e-prints , S., [Sunyaev]{}, R. 2005, ArXiv Astrophysics e-prints , T., [Maillard]{}, J. P., [Morris]{}, M., [Rigaut]{}, F. 2001, , 366, 466 , S. F., [Baumgardt]{}, H., [Hut]{}, P., [Makino]{}, J., [McMillan]{}, S. L. W. 2004a, , 428, 724 , S. F., [Dewi]{}, J., [Maccarone]{}, T. 2004b, , 355, 413 Portegies Zwart, S. F., McMillan, S. L. W., & Gerhard, O. 2003, , 593, 352 , S. F., [Makino]{}, J., [McMillan]{}, S. L. W., [Hut]{}, P. 1999, , 348, 117 , S. 
F., [Makino]{}, J., [McMillan]{}, S. L. W., [Hut]{}, P. 2001a, , 546, L101 , S. F., [McMillan]{}, S. L. W. 2002, , 576, 899 , S. F., [McMillan]{}, S. L. W., [Hut]{}, P., [Makino]{}, J. 2001b, , 321, 199 , G. D., [Shapiro]{}, S. L. 1990, , 356, 483 , M. J., [Brunthaler]{}, A. 2004, , 616, 872 , A., [Bergman]{}, P., [Black]{}, J. H., [Booth]{}, R., [Buat]{}, V., [Curry]{}, C. L., [Encrenaz]{}, P., [Falgarone]{}, E., [Feldman]{}, P., [Fich]{}, M., [Floren]{}, H. G., [Frisk]{}, U., [Gerin]{}, M., [Gregersen]{}, E. M., [Harju]{}, J., [Hasegawa]{}, T., [Hjalmarson]{}, [Å]{}., [Johansson]{}, L. E. B., [Kwok]{}, S., [Larsson]{}, B., [Lecacheux]{}, A., [Liljestr[" o]{}m]{}, T., [Lindqvist]{}, M., [Liseau]{}, R., [Mattila]{}, K., [Mitchell]{}, G. F., [Nordh]{}, L., [Olberg]{}, M., [Olofsson]{}, A. O. H., [Olofsson]{}, G., [Pagani]{}, L., [Plume]{}, R., [Ristorcelli]{}, I., [Sch[' e]{}ele]{}, F. v., [Serra]{}, G., [Tothill]{}, N. F. H., [Volk]{}, K., [Wilson]{}, C. D., [Winnberg]{}, A. 2003, , 402, L63 , R., [Eckart]{}, A., [Iserlohe]{}, C., [Genzel]{}, R., [Ott]{}, T. 2005, , 625, L111 , P. F., [Fellhauer]{}, M., [Portegies Zwart]{}, S. F. 2003, , 344, 22 , L. J., [Hart]{}, M. H. 1971, , 166, 483 , A., [Brandner]{}, W., [Grebel]{}, E. K., [Lenzen]{}, R., [Lagrange]{}, A.-M. 2005, , 628, L113 , P., [Rieke]{}, G. H. 1993, , 414, 573 , R., [Kervella]{}, P., [Sch[" o]{}ller]{}, M., [Herbst]{}, T., [Brandner]{}, W., [de Koter]{}, A., [Waters]{}, L. B. F. M., [Hillier]{}, D. J., [Paresce]{}, F., [Lenzen]{}, R., [Lagrange]{}, A.-M. 2003, , 410, L37 , J. S., [de Koter]{}, A., [Lamers]{}, H. J. G. L. M. 2000, , 362, 295 , J. S., [de Koter]{}, A., [Lamers]{}, H. J. G. L. M. 2001, , 369, 574 , H. 1924, , 84, 665 , Q., [Tremaine]{}, S. 2003, , 599, 1129 , Q., [Fall]{}, S. M. 
1999, , 527, L81 [^1]: Theoretical considerations suggest that the time scale for cluster dissolution has the form $t_{\rm dis} = k\, t_{\rm hc}^{1/4}\, t_{\rm rh}^{3/4}$, where $t_{\rm hc}$ is the cluster crossing time and $t_{\rm rh}$ its half-mass relaxation time [@2003MNRAS.340..227B]. The constant $k$ may be obtained from direct N-body simulations of star clusters near the Galactic center [@2001ApJ...546L.101P], resulting in $k\simeq 7.5$, with $t_{\rm hc}$ and $t_{\rm rh}$ expressed in Myr.
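The dissolution-time scaling in the footnote is easy to evaluate; a minimal sketch, with purely illustrative time scales (the input values below are not from the paper):

```python
def t_dis(t_hc, t_rh, k=7.5):
    """Cluster dissolution time t_dis = k * t_hc^(1/4) * t_rh^(3/4),
    with t_hc (crossing time) and t_rh (half-mass relaxation time) in Myr,
    and k ~ 7.5 calibrated from N-body simulations as quoted above."""
    return k * t_hc**0.25 * t_rh**0.75

# illustrative values only: t_hc = 1 Myr, t_rh = 100 Myr
print(t_dis(1.0, 100.0))  # 7.5 * 100^(3/4) ~ 237 Myr
```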
--- abstract: 'We construct indecomposable modules for the $0$-Hecke algebra whose characteristics are the elements of the dual immaculate basis of the quasi-symmetric functions.' address: - | Fields Institute\ Toronto, ON, Canada - 'Université du Québec à Montréal, Montréal, QC, Canada' - | York University\ Toronto, ON, Canada author: - Chris Berg$^2$ - 'Nantel Bergeron$^{1,3}$' - Franco Saliola$^2$ - Luis Serrano$^2$ - 'Mike Zabrocki$^{1,3}$' title: 'Indecomposable modules for the dual immaculate basis of quasi-symmetric functions' --- Introduction ============ The algebra of symmetric functions $\sym$ has an important basis formed by the Schur functions, which appear throughout mathematics: for example, as the representatives for the Schubert classes in the cohomology of the Grassmannian, as the characters for the irreducible representations of the symmetric group and the general linear group, and as an orthonormal basis for the space of symmetric functions, to name a few. The algebra $\Nsym$ of noncommutative symmetric functions projects under the forgetful map onto $\sym$, which in turn injects into the algebra $\Qsym$ of quasi-symmetric functions. $\Nsym$ and $\Qsym$ are dual Hopf algebras. In [@BBSSZ], the authors developed a basis for $\Nsym$ which satisfies many of the combinatorial properties of Schur functions. This basis, called the *immaculate basis* $\{\fS_\alpha\}$, projects onto Schur functions under the forgetful map: when indexed by a partition, the projection of the immaculate function is precisely the Schur function of that partition. The dual basis $\{\fS_\alpha^*\}$ is a basis for $\Qsym$. The main goal of this paper is to express the dual immaculate functions as characters of a representation, in the same way that Schur functions are the characters of the irreducible representations of the symmetric group. 
We achieve this in Theorem \[thm:repthry\], where we realize them as the characteristics of certain indecomposable representations of the *0-Hecke algebra*. Acknowledgements ---------------- This work is supported in part by NSERC. It is partially the result of a working session at the Algebraic Combinatorics Seminar at the Fields Institute with the active participation of C. Benedetti, J. Sánchez-Ortega, O. Yacobi, E. Ens, H. Heglin, D. Mazur and T. MacHenry. This research was facilitated by computer exploration using the open-source mathematical software `Sage` [@sage] and its algebraic combinatorics features developed by the `Sage-Combinat` community [@sage-co]. Prerequisites ============= The symmetric group ------------------- The symmetric group $S_n$ is the group generated by the simple transpositions $s_1, s_2, \ldots, s_{n-1}$, subject to the following relations: $$\begin{aligned} s_i^2 &= 1;\\ s_i s_{i+1}s_i &= s_{i+1}s_i s_{i+1};\\ s_i s_j &= s_j s_i \textrm{ if } |i-j| > 1.\end{aligned}$$ Compositions and combinatorics {#sec:compositions} ------------------------------ A *partition* of a non-negative integer $n$ is a tuple $\lambda = [\lambda_1, \lambda_2, \dots, \lambda_m]$ of positive integers satisfying $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m$ which sum to $n$. If $\lambda$ is a partition of $n$, one writes $\lambda \vdash n$. (When needed, we will consider partitions with trailing zeroes; these are identified with the underlying partition of positive parts.) Partitions are of particular importance in algebraic combinatorics, as they index a basis for the symmetric functions of degree $n$, $\sym_n$, and the character ring for the representations of the symmetric group $S_n$, among others. These concepts are intimately connected; we assume the reader is well versed in this area (see for instance [@Sagan] for background details). 
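The defining relations of $S_n$ can be verified directly on the simple transpositions realized as position swaps; a minimal sketch (the helper names are ours):

```python
def s(i, n):
    """The simple transposition s_i of S_n, as a tuple p with p[j] = s_i(j)
    in 0-indexed positions: it swaps positions i-1 and i."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(p, q):
    """Composition of permutations: (p * q)(j) = p(q(j))."""
    return tuple(p[q[j]] for j in range(len(p)))

n = 4
identity = tuple(range(n))
for i in range(1, n):
    assert compose(s(i, n), s(i, n)) == identity          # s_i^2 = 1
for i in range(1, n - 1):                                 # braid relations
    assert compose(s(i, n), compose(s(i + 1, n), s(i, n))) == \
           compose(s(i + 1, n), compose(s(i, n), s(i + 1, n)))
for i in range(1, n):                                     # far commutativity
    for j in range(1, n):
        if abs(i - j) > 1:
            assert compose(s(i, n), s(j, n)) == compose(s(j, n), s(i, n))
print("all defining relations hold in S_4")
```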
A *composition* of a non-negative integer $n$ is a tuple $\alpha = [\alpha_1, \alpha_2, \dots, \alpha_m]$ of positive integers which sum to $n$. If $\alpha$ is a composition of $n$, one writes $\alpha \models n$. The entries $\alpha_i$ of the composition are referred to as the parts of the composition. The size of the composition is the sum of the parts and will be denoted $|\alpha|$. The length of the composition is the number of parts and will be denoted $\ell(\alpha)$. Note that $|\alpha|=n$ and $\ell(\alpha)=m$. Compositions of $n$ are in bijection with subsets of $\{1, 2, \dots, n-1\}$. We will follow the convention of identifying $\alpha = [\alpha_1, \alpha_2, \dots, \alpha_m]$ with the subset ${\mathcal S}(\alpha) = \{\alpha_1, \alpha_1+\alpha_2, \alpha_1+\alpha_2 + \alpha_3, \dots, \alpha_1+\alpha_2+\dots + \alpha_{m-1} \}$. If $\alpha$ and $\beta$ are both compositions of $n$, say that $\alpha \leq \beta$ in refinement order if ${\mathcal S}(\beta) \subseteq {\mathcal S}(\alpha)$. For instance, $[1,1,2,1,3,2,1,4,2] \leq [4,4,2,7]$, since ${\mathcal S}([1,1,2,1,3,2,1,4,2]) = \{1,2,4,5,8,10,11,15\}$ and ${\mathcal S}([4,4,2,7]) = \{4,8,10\}$. In this presentation, compositions will be represented as diagrams of left adjusted rows of cells. We will also use the matrix convention (‘English’ notation) that the first row of the diagram is at the top and the last row is at the bottom. 
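The bijection with subsets and the refinement order are easy to compute; a short sketch checking the worked example above (function names are ours):

```python
def subset_of(alpha):
    """S(alpha): the subset of {1,...,n-1} of proper partial sums of alpha."""
    sums, total = set(), 0
    for part in alpha[:-1]:
        total += part
        sums.add(total)
    return sums

def refines(alpha, beta):
    """alpha <= beta in refinement order iff S(beta) is a subset of S(alpha)."""
    assert sum(alpha) == sum(beta)
    return subset_of(beta) <= subset_of(alpha)

print(subset_of([1, 1, 2, 1, 3, 2, 1, 4, 2]))   # {1, 2, 4, 5, 8, 10, 11, 15}
print(subset_of([4, 4, 2, 7]))                  # {4, 8, 10}
print(refines([1, 1, 2, 1, 3, 2, 1, 4, 2], [4, 4, 2, 7]))  # True
```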
For example, the composition $[4,1,3,1,6,2]$ is represented as $${{ \def\newtableau{{X, X, X, X},{X}, {X, X, X}, {X}, {X,X,X,X,X,X}, {X, X}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { \ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}}~.$$ Symmetric functions ------------------- We let $\sym$ denote the ring of symmetric functions. As an algebra, $\sym$ is the ring over $\mathbb{Q}$ freely generated by commutative elements $\{h_1, h_2, \dots\}$. $\sym$ has a grading, defined by giving $h_i$ degree $i$ and extending multiplicatively. A natural basis for the degree $n$ component of $\sym$ are the complete homogeneous symmetric functions of degree $n$, $\{ h_\lambda := h_{\lambda_1} h_{\lambda_2} \cdots h_{\lambda_m} : \lambda \vdash n\}$. $\sym$ can be realized as the ring of invariants of the ring of power series of bounded degree $\mathbb{Q}[\![x_1, x_2, \dots]\!]$ in commuting variables $\{x_1, x_2, \dots\}$. Under this identification, $h_i$ denotes the sum of all monomials in the $x$ variables of degree $i$. Non-commutative symmetric functions ----------------------------------- $\Nsym$ is a non-commutative analogue of $\sym$, the algebra of symmetric functions, that arises by considering an algebra with one non-commutative generator at each positive degree. 
In addition to the relationship with the symmetric functions, this algebra has links to Solomon’s descent algebra in type $A$ [@MR], the algebra of quasi-symmetric functions [@MR], and the representation theory of the type $A$ Hecke algebra at $q=0$ [@KT]. It is an example of a combinatorial Hopf algebra [@ABS]. While we will follow the foundational results and definitions from references such as [@GKLLRT; @MR], we have chosen to use notation here which is suggestive of analogous results in $\sym$. We consider $\Nsym$ as the algebra with generators $\{\HH_1, \HH_2, \dots \}$ and no relations. Each generator $\HH_i$ is defined to be of degree $i$, giving $\Nsym$ the structure of a graded algebra. We let $\Nsym_n$ denote the graded component of $\Nsym$ of degree $n$. A basis for $\Nsym_n$ is given by the *complete homogeneous functions* $\{\HH_\alpha := \HH_{\alpha_1} \HH_{\alpha_2} \cdots \HH_{\alpha_m}\}_{\alpha \vDash n}$ indexed by compositions of $n$. Immaculate tableaux ------------------- Let $\alpha$ and $\beta$ be compositions. An *immaculate tableau* of shape $\alpha$ and content $\beta$ is a labelling of the boxes of the diagram of $\alpha$ by positive integers in such a way that: 1. the number of boxes labelled by $i$ is $\beta_i$; 2. the sequence of entries in each row, from left to right, is weakly increasing; 3. the sequence of entries in the *first* column, from top to bottom, is strictly increasing. An immaculate tableau is said to be *standard* if it has content $1^{|\alpha|}$. Let $K_{\alpha, \beta}$ denote the number of immaculate tableaux of shape $\alpha$ and content $\beta$. We reiterate that, aside from the first column, there is no relation on the other columns of an immaculate tableau. 
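Since rows are weakly increasing, an immaculate tableau is determined by the multiset of entries in each row, so $K_{\alpha,\beta}$ can be computed by brute force over row contents; a minimal sketch (function names are ours):

```python
from itertools import combinations

def immaculate_count(shape, content):
    """K_{shape,content}: enumerate the multiset of entries of each row
    and keep the fillings whose first column is strictly increasing."""
    entries = []
    for value, mult in enumerate(content, start=1):
        entries.extend([value] * mult)
    tableaux = set()

    def fill(rows, remaining):
        k = len(rows)
        if k == len(shape):
            col = [row[0] for row in rows]
            if all(a < b for a, b in zip(col, col[1:])):
                tableaux.add(tuple(rows))
            return
        for idx in combinations(range(len(remaining)), shape[k]):
            row = tuple(remaining[i] for i in idx)  # sorted, so weakly increasing
            rest = [remaining[i] for i in range(len(remaining)) if i not in idx]
            fill(rows + [row], rest)

    fill([], sorted(entries))
    return len(tableaux)

print(immaculate_count([4, 2, 3], [3, 1, 2, 3]))  # 5
```

The count agrees with the five tableaux listed in Example \[ex:immaculatetableau\].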
\[ex:immaculatetableau\] The five immaculate tableau of shape $[4,2,3]$ and content $[3,1,2,3]$: $${{ \def\newtableau{{1,1, 1, 3},{2, 3}, {4,4,4}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { \ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}} {{ \def\newtableau{{1,1, 1, 3},{2, 4}, {3,4,4}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { \ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}} {{ \def\newtableau{{1,1, 1, 4},{2,3}, {3,4,4}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { 
\ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}} {{ \def\newtableau{{1,1, 1, 4},{2, 4}, {3,3,4}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { \ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}} {{ \def\newtableau{{1,1, 1, 2},{3, 3}, {4,4,4}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { \ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) 
rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}}$$ \[def:descentSIT\] We say that a standard immaculate tableau $T$ has a descent in position $i$ if $i+1$ is in a row strictly below the row containing $i$. The *descent composition*, denoted $D(T)$, is the composition corresponding to the set of descents in $T$. The standard immaculate tableau of shape $[6,5,7]:$ $$T = {{ \def\newtableau{{1,2,4,5, 10,11},{3, 6, 7, 8, 9}, {12,13,14,15,16, 17, 18}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { \ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}}$$ has descents in positions $\{2, 5, 11\}$. The descent composition of $T$ is then $D(T) = [2,3,6,7]$. The immaculate basis of $\Nsym$ ------------------------------- The immaculate basis of $\Nsym$ was introduced in [@BBSSZ]. It shares many properties with the Schur basis of $\sym$. We define[^1] the immaculate basis $\{ \fS_\alpha\}_{\alpha}$ as the unique elements of $\Nsym$ satisfying: $$\HH_\beta = \sum_{\alpha} K_{\alpha, \beta} \fS_\alpha.$$ Continuing from Example \[ex:immaculatetableau\], we see that $$\HH_{3123} = \cdots + 5 \fS_{423} + \cdots.$$ We will not attempt to summarize everything that is known about this basis, but instead refer the reader to [@BBSSZ] and [@BBSSZ2]. 
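The descent composition of Definition \[def:descentSIT\] is straightforward to compute from the rows of a standard immaculate tableau; a sketch checking the example above (function names are ours):

```python
def descent_composition(rows, n):
    """D(T) for a standard immaculate tableau given as a list of rows:
    i is a descent if i+1 lies in a row strictly below the row of i."""
    row_of = {entry: r for r, row in enumerate(rows) for entry in row}
    descents = sorted(i for i in range(1, n) if row_of[i + 1] > row_of[i])
    # convert the descent set back to a composition of n
    comp, prev = [], 0
    for d in descents + [n]:
        comp.append(d - prev)
        prev = d
    return comp

# the standard immaculate tableau of shape [6,5,7] from the text
T = [[1, 2, 4, 5, 10, 11], [3, 6, 7, 8, 9], list(range(12, 19))]
print(descent_composition(T, 18))  # [2, 3, 6, 7]
```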
Modules for the dual immaculate basis ===================================== In this section we will construct indecomposable modules for the $0$-Hecke algebra whose characteristic is a dual immaculate quasi-symmetric function. Quasi-symmetric functions ------------------------- The algebra of quasi-symmetric functions, $\Qsym$, was introduced in [@Ges] (see also subsequent references such as [@GR; @Sta84]). The graded component $\Qsym_n$ has a basis indexed by compositions of $n$. The algebra is most readily realized as a subalgebra of the ring of power series of bounded degree $\mathbb{Q}[\![x_1, x_2, \dots]\!]$, and the monomial quasi-symmetric function indexed by a composition $\alpha$ is defined as $$\label{monomial-qsym} M_\alpha = \sum_{i_1 < i_2 < \cdots < i_m} x_{i_1}^{\alpha_1} x_{i_2}^{\alpha_2} \cdots x_{i_m}^{\alpha_m}.$$ The algebra $\Qsym$ is then the linear span of the monomial quasi-symmetric functions; these, in fact, form a basis of $\Qsym$, and their multiplication is inherited from $\mathbb{Q}[\![x_1, x_2, \dots]\!]$. We view $\sym$ as a subalgebra of $\Qsym$. In fact, the quasi-symmetric monomial functions refine the usual monomial symmetric functions $m_\lambda \in \sym$: $$m_\lambda = \sum_{\sort(\alpha) = \lambda} M_\alpha,$$ where $\sort(\alpha)$ denotes the partition obtained by organizing the parts of $\alpha$ from the largest to the smallest. The fundamental quasi-symmetric function, denoted $F_\alpha$, is defined by its expansion in the monomial quasi-symmetric basis: $$F_\alpha = \sum_{\beta \leq \alpha} M_\beta.$$ The algebras $\Qsym$ and $\Nsym$ form dual graded Hopf algebras. In this context, the monomial basis of $\Qsym$ is dual to the complete homogeneous basis of $\Nsym$. Duality can be expressed by means of an inner product, for which $\langle H_\alpha,M_\beta \rangle = \delta_{\alpha,\beta}$. 
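The monomial terms appearing in $F_\alpha$ are exactly the refinements of $\alpha$, and their number is $2^{(n-1)-|\mathcal{S}(\alpha)|}$; a brute-force sketch (function names are ours):

```python
def compositions(n):
    """All compositions of n, generated recursively by the first part."""
    if n == 0:
        return [[]]
    return [[first] + rest
            for first in range(1, n + 1) for rest in compositions(n - first)]

def subset_of(alpha):
    """S(alpha): the set of proper partial sums of alpha."""
    sums, total = set(), 0
    for part in alpha[:-1]:
        total += part
        sums.add(total)
    return sums

def fundamental_support(alpha):
    """The compositions beta <= alpha in refinement order, i.e. the
    monomial terms M_beta appearing in F_alpha."""
    return [beta for beta in compositions(sum(alpha))
            if subset_of(alpha) <= subset_of(beta)]

print(fundamental_support([2, 2]))
# 4 terms: each of the two remaining cut points 1 and 3 may be used or not
```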
In [@BBSSZ], we studied the dual basis to the immaculate functions of $\Nsym$, denoted $\fS_\beta^*$ and indexed by compositions. They are the basis of $\Qsym$ defined by $\langle \fS_\alpha, \fS_\beta^*\rangle = \delta_{\alpha, \beta}$. In [@BBSSZ Proposition 3.37], we showed that the dual immaculate functions have the following positive expansion into the fundamental basis: \[prop:FundamentalPositive\] The dual immaculate functions $\fS_\alpha^*$ are fundamental positive. Specifically, they expand as $$\fS_\alpha^* = \sum_{T} F_{D(T)},$$ a sum over all standard immaculate tableaux of shape $\alpha$. Finite dimensional representation theory of $H_n(0)$ ---------------------------------------------------- We will outline the finite dimensional representation theory of the $0$-Hecke algebra and its relationship to $\Qsym$. We refer the reader to [@Th2 Section 5] for the relationship between the generic Hecke algebra and the $0$-Hecke algebra and their connections to representation theory. The Hecke algebra $H_n(0)$ is generated by the elements $\pi_1, \pi_2, \dots, \pi_{n-1}$ subject to the relations: $$\begin{aligned} \pi_i^2 &= \pi_i;\\ \pi_i \pi_{i+1}\pi_i &= \pi_{i+1}\pi_i \pi_{i+1};\\ \pi_i \pi_j &= \pi_j \pi_i \textrm{ if } |i-j| > 1.\end{aligned}$$ A basis of $H_n(0)$ is given by the elements $\{ \pi_\sigma : \sigma \in S_n \}$, where $\pi_\sigma = \pi_{i_1} \pi_{i_2} \cdots \pi_{i_m}$ if $\sigma = s_{i_1} s_{i_2}\cdots s_{i_m}$ is a reduced word for $\sigma$. We let $G_0(H_n(0))$ denote the Grothendieck group of finite dimensional representations of $H_n(0)$. As a vector space, $G_0(H_n(0))$ is spanned by the isomorphism classes of finite dimensional representations of $H_n(0)$, subject to the relation $[B] = [A]+[C]$ whenever there is a short exact sequence of $H_n(0)$-representations $0\rightarrow A \rightarrow B \rightarrow C \rightarrow 0$. 
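Proposition \[prop:FundamentalPositive\] can be checked on small shapes by enumerating standard immaculate tableaux and collecting their descent compositions; a brute-force sketch (function names are ours):

```python
from itertools import combinations

def standard_immaculate_tableaux(shape):
    """All standard immaculate tableaux of the given shape, as row lists:
    rows increasing, first column strictly increasing, content 1^n."""
    n = sum(shape)
    results = []

    def fill(rows, remaining):
        k = len(rows)
        if k == len(shape):
            col = [row[0] for row in rows]
            if all(a < b for a, b in zip(col, col[1:])):
                results.append(rows)
            return
        for idx in combinations(sorted(remaining), shape[k]):
            fill(rows + [list(idx)], remaining - set(idx))

    fill([], set(range(1, n + 1)))
    return results

def descent_composition(rows, n):
    """D(T): i is a descent if i+1 lies in a row strictly below i."""
    row_of = {e: r for r, row in enumerate(rows) for e in row}
    descents = [i for i in range(1, n) if row_of[i + 1] > row_of[i]]
    comp, prev = [], 0
    for d in descents + [n]:
        comp.append(d - prev)
        prev = d
    return tuple(comp)

# expansion of S*_{[2,1]} into fundamentals
terms = sorted(descent_composition(T, 3)
               for T in standard_immaculate_tableaux([2, 1]))
print(terms)  # [(1, 2), (2, 1)], i.e. S*_{[2,1]} = F_{[1,2]} + F_{[2,1]}
```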
We let $$\mathcal{G} = \bigoplus_{n \geq 0} G_0(H_n(0)).$$ The irreducible representations of $H_n(0)$ are indexed by compositions. The irreducible representation corresponding to the composition $\alpha$ is denoted $L_\alpha$. The collection $\{ [L_\alpha] \}$ forms a basis for $\mathcal{G}$. As shown in Norton [@N], each irreducible representation is one dimensional, spanned by a non-zero vector $v_\alpha \in L_\alpha$, and is determined by the action of the generators on $v_\alpha$: $$\pi_i v_\alpha = \begin{cases} 0 & \textrm{ if $i \in \mathcal{S}(\alpha)$}; \\ v_\alpha & \textrm{ otherwise}, \\ \end{cases}$$ where $\mathcal{S}(\alpha)$ denotes the subset of $[1 \dots n-1]$ corresponding to the composition $\alpha$. The tensor product $H_n(0) \otimes H_m(0)$ is naturally embedded as a subalgebra of $H_{n+m}(0)$. Under this identification, one can endow $\mathcal{G}$ with a ring structure; for $[N] \in G_0(H_n(0))$ and $[M] \in G_0(H_m(0))$, let $$[N][M] := [Ind_{H_n(0) \otimes H_m(0)}^{H_{n+m}(0)} N \otimes M]$$ where induction is defined in the usual manner. There is an important linear map $\mathcal{F}: \mathcal{G} \rightarrow \Qsym$ defined by $\mathcal{F}([L_\alpha]) = F_\alpha$. For a module $M$, $\mathcal{F}([M])$ is called the *characteristic of $M$*. The quasi-symmetric functions and the Grothendieck group of finite dimensional representations of $H_n(0)$ are isomorphic as rings. The map $\mathcal{F}$ is an isomorphism between $\mathcal{G}$ and $\Qsym$. The map $\mathcal{F}$ is actually an isomorphism of graded Hopf algebras. We will not make use of the coalgebra structure. A representation on $\mathcal{Y}$-words --------------------------------------- We start by defining the analogue of a permutation module for $H_n(0)$. 
For a composition $\alpha = [\alpha_1, \alpha_2, \dots, \alpha_m] \models n$, we let $\mathcal{M}_\alpha$ denote the vector space spanned by words of length $n$ on $m$ letters with content $\alpha$ (so that $j$ appears $\alpha_j$ times in each word). The action of $H_n(0)$ on a word $w = w_1 w_2 \cdots w_n$ is defined on generators as: $$\label{eq:actionwords} \pi_i w = \begin{cases} w & \textrm{ if $w_i \geq w_{i+1}$}; \\ s_i(w) & \textrm{if $w_i < w_{i+1}$}; \end{cases}$$ where $s_i(w) = w_1 w_2 \cdots w_{i+1}w_i \cdots w_n$. This is isomorphic to the representation: $$Ind^{H_n(0)}_{H_{\alpha}(0)} \left(L_{\alpha_1} \otimes L_{\alpha_2} \otimes \cdots \otimes L_{\alpha_m}\right),$$ where $L_k$ is the one-dimensional representation indexed by the composition $[k]$ and $H_\alpha(0) := H_{\alpha_1}(0) \otimes H_{\alpha_2}(0) \otimes \cdots \otimes H_{\alpha_m}(0)$. This can be seen by associating the element $\pi_v \otimes_{H_\alpha(0)} L_{\alpha_1} \otimes L_{\alpha_2} \otimes \cdots \otimes L_{\alpha_m}$ where $v$ is the minimal length left coset representative of $S_n / S_{\alpha_1} \times S_{\alpha_2} \times \cdots \times S_{\alpha_m}$ with the element $\pi_v (1^{\alpha_1} 2^{\alpha_2} \cdots k^{\alpha_k})$. We call a word a *$\mathcal{Y}$-word* if the first instance of $j$ appears before the first instance of $j+1$ for every $j$. We let $\mathcal{N}_\alpha$ denote the subspace of $\mathcal{M}_\alpha$ consisting of all words that are not $\mathcal{Y}$-words. The action of $H_n(0)$ on $\mathcal{M}_\alpha$ will never move a $j+1$ to the right of a $j$. This implies that $\mathcal{N}_\alpha$ is a submodule of $\mathcal{M}_\alpha$. The object of our interest is the quotient module $\mathcal{V}_\alpha := \mathcal{M}_\alpha / \mathcal{N}_\alpha$. We now state our main result. \[thm:repthry\] The characteristic of $\mathcal{V}_\alpha$ is the dual immaculate function indexed by $\alpha$, i.e. $\mathcal{F}([\mathcal{V}_\alpha]) = \fS^*_\alpha$. 
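The action \[eq:actionwords\], the $0$-Hecke relations, and the claim that $\mathcal{N}_\alpha$ is a submodule can all be checked by brute force on a small example; a sketch (function names are ours):

```python
from itertools import permutations

def pi(i, w):
    """The 0-Hecke generator pi_i on a word w, following eq. (actionwords):
    fix w if w_i >= w_{i+1}, otherwise swap the two letters."""
    w = list(w)
    if w[i - 1] < w[i]:
        w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def is_y_word(w):
    """True if the first j appears before the first j+1 for every j."""
    first = {}
    for pos, letter in enumerate(w):
        first.setdefault(letter, pos)
    letters = sorted(first)
    return all(first[a] < first[b] for a, b in zip(letters, letters[1:]))

# the basis of M_[2,2]: all words with two 1s and two 2s
for w in set(permutations((1, 1, 2, 2))):
    for i in (1, 2, 3):
        assert pi(i, pi(i, w)) == pi(i, w)     # pi_i is idempotent
        if not is_y_word(w):                   # N_alpha is a submodule:
            assert not is_y_word(pi(i, w))     # non-Y-words stay non-Y
    for i in (1, 2):                           # braid relations
        assert pi(i, pi(i + 1, pi(i, w))) == pi(i + 1, pi(i, pi(i + 1, w)))
print("relations and submodule check pass on M_[2,2]")
```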
Before proving this, we associate words to standard immaculate tableaux and give an equivalent description of the action of the $0$-Hecke algebra on standard immaculate tableaux. To a $\mathcal{Y}$-word $w$, we associate the unique standard immaculate tableau $\mathcal{T}(w)$ which has a $j$ in row $w_j$. \[bijectionYwords\] Let $w = 112322231$ be the $\mathcal{Y}$-word of content $[3,4, 2]$. Then $\mathcal{T}(w)$ is the standard immaculate tableau: $${{ \def\newtableau{{1,2,9},{3,5,6,7},{4,8}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { \ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}}$$ $\mathcal{T}$ yields a bijection between standard immaculate tableaux and $\mathcal{Y}$-words. In the case of the symmetric group, the irreducible representation corresponding to the partition $\lambda$ has a basis indexed by standard tableaux. Under the same map $\mathcal{T}$, standard Young tableaux are in bijection with Yamanouchi words (words for which every prefix contains at least as many $j$ as $j+1$ for every $j$). In this sense, $\mathcal{Y}$-words are a natural analogue to Yamanouchi words in our setting. The Specht modules that give rise to the indecomposable modules of the symmetric group are built as a quotient of ${\mathcal M}_\lambda$. Under the Frobenius map, these modules are associated to Schur functions. 
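The map $\mathcal{T}$ is immediate to implement; a sketch checking Example \[bijectionYwords\] (function names are ours):

```python
def tableau_of(w):
    """The standard immaculate tableau T(w) of a Y-word w:
    the letter j (a 1-indexed position in w) is placed in row w_j."""
    rows = {}
    for pos, letter in enumerate(w, start=1):
        rows.setdefault(letter, []).append(pos)
    return [rows[r] for r in sorted(rows)]

print(tableau_of([1, 1, 2, 3, 2, 2, 2, 3, 1]))
# [[1, 2, 9], [3, 5, 6, 7], [4, 8]]
```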
We may now describe the action of $H_n(0)$ on $\mathcal{V}_\alpha$, identifying the set of standard immaculate tableaux as the basis. Specifically, for a tableau $T$ and a generator $\pi_i$, we let: $$\label{eq:actiontableaux} \pi_i (T) = \begin{cases} 0 & \textrm{ if $i$ and $i\!+\!1$ are in the first column of $T$} \\ T & \textrm{ if $i$ is in a row weakly below the row containing $i\!+\!1$}\\ s_i(T) & \textrm{ otherwise}; \end{cases}$$ where $s_i(T)$ is the tableau that differs from $T$ by swapping the letters $i$ and $i+1$. Continuing from Example \[bijectionYwords\], we see that $\pi_1, \pi_4, \pi_5, \pi_6, \pi_8$ send $T$ to itself, $\pi_3$ sends $T$ to $0$ and $\pi_2, \pi_7$ send $T$ to the following tableaux: $$\pi_2(T) = {{ \def\newtableau{{1,3,9},{2,5,6,7},{4,8}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { \ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}} \hspace{1in} \pi_7(T) = {{ \def\newtableau{{1,2,9},{3,5,6,8},{4,7}} \begin{array}{c} \begin{tikzpicture}[scale=0.45,every node/.style={font=\rm\small}] \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.5,0.5); \foreach \row in \newtableau { \coordinate (x) at ($(x)-(0,1)$); \coordinate (y) at (x); \foreach \entry in \row { \ifthenelse{\equal{\entry}{X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray!10] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw[color=gray] 
($(y)-(0.5,0.5)$) rectangle +(1,1); } { \ifthenelse{\equal{\entry}{\boldentry X}} { \node (y) at ($(y) + (1,0)$) {}; \fill[color=gray] ($(y)-(0.5,0.5)$) rectangle +(1,1); \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } { \node (y) at ($(y) + (1,0)$) {\entry}; \draw ($(y)-(0.5,0.5)$) rectangle +(1,1); } } } } \end{tikzpicture} \end{array}}}$$ An example of the full action of $\pi_i$ on tableaux representing the basis elements of the module $\mathcal{V}_{(2,2,3)}$ is given in Figure \[fig:module223\]. If we order the tableaux so that $S \prec T$ whenever there exists a permutation $\sigma$ such that $\pi_\sigma(T) = S$, then this figure shows that $\prec$ is not a total order on tableaux, but it can be extended to a total order arbitrarily. We will use this total order in the following proof of Theorem \[thm:repthry\]. We are now ready to prove Theorem \[thm:repthry\], which states that the characteristic of $\mathcal{V}_\alpha$ is $\fS_\alpha^*$. We construct a filtration of the module $\mathcal{V}_\alpha$ whose successive quotients are irreducible representations. Now, define $\mathcal{M}_T$ to be the linear span of all standard immaculate tableaux that are less than or equal to $T$. From the definition of the order and the fact that the $\pi_i$ are not invertible, we see that $\mathcal{M}_T$ is a module. Ordering the standard immaculate tableaux of shape $\alpha$ as $T_1, T_2, \dots, T_m$, we obtain a filtration of $\mathcal{V}_\alpha$: $$0 \subset \mathcal{M}_{T_1} \subset \mathcal{M}_{T_2} \subset \cdots \subset \mathcal{M}_{T_m} = \mathcal{V}_\alpha.$$ The successive quotient modules $\mathcal{M}_{T_j}/\mathcal{M}_{T_{j-1}}$ are one-dimensional, spanned by $T_j$; to determine which irreducible this is, it suffices to compute the action of the generators. From the description of $\mathcal{V}_\alpha$ above, we see that $$\pi_i (T_j) = \begin{cases} 0 & \textrm{ if $i \in \mathcal{S}(D(T_j))$}\\ T_j & \textrm{ otherwise}.
\end{cases}$$ This is the representation $[L_{D(T_j)}]$, whose characteristic is $F_{D(T_j)}$. Therefore $\mathcal{F}([\mathcal{V}_\alpha]) = \fS_\alpha^*$ by Proposition \[prop:FundamentalPositive\]. We now aim to prove that the modules we have constructed are indecomposable. We let $\SS_\alpha$ denote the super-standard tableau of shape $\alpha$, namely the unique standard immaculate tableau with the first $\alpha_1$ letters in the first row, the next $\alpha_2$ letters in the second row, and so on. We first need a few lemmas. \[lemma:cycgen\] The module $\mathcal{V}_\alpha$ is cyclically generated by $\SS_\alpha$. The module $\mathcal{M}_\alpha$ is cyclically generated by $1^{\alpha_1} 2^{\alpha_2} \cdots k^{\alpha_k} = \mathcal{T}^{-1}(\SS_\alpha)$, which can be seen since every basis element of $\mathcal{M}_\alpha$ arises from an application of the anti-sorting operators $\pi_i$ to $1^{\alpha_1} 2^{\alpha_2} \cdots k^{\alpha_k}$. $\mathcal{V}_\alpha$ is a quotient of $\mathcal{M}_\alpha$, and hence cyclically generated by the same element. \[lemma:turkey\] If $P$ is a standard immaculate tableau of shape $\alpha$ such that $\pi_i(P) = P$ for all $i \in \{1,2,\cdots,n\} \setminus \mathcal{S}(\alpha)$, then $P = \SS_\alpha$. In particular, if $P \neq \SS_\alpha$ then there exists an $i$ such that $\pi_i(\SS_\alpha) = \SS_\alpha$ but $\pi_i(P) \neq P$. If $\pi_i(P) = P$, then $i$ must be in the cell to the left of $i+1$ or in a row below $i+1$. The fact that $\pi_i(P) = P$ for all $i \in \{ 1,2,\dots,\alpha_1-1\}$ implies that the first row of $P$ agrees with that of $\SS_\alpha$. In a similar manner, we see that the second rows must agree. Continuing in this manner, we conclude that $P = \SS_\alpha$. For every $\alpha \models n$, $\mathcal{V}_\alpha$ is an indecomposable representation of $H_n(0)$. We let $f$ be an idempotent module morphism from $\mathcal{V}_\alpha$ to itself.
If we can prove $f$ is either the zero morphism or the identity, then $\mathcal{V}_\alpha$ is indecomposable [@Ja Proposition 3.1]. Suppose $f(\SS_\alpha) = \sum_T a_T T$. By Lemma \[lemma:turkey\], for any $P \neq \SS_\alpha$, there exists an $i$ such that $\pi_i (\SS_\alpha) = \SS_\alpha$ but $\pi_i(P) \neq P$. Since $f$ is a module map, $$\label{eqn:boring}\sum_T a_T T = f(\SS_\alpha) = f(\pi_i\SS_\alpha) = \pi_if(\SS_\alpha) = \sum_T a_T \pi_i T.$$ The coefficient of $P$ on the right-hand side of Equation \[eqn:boring\] is zero (if there were a $T$ such that $\pi_i T = P$, then $\pi_i T = \pi_i^2 T = \pi_i P \neq P$, a contradiction). Therefore $a_P = 0$ for all $P\neq \SS_\alpha$, so $f(\SS_\alpha) = a\SS_\alpha$ for some scalar $a$; since $f$ is idempotent, $a^2 = a$, so either $f(\SS_\alpha) = \SS_\alpha$ or $f(\SS_\alpha) = 0$. Since $\mathcal{V}_\alpha$ is cyclically generated by $\SS_\alpha$, this implies that $f$ is either the identity morphism or the zero morphism. \[fig:module223\] ![A diagram representing the action of the generators $\pi_i$ of $H_n(0)$ given in Equation \[eq:actiontableaux\] on the basis elements of the module $\mathcal{V}_{(2,2,3)}$.](modulecompletecropped.pdf){width="3in"} [99]{} M. Aguiar, N. Bergeron and F. Sottile, *Combinatorial Hopf algebras and generalized Dehn–Sommerville relations*, , 142 (2006) 1–30. C. Berg, N. Bergeron, F. Saliola, L. Serrano and M. Zabrocki, *A lift of the Schur and Hall-Littlewood bases to non-commutative symmetric functions*, arXiv:1208.5191. To appear in the Canadian Journal of Mathematics. C. Berg, N. Bergeron, F. Saliola, L. Serrano and M. Zabrocki, *Multiplicative structures of the immaculate basis of non-commutative symmetric functions*, , (2013). G. Duchamp, D. Krob, B. Leclerc, J.-Y. Thibon, *Fonctions quasi-symétriques, fonctions symétriques non-commutatives, et algèbres de Hecke à $q = 0$*, Comptes Rendus de l’Académie des Sciences 322 (1996), 107–112. Ira M. Gessel.
*Multipartite [$P$]{}-partitions and inner products of skew [S]{}chur functions*, In [Combinatorics and algebra ([B]{}oulder, [C]{}olo., 1983)]{}, volume 34 of [ Contemp. Math.]{}, pages 289–317. Amer. Math. Soc., Providence, RI, 1984. Israel M. Gelfand, Daniel Krob, Alain Lascoux, Bernard Leclerc, Vladimir S. Retakh, and Jean-Yves Thibon, *Non-commutative symmetric functions*, , 112-2 (1995) 218–348. Ira M. Gessel and Christophe Reutenauer, *Counting permutations with given cycle structure and descent set*, , 64-2 (1993) 189–215. N. Jacobson, *Basic algebra 2*, (2nd ed.), Dover. D. Krob, J.-Y. Thibon, *non-commutative symmetric functions IV: Quantum linear groups and Hecke algebras at q=0*, 6 (1997) 339–376. C. Malvenuto, C. Reutenauer, *Duality between quasi-symmetric functions and the Solomon descent algebra*, 177-3 (1995) 967–982. P. N. Norton. *0-Hecke algebras*. J. Austral. Math. Soc. Ser. A, 27(3):337–357, 1979. B. Sagan, *The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric Functions*, 2nd edition, Springer-Verlag, New York, 2001. W.A. Stein et al. *[S]{}age [M]{}athematics [S]{}oftware ([V]{}ersion 4.3.3)*, The Sage Development Team, 2010, [http://www.sagemath.org]{}. The [S]{}age-[C]{}ombinat community. *[[S]{}age-[C]{}ombinat]{}: enhancing Sage as a toolbox for computer exploration in algebraic combinatorics*, [[http://combinat.sagemath.org]{}]{}, 2008. Richard P. Stanley, *On the number of reduced decompositions of elements of Coxeter groups*, , 5-4 (1984) 359–372. J.Y. Thibon, *Lectures on noncommutative symmetric functions*, Memoirs of the Japan Mathematical Society 11 (2001), 39–94. J. Y. Thibon, Introduction to noncommutative symmetric functions, *From Numbers and Languages to (Quantum) Cryptography*, NATO Security through Science Series: Information and Communication Security, Volume 7. [^1]: This is not the original definition, but is equivalent by Proposition 3.16 in [@BBSSZ].
--- author: - 'R. P. Mignani' - 'S. Zaggia' - 'A. De Luca' - 'R. Perna' - 'N. Bassan' - 'P. A. Caraveo' date: 'Received ...; accepted ...' title: 'Optical and Infrared Observations of the X-ray source 1WGA J1713.4$-$3949 in the G347.3-0.5 SNR[^1] ' --- Introduction ============ X-ray observations have unveiled the existence of peculiar classes of Isolated Neutron Stars (INSs) which stand apart from the family of more classical radio pulsars in being radio-silent and powered not by the neutron star rotation but by still poorly understood emission mechanisms. Some of the most puzzling classes of radio-silent INSs are identified with a group of X-ray sources detected at the centre of young ($\sim$ 10-40 kyears) supernova remnants (SNRs), hence dubbed Central Compact Objects or CCOs (Pavlov et al. 2002). Although the SNR associations imply ages of the order of a few kyears, their X-ray properties make CCOs completely different from the other young INSs in SNRs (Pavlov et al. 2004; De Luca 2008). Only two of them exhibit X-ray pulsations, with periods in the $\sim$ 100-400 ms range, and the measured upper limits on the period derivatives yield spin-down ages $\ge 10^3$, exceeding the SNR age. Furthermore, their X-ray spectra are not purely magnetospheric but have strong thermal components. Finally, they are not embedded in pulsar wind nebulae (PWNe). The discovery of long-term X-ray flux variations (Gotthelf et al. 1999) and of a 6.7 hour periodicity (e.g. De Luca et al. 2006) in the RCW 103 CCO further complicated the picture, suggesting either a binary system with a low-mass companion, or a long-period magnetar (De Luca et al. 2006; Pizzolato et al. 2008). For other CCOs, the invoked scenarios involve low-magnetised INSs surrounded by debris disks formed after the supernova event (Gotthelf & Halpern 2007; Halpern et al. 2007), isolated accreting black holes (Pavlov et al. 2000), and dormant magnetars (Krause et al. 2005).
In the optical/IR, deep observations have been performed only for a handful of objects (see De Luca 2008 for a summary), but no counterpart has been identified yet, with the possible exception of the Vela Jr. CCO (Mignani et al. 2007a). One of the CCOs which still lacks a deep optical/IR investigation is 1WGA J1713.4$-$3949 in the young ($\le 40$ kyears) G347.3-0.5 SNR. Discovered by [[*ROSAT*]{}]{} (Pfeffermann & Aschenbach 1996), the source was re-observed with [[*ASCA*]{}]{}  and identified as an INS due to its high-temperature spectrum and the lack of an optical counterpart (Slane et al. 1999). 1WGA J1713.4$-$3949 was later observed with [[*RXTE*]{}]{}, [[*Chandra*]{}]{} and [[*XMM*]{}]{} (Lazendic et al. 2003; Cassam-Chenaï et al. 2004), with all the observations consistent with steady X-ray emission. The X-ray luminosity is $L_{0.5-10 keV} \sim 6 \times 10^{34}$ (d/6 kpc)$^2$ erg s$^{-1}$, where 6 kpc is the originally estimated SNR distance (Slane et al. 1999). A revised distance of $1.3 \pm 0.4$ kpc was recently obtained by Cassam-Chenaï et al. (2004). The X-ray spectrum can be fitted either by a blackbody, likely produced from hot polar caps, plus a power law ($kT\sim$ 0.4 keV; $\Gamma \sim 4$; $N_H \sim 9 \times 10^{21}$ cm$^{-2}$), or by two blackbodies ($kT_1\sim$ 0.5 keV; $kT_2\sim$ 0.3 keV; $N_H \sim 5 \times 10^{21}$ cm$^{-2}$). No X-ray pulsations have been detected so far (Slane et al. 1999; Lazendic et al. 2003), nor any radio counterpart (Lazendic et al. 2004), thus strengthening the case for 1WGA J1713.4$-$3949 being a member of the CCO class. Here we present the results of the first deep optical/IR observations of 1WGA J1713.4$-$3949, performed with the ESO telescopes. Observations are described in Sect. 2, while the results are presented and discussed in Sects. 3 and 4, respectively.
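As a quick numerical aside (our own arithmetic, simply applying the $d^2$ scaling in the quoted luminosity law), the revised distance of 1.3 kpc lowers the inferred luminosity by a factor of about 20:

```python
# Quoted law: L(0.5-10 keV) = 6e34 * (d / 6 kpc)**2 erg/s
def l_x(d_kpc, l_ref=6e34, d_ref_kpc=6.0):
    """X-ray luminosity rescaled to an assumed distance d (in kpc)."""
    return l_ref * (d_kpc / d_ref_kpc) ** 2

l_revised = l_x(1.3)   # ~2.8e33 erg/s at the revised 1.3 kpc distance
```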
Observations and data reduction =============================== Optical observations -------------------- 1WGA J1713.4$-$3949 was observed on June 13th 2004 at the ESO La Silla Observatory with the [[*New Technology Telescope*]{}]{} ([[*NTT*]{}]{}). The telescope was equipped with the second generation of the [*SUperb Seeing Imager*]{} ([[*SUSI2*]{}]{}). The camera is a mosaic of two 2000$\times$4000 pixel EEV CCDs with a 2$\times$2 binned pixel scale of $0\farcs16$ ($5\farcm5 \times 5\farcm5$ field of view). Repeated exposures were obtained in the broad-band B, V, and I filters. The [[*SUSI2*]{}]{} observations log is summarised in the first half of Table \[data\]. Observations were performed with the target close to the zenith and under reasonably good seeing conditions ($\sim 1\arcsec$). Since the target was always centred on the left chip, no dithering was applied to the B and V-band exposures, while the I-band ones were dithered to compensate for the fringing pattern affecting the CCD at longer wavelengths. Both night-time (twilight flat fields) and daytime calibration frames (bias, dome flat fields) were acquired. Unfortunately, due to the presence of clouds both at the beginning and at the end of the night, no standard star observations were acquired. As a reference for the photometric calibration we then used the closest-in-time zero points regularly measured using Landolt stars (Landolt 1983) as part of the instrument calibration plan and available in the photometry calibration database maintained by the [[*NTT*]{}]{}/[[*SUSI2*]{}]{} team. According to the zero point trending plots [^2], we estimate a conservative uncertainty of $\sim 0.1$ magnitudes on the values extrapolated to the night of our observations. Infrared observations --------------------- 1WGA J1713.4$-$3949 was observed on May 23rd and 24th 2006 at the ESO Paranal Observatory with [[*NAos COnica*]{}]{} ([[*NACO*]{}]{}), the adaptive optics (AO) imager and spectrometer mounted at the [[*VLT*]{}]{} Yepun telescope.
In order to provide the best combination of angular resolution and sensitivity, we used the S27 camera with a pixel scale of $0\farcs027$ ($28''\times28''$ field of view). As a reference for the AO correction we used the [[*GSC-2*]{}]{}  star S230012111058 ($V=14.3$), positioned $11\farcs5$ away from our target, with the $VIS$ ($4500-10000 ~ $ Å) dichroic element and wavefront sensor. Observations were performed in the H and K$_s$ bands. To allow for subtraction of the variable IR sky background, each integration was split into sequences of short, randomly dithered exposures with Detector Integration Times (DIT) of 24 s and 5 exposures (NDIT) along each node of the dithering pattern. The [[*NACO*]{}]{} observations log is summarised in the second half of Table \[data\]. For all observations, the seeing conditions were on average below $\sim 0\farcs8$. Unfortunately, since the target was always observed at the end of the night, the airmass was always above 1.4. Sky conditions were photometric in both nights. On the first night, the second and third K$_s$-band exposure sequences were aborted because of the very high airmass. Because of their worse image quality and their much lower signal-to-noise, these data are not considered in the following analysis. The K$_s$-band exposure sequence obtained on the second night was interrupted, despite the very good seeing, because of the incoming twilight. Thanks to the combination of good seeing and low airmass, the H-band exposure is the one with the best image quality. Night-time (twilight flat fields) and daytime calibration frames (darks, lamp flat fields) were taken daily as part of the [[*NACO*]{}]{} calibration plan. Standard stars from the Persson et al. (1998) fields were observed in both nights for photometric calibration.
  Date (yyyy-mm-dd)   Filter   T (s)   Seeing ($''$)   Airmass
  ------------------- -------- ------- --------------- ---------
  2004-06-13          B        3200    1.14            1.07
  2004-06-13          V        6400    1.12            1.03
  2004-06-13          I        3150    1.0             1.16
  2006-05-24          Ks       1800    0.66            1.45
  2006-05-24          Ks       360     0.95            1.70
  2006-05-24          Ks       600     0.78            1.81
  2006-05-25          H        2400    0.62            1.42
  2006-05-25          Ks       1200    0.40            1.74
  ------------------- -------- ------- --------------- ---------

  : Log of the [[*NTT*]{}]{}/[[*SUSI2*]{}]{} (first half) and [[*VLT*]{}]{}/[[*NACO*]{}]{} (second half) observations of the 1WGA J1713.4$-$3949 field. Columns report the observing epoch, the filter, the total integration time, and the average seeing and airmass. \[data\]

Data reduction -------------- The [[*NTT*]{}]{}/[[*SUSI2*]{}]{} data were reduced using standard routines available in the [MIDAS]{} data reduction package[^3]. After the basic reduction steps (hot pixel masking, removal of bad CCD columns, bias subtraction, flat-field correction), single science frames were combined to filter cosmic-ray hits and to remove the fringing patterns in the I band. The astrometry was computed using the positions of 61 stars selected from the [[*2MASS*]{}]{} catalogue (Skrutskie et al. 2006). For a better comparison with the [[*VLT*]{}]{}/[[*NACO*]{}]{} IR images, the I-band image was taken as a reference. The pixel coordinates of the [[*2MASS*]{}]{} stars (all non-saturated and evenly distributed in the field) were measured by fitting their intensity profiles with a Gaussian function using the [GAIA]{} ([Graphical Astronomy and Image Analysis]{}) tool[^4]. The fit to celestial coordinates was computed using the [Starlink]{} package [ASTROM]{}[^5]. The rms of the astrometric fit residuals was $\approx 0\farcs09$ per coordinate. After accounting for the [**conservative**]{} $0\farcs2$ astrometric accuracy of [[*2MASS*]{}]{} (Skrutskie et al.
2006), the overall uncertainty to be attached to our astrometry is finally $0\farcs24$. The [[*VLT*]{}]{}  data were processed through the ESO [[*NACO*]{}]{}  data reduction pipeline[^6]. For each band, science frames were reduced with the master dark and flat-field frames produced by the pipeline and combined to correct for the exposure dithering and to produce cosmic-ray-free and sky-subtracted images. The photometric calibration was applied using the zero point provided by the [[*NACO*]{}]{} pipeline, computed through fixed-aperture photometry. The astrometric calibration was performed using the same procedure described above. However, since only five [[*2MASS*]{}]{}  stars are identified in the narrow [[*NACO*]{}]{}  S27 camera field of view, we computed the astrometric solution using as a reference a set of 23 secondary stars found in common with the [[*SUSI2*]{}]{} I-band image, calibrated using [[*2MASS*]{}]{}. The rms of the astrometric fit residuals was then $\approx 0\farcs06$ per coordinate. By adding in quadrature the rms of the astrometric fit residuals of the [[*SUSI2*]{}]{} I-band image and the average astrometric accuracy of [[*2MASS*]{}]{}, we thus end up with an overall accuracy of $0\farcs25$ on the [[*NACO*]{}]{} image astrometry. Data analysis and results ========================= Astrometry ---------- We derived the coordinates of 1WGA J1713.4$-$3949 through the analysis of unpublished [[*Chandra*]{}]{} observations. The field of 1WGA J1713.4-3949 was observed on April 19th 2005 with the [*ACIS/I*]{} instrument for 9.7 ks. Calibrated (level 2) data were retrieved from the [[*Chandra*]{}]{} X-ray Center Archive and were analysed using the [Chandra Interactive Analysis of Observations]{} software ([CIAO v3.3]{}). In order to compute the target position, we performed a source detection in the 0.5-10 keV energy range using the [wavdetect]{} task.
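The quoted error budgets can be reproduced by simple quadrature sums. One plausible reading (our assumption, not stated explicitly in the text) is that the per-coordinate rms values are converted to radial errors with a factor $\sqrt{2}$ before being combined:

```python
from math import sqrt

def radial(per_coord_rms):
    """Per-coordinate rms -> radial rms, assuming equal, independent axes."""
    return sqrt(2.0) * per_coord_rms

# SUSI2 I band: 0.09"/coord fit rms combined with the 0.2" 2MASS accuracy
susi2_acc = sqrt(radial(0.09) ** 2 + 0.20 ** 2)                      # ~0.24"
# NACO: adds the 0.06"/coord rms of the NACO-to-I-band secondary fit
naco_acc = sqrt(radial(0.06) ** 2 + radial(0.09) ** 2 + 0.20 ** 2)   # ~0.25"
```

Under this reading, the two totals round to the quoted $0\farcs24$ and $0\farcs25$.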
The source coordinates turned out to be $\alpha (J2000)=17^h 13^m 28.32^s$, $\delta (J2000)= -39^\circ 49\arcmin 53\farcs34$, with a nominal uncertainty of $\sim 0\farcs8$ (99% confidence level)[^7]. The identification of a field X-ray source with the bright star HD 322941, at a position consistent with the one listed in the Tycho Reference Catalog (H[ø]{}g et al. 2000), confirmed the accuracy of the nominal [[*Chandra*]{}]{} astrometric solution. Unfortunately, since no other field X-ray source could be unambiguously identified with catalogued objects, it was not possible to perform any boresight correction to the [[*Chandra*]{}]{} data in order to improve the nominal astrometric accuracy. The computed 1WGA J1713.4$-$3949 position is shown in Fig. 1, overplotted on the [[*NTT*]{}]{}/[[*SUSI2*]{}]{} I-band and on the [[*VLT*]{}]{}/[[*NACO*]{}]{} H-band images. In the former (Fig. 1-left) a faint and patchy object is clearly detected northeast of the [[*Chandra*]{}]{}  error circle (I$=23.5 \pm 0.3$) and a fainter one (I$=24.3\pm0.4$) is possibly detected south of it. However, in both cases their patchy structure makes it difficult to determine whether they are single objects or blends with unresolved field objects. No other object is detected within or close to the [[*Chandra*]{}]{} error circle down to B$\sim$26, V$\sim$26.2 and I$\sim$24.7 ($3 \sigma$). However, thanks to the better seeing conditions (see Table \[data\]) and to the sharper angular resolution, five objects are clearly detected in the [[*VLT*]{}]{}/[[*NACO*]{}]{} image (Fig. 1-right). Of these, object 413 falls within the [[*Chandra*]{}]{} error circle. A sixth, fainter object (479) is possibly detected, albeit at very low significance. They are all point-like and compatible with the on-axis [[*NACO*]{}]{} PSF. Objects 401 and 403 are identified with the two faint objects detected in the [[*NTT*]{}]{}/[[*SUSI2*]{}]{}  I-band image northeast and southeast of the [[*Chandra*]{}]{}  error circle, respectively.
The former might actually be a blend of objects 401 and 400, whose angular separation ($\approx 0\farcs6$) is smaller than the PSF of the [[*NTT*]{}]{}/[[*SUSI2*]{}]{} image. We thus take the measured magnitude (I$=23.5 \pm 0.3$) of object 401 with caution. All the objects detected in the [[*NACO*]{}]{} H-band image are also detected in the longest (1200 and 1800 s) K$_s$-band ones (see Table \[data\]). No other object is detected close to the [[*Chandra*]{}]{} error circle down to H$\sim$21.3 and K$_s\sim$20.5 ($3 \sigma$).

  ID    H                  K$_s$              H$-$K$_s$
  ----- ------------------ ------------------ -----------------
  380   19.75 $\pm$ 0.14   19.32 $\pm$ 0.11   0.43 $\pm$ 0.18
  400   18.98 $\pm$ 0.13   18.49 $\pm$ 0.09   0.49 $\pm$ 0.16
  401   17.82 $\pm$ 0.13   17.32 $\pm$ 0.08   0.50 $\pm$ 0.15
  403   18.47 $\pm$ 0.13   17.87 $\pm$ 0.08   0.60 $\pm$ 0.15
  413   18.63 $\pm$ 0.13   18.31 $\pm$ 0.09   0.33 $\pm$ 0.16
  479   20.34 $\pm$ 0.15   19.61 $\pm$ 0.12   0.73 $\pm$ 0.20

  : [[*VLT*]{}]{}/[[*NACO*]{}]{}  H and K$_s$-band photometry and colours of the candidate counterparts of 1WGA J1713.4$-$3949. \[phot\]

Photometry ---------- We computed object magnitudes in the [[*NACO*]{}]{}  images through PSF photometry, using the suite of tools [[Daophot]{}]{} (Stetson 1992) and applying the same procedures described in Zaggia et al. (1997) and applied in Mignani et al. (2007a) and De Luca et al. (2008). Since the [[*NACO*]{}]{} PSF is largely oversampled, we re-sampled the images with a $3\times3$ pixel window using the [swarp]{} program[^8] to increase the signal-to-noise ratio. As a reference for our photometry we used the co-added and re-sampled H-band image to create a master list of objects, which we registered on the K$_s$-band one and used as a mask for the object detection. For each image, the model PSF was calculated by fitting the profile of a number of bright but non-saturated reference stars in the field and used to measure the object fluxes at the reference positions.
Our photometry was calibrated using the zero points provided by the [[*NACO*]{}]{} pipeline after applying the aperture correction, with attached errors of $\sim 0.13$ and $\sim 0.08$ magnitudes in H and K$_s$, respectively. Single-band catalogues were then matched and used as a reference for our colour analysis. The IR magnitudes of our candidates are listed in Tab. \[phot\]. We used the K$_s$-band photometry performed on the two consecutive nights to search for variability on time scales of hours. However, none of our candidates shows flux variations larger than 0.1 magnitudes, which is consistent with our photometric errors. Fig. 2 shows the H, H$-$K$_s$ colour-magnitude diagram (CMD) for our candidates as well as for all objects detected in the field. None of the candidates is characterised by peculiar colours with respect to the main sequence of the field stellar population, which suggests that they are main-sequence stars. ![H,H-K$_s$ CMD of all stars detected in the [[*VLT*]{}]{}$/$[[*NACO*]{}]{} field. All candidates identified in Fig. 1 are marked in red and labelled accordingly. No interstellar extinction correction has been applied.[]{data-label="rxj1713_cmd"}](rxj1713_cmd.ps){height="8cm"} Discussion ========== To determine whether one of the detected objects is the IR counterpart to 1WGA J1713.4$-$3949, we investigated how their observed properties fit with different scenarios. A binary system --------------- If our candidates are stars, we considered the possibility that one of them is the companion of the 1WGA J1713.4$-$3949 neutron star. Their observed colours are quite red (0.4$<$ H-K$_s$ $<$0.7), which suggests that they might be intrinsically red late-type stars. To be compatible with the observed range of H-K$_s$ (Ducati et al. 2001), e.g. an M-type main-sequence star should be reddened by an amount of interstellar extinction corresponding to $N_H \sim 10^{22}$ cm$^{-2}$ (Predehl & Schmitt 1995).
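An illustrative back-of-envelope check (ours, not the authors' computation) uses the Predehl & Schmitt (1995) relation $N_H \approx 1.79\times10^{21}\,A_V$ cm$^{-2}$ together with standard near-IR extinction ratios ($A_H \approx 0.175\,A_V$, $A_K \approx 0.112\,A_V$; these ratios are assumptions taken from the common Rieke & Lebofsky extinction law, not from this paper):

```python
# Back-of-envelope reddening estimate (illustrative values, see lead-in)
N_H = 1e22                     # cm^-2, the column density quoted above
A_V = N_H / 1.79e21            # Predehl & Schmitt (1995): ~5.6 mag
E_HK = (0.175 - 0.112) * A_V   # colour excess E(H-Ks): ~0.35 mag
```

Added to intrinsic late-type colours of a few tenths of a magnitude, a reddening of this size moves H$-$K$_s$ into the observed $0.4$–$0.7$ range.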
This value is compatible with the largest values obtained from the spectral fits to 1WGA J1713.4$-$3949 (Lazendic et al. 2003; Cassam-Chenaï et al. 2004). For the originally proposed 1WGA J1713.4$-$3949 distance of 6 kpc (Slane et al. 1999), an M-type star with such a high extinction should be at least $\sim$0.7 magnitudes fainter than our faintest candidate (object 479). An early- to mid-M-type star would be compatible with the revised distance of $1.3 \pm 0.4$ kpc (Cassam-Chenaï et al. 2004), but it would then be detected in our [[*NTT*]{}]{}/[[*SUSI2*]{}]{} image at I$\sim 20.2-21.7$. Thus, we conclude that if our candidates are stars, none of them can be associated with 1WGA J1713.4$-$3949. Our optical/IR magnitude upper limits only allow an undetected companion of spectral type later than M. An isolated neutron star ------------------------ If 1WGA J1713.4$-$3949 is indeed an INS, we can then speculate whether one of our candidates is the neutron star itself. Due to the paucity of neutron stars observed in the IR (e.g. Mignani et al. 2007b) and to the lack of well-defined spectral templates, it is very difficult to estimate their expected IR brightness. This is even more difficult for the CCO neutron stars, none of which has been unambiguously identified so far (e.g. De Luca 2008). In the best-characterised case of rotation-powered neutron stars, one can deduce that the magnetospheric IR and X-ray luminosities correlate (Mignani et al. 2007b; Possenti et al. 2002). By assuming, e.g., a blackbody plus power-law X-ray spectrum for 1WGA J1713.4$-$3949 (Lazendic et al. 2003; Cassam-Chenaï et al. 2004), we then scaled the magnetospheric IR-to-X-ray luminosity ratio of the Vela pulsar, taken as a reference because of its comparable age ($\sim 10$ kyears). After accounting for the corresponding interstellar extinction, we thus estimated K$_s \sim 19.7$ for 1WGA J1713.4$-$3949, i.e. similar to the magnitude of object 479 (K$_s \sim 19.6$).
Since the magnetospheric optical and X-ray luminosities also correlate (e.g., Zharikov et al. 2004), we similarly estimated B$\approx28.3$ for 1WGA J1713.4$-$3949, which, however, is below our [[*NTT*]{}]{}/[[*SUSI2*]{}]{}  upper limit (B$\ge$26). Thus, a neutron star identification cannot be firmly excluded. A fossil disk ------------- As discussed in Sect. 1, some CCO models invoke low-magnetised INSs surrounded by fallback disks. So, the last possibility is that we detected the IR emission from such a disk. We note that the IR-to-X-ray flux ratio for 1WGA J1713.4$-$3949 would be $\approx 10^{-3} - 10^{-2}$, i.e. much larger than that estimated for the anomalous X-ray pulsar 4U 0142+61 (Wang et al. 2006), the only INS with evidence of a fallback disk. However, we cannot a priori rule out the fallback disk scenario. We computed the putative disk emission using the model of Perna et al. (2000), which accounts both for the contribution of viscous dissipation and for that due to reprocessing of the neutron star X-ray luminosity. As a reference, we assumed the X-ray luminosity derived for the updated distance of $1.3 \pm 0.4$ kpc (Cassam-Chenaï et al. 2004). For a nominal disk inclination angle of $60^\circ$ with respect to the line of sight, the unknown model parameters are the disk inner and outer radii ($R_{\rm in}$, $R_{\rm out}$) and the accretion rate ($\dot{M}$). We thus iteratively fitted our data for different sets of the model parameters. For the dimmest candidate we found that the IR fluxes would be consistent with the spectrum of a disk ($R_{\rm in}=0.28 R_\odot$, $R_{\rm out}=1.4R_\odot$) whose emission is dominated by the reprocessed neutron star X-ray luminosity (Fig. 3), similarly to the case of 4U 0142+61. However, such a disk should be detected in the I band, with a flux $\sim 1.5$ magnitudes above our measured upper limit, as shown in Fig. 3. The overprediction of the optical flux is even more dramatic for a disk that fits the brighter counterparts.
We thus conclude that, if the neutron star has a disk, it was not detected by our observations. ![Dereddened IR spectra of the 1WGA J1713.4$-$3949 candidate counterparts. Dotted lines are drawn for guidance. The BVI-band upper limits are indicated. The solid line is the best-fitting disk spectrum ($R_{\rm in}=0.28 R_\odot$, $R_{\rm out}=1.4 R_\odot$) for object 479. []{data-label="rxj1713_cmd"}](disk.ps){height="8cm"} Conclusions =========== We performed deep optical and IR observations of the CCO 1WGA J1713.4$-$3949 in the G347.3-0.5 SNR, the first ever performed for this source, with the [[*NTT*]{}]{} and the [[*VLT*]{}]{}. We detected a few objects close to the derived [[*Chandra*]{}]{}  X-ray error circle. However, if they are stars, the association with the CCO would not be compatible with its current values of distance and hydrogen column density. Similarly to the cases of the CCOs in PKS 1209$-$51, Puppis A (Wang et al. 2007), Cas A (Fesen et al. 2006) and RCW 103 (De Luca et al. 2008), our results argue against the presence of a companion star, unless it is of spectral type later than M, and favour the INS scenario. The identification of the faintest candidate with the neutron star itself cannot be firmly excluded, while the identification with a fallback disk is ruled out by its non-detection in the I band. Thus, we conclude that the 1WGA J1713.4$-$3949 counterpart is still unidentified. Deeper optical/IR observations are needed to pinpoint new candidates. Although the source is apparently steady in X-rays, flux variations such as those observed in the RCW 103 CCO (Gotthelf et al. 1999) cannot be a priori excluded. A prompt IR follow-up would then increase the chances of identifying the 1WGA J1713.4$-$3949 counterpart. RPM warmly thanks N. Ageorges (ESO) for her friendly support at the telescope and D. Dobrzycka (ESO) for reducing the IR data with the [[*NACO*]{}]{} pipeline.
Cassam-Chenaï, G., Decourchelle, A., Ballet, J., Sauvageot, J.-L., Dubner, G., et al., 2004, A&A, 427, 199 De Luca, A., Caraveo, P. A., Mereghetti, S., Tiengo, A., Bignami, G. F., 2006, Science, 313, 814 De Luca, A., Mignani, R. P., Zaggia, S., Beccari, G., Mereghetti, S., et al., 2008, ApJ, in press, \[arXiv:0803.2885\] De Luca, A., 2008, in Proc. of 40 Years of Pulsars: Millisecond Pulsars, Magnetars and More, AIP, 938, 311 Ducati, J. R., Bevilacqua, C. M., Rembold, S. B., Ribeiro, D., 2001, ApJ, 558, 309 Fesen, R. A., Pavlov, G. G., Sanwal, D., 2006, ApJ, 636, 848 Gotthelf, E. V., Petre, R., Vasisht, G., 1999, ApJ, 514, L107 Gotthelf, E. V., Halpern, J. P., 2007, ApJ, 664, L35 Halpern, J. P., Gotthelf, E. V., Camilo, F., Seward, F. D., 2007, ApJ, 665, 1304 H[ø]{}g, E., Fabricius, C., Makarov, V. V., et al., 2000, A&A, 355, L27 Krause, O., Rieke, G. H., Birkmann, S. M., Le Floc’h, E., Gordon, K. D., et al., 2005, Science, 308, 1064 Landolt, A. U., 1983, AJ, 88, 439 Lazendic, J. S., Slane, P. O., Gaensler, B. M., Plucinsky, P. P., Hughes, J. P., 2003, ApJ, 593, L27 Lazendic, J. S., Slane, P. O., Gaensler, B. M., Reynolds, S. P., Plucinsky, P. P., et al., 2004, ApJ, 602, 271 Mignani, R. P., De Luca, A., Zaggia, S., Sester, D., Pellizzoni, A., et al., 2007a, A&A, 473, 833 Mignani, R. P., Perna, R., Rea, N., Israel, G. L., Mereghetti, S., et al., 2007b, A&A, 471, 265 Pavlov, G. G., Zavlin, V. E., Aschenbach, B., Trümper, J., Sanwal, D., 2000, ApJ, 531, L35 Pavlov, G. G., Sanwal, D., Garmire, G. P., Zavlin, V. E., 2002, in Proc. of Neutron Stars in Supernova Remnants, ASP Conference Series, Vol. 271, p. 247 Pavlov, G. G., Sanwal, D., Teter, M., 2004, IAU Symp. Vol. 218, 239 Perna, R., Hernquist, L., & Narayan, R., 2000, ApJ, 541, 344 Pfeffermann, E., Aschenbach, B., 1996, MPE Report, 263, 267 Pizzolato, F., Colpi, M., De Luca, A., Mereghetti, S., Tiengo, A., 2008, ApJ, accepted, \[arXiv:0803.1373\] Possenti, A., Cerutti, R., Colpi, M., Mereghetti, S., 2002, A&A, 387, 993 Predehl, P. & Schmitt, J.H.M.M.
1995, A&A 293, 889 Skrutskie, M. F., Cutri, R. M., Stiening, R., Weinberg, M. D., Schneider, S., et al., 2006, AJ, 131, 1163 Slane, P., Gaensler, B. M., Dame, T.M., Hughes, J.P., Plucinsky, P.P., et al., 1999, ApJ, 525, 357 Stetson, P. B. 1992, ASP Conf. Ser.  25: Astronomical Data Analysis Software and Systems I, 25, 297 Wang, Z., Chakrabarty, D., Kaplan, D. L. 2006, Nature, 440, 772 Wang, Z., Kaplan, D., Chakrabarty, D., 2007, ApJ, 655, 261 Zaggia, S. R., Piotto, G., & Capaccioli, M. 1997, , 327, 1004 Zharikov, S. V., Shibanov, Yu. A., Mennickent, R. E., Komarova, V. N., Koptsevich, A. B., et al., 2004, A&A, 417, 1017 [^1]: Based on observations collected at the European Southern Observatory, Paranal, Chile under programme ID 073.D-0632(A),077.D-0764(A) [^2]: http://www.ls.eso.org/lasilla/sciops/ntt/susi/docs/susiCounts.html [^3]: http://www.eso.org/sci/data-processing/software/esomidas/ [^4]: star-www.dur.ac.uk/ pdraper/gaia/gaia.html [^5]: http://star-www.rl.ac.uk/Software/software.htm [^6]: www.eso.org/observing/dfo/quality/NACO/pipeline [^7]: http://cxc.harvard.edu/cal/ASPECT/celmon/ [^8]: http://terapix.iap.fr/
--- abstract: 'We report on the calculation of Gamow-Teller and double-$\beta$ decay properties for nuclei around $^{132}$Sn within the framework of the realistic shell model. The effective shell-model Hamiltonian and Gamow-Teller transition operator are derived by way of many-body perturbation theory, without resorting to an empirical quenching factor for the Gamow-Teller operator. The results are then compared with the available experimental data, in order to establish the reliability of our approach. This is a mandatory step before we apply the same methodology, in forthcoming studies, to the calculation of the neutrinoless double-$\beta$ decay nuclear matrix element for nuclei that are currently considered among the best candidates for the detection of this process.' author: - 'L. Coraggio' - 'L. De Angelis' - 'T. Fukui' - 'A. Gargano' - 'N. Itaco' bibliography: - 'biblio.bib' title: 'Calculation of Gamow-Teller and Two-Neutrino Double-$\beta$ Decay Properties for $^{130}$Te and $^{136}$Xe with a realistic nucleon-nucleon potential' --- Introduction {#intro} ============ The detection of neutrinoless double-$\beta$ decay ($0\nu\beta\beta$) is nowadays one of the main targets in many laboratories all around the world, triggered by the search for "new physics" beyond the Standard Model. The observation of such a process would be evidence of lepton-number violation and would shed light on the nature and properties of the neutrino (see Refs. [@Avignone08; @Vergados12] and references therein).
It is well known that the expression for the half life of the $0\nu\beta\beta$ decay can be written in the following form: $$\left[ T^{0\nu}_{1/2} \right]^{-1} = G^{0\nu} \left| M^{0\nu} \right|^2 \langle m _{\nu} \rangle^2 ~~, \label{halflife}$$ where $G^{0\nu}$ is the so-called phase-space factor (or kinematic factor), $\langle m _{\nu} \rangle$ is the effective neutrino mass that takes into account the neutrino parameters associated with the mechanisms of light- and heavy-neutrino exchange, and $M^{0\nu}$ is the nuclear matrix element (NME) directly related to the wave functions of the parent and grand-daughter nuclei. From expression (\[halflife\]), it is clear that a reliable estimate of the NME is a key point in understanding which nuclides are the most favorable for the search for the $0\nu\beta\beta$ decay, and how to link the experimental results to a measurement of $|\langle m_{\nu}\rangle|$. It is therefore incumbent upon the theoretical nuclear structure community to make an effort to provide calculations of the NME that are as reliable as possible. Currently, the nuclear structure models most widely employed in this research field are the Interacting Boson Model (IBM) [@Barea09; @Barea12; @Barea13], the Quasiparticle Random-Phase Approximation (QRPA) [@Simkovic08; @Simkovic09; @Fang11; @Faessler12], Energy Density Functional methods [@Rodriguez10], and the Shell Model (SM) [@Caurier08; @Menendez09a; @Menendez09b; @Horoi13a; @Horoi13b; @Neacsu15; @Brown15; @Frekers17]. All of them have different advantages and drawbacks, which make one model more suitable than another for a certain class of nuclei, but nowadays the results obtained employing these approaches agree within a factor $\sim 2 \div 3$ (see Ref. [@Barea15] and references therein).
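To make the role of the NME in Eq. (\[halflife\]) concrete, the half-life formula can be inverted for the effective neutrino mass, as in the short numerical sketch below. The phase-space factor, NME, and half-life used here are purely illustrative placeholders (chosen only to fix orders of magnitude), not values from this work.

```python
import math

def effective_mass(t_half, g0v, m0v):
    """Invert [T^{0v}_{1/2}]^{-1} = G^{0v} |M^{0v}|^2 <m_v>^2.

    t_half : half-life (yr)
    g0v    : phase-space factor (yr^-1 eV^-2, illustrative convention)
    m0v    : dimensionless nuclear matrix element
    Returns the effective neutrino mass <m_v> in eV.
    """
    return math.sqrt(1.0 / (t_half * g0v * abs(m0v) ** 2))

# Illustrative (made-up) inputs: a 1e26 yr half-life limit,
# G^{0v} ~ 1e-25 yr^-1 eV^-2, and an NME of 3.
m_nu = effective_mass(1e26, 1e-25, 3.0)   # upper bound on <m_v>, in eV
```

Note how the sensitivity to $\langle m_\nu \rangle$ degrades linearly with the NME: an uncertainty of a factor $2 \div 3$ on $M^{0\nu}$ translates into the same factor on the extracted mass bound.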
A common feature of all the many-body models applied to systems with mass number ranging from $A=48$ to 150 is that the parameters upon which they depend need to be determined by fitting some spectroscopic properties of the nuclei under investigation. In particular, since the Hilbert space considered in these approximated models is a truncated one, it is necessary to introduce quenching factors for the axial and vector coupling constants $g_A$ and $g_V$ that appear in the NME expression. Besides the excluded degrees of freedom in the many-body calculation, the quenching also has to take into account the subnucleonic structure of the nucleons. The free value of $g_A$, obtained from the measurement of $g_A/g_V$ in neutron decay [@Nakamura10], is 1.269, and its quenching factor is usually fixed by fitting the observed Gamow-Teller (GT) and two-neutrino double-$\beta$ decay ($2\nu\beta\beta$) properties, which are experimentally available. We remark that the structure of the two operators, corresponding to the $0\nu\beta\beta$ and $2\nu\beta\beta$ decays, is quite different, and the quenching may be effective for calculating the GT strengths and $2\nu\beta\beta$ NME, but not consistent with the renormalization of the $0\nu\beta\beta$-decay operator. As a matter of fact, there are two main open questions about this problem. The first one is related to the fact that in the $2\nu\beta\beta$ decay essentially only the $J^{\pi}=1^+$ states of the intermediate odd-odd nucleus are involved in the process, while all multipoles come into play in the $0\nu\beta\beta$ decay. So there is no precise prescription as to whether the $0\nu\beta\beta$ operator should be quenched only in the $1^+$ multipole, the quenching factor being fitted on $\beta$-decay properties, or whether all the multipole channels should be equally quenched [@Barea13]. Besides this, there is another question to be addressed.
In the $2\nu\beta\beta$ decay the term associated with the vector current of the electroweak Lagrangian and its coupling constant $g_V$ plays a negligible role, but this might not be the case for the $0\nu\beta\beta$ decay. So, it may also be necessary to renormalize this factor in order to take into account the many-body effects and the neglected subnucleonic degrees of freedom. Actually, there is no experimental evidence for an underlying mechanism for the renormalization of $g_V$, namely whether the same quenching factor used for $g_A$, fixed by fitting $\beta$-decay data, should be used to quench $g_V$ too. Our framework to tackle these problems is the realistic shell model, where all the parameters appearing in the SM Hamiltonian and in the transition operators are derived from a realistic free nucleon-nucleon ($NN$) potential $V_{NN}$ by way of many-body theory [@Kuo90; @Suzuki95]. In this way the bare matrix elements of the $NN$ potential and of any transition operator are renormalized with respect to the truncation of the full Hilbert space into the reduced SM model space, taking into account the neglected degrees of freedom without resorting to any empirical parameter. In other words, in our approach we do not employ effective charges to calculate electromagnetic transition strengths, and we do not quench empirically the axial and vector current coupling constants. It is, however, a mandatory step to check this approach by calculating properties related to the GT and $2\nu\beta\beta$ decays of nuclei involved in possible $0\nu\beta\beta$ decays, and to compare the results with the available data. This is the content of the present work, where we present the outcome of SM calculations for nuclei around $^{132}$Sn, focusing our attention on the GT strengths and the $2\nu\beta\beta$ decay of $^{130}$Te and $^{136}$Xe. These two nuclei are currently considered as candidates for the observation of neutrinoless double-beta decay by some large experimental collaborations.
The $0\nu\beta\beta$ decay of $^{130}$Te is targeted by the CUORE collaboration at the INFN Laboratori Nazionali del Gran Sasso in Italy [@CUORE], while the decay of $^{136}$Xe is investigated both by the EXO-200 collaboration at the Waste Isolation Pilot Plant in Carlsbad, New Mexico [@EXO-200], and by the KamLAND-Zen collaboration in the Kamioka mine in Japan [@Kamland]. Our starting point is the high-precision $NN$ potential CD-Bonn [@Machleidt01b], whose repulsive high-momentum components are smoothed out using the $V_{\rm low-k}$ approach [@Bogner02]. Then, from this realistic potential we have derived, within a model space spanned by the five proton and neutron orbitals $0g_{7/2},1d_{5/2},1d_{3/2},2s_{1/2},0h_{11/2}$ outside the doubly closed $^{100}$Sn, the effective shell-model Hamiltonian $H_{\rm eff}$, together with effective electromagnetic and GT transition operators. The derivation of the effective Hamiltonian and operators has been performed by way of time-dependent perturbation theory [@Kuo71; @Coraggio09a], including diagrams up to third order in $V_{\rm low-k}$. The following section is devoted to some details of the derivation of our shell-model Hamiltonian and of the effective transition and decay operators. In Section \[results\], we report the results of our calculations for the spectroscopic properties of $^{130}$Te, $^{130,136}$Xe, and $^{136}$Ba, the electromagnetic and GT transition strengths for $^{130}$Te and $^{136}$Xe, and their NMEs for the $2\nu\beta\beta$ decay. Theoretical results are compared with available experimental data. In the last section we sketch out a summary of the present work and an outlook on our future program. In the Supplemental Material [@supplemental2017], the calculated two-body matrix elements (TBME) of our SM Hamiltonian can be found. Outline of calculations {#calculations} ======================= We start our calculations by considering the high-precision CD-Bonn $NN$ potential [@Machleidt01b].
Because of the non-perturbative behavior induced by the repulsive high-momentum components of the CD-Bonn potential, we have renormalized the latter by way of the so-called $V_{\rm low-k}$ approach [@Bogner01; @Bogner02]. This procedure provides a smooth potential that can be employed directly in many-body perturbation theory, and that preserves exactly the on-shell properties of the original $NN$ potential up to a cutoff momentum $\Lambda$. We have chosen its value, as in many of our recent papers [@Coraggio09c; @Coraggio09d; @Coraggio14b; @Coraggio15a; @Coraggio16a], to be equal to $2.6$ fm$^{-1}$, because we have found that the larger the cutoff, the smaller the role of the missing three-nucleon force (3NF) [@Coraggio15b]. The Coulomb potential is explicitly taken into account in the proton-proton channel. The next step is to derive an effective Hamiltonian for SM calculations employing a model space spanned by the five $0g_{7/2},1d_{5/2},1d_{3/2},2s_{1/2},0h_{11/2}$ proton and neutron orbitals outside the doubly closed $^{100}$Sn core. To this end, an auxiliary one-body potential $U$ is introduced in order to break up the Hamiltonian for a system of $A$ nucleons into the sum of a one-body term $H_0$, which describes the independent motion of the nucleons, and a residual interaction $H_1$: $$\begin{aligned} H &= & \sum_{i=1}^{A} \frac{p_i^2}{2m} + \sum_{i<j=1}^{A} V_{\rm low-k}^{ij} = T + V_{\rm low-k} = \nonumber \\ ~& = & (T+U)+(V_{\rm low-k}-U)= H_{0}+H_{1}~~.\label{smham}\end{aligned}$$ Once $H_0$ has been introduced, the reduced model space is defined in terms of a finite subset of $H_0$'s eigenvectors. In our calculation we choose as auxiliary potential the harmonic oscillator (HO) potential. Since the diagonalization of the many-body Hamiltonian (\[smham\]) in an infinite Hilbert space is obviously infeasible, our eigenvalue problem is then reduced to the solution of the corresponding problem for an effective Hamiltonian $H_{\rm eff}$ in a truncated model space.
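The idea of trading the full eigenvalue problem for one in a truncated model space can be illustrated with a small numerical toy. The construction below is an exact, schematic stand-in (with a random $4\times4$ matrix playing the role of $H$), not the perturbative $\hat{Q}$-box machinery used in this work; it only shows that a small, generally non-Hermitian $H_{\rm eff}$ acting in the model space can exactly reproduce a subset of the full spectrum.

```python
import numpy as np

# Toy illustration: build an effective Hamiltonian in a 2-dimensional
# model space P that reproduces two eigenvalues of a "full" 4x4 H.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                     # full Hermitian Hamiltonian (placeholder)

E, V = np.linalg.eigh(H)              # exact eigenpairs
P = slice(0, 2)                       # model space: first two basis states

# Keep the two eigenvectors with the largest model-space overlap ...
overlaps = np.sum(V[P, :] ** 2, axis=0)
keep = np.sort(np.argsort(overlaps)[-2:])

# ... and demand H_eff (P v_i) = E_i (P v_i).  With U the matrix of
# projected eigenvectors, H_eff = U diag(E) U^{-1}: a 2x2, generally
# non-Hermitian operator whose spectrum equals the selected exact eigenvalues.
U = V[P, keep]
H_eff = U @ np.diag(E[keep]) @ np.linalg.inv(U)
```

Of course, in a realistic calculation the exact eigenvectors are not available; this is precisely why the folded-diagram expansion described next constructs $H_{\rm eff}$ perturbatively.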
In this paper, we derive $H_{\rm eff}$ by way of the Kuo-Lee-Ratcliff (KLR) folded-diagram expansion [@Kuo71; @Kuo90] in terms of the vertex function $\hat{Q}$ box, which is defined as $$\hat{Q} (\epsilon) = P H_1 P + P H_1 Q \frac{1}{\epsilon - Q H Q} Q H_1 P ~~. \label{qbox}$$ The $\hat{Q}$ box may be expanded perturbatively in terms of irreducible valence-linked one- and two-body Goldstone diagrams through third order in $H_1$ [@Hjorth95]. We have reviewed the calculation of our SM effective Hamiltonian $H_{\rm eff}$ in Ref. [@Coraggio12a], where details of the diagrammatic expansion of the $\hat{Q}$ box and its perturbative properties are also reported. In terms of the $\hat{Q}$ box, the effective SM Hamiltonian $H_{\rm eff}$ can be written in an operator form as $$H_{\rm eff} = \hat{Q} - \hat{Q'} \int \hat{Q} + \hat{Q'} \int \hat{Q} \int \hat{Q} - \hat{Q'} \int \hat{Q} \int \hat{Q} \int \hat{Q} + ~...~~,$$ where the integral sign represents a generalized folding operation, and $\hat{Q'}$ is obtained from $\hat{Q}$ by removing terms at first order in $V_{\rm low-k}$ [@Kuo71; @Kuo90]. The folded-diagram series is then summed up to all orders using the Lee-Suzuki iteration method [@Suzuki80]. From $H_{\rm eff}$ we obtain both single-particle (SP) energies and TBME for our SM calculations. As already mentioned in the Introduction, our calculated TBME are reported in the Supplemental Material [@supplemental2017], and our calculated SP energies in Table \[spetab\]. There, the latter (labelled as I) are compared with a set of empirical SP energies (labelled as II) obtained by fitting the observed SP states in $^{133}$Sb and $^{131}$Sn [@ensdf; @xundl].
\[spetab\]

                  Proton SP spacings     Neutron SP spacings
                    I         II           I         II
  $0g_{7/2}$       0.0       0.0          0.0       0.0
  $1d_{5/2}$       0.3       0.4          0.6       0.7
  $1d_{3/2}$       1.2       1.4          1.5       2.1
  $2s_{1/2}$       1.1       1.3          1.2       1.9
  $0h_{11/2}$      1.9       1.6          2.7       3.0

  : Theoretical (I) and empirical (II) proton and neutron SP energy spacings (in MeV) employed in the present work (see text for details).

As regards the effective transition and decay operators, namely the effective charges of the electric quadrupole operator and the matrix elements of the effective GT operator, we have derived them consistently with the SM $H_{\rm eff}$, within an approach that closely follows the one presented by Suzuki and Okamoto in Ref. [@Suzuki95]. In that paper, it has been demonstrated that a non-Hermitian effective operator $\Theta_{\rm eff}$ can be written in the following form: $$\begin{aligned} \Theta_{\rm eff} & = & (P + \hat{Q}_1 + \hat{Q}_1 \hat{Q}_1 + \hat{Q}_2 \hat{Q} + \hat{Q} \hat{Q}_2 + \cdots)(\chi_0+\nonumber \\ ~ & ~& + \chi_1 + \chi_2 +\cdots)~~, \label{effopexp}\end{aligned}$$ where $\hat{Q}$ is the $\hat{Q}$ box defined by expression (\[qbox\]), and $$\hat{Q}_m = \frac {1}{m!} \frac {d^m \hat{Q} (\epsilon)}{d \epsilon^m} \biggl| _{\epsilon=\epsilon_0} ~~, \label{qm}$$ $\epsilon_0$ being the model-space eigenvalue of the unperturbed Hamiltonian $H_0$ which, as mentioned before, we have chosen to be the HO one. The $\chi_n$ operators are defined as follows: $$\begin{aligned} \chi_0 &=& (\hat{\Theta}_0 + h.c.)+ \Theta_{00}~~, \label{chi0} \\ \chi_1 &=& (\hat{\Theta}_1\hat{Q} + h.c.) + (\hat{\Theta}_{01}\hat{Q} + h.c.) ~~, \\ \chi_2 &=& (\hat{\Theta}_1\hat{Q}_1 \hat{Q}+ h.c.) + (\hat{\Theta}_{2}\hat{Q}\hat{Q} + h.c.)
+ \nonumber \\ ~ & ~ & (\hat{\Theta}_{02}\hat{Q}\hat{Q} + h.c.)+ \hat{Q} \hat{\Theta}_{11} \hat{Q}~~, \label{chin} \\ &~~~& \cdots \nonumber\end{aligned}$$ where $\hat{\Theta}_m$, $\hat{\Theta}_{mn}$ have the following expressions: $$\begin{aligned} \hat{\Theta}_m & = & \frac {1}{m!} \frac {d^m \hat{\Theta} (\epsilon)}{d \epsilon^m} \biggl|_{\epsilon=\epsilon_0} ~~~, \\ \hat{\Theta}_{mn} & = & \frac {1}{m! n!} \frac{d^m}{d \epsilon_1^m} \frac{d^n}{d \epsilon_2^n} \hat{\Theta} (\epsilon_1 ;\epsilon_2) \biggl|_{\epsilon_1= \epsilon_0, \epsilon_2 = \epsilon_0} ~,\end{aligned}$$ with $$\begin{aligned} \hat{\Theta} (\epsilon) & = & P \Theta P + P \Theta Q \frac{1}{\epsilon - Q H Q} Q H_1 P ~~, \label{thetabox} \\ \hat{\Theta} (\epsilon_1 ; \epsilon_2) & = & P \Theta P + P H_1 Q \frac{1}{\epsilon_1 - Q H Q} \times \nonumber \\ ~ & ~ & Q \Theta Q \frac{1}{\epsilon_2 - Q H Q} Q H_1 P ~~,\end{aligned}$$ $\Theta$ being the bare operator. In our calculations for the one-body operators we truncate the $\chi$ series to the leading term $\chi_0$, and the latter is expanded perturbatively including diagrams up to third order in perturbation theory, consistently with the perturbative expansion of the $\hat{Q}$ box. In Fig. \[figeffop\] we have reported all the one-body $\chi_0$ diagrams up to second order, the bare operator $\Theta$ being represented by an asterisk. The first-order $(V_{\rm low-k}-U)$-insertion is represented by a circle with a cross inside, which arises in the perturbative expansion owing to the presence of the $-U$ term in the interaction Hamiltonian $H_1$ (see for example Ref. [@Coraggio12a] for details). ![One-body second-order diagrams included in the perturbative expansion of $\chi_0$.
The asterisk indicates the bare operator $\Theta$, the wavy lines the two-body potential $V_{\rm low-k}$.[]{data-label="figeffop"}](single-body_operator_2nd.pdf) Using this approach we have calculated proton and neutron effective state-dependent charges, which are reported in Table \[effch\]. It should be pointed out that our results are close to the usual empirical values ($e^{\rm emp}_p=1.5e,~e^{\rm emp}_n =0.5\div 0.8e$).

\[effch\]

  $n_a l_a j_a ~ n_b l_b j_b$    $\langle a || e_p || b \rangle$   $\langle a || e_n || b \rangle$
  $0g_{7/2}~0g_{7/2}$                         1.66                              1.00
  $0g_{7/2}~1d_{5/2}$                         1.70                              1.07
  $0g_{7/2}~1d_{3/2}$                         1.65                              1.00
  $1d_{5/2}~0g_{7/2}$                         1.71                              1.00
  $1d_{5/2}~1d_{5/2}$                         1.52                              0.63
  $1d_{5/2}~1d_{3/2}$                         1.50                              0.64
  $1d_{5/2}~2s_{1/2}$                         1.53                              0.62
  $1d_{3/2}~0g_{7/2}$                         1.63                              0.97
  $1d_{3/2}~1d_{5/2}$                         1.48                              0.66
  $1d_{3/2}~1d_{3/2}$                         1.51                              0.69
  $1d_{3/2}~2s_{1/2}$                         1.55                              0.68
  $2s_{1/2}~1d_{5/2}$                         1.52                              0.63
  $2s_{1/2}~1d_{3/2}$                         1.56                              0.67
  $0h_{11/2}~0h_{11/2}$                       1.50                              0.68

  : Proton and neutron effective charges of the electric quadrupole operator $E2$.

In Tables \[effGTpn\] and \[effGTnp\], the matrix elements of the proton-neutron GT$^+$ and neutron-proton GT$^-$ effective operators, respectively, are reported. The breaking of the proton-neutron symmetry is due to the fact that we include in the perturbative calculation of $H_{\rm eff}$ and ${\rm GT}_{\rm eff}$ also the effect of the Coulomb potential between the interacting protons. In the last column the quenching factors that should be employed in order to obtain the corresponding ${\rm GT}_{\rm eff}$ matrix element are also reported. The quenching factor is not reported for those matrix elements that are forbidden for the bare GT operator.
\[effGTpn\]

  $n_a l_a j_a ~ n_b l_b j_b$    GT$^+_{\rm eff}$   quenching factor
  $0g_{7/2}~0g_{7/2}$                -1.239              0.50
  $0g_{7/2}~1d_{5/2}$                -0.139
  $1d_{5/2}~0g_{7/2}$                 0.017
  $1d_{5/2}~1d_{5/2}$                 1.864              0.64
  $1d_{5/2}~1d_{3/2}$                -1.747              0.56
  $1d_{3/2}~1d_{5/2}$                 1.942              0.63
  $1d_{3/2}~1d_{3/2}$                -1.023              0.66
  $1d_{3/2}~2s_{1/2}$                -0.118
  $2s_{1/2}~1d_{3/2}$                 0.095
  $2s_{1/2}~2s_{1/2}$                 1.598              0.65
  $0h_{11/2}~0h_{11/2}$               2.597              0.69

  : Matrix elements of the proton-neutron effective GT$^+$ operator. In the last column the corresponding quenching factors are reported (see text for details).

\[effGTnp\]

  $n_a l_a j_a ~ n_b l_b j_b$    GT$^-_{\rm eff}$   quenching factor
  $0g_{7/2}~0g_{7/2}$                -1.239              0.50
  $0g_{7/2}~1d_{5/2}$                -0.019
  $1d_{5/2}~0g_{7/2}$                 0.131
  $1d_{5/2}~1d_{5/2}$                 1.864              0.64
  $1d_{5/2}~1d_{3/2}$                -1.891              0.61
  $1d_{3/2}~1d_{5/2}$                 1.794              0.58
  $1d_{3/2}~1d_{3/2}$                -1.023              0.66
  $1d_{3/2}~2s_{1/2}$                -0.093
  $2s_{1/2}~1d_{3/2}$                 0.117
  $2s_{1/2}~2s_{1/2}$                 1.598              0.65
  $0h_{11/2}~0h_{11/2}$               2.597              0.69

  : Same as in Table \[effGTpn\], but for the neutron-proton effective GT$^-$ operator.

Results ======= This section is devoted to the presentation of the results of our SM calculations. We compare the calculated low-energy spectra of $^{130}$Te, $^{130}$Xe, $^{136}$Xe, and $^{136}$Ba, and their electromagnetic transition strengths, which are reported in Table \[E2\], with the available experimental data. It should be mentioned that in Ref. [@Vietze15] shell-model calculations for the $^{130,136}$Xe isotopes have been performed using the empirical shell-model Hamiltonian GCN5082 [@Menendez09b].
We also show the results for the GT$^-$ strength distributions of $^{130}$Te and $^{136}$Xe, which are defined as follows: $$B({\rm GT}^-) = \frac{ \left| \langle \Phi_f || \sum_{j} \vec{\sigma}_j \tau^-_j || \Phi_i \rangle \right|^2} {2J_i+1}~~, \label{GTstrength}$$ where the indices $i,f$ refer to the parent and daughter nuclei, respectively, and the sum is over all interacting nucleons. In the following subsections, we also report the results of the calculated NME of the $2\nu\beta\beta$ decays $^{130}{\rm Te}_{\rm g.s.} \rightarrow ^{130}$Xe$_{\rm g.s.}$ and $^{136}{\rm Xe}_{\rm g.s.} \rightarrow ^{136}$Ba$_{\rm g.s.}$, via the following expression: $$M^{\rm GT}_{2\nu} = \sum_n \frac{ \langle 0^+_f || \vec{\sigma} \tau^- || 1^+_n \rangle \langle 1^+_n || \vec{\sigma} \tau^- || 0^+_i \rangle } {E_n + E_0} ~~, \label{doublebetame}$$ where $E_n$ is the excitation energy of the $J^{\pi}=1^+_n$ intermediate state, $E_0=\frac{1}{2}Q_{\beta\beta}(0^+) +\Delta M$, $Q_{\beta\beta}(0^+)$ and $\Delta M$ being the $Q$ value of the $\beta \beta$ decay and the mass difference between the daughter and parent nuclei, respectively. In the expression of Eq. (\[doublebetame\]) the sum over the index $n$ runs over all possible intermediate states of the daughter nucleus. The NMEs have been calculated using the ANTOINE shell-model code, using the Lanczos strength-function method as in Ref. [@Caurier05]. The theoretical values are then compared with the experimental counterparts, which are directly related to the observed half life $T^{2\nu}_{1/2}$: $$\left[ T^{2\nu}_{1/2} \right]^{-1} = G^{2\nu} \left| M^{\rm GT}_{2\nu} \right|^2 ~~. \label{2nihalflife}$$ In connection with the $2\nu\beta\beta$ decay, we also show the comparison between our calculated proton/neutron occupancies/vacancies and the recent data.
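How the sum in Eq. (\[doublebetame\]) and the half-life relation in Eq. (\[2nihalflife\]) combine can be sketched in a few lines; all inputs below (two intermediate $1^+$ states, matrix elements, energies, phase-space factor) are illustrative placeholders, not results of the present calculation.

```python
def m_gt_2v(me_left, me_right, e_x, e0):
    """Sum over intermediate 1+ states:
    M = sum_n <0f||s.t||1n> <1n||s.t||0i> / (E_n + E0).

    me_left, me_right : reduced GT matrix elements to/from each 1+ state
    e_x               : excitation energies E_n (MeV)
    e0                : Q_bb(0+)/2 + Delta M (MeV)
    Returns M in MeV^-1.
    """
    return sum(l * r / (e + e0) for l, r, e in zip(me_left, me_right, e_x))

def halflife_2v(g2v, m):
    """Invert [T^{2v}_{1/2}]^{-1} = G^{2v} |M|^2 for the half life."""
    return 1.0 / (g2v * m ** 2)

# Illustrative placeholder inputs (two intermediate 1+ states):
M = m_gt_2v([0.5, 0.2], [0.4, 0.1], [1.0, 3.0], e0=2.0)   # MeV^-1
```

The energy denominators make the sum dominated by the lowest-lying $1^+$ states, which is why the running sums shown below saturate once enough intermediate states are included.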
All the calculations have been performed employing both the theoretical and the empirical SP energies reported in Table \[spetab\], in order to provide an indicator of the sensitivity of our SM results to the choice of the SP energies.

\[E2\]

  Nucleus       $J_i \rightarrow J_f$    $B(E2)_{Expt}$          I       II
  $^{130}$Te    $2^+ \rightarrow 0^+$    $580 \pm 20$           430      420
                $6^+ \rightarrow 4^+$    $240 \pm 10$           220      200
  $^{130}$Xe    $2^+ \rightarrow 0^+$    $1170^{+20}_{-10}$     954      876
  $^{136}$Xe    $2^+ \rightarrow 0^+$    $420 \pm 20$           300      300
                $4^+ \rightarrow 2^+$    $53 \pm 1$               9       11
                $6^+ \rightarrow 4^+$    $0.55 \pm 0.02$        1.58     2.42
  $^{136}$Ba    $2^+ \rightarrow 0^+$    $800^{+80}_{-40}$      590      520

  : Experimental and calculated $B(E2)$ strengths of $^{130}$Te, $^{130}$Xe, $^{136}$Xe, and $^{136}$Ba (in $e^2{\rm fm}^4$). They are reported for observed states up to 2 MeV excitation energy. Data are taken from Refs. [@ensdf; @xundl].

$^{130}$Te ${\rm GT}^-$ strengths and $2\nu\beta\beta$ decay ------------------------------------------------------------ In Figs. \[130Te\] and \[130Xe\], we show the experimental [@ensdf; @xundl] and calculated spectra of $^{130}$Te and $^{130}$Xe up to an excitation energy of 2 MeV. As can be seen, these results are scarcely sensitive to the choice of the SP energies; those of $^{130}$Te are in very good agreement with the experimental data, while the reproduction of the observed $^{130}$Xe low-lying states is less satisfactory. From inspection of Table \[E2\], it can be seen that our calculated electric-quadrupole transition rates $B(E2)$ compare well with the observed values for both nuclei, testifying to the reliability of our SM wavefunctions and of the effective electric-quadrupole transition operator. Its matrix elements are reported in Table \[effch\].
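For orientation, the $B(E2)$ values of Table \[E2\] can be put on the usual Weisskopf scale through the standard single-particle estimate $B_W(E2) = (1.2)^4/(4\pi)\,(3/5)^2\,A^{4/3}\;e^2{\rm fm}^4 \simeq 0.0594\,A^{4/3}\;e^2{\rm fm}^4$. The snippet below (added here only as a unit-conversion aid) applies it to the experimental $^{130}$Te $2^+ \rightarrow 0^+$ value quoted in the table.

```python
import math

def weisskopf_e2(A):
    """Single-particle (Weisskopf) estimate for B(E2), in e^2 fm^4:
    B_W = (1.2)^4 / (4 pi) * (3/5)^2 * A^(4/3) ~ 0.0594 * A^(4/3)."""
    return 1.2 ** 4 / (4 * math.pi) * (3.0 / 5.0) ** 2 * A ** (4.0 / 3.0)

# 130Te 2+ -> 0+: experimental value 580 e^2 fm^4 (Table [E2])
wu = 580.0 / weisskopf_e2(130)   # collective enhancement, in W.u. (~15)
```

Values of order ten W.u. signal the moderate quadrupole collectivity expected for these nearly spherical nuclei.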
It is worth noting that the calculated $B(E2)$s do not show a relevant dependence on the choice of the SP energies, their values being very close to each other. ![Experimental and calculated spectra of $^{130}$Te up to 2 MeV excitation energy (see text for details).[]{data-label="130Te"}](130Te.pdf) ![Same as Fig. \[130Te\], for $^{130}$Xe.[]{data-label="130Xe"}](130Xe.pdf) In Fig. \[130TeGT-\], our calculated running sums of the Gamow-Teller strengths ($\Sigma B({\rm GT}^-)$) as a function of the excitation energy for $^{130}$Te are shown. The comparison of the calculated GT strength distributions with the observed ones is a very relevant point when trying to assess the reliability of a many-body approach to the description of the $\beta\beta$ decay. The single-$\beta$-decay GT strengths, defined by Eq. (\[GTstrength\]), can be accessed experimentally through intermediate-energy charge-exchange reactions. As a matter of fact, the GT strength can be extracted, following the standard approach in the distorted-wave Born approximation (DWBA), from the GT component of the cross section by way of the relation [@Goodman80; @Taddeucci87] $$\frac{d \sigma^{GT}}{d \Omega} = \left (\frac{\mu}{\pi \hbar^2} \right )^2 \frac{k_f}{k_i} N^{\sigma \tau}_{D}| J_{\sigma \tau} |^2 B(GT)~~,$$ where $N^{\sigma \tau}_{D}$ is the distortion factor, and $| J_{\sigma \tau} |$ is the volume integral of the effective $NN$ interaction. In the following, we compare our results with the GT$^-$ distributions obtained in recent high-resolution $(^3{\rm He},t)$ studies on $^{130}$Te [@Puppe12]. In Fig. \[130TeGT-\], the data are reported with a red line, while the results obtained with SP energies (I) and (II) define the blue and black areas, respectively, for the bare and effective ${\rm GT}^-$ operators. It can be seen that the renormalized GT operator is able to reproduce quite well the behavior of the experimental running GT strength.
As a matter of fact, if we shift the calculated distributions in order to reproduce the position of the first $1^+$ state in $^{130}$I, the theoretical total ${\rm GT}^-$ strengths up to 3 MeV excitation energy are equal to 0.842 and 0.873 for the calculations with SP energies I and II, respectively, which should be compared with the experimental value $0.746 \pm 0.045$. The crucial role of the many-body renormalization is evident when considering the results obtained using the bare GT operator. In this case the total ${\rm GT}^-$ strength is equal to 2.554 and 2.408 with SP energies from sets (I) and (II), respectively. As regards the $2\nu\beta\beta$ decay of $^{130}$Te, we have calculated the NME, as defined by expression (\[doublebetame\]), and the results, compared with the value obtained from the experimental half life of the $^{130}{\rm Te} \rightarrow ^{130}$Xe $2\nu\beta\beta$ decay [@Barabash10], are reported in Fig. \[130Te130Xe\]. ![Running sums of the $^{130}$Te $B({\rm GT}^-)$ strengths as a function of the excitation energy $E_x$ up to 3000 keV (see text for details).[]{data-label="130TeGT-"}](130Te-GTstrength.pdf) The theoretical results are reported as a function of the maximum excitation energy of the intermediate states included in the sum of expression (\[doublebetame\]). As can be seen, the calculated values saturate when intermediate states up to at least 8 MeV excitation energy are included. As in the case of the theoretical GT strength distributions, the NMEs calculated with the effective GT operator are in good agreement with the experimental datum $M^{\rm GT}_{2\nu}=(0.034 \pm 0.003)$ MeV$^{-1}$ [@Barabash10], our results being 0.044 MeV$^{-1}$ and 0.046 MeV$^{-1}$ with SP energies (I) and (II), respectively. Actually, the NMEs calculated with the bare GT operator are 0.131 MeV$^{-1}$ (I) and 0.137 MeV$^{-1}$ (II), which are far from the experimental one.
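As a quick consistency check (assuming, for illustration only, a single overall quenching factor rather than the state-dependent matrix elements of Tables \[effGTpn\] and \[effGTnp\]), one can ask what uniform quenching $q$ would map the bare NME onto the effective one; since the $2\nu\beta\beta$ NME contains two GT operators, it scales as $q^2$:

```python
import math

# NMEs quoted above for the 130Te 2vbb decay with SP energies (I):
M_bare = 0.131   # MeV^-1, bare GT operator
M_eff = 0.044    # MeV^-1, effective GT operator

# Uniform-quenching hypothesis: M_eff = q^2 * M_bare
q = math.sqrt(M_eff / M_bare)
# q ~ 0.58, which indeed falls within the range (0.50-0.69) of the
# state-dependent quenching factors listed in the tables above.
```

This crude estimate is of course no substitute for the fully renormalized operator, but it shows that the effect of the many-body renormalization is of the size usually parametrized empirically.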
It is now worth mentioning the results obtained in two recent SM calculations [@Caurier12; @Neacsu15], where the shell-model Hamiltonians are based on realistic $NN$ potentials but empirically modified in order to reproduce some spectroscopic properties of nuclei around $^{132}$Sn. In Ref. [@Caurier12] the calculated NME is 0.043 MeV$^{-1}$, with a quenching factor of 0.57 for $g_A$, while in Ref. [@Neacsu15] a value of 0.0328 MeV$^{-1}$, obtained using a quenching factor of 0.74, is reported. ![Running sums of the calculated $M^{\rm GT}_{2\nu}$ as a function of the excitation energy of the intermediate states. The blue area corresponds to the calculations with the bare GT operator, while the black one to those with ${\rm GT}_{\rm eff}$ (see text for details).[]{data-label="130Te130Xe"}](130Te2b2v.pdf) Another important indicator of the quality of the calculated NME, both for the $2\nu\beta\beta$ and the $0\nu\beta\beta$ decay, may be provided by the comparison of the theoretical occupancies of the valence nucleons in the ground states of the parent and grand-daughter nuclei with the observed ones. Recently, those quantities have been determined by measuring the cross sections of one-proton stripping and one-neutron pick-up reactions, for proton occupancies and neutron vacancies, respectively [@Entwisle16; @Kay13]. These data are reported in Figs. \[130Teprot\] and \[130Teneut\] and compared with our calculations. ![Change in proton occupancies between the ground states for the $^{130}{\rm Te} \rightarrow ^{130}$Xe decay (see text for details). The brown area corresponds to the occupation of the $0g_{7/2}$ orbital, the green one to the $1d$ orbitals, the red one to the $2s_{1/2}$ orbital, and the blue one to the $0h_{11/2}$ orbital.[]{data-label="130Teprot"}](130Teprot.pdf) ![Change in neutron vacancies between the ground states for the $^{130}{\rm Te} \rightarrow ^{130}$Xe decay (see text for details). Colored areas refer to the same orbitals as in Fig.
\[130Teprot\].[]{data-label="130Teneut"}](130Teneut.pdf) The calculations with SP energies (I) and (II) give very close results, which are in nice agreement with experiment, bearing in mind the experimental uncertainties, which reach up to $20\%$ for the change in occupancy of the proton $0g_{7/2}$ orbital [@Entwisle16]. $^{136}$Xe ${\rm GT}^-$ strengths and $2\nu\beta\beta$ decay ------------------------------------------------------------ ![Same as Fig. \[130Te\], for $^{136}$Xe.[]{data-label="136Xe"}](136Xe.pdf) This subsection is organized like the previous one, so we start from the inspection of Figs. \[136Xe\] and \[136Ba\], where the experimental [@ensdf; @xundl] and calculated spectra of $^{136}$Xe and $^{136}$Ba up to an excitation energy of 2 MeV are reported. The calculated spectra are again in good agreement with experiment, and the results are rather insensitive to the choice between SP energies I and II. From inspection of Table \[E2\], it can be seen that our calculated $B(E2;2^+_1 \rightarrow 0^+_1)$s are very close to the observed values, while for $^{136}$Xe the theoretical $B(E2;4^+_1 \rightarrow 2^+_1)$s and $B(E2;6^+_1 \rightarrow 4^+_1)$s are less satisfactory when compared with the available data [@ensdf; @xundl], the calculations with SP energies (I) and (II) underestimating the observed $B(E2;4^+_1 \rightarrow 2^+_1)$ and overestimating the experimental $B(E2;6^+_1 \rightarrow 4^+_1)$. ![Same as Fig. \[130Te\], for $^{136}$Ba.[]{data-label="136Ba"}](136Ba.pdf) The calculated $\Sigma B({\rm GT}^-)$ for $^{136}$Xe, as a function of the excitation energy, can be found in Fig. \[136XeGT-\], where they are compared with the observed GT$^-$ distributions extracted from high-resolution $(^3{\rm He},t)$ reactions on $^{136}$Xe [@Frekers13].
![Running sums of the $^{136}$Xe $B({\rm GT}^-)$ strengths as a function of the excitation energy $E_x$ up to 4500 keV (see text for details).[]{data-label="136XeGT-"}](136Xe-GTstrength.pdf)

As in the case of $^{130}$Te, we observe that the renormalized GT operator reproduces satisfactorily the observed running GT strength: the theoretical total ${\rm GT}^-$ strengths up to 4.5 MeV excitation energy are 0.94 (I) and 1.13 (II), to be compared with the experimental value of $1.33 \pm 0.07$.

We have calculated the NME related to the $2\nu\beta\beta$ decay of $^{136}$Xe into $^{136}$Ba, obtaining 0.091 MeV$^{-1}$ (I) and 0.094 MeV$^{-1}$ (II) with the bare GT operator, and 0.0285 MeV$^{-1}$ (I) and 0.0287 MeV$^{-1}$ (II) with the effective operator $\rm{GT}_{\rm eff}$. The experimental value, obtained from the experimental half life of the $^{136}{\rm Xe} \rightarrow ^{136}$Ba $2\nu\beta\beta$ decay [@Albert14], is $(0.0218 \pm 0.0003)$ MeV$^{-1}$, which compares well with the theoretical values derived employing the effective operator (Fig. \[136Xe136Ba\]).

For the sake of completeness, we mention that in Ref. [@Caurier12] the calculated value of the NME is 0.025 MeV$^{-1}$ with a quenching factor equal to 0.45, while in Ref. [@Neacsu15] a value of 0.0256 MeV$^{-1}$ is obtained with $g_A$ quenched by a factor 0.74. The latter quenching factor has also been employed in Ref. [@Horoi13b] to calculate the matrix element of the $2\nu\beta\beta$ decay of $^{136}$Xe, resulting in a calculated NME of 0.062 MeV$^{-1}$. In that paper, the authors used a different shell-model Hamiltonian, derived by way of the KLR folded-diagram expansion from the realistic N$^3$LO potential [@Entem02], which nevertheless seems to describe the structure of nuclei around $Z=50$ as well as the Hamiltonian of Ref. [@Neacsu15]. This evidences the tight relationship between the shell-model Hamiltonian and the choice of the $g_A$ quenching factor.

![Same as in Fig.
\[130Te130Xe\], for the $^{136}{\rm Xe} \rightarrow ^{136}$Ba $2\nu\beta\beta$ decay (see text for details).[]{data-label="136Xe136Ba"}](136Xe2b2v.pdf)

Finally, in Figs. \[136Xeprot\] and \[136Xeneut\] the theoretical occupancies of the valence nucleons in the ground states of the parent and grand-daughter nuclei are shown and compared with those obtained in Refs. [@Kay13; @Entwisle16] from the experimental cross sections of the $(d,^3{\rm He})$ and $(\alpha,^3{\rm He})$ reactions. The poor reproduction of the experimental neutron vacancies, as can be seen in Fig. \[136Xeneut\], is due to the fact that in our model space the neutron component of $^{136}$Xe is frozen, since its 32 valence neutrons completely fill the 50-82 shell.

![Same as in Fig. \[130Teprot\], for the $^{136}{\rm Xe}\rightarrow ^{136}$Ba decay (see text for details).[]{data-label="136Xeprot"}](136Xeprot.pdf)

![Same as in Fig. \[130Teneut\], for the $^{136}{\rm Xe} \rightarrow ^{136}$Ba decay (see text for details).[]{data-label="136Xeneut"}](136Xeneut.pdf)

Summary and Outlook
===================

In the present work, we have reported the results of a realistic SM calculation of GT decay properties for $^{130}$Te and $^{136}$Xe. Our aim has been to test an approach to the calculation of the NME of the $0\nu\beta\beta$ decay of these nuclei in which the SM Hamiltonian and the related transition operators are derived from a realistic $NN$ potential by way of many-body theory. This means that the need to resort to empirical parameters is drastically reduced, thus enhancing the predictive power of the nuclear structure calculations. The first step toward this goal has been to test the reliability of our theoretical framework, a test that cannot be carried out directly for the $0\nu\beta\beta$ decay. In particular, we have calculated the GT strengths and the NMEs of the $2\nu\beta\beta$ decay, and compared the results with the available experimental ones.
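For reference, the experimental NMEs used in this comparison are extracted from measured half lives. In one common convention (with the axial coupling $g_A$ written explicitly; some references absorb $g_A^4$ into the phase-space factor $G^{2\nu}$ instead):

```latex
\left[ T_{1/2}^{2\nu} \right]^{-1} = G^{2\nu} \, g_A^4 \, \left| M^{2\nu}_{\rm GT} \right|^2
\qquad \Longrightarrow \qquad
\left| M^{2\nu}_{\rm GT} \right| = \left( G^{2\nu} \, g_A^4 \, T_{1/2}^{2\nu} \right)^{-1/2} .
```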
This is reported in Section \[results\], and the overall agreement with the data is quite good. As summarized in Table \[summaryGT\], the quality of our results is similar to, or even better than, that obtained in recent calculations available in the literature, which employ SM parameters empirically fitted to reproduce some selected observables. Our results are encouraging for our next steps towards an (almost) parameter-free calculation of the $0\nu\beta\beta$ NME of $^{130}$Te and $^{136}$Xe, making us confident of a positive outcome of a fully microscopic approach to this problem.

Finally, it should be pointed out that in our calculations the renormalization of the bare operators by way of many-body perturbation theory takes into account the degrees of freedom that are not explicitly included within the reduced SM model space. There are also two other main effects that should be included in the renormalization of the GT operators, namely the blocking effect and the role of the subnucleonic degrees of freedom.

  ------------ ------------- --------------------- -------- --------
  Nucleus                     Expt.                 I        II
  $^{130}$Te   GT strength   $0.746 \pm 0.045$     0.842    0.873
               NME           $0.034 \pm 0.003$     0.044    0.046
  $^{136}$Xe   GT strength   $1.33 \pm 0.07$       0.94     1.13
               NME           $0.0218 \pm 0.0003$   0.0285   0.0287
  ------------ ------------- --------------------- -------- --------

  : Experimental and calculated GT strengths and $2\nu\beta\beta$-decay NMEs (in MeV$^{-1}$) for $^{130}$Te and $^{136}$Xe.

\[summaryGT\]

The blocking effect accounts for the correlations among the active nucleons in systems with many interacting valence particles, within the derivation of the effective operators. We are currently investigating the role played by these correlations in the calculation of GT and $2\nu\beta\beta$ properties.
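As a rough measure of the overall renormalization produced by ${\rm GT}_{\rm eff}$, one can extract the single quenching factor $q$ of $g_A$ that would mimic it, under the common convention that quenching each GT operator by $q$ scales the $2\nu\beta\beta$ NME by $q^2$. A minimal sketch using the $^{136}$Xe values quoted above, for SP energies (I):

```python
import math

# 2nu-beta-beta NMEs for 136Xe (MeV^-1), SP energies (I), from the text.
m_bare = 0.091    # bare GT operator
m_eff = 0.0285    # effective operator GT_eff
m_expt = 0.0218   # extracted from the measured half life

# If quenching g_A by q scales the NME by q^2 (one factor q per GT
# operator in the second-order matrix element), the implied q is:
q_eff = math.sqrt(m_eff / m_bare)
q_expt = math.sqrt(m_expt / m_bare)

print(f"q implied by GT_eff:     {q_eff:.2f}")   # ~0.56
print(f"q implied by experiment: {q_expt:.2f}")  # ~0.49
```

These effective quenching factors are comparable to those adopted ad hoc in the fitted-Hamiltonian calculations discussed in Section \[results\].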
Another contribution to the renormalization of the GT operators is associated with the quark structure of nucleons. Since a realistic $NN$ potential is our starting point, we do not consider in such a picture the role played by the nucleon resonances ($\Delta, N^{\ast}, \cdots$), which are also responsible for three-nucleon forces and whose contribution should lead to renormalized values of $g_A$ and $g_V$. Nowadays, the derivation of nuclear potentials by way of chiral perturbation theory [@EGM05; @ME11] allows a consistent treatment of this approach to the renormalization of the axial- and vector-current constants, which has already been explored in Ref. [@Menendez11]. We are investigating this subject, which will be the topic of a forthcoming paper.

Acknowledgments {#acknowledgments .unnumbered}
===============

The authors gratefully acknowledge useful comments and suggestions from Francesco Iachello and Frederic Nowacki.

Appendix {#appendix .unnumbered}
========

---------------------------------------------------------- ----- ------- --------
$n_a l_a j_a ~ n_b l_b j_b ~ n_c l_c j_c ~ n_d l_d j_d $   $J$   $T_z$   TBME
$ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 1 -0.426
$ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 1 -0.660
$ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 1 -0.333
$ 0g_{ 7/2}~ 0g_{ 7/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 1 -0.253
$ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 1 1.751
$ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 1 -0.660
$ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 1 -0.394
$ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 1 -1.060
$ 1d_{ 5/2}~ 1d_{ 5/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 1 -0.344
$ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 1 0.650
$ 1d_{ 3/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 1 -0.333
$ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 1 -1.060
$ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 1 0.067
$ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 1 -0.284
$ 1d_{ 3/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 1 0.771
$ 2s_{ 1/2}~
2s_{ 1/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 1 -0.253 $ 2s_{ 1/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 1 -0.344 $ 2s_{ 1/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 1 -0.284 $ 2s_{ 1/2}~ 2s_{ 1/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 1 -0.514 $ 2s_{ 1/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 1 0.372 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 1 1.751 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 1 0.650 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 1 0.771 $ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 1 0.372 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 1 -0.500 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 1 1 0.254 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 1 1 -0.030 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 1 1 -0.014 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 1 1 -0.030 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 1 1 0.304 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 1 1 -0.022 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 1 1 -0.014 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 1 1 -0.022 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 1 1 0.285 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 1 -0.014 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 1 0.002 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 1 -0.149 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 1 -0.105 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 1 -0.152 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 1 -0.187 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 1 -0.120 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 1 0.076 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 1 0.367 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 1 0.397 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 1 0.163 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 1 0.034 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 1 0.080 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 1 0.116 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 1 0.055 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 1 -0.018 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 
0h_{11/2}$ 2 1 -0.238 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 1 0.012 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 1 -0.114 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 1 -0.154 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 1 -0.192 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 1 -0.120 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 1 0.172 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 1 0.433 ---------------------------------------------------------- ----- ------- -------- : Proton-proton, neutron-neutron, and proton-neutron matrix elements (in MeV) derived for calculations in model space $0g_{7/2},1d_{5/2},1d_{3/2},2s_{1/2},0h_{11/2}$. They are antisymmetrized, and normalized by a factor $1/ \sqrt{ (1 + \delta_{j_aj_b})(1 + \delta_{j_cj_d})}$. \[tbme\] ----------------------------------------------- --- --- -------- $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 1 -0.105 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 1 0.034 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 1 -0.114 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 1 -0.026 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 1 -0.179 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 1 -0.246 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 1 -0.243 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 1 0.200 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 1 0.247 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 1 -0.152 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 1 0.080 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 1 -0.154 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 1 0.218 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 1 -0.082 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 1 -0.252 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 1 0.239 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 1 0.089 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 1 -0.187 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 1 0.116 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 1 -0.192 $ 1d_{ 5/2}~ 
2s_{ 1/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 1 -0.149 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 1 -0.208 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 1 0.485 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 1 0.191 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 1 -0.120 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 1 0.055 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 1 -0.120 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 1 -0.243 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 1 -0.252 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 1 -0.208 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 1 0.200 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 1 0.073 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 1 0.146 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 1 0.076 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 1 -0.018 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 1 0.172 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 1 0.200 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 1 0.239 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 1 0.485 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 1 0.099 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 1 -0.245 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 1 0.367 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 1 -0.238 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 1 0.433 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 1 0.247 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 1 0.089 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 1 0.191 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 1 0.146 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 1 -0.245 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 1 -0.315 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 3 1 0.261 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 3 1 0.058 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 1 -0.033 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 3 1 -0.003 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 1 -0.018 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 
1d_{ 3/2}$ 3 1 0.281 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 1 -0.004 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 3 1 -0.023 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 1 -0.022 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 1 0.335 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 3 1 -0.015 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 1 0.024 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 3 1 -0.003 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 3 1 -0.023 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 1 -0.015 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 3 1 0.248 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 1 -0.042 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 3 1 -0.018 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 3 1 -0.022 ----------------------------------------------- --- --- -------- \[tbme\] ----------------------------------------------- ---- --- -------- $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 1 0.024 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 1 0.250 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 4 1 0.219 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 1 0.049 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 1 -0.068 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 1 0.031 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 1 -0.058 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 1 -0.146 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 1 0.214 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 1 0.303 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 1 0.161 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 1 -0.139 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 1 0.046 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 1 0.128 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 1 -0.182 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 1 0.244 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 1 0.102 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 1 -0.037 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 1 -0.099 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 
0h_{11/2}~ 0h_{11/2}$ 4 1 0.167 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 1 0.156 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 1 0.076 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 1 0.150 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 1 -0.191 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 4 1 -0.058 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 1 0.046 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 1 -0.037 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 1 0.076 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 1 0.169 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 1 -0.337 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 1 0.115 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 4 1 -0.146 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 1 0.128 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 1 -0.099 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 1 0.150 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 1 -0.292 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 1 0.216 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 4 1 0.214 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 1 -0.182 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 1 0.167 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 1 -0.191 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 1 0.115 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 1 0.216 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 1 0.031 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 5 1 0.252 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 5 1 0.021 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 5 1 0.386 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 6 1 0.343 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 6 1 0.051 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 6 1 0.137 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 6 1 -0.012 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 6 1 -0.246 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 6 1 0.137 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 6 1 -0.246 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 6 1 
0.141 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 8 1 0.192 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 10 1 0.261 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 2 1 0.024 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 3 1 0.135 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 3 1 0.082 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 3 1 0.082 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 3 1 -0.225 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 4 1 0.231 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 4 1 0.063 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 4 1 -0.003 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 4 1 0.063 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 4 1 0.210 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 4 1 0.000 $ 1d_{ 3/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 4 1 -0.003 ----------------------------------------------- ---- --- -------- \[tbme\] ----------------------------------------------- --- ---- -------- $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 4 1 0.000 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 4 1 0.271 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 1 0.187 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 1 0.088 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 1 -0.052 $ 0g_{ 7/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 1 0.046 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 1 0.088 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 1 0.111 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 1 0.169 $ 1d_{ 5/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 1 -0.198 $ 1d_{ 3/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 1 -0.052 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 1 0.169 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 1 0.246 $ 1d_{ 3/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 1 0.163 $ 2s_{ 1/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 1 0.046 $ 2s_{ 1/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 1 -0.198 $ 2s_{ 1/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 1 0.163 $ 2s_{ 1/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 1 0.045 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 1 
0.235 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 1 0.038 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 1 -0.017 $ 0g_{ 7/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 1 0.012 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 1 0.038 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 1 0.308 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 1 0.030 $ 1d_{ 5/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 1 -0.012 $ 1d_{ 3/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 1 -0.017 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 1 0.030 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 1 0.249 $ 1d_{ 3/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 1 -0.010 $ 2s_{ 1/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 1 0.012 $ 2s_{ 1/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 1 -0.012 $ 2s_{ 1/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 1 -0.010 $ 2s_{ 1/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 1 0.343 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 7 1 0.049 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 7 1 0.134 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 7 1 -0.187 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 7 1 0.134 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 7 1 0.218 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 7 1 0.236 $ 1d_{ 3/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 7 1 -0.187 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 7 1 0.236 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 7 1 -0.088 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 8 1 0.257 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 8 1 0.008 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 8 1 0.008 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 8 1 0.369 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 9 1 -0.602 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 -1 -0.739 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 -1 -0.680 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 -1 -0.417 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 -1 -0.291 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 -1 1.616 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 -1 -0.680 $ 1d_{ 5/2}~ 
1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 -1 -0.724 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 -1 -1.058 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 -1 -0.385 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 -1 0.717 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 -1 -0.417 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 -1 -1.058 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 -1 -0.317 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 -1 -0.333 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 -1 0.713 $ 2s_{ 1/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 -1 -0.291 $ 2s_{ 1/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 -1 -0.385 $ 2s_{ 1/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 -1 -0.333 $ 2s_{ 1/2}~ 2s_{ 1/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 -1 -0.735 $ 2s_{ 1/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 -1 0.399 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 0 -1 1.616 ----------------------------------------------- --- ---- -------- \[tbme\] ----------------------------------------------- --- ---- -------- $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 0 -1 0.717 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 0 -1 0.713 $ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 2s_{ 1/2}$ 0 -1 0.399 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 0 -1 -0.915 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 1 -1 0.024 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 1 -1 -0.033 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 1 -1 -0.017 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 1 -1 -0.033 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 1 -1 0.051 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 1 -1 -0.019 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 1 -1 -0.017 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 1 -1 -0.019 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 1 -1 0.055 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 -1 -0.280 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 -1 0.016 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 -1 -0.178 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 -1 -0.116 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 
1d_{ 5/2}~ 1d_{ 3/2}$ 2 -1 -0.158 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 -1 -0.200 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 -1 -0.134 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 -1 0.093 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 -1 0.339 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 -1 0.130 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 -1 0.161 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 -1 0.039 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 -1 0.078 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 -1 0.113 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 -1 0.051 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 -1 -0.024 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 -1 -0.221 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 -1 -0.281 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 -1 -0.104 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 -1 -0.163 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 -1 -0.192 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 -1 -0.134 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 -1 0.186 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 -1 0.428 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 -1 -0.116 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 -1 0.039 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 -1 -0.104 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 -1 -0.291 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 -1 -0.157 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 -1 -0.261 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 -1 -0.241 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 -1 0.218 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 -1 0.270 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 -1 -0.158 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 -1 0.078 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 -1 -0.163 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 -1 -0.042 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 -1 -0.080 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 -1 -0.240 $ 1d_{ 
5/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 -1 0.238 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 -1 0.074 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 -1 -0.200 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 -1 0.113 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 -1 -0.192 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 -1 -0.417 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 -1 -0.195 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 -1 0.460 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 -1 0.210 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 -1 -0.134 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 -1 0.051 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 -1 -0.134 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 -1 -0.241 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 -1 -0.240 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 -1 -0.195 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}$ 2 -1 -0.047 $ 1d_{ 3/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 -1 0.106 ----------------------------------------------- --- ---- -------- \[tbme\] ----------------------------------------------- --- ---- -------- $ 1d_{ 3/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 -1 0.134 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 -1 0.093 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 -1 -0.024 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 -1 0.186 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 -1 0.218 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 -1 0.238 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 -1 0.460 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 -1 -0.159 $ 1d_{ 3/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 -1 -0.241 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 2 -1 0.339 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 2 -1 -0.221 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 2 -1 0.428 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 2 -1 0.270 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 2 -1 0.074 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 2 -1 0.210 $ 0h_{11/2}~ 0h_{11/2}~ 
1d_{ 3/2}~ 1d_{ 3/2}$ 2 -1 0.134 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 2s_{ 1/2}$ 2 -1 -0.241 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 2 -1 -0.541 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 3 -1 0.029 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 3 -1 0.069 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 -1 -0.030 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 3 -1 -0.006 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 -1 -0.022 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 3 -1 0.047 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 -1 -0.001 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 3 -1 -0.027 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 -1 -0.027 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 -1 0.109 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 3 -1 -0.008 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 -1 0.027 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 3 -1 -0.006 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 3 -1 -0.027 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 -1 -0.008 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 3 -1 0.013 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 -1 -0.048 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 3 -1 -0.022 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 3 -1 -0.027 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 3 -1 0.027 $ 1d_{ 5/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 2s_{ 1/2}$ 3 -1 -0.006 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 4 -1 -0.027 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 -1 0.060 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 -1 -0.081 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 -1 0.045 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 -1 -0.066 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 -1 -0.160 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 -1 0.203 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 -1 0.060 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 -1 0.157 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 -1 -0.145 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 -1 0.046 $ 0g_{ 
7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 -1 0.127 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 -1 -0.164 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 -1 -0.005 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 -1 0.124 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 -1 -0.033 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 -1 -0.101 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 -1 0.161 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 -1 -0.094 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 -1 0.069 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 -1 0.149 $ 0g_{ 7/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 -1 -0.185 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 4 -1 -0.066 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 -1 0.046 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 -1 -0.033 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 -1 0.069 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 -1 -0.077 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 -1 -0.338 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 -1 0.129 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 4 -1 -0.160 ----------------------------------------------- --- ---- -------- \[tbme\] ----------------------------------------------- ---- ---- -------- $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 -1 0.127 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 -1 -0.101 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 -1 0.149 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 -1 -0.506 $ 1d_{ 5/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 -1 0.208 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 4 -1 0.203 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 4 -1 -0.164 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 4 -1 0.161 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 2s_{ 1/2}$ 4 -1 -0.185 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 4 -1 0.129 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 3/2}$ 4 -1 0.208 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 4 -1 -0.213 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 5 -1 0.026 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 
\[tbme\]

[Table \[tbme\] (continued): two-body matrix elements $\langle a\, b;\, J\, T \,|\, V \,|\, c\, d;\, J\, T\rangle$ for the single-particle orbitals $0g_{7/2}$, $1d_{5/2}$, $1d_{3/2}$, $2s_{1/2}$, and $0h_{11/2}$. Each row lists the four orbitals $a\, b\, c\, d$, followed by the coupled angular momentum $J$, an isospin label, and the numerical value of the matrix element. The multi-page numerical listing, truncated at both ends of this excerpt, is not reproduced here.]
7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 5 0 -0.134 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 5 0 -0.259 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 5 0 0.124 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 5 0 -0.104 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}$ 5 0 -0.235 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 5 0 -0.186 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 5 0 -0.022 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 5 0 -0.209 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 5 0 0.072 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 5 0 -0.114 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}$ 5 0 -0.232 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 5 0 0.041 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 5 0 -0.426 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 5 0 0.259 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 5 0 0.027 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 0g_{ 7/2}$ 5 0 -0.599 $ 0g_{ 7/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}$ 5 0 -0.074 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 5 0 0.124 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 5 0 0.072 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 5 0 0.259 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 5 0 -0.034 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 5 0 0.106 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 0g_{ 7/2}$ 5 0 0.204 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 5 0 -0.042 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 5 0 -0.104 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 5 0 -0.114 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 5 0 0.027 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 5 0 -1.024 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 0g_{ 7/2}$ 5 0 0.038 $ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 5 0 0.240 $ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 5 0 -0.235 $ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 5 0 -0.232 $ 1d_{ 3/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 5 0 -0.599 $ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 5 0 0.204 $ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 5 0 0.038 $ 1d_{ 3/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 
0g_{ 7/2}$ 5 0 -0.404 $ 1d_{ 3/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 5 0 -0.067 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 5 0 -0.186 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 5 0 0.041 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 3/2}$ 5 0 -0.074 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 5 0 -0.042 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}$ 5 0 0.240 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0g_{ 7/2}$ 5 0 -0.067 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 5 0 -0.105 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 6 0 0.118 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 6 0 0.046 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 6 0 -0.043 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 6 0 0.133 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 6 0 -0.406 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 6 0 -0.150 $ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}$ 6 0 -0.183 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 6 0 -0.043 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 6 0 -0.150 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 6 0 -0.397 $ 1d_{ 5/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 6 0 0.174 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 6 0 0.133 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}$ 6 0 -0.183 $ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0g_{ 7/2}$ 6 0 0.174 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 6 0 -0.106 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 7 0 -1.119 $ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}$ 7 0 -0.096 $ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}$ 7 0 -0.096 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 7 0 -0.172 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 8 0 -0.047 ----------------------------------------------- --- --- -------- ----------------------------------------------- ---- --- -------- $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 9 0 -0.350 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 10 0 0.028 $ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}~ 0h_{11/2}$ 11 0 -1.148 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 2 0 -1.420 $ 0g_{ 7/2}~ 0h_{11/2}~ 
0h_{11/2}~ 0g_{ 7/2}$ 2 0 1.066 $ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 2 0 1.066 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 2 0 -1.349 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 3 0 -0.746 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 3 0 0.306 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 3 0 -0.541 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 3 0 -0.206 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 3 0 0.306 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 3 0 -0.434 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 3 0 0.183 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 3 0 -0.095 $ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 3 0 -0.541 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 3 0 0.183 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 3 0 -0.685 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 3 0 -0.282 $ 0h_{11/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 3 0 -0.206 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 3 0 -0.095 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 3 0 -0.407 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 4 0 -0.370 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 4 0 -0.094 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 4 0 -0.226 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 4 0 0.359 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 4 0 -0.157 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 4 0 0.236 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 4 0 -0.094 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 4 0 -0.129 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 4 0 -0.363 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 4 0 0.176 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 4 0 -0.098 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 4 0 0.307 $ 1d_{ 3/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 4 0 -0.226 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 4 0 -0.363 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 4 0 -0.546 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 4 0 0.224 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 4 0 -0.284 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 
3/2}$ 4 0 0.525 $ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 4 0 0.359 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 4 0 0.176 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 4 0 0.224 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 4 0 -0.355 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 4 0 0.088 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 4 0 -0.230 $ 0h_{11/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 4 0 -0.157 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 4 0 -0.098 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 4 0 -0.284 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 4 0 -0.108 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 4 0 0.360 $ 0h_{11/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 4 0 0.236 $ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 4 0 0.307 $ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 4 0 0.525 $ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 4 0 -0.536 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 0 -0.474 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 0 0.103 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 0 -0.240 $ 0g_{ 7/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 0 0.147 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 5 0 -0.373 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 5 0 -0.035 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 5 0 -0.161 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 5 0 -0.104 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 0 0.103 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 0 -0.086 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 0 0.107 $ 1d_{ 5/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 0 -0.166 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 5 0 0.021 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 5 0 -0.071 ----------------------------------------------- ---- --- -------- ----------------------------------------------- --- --- -------- $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 5 0 -0.047 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 5 0 -0.055 $ 1d_{ 3/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 0 -0.240 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 
0h_{11/2}$ 5 0 0.107 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 0 -0.212 $ 1d_{ 3/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 0 0.222 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 5 0 -0.161 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 5 0 0.043 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 5 0 -0.183 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 5 0 -0.045 $ 2s_{ 1/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 0 0.147 $ 2s_{ 1/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 0 -0.166 $ 2s_{ 1/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 0 0.222 $ 2s_{ 1/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 0 -0.154 $ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 5 0 0.092 $ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 5 0 -0.055 $ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 5 0 0.034 $ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 5 0 -0.043 $ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 0 -0.373 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 0 0.021 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 0 -0.161 $ 0h_{11/2}~ 0g_{ 7/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 0 0.092 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 5 0 -0.432 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 5 0 -0.095 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 5 0 -0.226 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 5 0 -0.138 $ 0h_{11/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 0 -0.035 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 0 -0.071 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 0 0.043 $ 0h_{11/2}~ 1d_{ 5/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 0 -0.055 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 5 0 -0.070 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 5 0 -0.107 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 5 0 -0.165 $ 0h_{11/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 0 -0.161 $ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 0 -0.047 $ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 0 -0.183 $ 0h_{11/2}~ 1d_{ 3/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 0 0.034 $ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 5 0 -0.192 $ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 5 0 
-0.211 $ 0h_{11/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 5 0 -0.104 $ 0h_{11/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 5 0 -0.055 $ 0h_{11/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 5 0 -0.045 $ 0h_{11/2}~ 2s_{ 1/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 5 0 -0.043 $ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 5 0 -0.144 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 0 -0.160 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 0 -0.099 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 0 -0.090 $ 0g_{ 7/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 0 -0.067 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 6 0 0.173 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 6 0 -0.152 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 6 0 0.095 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 6 0 -0.109 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 0 -0.099 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 0 -0.170 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 0 -0.204 $ 1d_{ 5/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 0 -0.287 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 6 0 0.169 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 6 0 -0.251 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 6 0 0.225 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 6 0 -0.262 $ 1d_{ 3/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 0 -0.090 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 0 -0.204 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 0 -0.090 $ 1d_{ 3/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 0 -0.195 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 6 0 0.087 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 6 0 -0.207 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 6 0 0.133 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 6 0 -0.196 $ 2s_{ 1/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 0 -0.067 ----------------------------------------------- --- --- -------- ----------------------------------------------- --- --- -------- $ 2s_{ 1/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 0 -0.287 $ 2s_{ 1/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 0 -0.195 $ 2s_{ 1/2}~ 0h_{11/2}~ 2s_{ 1/2}~ 
0h_{11/2}$ 6 0 -0.276 $ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 6 0 0.118 $ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 6 0 -0.255 $ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 6 0 0.212 $ 2s_{ 1/2}~ 0h_{11/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 6 0 -0.370 $ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 0 0.173 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 0 0.169 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 0 0.087 $ 0h_{11/2}~ 0g_{ 7/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 0 0.118 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 6 0 -0.158 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 6 0 0.097 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 6 0 -0.093 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 6 0 0.072 $ 0h_{11/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 0 -0.152 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 0 -0.251 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 0 -0.207 $ 0h_{11/2}~ 1d_{ 5/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 0 -0.255 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 6 0 -0.150 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 6 0 0.203 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 6 0 -0.285 $ 0h_{11/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 0 0.095 $ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 0 0.225 $ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 0 0.133 $ 0h_{11/2}~ 1d_{ 3/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 0 0.212 $ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 6 0 -0.082 $ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 6 0 0.196 $ 0h_{11/2}~ 2s_{ 1/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 6 0 -0.109 $ 0h_{11/2}~ 2s_{ 1/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 6 0 -0.262 $ 0h_{11/2}~ 2s_{ 1/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 6 0 -0.196 $ 0h_{11/2}~ 2s_{ 1/2}~ 2s_{ 1/2}~ 0h_{11/2}$ 6 0 -0.370 $ 0h_{11/2}~ 2s_{ 1/2}~ 0h_{11/2}~ 2s_{ 1/2}$ 6 0 -0.263 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 7 0 -0.499 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 7 0 0.076 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 7 0 -0.343 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 7 0 -0.299 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 7 0 
0.036 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 7 0 -0.165 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 7 0 0.076 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 7 0 -0.039 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 7 0 0.129 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 7 0 -0.039 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 7 0 -0.006 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 7 0 -0.107 $ 1d_{ 3/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 7 0 -0.343 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 7 0 0.129 $ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 7 0 -0.420 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 7 0 -0.161 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 7 0 0.113 $ 1d_{ 3/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 7 0 -0.092 $ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 7 0 -0.299 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 7 0 -0.039 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 7 0 -0.161 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 7 0 -0.456 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 7 0 -0.073 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 7 0 -0.325 $ 0h_{11/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 7 0 0.036 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 7 0 -0.006 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 7 0 0.113 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 7 0 -0.027 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 7 0 -0.132 $ 0h_{11/2}~ 1d_{ 3/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 7 0 -0.165 $ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 7 0 -0.107 $ 0h_{11/2}~ 1d_{ 3/2}~ 1d_{ 3/2}~ 0h_{11/2}$ 7 0 -0.092 $ 0h_{11/2}~ 1d_{ 3/2}~ 0h_{11/2}~ 1d_{ 3/2}$ 7 0 -0.397 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 8 0 -0.018 $ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 8 0 -0.125 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 8 0 0.069 ----------------------------------------------- --- --- -------- ----------------------------------------------- --- --- -------- $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 8 0 -0.180 $ 1d_{ 5/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 
8 0 -0.125 $ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 8 0 -0.546 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 8 0 0.198 $ 1d_{ 5/2}~ 0h_{11/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 8 0 -0.704 $ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 8 0 0.069 $ 0h_{11/2}~ 0g_{ 7/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 8 0 0.198 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 8 0 -0.022 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 8 0 0.125 $ 0h_{11/2}~ 1d_{ 5/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 8 0 -0.180 $ 0h_{11/2}~ 1d_{ 5/2}~ 1d_{ 5/2}~ 0h_{11/2}$ 8 0 -0.704 $ 0h_{11/2}~ 1d_{ 5/2}~ 0h_{11/2}~ 1d_{ 5/2}$ 8 0 -0.516 $ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 9 0 -1.057 $ 0g_{ 7/2}~ 0h_{11/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 9 0 -0.255 $ 0h_{11/2}~ 0g_{ 7/2}~ 0g_{ 7/2}~ 0h_{11/2}$ 9 0 -0.255 $ 0h_{11/2}~ 0g_{ 7/2}~ 0h_{11/2}~ 0g_{ 7/2}$ 9 0 -0.989 ----------------------------------------------- --- --- --------
--- abstract: 'A proper understanding of the mechanism for cuprate superconductivity can emerge only by comparing materials in which physical parameters vary one at a time. Here we present a variety of bulk, resonance, and scattering measurements on the (Ca$_{x}$La$_{1-x}$)(Ba$_{1.75-x}$La$_{0.25+x}$)Cu$_{3}$O$_{y}$ high temperature superconductors, in which this can be done. We determine the superconducting, Néel, glass, and pseudogap critical temperatures. In addition, we clarify which physical parameter varies, and, equally important, which does not, with each chemical modification. This allows us to demonstrate that a single energy scale, set by the superexchange interaction $J$, controls all the critical temperatures of the system. $J$, in turn, is determined by the in-plane Cu-O-Cu buckling angle.' address: 'Physics Department, Technion-Israel Institute of Technology, Haifa 32000, Israel' author: - Amit Keren title: Evidence of magnetic mechanism for cuprate superconductivity --- Introduction ============ The critical temperature for superconductivity $T_{c}$ in the metallic superconductors Hg, Sn, and Tl as a function of $M^{-1/2}$, where $M$ is the atomic mass, is presented in Fig. \[isotope\] on a full scale including the origin [@Maxwell]. In the case of Sn a clear isotope effect is observed, resulting in a 4% variation of $T_{c}$ upon isotope substitution. For Hg and Tl the variations are observed only by zooming in on the data. In all cases a linear fit goes through the data points and the origin quite satisfactorily, namely, $T_{c}$ is proportional to $M^{-1/2}$. This observation, known as the isotope effect, plays a key role in exposing the mechanism for superconductivity in metallic superconductors. However, had nature not provided us with isotopes, and had we been forced to draw conclusions only by comparing the different materials in Fig. \[isotope\], we would conclude that $T_{c}$ has nothing to do with the atomic mass. Thus, Fig.
\[isotope\] demonstrates that it is dangerous to compare materials in which several quantities vary simultaneously. The isotope experiment overcomes this problem and reveals the origin of metallic superconductivity. The mechanism for high temperature superconductivity (HTSC) in the cuprates is still elusive, but is believed to be of magnetic origin [@Belivers; @KotliarPRB88]. Verifying this belief would require an experiment similar to the isotope effect, namely, a measurement of $T_{c}$ versus the magnetic interaction strength $J$, with no other structural changes in the compounds under investigation. Unfortunately, varying $J$ experimentally can only be done by chemical variation, usually leading to very different materials. For example, YBa$_{2}$Cu$_{3}$O$_{y}$ (YBCO) has a maximum $T_{c}$ of 96 K, La$_{2-x}$Sr$_{x}$CuO$_{4}$ has a maximum $T_{c}$ of 38 K, and both have roughly the same $J$ [@TranquadaPRB40; @KeimerPRB92; @Wan08105216]. This fact has been used to argue against the magnetic mechanism, although these materials differ in crystal perfection, number of layers, symmetry, and more. Clearly they are as incomparable as Hg, Sn, and Tl. In the present manuscript we describe a set of experiments designed to overcome this problem and to perform a magnetic analog of the isotope experiment by making very small and subtle chemical changes, which modify $J$ but keep all other parameters intact. This is achieved by investigating a system of HTSC with the chemical formula (Ca$_{x}$La$_{1-x}$)(Ba$_{1.75-x}$La$_{0.25+x}$)Cu$_{3}$O$_{y}$ and acronym CLBLCO. Each value of $x=0.1\ldots0.4$ defines a family of superconductors. All families have the YBCO structure with negligible structural differences; all compounds are tetragonal, and there is no oxygen chain ordering as in YBCO. Within a family, $y$ can be varied from zero doping to overdoping.
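The $M^{-1/2}$ scaling of the isotope effect recalled in the introduction can be checked with a few lines of arithmetic. A minimal sketch; the tin isotope masses used below (116 and 124) are illustrative round numbers, not values taken from the measurements:

```python
# Minimal sketch of the isotope-effect scaling T_c ∝ M^{-1/2}.
# The masses are illustrative (light and heavy Sn isotopes, in atomic mass units).

def tc_ratio(m_light: float, m_heavy: float) -> float:
    """Predicted T_c(heavy isotope) / T_c(light isotope) if T_c ∝ M^(-1/2)."""
    return (m_light / m_heavy) ** 0.5

shift_percent = (1.0 - tc_ratio(116.0, 124.0)) * 100.0
print(f"relative T_c suppression: {shift_percent:.1f}%")  # a few percent
```

The resulting few-percent shift is of the same order as the ~4% quoted for Sn, an order of magnitude smaller than the ~30% variation of $T_{c}^{max}$ across CLBLCO families discussed in this paper.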
We present measurements of the critical temperature of superconductivity $T_{c}$ using resistivity [@Goldschmidt], the spin glass temperature $T_{g}$ [@KanigelPRL02] and the Néel temperature $T_{N}$ [@OferPRB06] of the parent antiferromagnet (AFM) using zero field muon spin relaxation ($\mu$SR), the level of doping and the level of impurities by Nuclear Quadrupole Resonance (NQR) [@KerenPRB06], the superconducting carrier density using transverse field muon spin rotation [@KerenSSC03], the lattice parameters, including the oxygen buckling angle, with neutron scattering [@OferPRB08], and the pseudogap (PG) temperature $T^{\ast}$ with susceptibility [@LubaPRB08]. This allowed us to generate one of the most complete phase diagrams of any HTSC system, to demonstrate a proportionality between $T_{c}$ and $J$, and to draw further conclusions. The paper consists of two main sections in addition to this introduction: in Sec. \[Main\] the main experimental results and the parameters extracted from the raw data are presented. In Sec. \[Conc\] the conclusions are summarized. Experimental details, raw data, and descriptions of the analysis are given in appendices. [Figure \[isotope\]: Isotope.EPS] Main Results\[Main\] ==================== The phase diagram of CLBLCO, including $T_{N}$, $T_{g}$, $T_{c}$, and $T^{\ast}$ versus oxygen level $y$, is shown in Fig. \[criticalvsy\]. Details of the magnetic measurements are given in Appendix \[MagCritical\] and of the pseudogap measurements in Appendix \[PG\]. In the doping region up to $y=6.5$, the $T_{N}$ curves of the different families are nearly parallel. The maximum $T_{N}$ ($T_{N}^{max}$) of the $x=0.1$ family is the lowest, and that of the $x=0.4$ family the highest. Upon further doping $T_{N}$ decreases rapidly and differently, leading to a crossing point after which the $x=0.4$ family has the lowest value of $T_{N}$, and the $x=0.1$ family the highest. With further doping, the long-range order is replaced by a spin glass phase, in which islands of spins freeze.
The spin glass phase penetrates into the superconducting phase, which exists for $y=6.9$ to $y=7.25$. This phase starts earlier as $x$ increases. The superconducting domes are nearly concentric, with maximum $T_{c}$ ($T_{c}^{max}$) decreasing with decreasing $x$. $T_{c}^{max}$ varies from 80 K at $x=0.4$ to 56 K at $x=0.1$, a nearly 30% variation. This variation is much stronger than the strongest isotope effect in nature. As for the pseudogap temperature $T^{\ast}$, it seems that the $x=0.1$ family has the highest $T^{\ast}$ and the $x=0.4$ family the lowest. [Figure \[criticalvsy\]: CriticalvsY.EPS] Perhaps the clearest feature of this phase diagram is the correlation between $T_{N}^{max}$ and $T_{c}^{max}$: the family with the highest $T_{N}^{max}$ has the highest $T_{c}^{max}$. However, $T_{N}$ is not a clean energy scale. It is well established that a pure 2D AFM orders magnetically only at $T=0$, and that $T_{N}$ is finite only for a 3D AFM. Intermediate cases are described by more complicated interactions, where $J$ is the isotropic intralayer Heisenberg interaction and $\alpha_{eff}J$ represents the interlayer and anisotropic couplings [@KeimerPRB92]. In order to extract $J$ from $T_{N}$, $\alpha_{eff}$ must be determined. One method of extracting $\alpha_{eff}$ is from the magnetic order parameter $M$ versus temperature $T$ [@KeimerPRB92; @ArovasPRB98]. For small $\alpha_{eff}$ the reduction of the magnetic order parameter $M$ with increasing $T$ is fast, so that at $\alpha_{eff}=0$ the 2D limit is recovered. On the other hand, in the three-dimensional case, where $\alpha_{eff}=1$, we expect a weak temperature dependence of $M$ as $T\rightarrow0$ due to the lack of antiferromagnetic magnon states at low frequencies. A plot of the normalized order parameter $\sigma=M/M_{0}$, where $M_{0}$ is the order parameter at $T\rightarrow0$, versus $T/T_{N}$ should connect (1,0) to (0,1) as depicted in Fig. \[alpha\] [@OferPRB06].
The differences between the curves are set only by $\alpha_{eff}$, and they can be fitted to the experimental data. Given $\alpha_{eff}$ and $T_{N}$, $J$ can be extracted. This is not a very accurate method of determining $\alpha_{eff}$, but $J$ depends only on $\ln(\alpha_{eff})$, so high accuracy is not required [@KeimerPRB92]. [Figure \[alpha\]: alpha.EPS] Using zero field muon spin rotation, we determined the muon angular rotation frequencies $\omega$ in the different compounds [@OferPRB06]. The normalized order parameter is given by $\sigma(T)=\omega(T)/\omega(0)$. The order parameter extracted from the high angular frequency, around a few tens of MRad/sec ($\omega\sim27$ MRad/sec in our case), is known to agree with the neutron scattering determination of $\sigma$. In Fig. \[alpha\] we also present $\sigma$ for two different underdoped CLBLCO samples with $x=0.1$ and $0.3$. Clearly the reduction of the magnetization with increasing temperature is not the same for these two samples, and therefore their anisotropies are different. Since $\sigma$ is less sensitive to increasing $T$ in the $x=0.1$ family than in the $x=0.3$ family, the $\alpha_{eff}$ of $x=0.1$ must be larger. Using the muon spin rotation frequency versus $T$ and calculations based on the Schwinger boson mean field (SBMF) theory [@OferPRB06; @KeimerPRB92], we determined $\alpha_{eff}$ for all samples [@OferPRB06]. Knowing $T_{N}$ and $\alpha_{eff}$ for all samples, we extract a corrected $T_{N}$ ($T_{N}^{cor}$). For the very underdoped samples $T_{N}^{cor}=J$. For doped samples the situation is more complicated, since the samples are described more accurately by the t-J model. We will present $T_{N}^{cor}$ shortly, after discussing the doping. Doping in CLBLCO is done by controlling the oxygen level in a chain layer, as in YBCO. This leaves some ambiguity concerning the doping of the CuO$_{2}$ planes.
One possibility for determining the amount of charge present in this plane is to measure the in-plane Cu NQR frequency $\nu_{Q}$. Assuming that the lattice parameter variations within a family can be ignored (an assumption tested with neutrons in Appendix \[Lattice\]), the NQR frequency is proportional to the level of doping in the plane. The in-plane Cu NQR frequencies, discussed further in Appendix \[Impurities\], are shown in Fig. \[nqranalysis\](a). It is clear that $\nu_{Q}$ grows linearly with doping on the underdoped side of the phase diagram. The most interesting finding is that, within the experimental error, the slope of $\nu_{Q}(x,y)$ on the underdoped side is $x$-independent, as demonstrated by the parallel solid lines. This means that the rate at which holes $p$ are introduced into the CuO$_{2}$ planes, $\partial p/\partial y$, is a constant independent of $x$ or $y$ in the underdoped region. Using further the ubiquitous assumption that the optimal hole density, at optimal oxygenation $y_{opt}$, is universal, we conclude that the in-plane hole density is a function only of $\Delta y=y-y_{opt}$. The same conclusion was reached by X-ray absorption spectroscopy (XAS) experiments [@SannaXray]. [Figure \[nqranalysis\]: NQRAnalysis.EPS] In contrast, the CLBLCO families obey the Uemura relations in the entire doping region, namely, $T_{c}$ is proportional to $n_{s}/m^{\ast}$, where $n_{s}$ is the superconducting carrier density and $m^{\ast}$ the effective mass [@KerenSSC03]. This is determined with transverse field muon spin relaxation measurements, where the Gaussian relaxation rate $R_{\mu}$ is proportional to $n_{s}/m^{\ast}$, as explained in Appendix \[PenDep\] [@MuonBook; @SonierReview]. The experimental results are depicted in Fig. \[Uemura\] for all families. This experiment seems to contradict the NQR results for the following reason.
If all holes turned superconducting ($p=n_{s}$), then samples of different $x$ but identical $\Delta y$ should have identical $\Delta p$ and identical $\Delta n_{s}$. In addition, if $m^{\ast}$ is universal, samples with a common $\Delta y$ should have the same $T_{c}$, in contrast to the phase diagram of Fig. \[criticalvsy\]. Something must be wrong in the hole counting. A similar conclusion was reached in the investigation of Y$_{1-x}$Ca$_{x}$Ba$_{2}$Cu$_{3}$O$_{6+y}$ [@SannaCM]. (Figure: Uemura.EPS) This problem can be solved by assuming that not all holes are mobile and turn superconducting. Therefore, we define the mobile hole concentration $\Delta p_{m}$ by multiplying $\Delta y$ by a different constant per family $K(x)$, namely, $\Delta p_{m}=K(x)\Delta y$. The superconducting carrier density variation $\Delta n_{s}$ is now proportional to $\Delta p_{m}$ with a universal factor. The $K$s are chosen so that the superconducting critical temperature $T_{c}$ domes, normalized by the $T_{c}^{max}$ of each family, collapse onto each other. This is shown in Fig. \[unified\](a) using $K=0.76$, $0.67$, $0.54$, $0.47$ for $x=0.1\ldots0.4$. An animation showing the rescaling of the critical temperatures and doping is given in the supporting materials. (Figure: unified.EPS) Figure \[unified\](a) also shows the other critical temperatures $T_{N}^{cor}$ and $T_{g}$ normalized by $T_{c}^{max}$ and plotted as a function of $\Delta p_{m}$. The critical temperatures from all families collapse onto a single function of $\Delta p_{m}$. This means that there is a proportionality between the in-plane Heisenberg coupling constant $J$ and the maximum superconducting transition temperature $T_{c}^{max}$ in the series of (Ca$_{x}$La$_{1-x}$)(Ba$_{1.75-x}$La$_{0.25+x}$)Cu$_{3}$O$_{y}$ families. 
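The mobile-hole rescaling described here can be sketched in a few lines; the $K$ values below are the ones quoted in the text, while the $\Delta y$ arguments are illustrative inputs, not measured data.

```python
# Sketch of the mobile-hole rescaling Delta_p_m = K(x) * Delta_y used to
# collapse the T_c domes.  K values are from the text; the Delta_y inputs
# used in the demonstration loop are illustrative assumptions.

K = {0.1: 0.76, 0.2: 0.67, 0.3: 0.54, 0.4: 0.47}

def mobile_holes(x, delta_y):
    """Mobile-hole variation for family x at oxygen offset delta_y."""
    return K[x] * delta_y

# Two families at the same oxygen offset acquire different mobile-hole
# densities; this is what brings the normalized domes onto one curve.
for x in (0.1, 0.4):
    print(x, mobile_holes(x, -0.1))
```

The point of the transformation is visible already here: a common $\Delta y$ maps to family-dependent $\Delta p_{m}$, so curves that disagree versus $\Delta y$ can agree versus $\Delta p_{m}$.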
In fact, the data presented up to here can be explained by replacing the $1/m^{\ast}$ in the Uemura relation by a family-dependent magnetic energy scale $J_{x}$ and writing $$T_{c}=cJ_{x}n_{s}(\Delta y) \label{TcvsJandns}$$ where $c$ is a universal constant for all families. For a typical superconductor having $T_c=80$ K, $J\sim1000$ K, and an $8\%$ superconducting carrier density per Cu site, $c$ is of order unity. This is the main finding of this paper. Theoretical indications for the importance of $J$ and $n_{s}$ in setting $T_{c}$ can be found from the early days of HTSC [@KotliarPRB88]. Having established a proportionality between $T_{c}$ and $J$, it is important to understand the origin of the $J$ variations between families in CLBLCO. As we show in appendix \[Lattice\] using lattice parameter measurements with neutron diffraction, the Cu-O-Cu buckling angle is responsible for these $J$ variations, since it is the only lattice parameter that shows strong differences between the families; there is a change of about 30% from the $x=0.1$ family to $x=0.4$ [@OferPRB08]. This change is expected since, as $x$ increases, a positive charge moves from the Y to the Ba site of the YBCO structure, pulling the oxygen toward the plane and flattening the Cu-O-Cu bond. (Figure: tJparam.EPS) From the lattice parameters it is possible to construct the hopping integral $t$ and super-exchange $J$ of the t-J model, assuming that the Hubbard $U$ and the charge transfer energy $\Delta$ are family-independent. The basic quantity is the hopping integral $t_{pd}$ between a Cu $3d_{x^2-y^2}$ and O $2p$ orbital [@ZaaneannJPhys87]. This hopping integral is proportional to the bond length $a$ to the power $-3.5$ [@HarrisonBook]. 
The hopping from the O $2p$ to the next Cu $3d_{x^2-y^2}$ again involves the bond length and the cosine of the buckling angle $\theta$. Thus, the Cu to Cu hopping depends on $a$ and $\theta$ as $t_{dd}\propto \cos\theta/a^{7}$, and $J$ is proportional to $t_{dd}^{2}$, hence $J\propto \cos^{2}\theta/a^{14}$. Estimates of $t_{dd}$ and $J$, normalized to the averaged values of the $x=0.1$ family, $\left\langle t_{dd}\right\rangle _{0.1}$ and $\left\langle J\right\rangle _{0.1}$, are presented in Fig. \[tjparam\](a) and (b). Although there is a variation of $t$ and $J$ within each family, the variation is much larger between the families. $J$ increases with increasing $x$, in qualitative agreement with the experimental determination of $J$. There is a 10% increase in $J$, which is *not* big enough to explain the variations in $T_{N}^{cor}$, but the magnitude and direction are satisfactory. More accurate calculations are under way [@LePetit]. It is also important to notice that there is a difference of about 5% in the $t/J$ ratio between the two extreme families. Equally important is to understand what does not change between families. For example, we would like to check whether the crystal quality is the same for all families. Figure \[nqranalysis\](b) shows the width of the NQR lines discussed in appendix \[Impurities\] [@KerenPRB06]. This width is minimal at optimal doping and is identical within experimental resolution for all families. More recent experiments with oxygen-17 lines, with better resolution, lead to the same conclusion [@AmitInPrep]. A third piece of evidence for the similarity of crystal quality comes from XAS [@SannaPC]. Thus, the variations in $T_{c}$ between the different families cannot be explained by disorder or impurities. 
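The two back-of-the-envelope estimates in this section (the order-unity constant $c$ in Eq. \[TcvsJandns\] and the lattice dependence $J\propto\cos^{2}\theta/a^{14}$) can be checked numerically. The bond length and buckling angles below are illustrative stand-ins, not the measured values.

```python
import math

# Sketch of the t-J estimates: t_pd ~ a^{-3.5}, so the Cu-Cu hopping is
# t_dd ~ cos(theta)/a^7 and J ~ t_dd^2 ~ cos^2(theta)/a^14.
# Lattice numbers below are illustrative, not the measured values.

def J_ratio(a1, theta1, a2, theta2):
    """J(family 2) / J(family 1) from bond length a and buckling angle theta (rad)."""
    t1 = math.cos(theta1) / a1 ** 7
    t2 = math.cos(theta2) / a2 ** 7
    return (t2 / t1) ** 2

# A ~30% reduction of a small buckling angle barely changes cos^2(theta),
# while a ~0.5% bond-length change enters at the 14th power:
print(J_ratio(3.90, math.radians(8.0), 3.88, math.radians(5.6)))

# Order-of-unity check of c in T_c = c * J * n_s, with T_c = 80 K,
# J ~ 1000 K and 8% carriers per Cu site (numbers from the text):
c = 80.0 / (1000.0 * 0.08)
print(c)  # 1.0
```

With these illustrative inputs the $J$ ratio comes out at the ten-percent level, consistent with the increase quoted above.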
The unified phase diagram in Fig. \[unified\](a) reveals more information about the CLBLCO family than just Eq. \[TcvsJandns\]. In particular, the fact that the Néel order is destroyed for all families at the same critical $\Delta p_{m}$ is very significant. The disappearance of the long-range Néel order and its replacement with a glassy ground state can be studied more clearly by following the order parameter as a function of doping. Naturally, $\omega(T\rightarrow0)$ disappears (drops to zero) only when the Néel order is replaced by the spin glass phase, as seen in Fig. \[unified\](b). We found that the order parameter is universal for all families, and in particular the critical doping is family-independent [@OferPRB08]. To demonstrate this point we show, using the two arrows in Fig. \[unified\](b), what the difference in the critical doping should have been had it changed between $x=0.4$ and $x=0.1$ by 5% of the full doping range, namely, 5% of $\Delta p_{m}=0.3$. Our data indicate that $M_{0}(\Delta p_{m})$ is family-independent to better than 5%. This is a surprising result considering the fact that $t/J$ varies between families by more than 5% and that the critical doping is expected to depend on $t/J$ [@MvsP]. A possible explanation is that the destruction of the AFM order parameter should be described by a hopping of boson pairs, where $t$ is absorbed into the creation of tightly bound bosons, leaving a prominent energy scale $J$ [@HavilioPRL98]. The proximity of the magnetic critical doping to superconductivity makes this possibility appealing. Finally we discuss the scaling of the pseudogap temperature. $T^{\ast}$, determined by magnetization measurements (see appendix \[PG\]), behaves like the well-known PG or spin gap measured by other techniques on a variety of superconductor samples [@PGReviews]. More importantly, a small but clear family dependence of $T^{\ast}$ is seen. At first glance in Fig. 
\[criticalvsy\], it appears that $T^{\ast}$ is anti-correlated with $T_{c}^{max}$ or the maximum $T_{N}$ ($T_{N}^{max}$). The $x=0.4$ family, which has the highest $T_{c}^{max}$ and $T_{N}^{max}$, has the lowest $T^{\ast}$, and vice versa for the $x=0.1$ family. However, this conclusion is reversed if, instead of plotting $T^{\ast}$ as a function of oxygen level, it is normalized by $T_{N}^{max}$ and plotted as a function of the mobile hole variation $\Delta p_{m}$ [@LubaPRB08]. This is demonstrated in Fig. \[unified\](c). Here the $T_{N}^{max}$ are chosen so that the $T_{N}(\Delta p_{m})/T_{N}^{max}$ curves collapse onto each other, and are 379, 391.5, 410, and 423 K for the $x=0.1\ldots0.4$ families, respectively. Therefore, $T_{N}^{max}$ should be interpreted as the extrapolation of $T_{N}$ to the lowest $\Delta p_{m}$. Normalizing $T^{\ast}$ by $T_{c}^{max}$ does not provide as good a data collapse as the normalization by $T_{N}^{max}$ [@LubaPRB08]. We conclude that a PG does exist in CLBLCO and that it scales with the maximum Néel temperature of each family. Therefore the PG is a 3D phenomenon involving both in- and out-of-plane coupling. A similar conclusion was reached by resistivity analysis [@SuPRB06] and theoretical considerations [@MillisPRL93].

Conclusions\[Conc\]
===================

In this work four families of cuprate superconductors with a maximum $T_{c}$ variation of $30$% are investigated. It is demonstrated experimentally that these families are nearly identical in their crystal structure and crystal quality. The only detectable property that varies considerably between them is the Cu-O-Cu buckling angle. This angle is expected to impact the hole hopping rate $t$ and hence the magnetic super-exchange $J$ between Cu spins. $J$ in turn sets the scale for the Néel temperature, where long range antiferromagnetic order takes place. 
Independent measurements of $J$ show that indeed $J$ varies between families and that $T_{c}$ grows when $J$ increases. A linear transformation from oxygen concentration to mobile hole concentration generates a unified phase diagram in which $T_{c}$ is in fact proportional to $J$ at all dopings. Since $T_{c}$ is also proportional to the superconducting carrier density, it obeys Eq. \[TcvsJandns\]. Surprisingly, the critical density, where the Néel order is destroyed at zero temperature upon doping, is identical for all families. The critical doping is expected to depend on $t/J$. This result has two implications. On the one hand it supports the validity of the linear doping transformation; on the other it suggests that this transformation has eliminated $t$ from the low temperature effective Hamiltonian. Finally, it is found that the pseudogap temperature $T^{\ast}$, as measured by susceptibility, scales better with the Néel temperature than with $J$ (or $T_{c}$). This suggests that $T^{\ast}$ is determined by both in- and out-of-plane coupling, and should be viewed as a temperature where the system attempts unsuccessfully to order magnetically.

Acknowledgements
================

The author acknowledges very helpful discussions with A. Kanigel, R. Ofer, E. Amit, A. Auerbach, Y. J. Uemura, and H. Alloul. Financial support from the Israel Science Foundation is also acknowledged.

Magnetic critical temperatures\[MagCritical\]
=============================================

The Néel and spin glass temperatures presented in Figs. \[criticalvsy\], \[alpha\] and \[unified\] are obtained by zero field $\mu$SR. In these experiments we determine the time-dependent spin polarization $P_{z}(t)$ of a muon injected into the sample at different temperatures. Here $z$ represents the initial muon spin direction. Figure \[musrzfraw\] shows typical $P_{z}(t)$ curves, at different temperatures, for three samples from the $x=0.1$ family. 
At high temperatures the polarization curves from all samples are typical of magnetic fields emanating from nuclear magnetic moments. In this case the time dependence of the polarization exhibits a Gaussian decay. As the temperature is lowered, the sample enters a magnetically frozen phase and the polarization relaxes much more rapidly. While the transition from the paramagnetic to the frozen state looks identical for all samples, the behavior at very low $T$ is different and indicates the nature of the ground state. Figure \[musrzfraw\](a) is an example of an antiferromagnetic ground state. When the temperature decreases, long range magnetic order is established at $\sim377$ K, reflected by spontaneous oscillations of $P(t)$. Figure \[musrzfraw\](c) is an example of a spin glass (SG) transition at $\sim17$ K. In this case the ground state consists of magnetic islands with randomly frozen electronic moments [@KanigelPRL02], and consequently the polarization shows only rapid relaxation. When the transition is to a Néel or spin glass state, the critical temperatures are named $T_{N}$ and $T_{g}$, respectively. Figure \[musrzfraw\](b) presents an intermediate case where the sample appears to have two transitions. The first one starts below 240 K, where the fast decay of the polarization appears. Between 160 K and 40 K there is hardly any change in the polarization decay, and at 30 K there is another transition, manifested in a faster decaying polarization. This behavior was observed in all the samples on the border between the antiferromagnetic and spin-glass regions of the phase diagram. 
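The three qualitative behaviors just described can be sketched with simple model functions; the functional forms and parameter values below are illustrative assumptions, not the fit function used in the analysis.

```python
import math

# Minimal sketch of the three characteristic zero-field muon polarization
# behaviors: Gaussian decay (paramagnet, nuclear moments), spontaneous
# oscillation (AFM order), rapid relaxation without oscillation (spin glass).
# All parameter values are illustrative assumptions.

def p_paramagnet(t, delta=0.3):
    """Gaussian decay from static nuclear moments."""
    return math.exp(-(delta * t) ** 2 / 2)

def p_afm(t, lam=5.0, omega=27.0, frac=2 / 3):
    """Rapidly relaxing, spontaneously oscillating signal (Neel order)."""
    return frac * math.exp(-lam * t) * math.cos(omega * t) + (1 - frac)

def p_spin_glass(t, lam=8.0, frac=2 / 3):
    """Rapid relaxation with no coherent oscillation (frozen random moments)."""
    return frac * math.exp(-lam * t) + (1 - frac)

for f in (p_paramagnet, p_afm, p_spin_glass):
    print(f.__name__, round(f(0.0), 3), round(f(1.0), 3))
```

All three start at full polarization; they differ in how, and how fast, the polarization is lost, which is what distinguishes the ground states in the data.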
(Figure: MuSRzfRaw.EPS) In order to determine the magnetic critical transition temperatures, the data were fitted to a sum of two functions: a Gaussian with amplitude $A_{n}$ representing the normal fraction of the sample, and a rapidly relaxing and oscillating function, with amplitude $A_{m}$ and angular frequency $\omega$, representing the magnetic fraction and describing the magnetic field due to frozen electronic moments [@OferPRB06; @OferPRB08]. In this function the sum $A_{m}+A_{n}=1$ is constant at all temperatures. Figure \[am(t)\] shows $A_{m}$ as a function of temperature for the three samples in Fig. \[musrzfraw\]. Above the transition, where only nuclear moments contribute, $A_{m}$ is close to zero. As the temperature decreases, the frozen magnetic part grows and so does $A_{m}$, at the expense of $A_{n}$. For the pure AFM and SG phases, the transition temperature was determined as the temperature at which $A_{m}$ is half of its saturation value. For the samples with two transitions, two temperatures were determined using the same principle. (Figure: AmT.EPS) The magnetic order parameter is extracted from the muon rotation frequency in zero field, as seen in Fig. \[musrzfraw\](a). These rotations allow us to determine $\omega(T,x,y)$. The temperature dependence of $\omega$ is used to determine the effective interlayer and anisotropic interaction $\alpha_{eff}$ in Fig. \[alpha\]. $\omega(0,x,y)$ is used to determine the family-dependent critical doping in Fig. \[unified\](b).

Pseudogap\[PG\]
===============

We determine $T^{\ast}$ using temperature-dependent magnetization measurements. In Fig. \[chi\](a) we present raw data from four samples of the $x=0.2$ family with different doping levels. At first glance the data contain only two features: a Curie-Weiss (CW) type increase of $\chi$ at low temperatures, and a non-zero baseline at high temperature ($\sim300$ K). This baseline increases with increasing $y$. 
The CW term is very interesting but will not be discussed further here. The baseline shift could be a consequence of variations in the core and Van Vleck electron contributions, or of an increasing density of states at the Fermi level. A zoom-in on the high temperature region, marked by the ellipse, reveals a third feature in the data: a minimum of $\chi$. To present this minimum clearly we subtracted from the raw data the minimal value of the susceptibility $\chi_{min}$ for each sample, and plotted the result on a tighter scale in Fig. \[chi\](b). The $\chi$ minimum is a result of decreasing susceptibility upon cooling from room temperature, followed by an increase in the susceptibility due to the CW term at low $T$. This phenomenon was previously noticed by Johnston in YBCO [@JohnstonBook], and by Johnston [@JohnstonPRL89] and Nakano *et al.* [@NakanoPRB94] in La$_{2-x}$Sr$_{x}$CuO$_{4}$ (LSCO). The minimum point moves to higher temperatures with decreasing oxygen level, as expected from $T^{\ast}$. There are three possible reasons for this decreasing susceptibility: (I) increasing AFM correlations upon cooling, (II) the opening of a spin gap where excitations move from $q=0$ to the AFM wave vector [@ChiExp], or (III) a disappearing density of states at the Fermi level as parts of the Fermi arc are gapped out when the PG opens as $T/T^{\ast}$ decreases [@KanigelNature06]. (Figure: Chi.EPS) In order to determine $T^{\ast}$ we fit the data to a three-component function $\chi=C_{1}/(T+\theta)+C_{2}\tanh(T^{\ast}/T)+C_{3}$. The fits are presented by solid lines in Fig. \[chi\]. The values of $T^{\ast}$ are shown in Fig. \[criticalvsy\] and Fig. \[unified\].

Doping and Impurities\[Impurities\]
===================================

The Cu-NQR experiment is done on powder samples fully enriched with $^{63}$Cu. We measured between five and seven different samples for each $x$ in the normal state at 100 K. 
The most overdoped sample is a non-superconducting $x=0.1$ compound. The NMR measurements were done by sweeping the field at a constant applied frequency $f_{app}=77.95$ MHz, using a $\pi/2$-$\pi$ echo sequence. The echo signal was averaged 100,000 times and its area evaluated as a function of field. The data are presented in Fig. \[nqrline\]. The full spectrum of the optimally doped $x=0.4$ sample ($y=7.156$) is shown in the inset of Fig. \[nqrline\]. The main panels emphasize the important parts of the spectrum using three axis breaks. (Figure: nqrline.eps) The evolution of the main peaks as $x$ increases is highlighted by the dotted lines. It is clear that as $x$ decreases the peaks move away from each other. This means that $\nu _{Q}$ at optimal doping is a decreasing function of $x$. A more interesting observation is the fact that there is no change in the width of the peaks, at least not one that can easily be spotted by the naked eye. This means that the distribution of $\nu _{Q}$ is $x$-independent and that there is no difference in the disorder between the optimally doped samples of the different families. Thus, as mentioned in Sec. \[Main\], disorder is not relevant to the variation of $T_{c}^{max}$ between the different families. This conclusion is supported by more rigorous analysis [@KerenPRB06]. The $\nu _{Q}$ and $\Delta \nu _{Q}$ presented in Fig. \[nqranalysis\] are obtained by fitting the NQR line shape to these data, as demonstrated by the solid lines.

Penetration depth\[PenDep\]
===========================

The penetration depth is measured with transverse field (TF) $\mu$SR. In this experiment one follows the transverse muon polarization $P_{\bot}(t)$ when a magnetic field is applied perpendicular to the initial polarization. These experiments are done by field cooling (FC) the sample to 1.8 K in an external field of 3 kOe. Every muon precesses according to the local field in its environment. 
When field cooling the sample, a vortex lattice is formed, and the field from these vortices decays on a length scale of the penetration depth $\lambda$. This leads to an inhomogeneous field distribution in the sample. Since the magnetic length scale is much larger than the atomic one, the muons probe the magnetic field distribution randomly, which, in turn, leads to a damping of the muons' average spin polarization. This situation is demonstrated in Fig. \[musrtfraw\], where we present $P_{\bot}(t)$ in two different perpendicular directions (called real and imaginary) in a rotating reference frame. At temperatures above $T_{c}$ the field is homogeneous and all muons experience the same field, and therefore no relaxation is observed. Well below $T_{c}$ (of $77$ K in this case) there are strong field variations, and therefore different muons precess with different frequencies, and the average polarization quickly decays to zero. At intermediate temperatures the field variations are not severe and the relaxation is moderate. (Figure: MuSRtfRaw.EPS) It was shown that in powder samples of HTSC the muon polarization $P_{\bot}(t)$ is well described by $P_{\bot}(t)=\exp(-R_{\mu}^{2}t^{2}/2)\cos(\omega t)$, where $\omega=\gamma_{\mu}H$ is the precession frequency of the muon and $R_{\mu}$ is the relaxation rate [@MuonBook]. The solid line in this figure is the fit result. The fact that the whole asymmetry relaxes indicates that CLBLCO is a bulk superconductor. The fit results for $R_{\mu}$ are shown in Fig. \[Uemura\]. As can be seen, the dependence of $T_{c}$ on $R_{\mu}$ is linear in the underdoped region and universal for all CLBLCO families, as expected from the Uemura relations. However, there is a new aspect in this plot. There is no “boomerang” effect, namely, overdoped and underdoped samples with equal $T_{c}$ have the same $R_{\mu}$, with only slight deviations for the $x=0.1$ family. 
Therefore, in CLBLCO there is a one-to-one correspondence between $T_{c}$ and $R_{\mu}$, and therefore $n_{s}/m^{\ast}$, over the whole doping range.

Lattice parameters\[Lattice\]
=============================

Neutron powder diffraction experiments were performed at the Special Environment Powder Diffractometer at Argonne’s Intense Pulsed Neutron Source (see Ref. [@ChmaissemNature99] for more details). Figure \[neutrons\] shows a summary of the lattice parameters. The empty symbols represent data taken from Ref. [@ChmaissemNature99]. All the parameters are family-dependent, but not to the same extent. The lattice parameters $a$ and $c$, depicted in Fig. \[neutrons\](a) and (b), change by up to about 0.5% between the two extreme families ($x=0.1$ and $x=0.4$). The in-plane Cu-O-Cu buckling angle is shown in Fig. \[neutrons\](c). This angle is non-zero since the oxygen is slightly out of the Cu plane and closer to the Y site of the YBCO structure. As mentioned in Sec. \[Main\], it changes by 30% between families. (Figure: Neutrons.EPS)

References {#references .unnumbered}
==========

[99]{} C. A. Reynolds *et al.*, Phys. Rev. **84**, 691 (1950); B. Serin *et al.*, Phys. Rev. **86**, 162 (1952); E. Maxwell and O. S. Lutes, Phys. Rev. **95**, 333 (1954). D. Scalapino, Phys. Rep. **250**, 329 (1995); A. V. Chubukov, D. Pines, and B. P. Stojković, J. Phys. Condens. Matter **8**, 10017 (1996); E. Altman and A. Auerbach, Phys. Rev. B **65**, 104508 (2002); D. Muñoz, I. de P. R. Moreira, and F. Illas, Phys. Rev. B **65**, 224521 (2002); P. W. Anderson, P. A. Lee, M. Randeria, T. M. Rice, N. Trivedi, and F. C. Zhang, J. Phys. Condens. Matter **16**, R755 (2004); E. Demler, W. Hanke, and S.-C. Zhang, Rev. Mod. Phys. **76**, 909 (2004); S. A. Kivelson and E. Fradkin, cond-mat/0507459. T. Moriya and K. Ueda, Rep. Prog. Phys. **66**, 1299 (2003). G. Kotliar, Phys. Rev. B **37**, 3664 (1988); S. S. Kancharla, B. Kyung, D. Sénéchal, M. Civelli, M. Capone, G. Kotliar, and A.-M. S. Tremblay, Phys. 
Rev. B **77**, 184516 (2008). J. M. Tranquada, G. Shirane, B. Keimer, S. Shamoto, and M. Sato, Phys. Rev. B **40**, 4503 (1989); N. W. Preyer *et al.*, Phys. Rev. B **37**, 9761 (1988). B. Keimer *et al.*, Phys. Rev. B **54**, 7430 (1992). X. Wan, T. A. Maier, and S. Y. Savrasov, Phys. Rev. B **79**, 155114 (2009). D. Goldschmidt, G. M. Reisner, Y. Direktovitch, A. Knizhnik, E. Gartstein, G. Kimmel, and Y. Eckstein, Phys. Rev. B **48**, 532 (1993). A. Kanigel, A. Keren, Y. Eckstein, A. Knizhnik, J. S. Lord, and A. Amato, Phys. Rev. Lett. **88**, 137003 (2002). R. Ofer, G. Bazalitsky, A. Kanigel, A. Keren, A. Auerbach, J. S. Lord, and A. Amato, Phys. Rev. B **74**, 220508(R) (2006). A. Keren, A. Kanigel, and G. Bazalitsky, Phys. Rev. B **74**, 172506 (2006). A. Keren, A. Kanigel, J. S. Lord, and A. Amato, Solid State Commun. **126**, 39 (2003). R. Ofer, A. Keren, O. Chmaissem, and A. Amato, Phys. Rev. B **78**, 140508(R) (2008). Y. Lubashevsky and A. Keren, Phys. Rev. B **78**, 020505(R) (2008). Y. H. Su, H. G. Luo, and T. Xiang, Phys. Rev. B **73**, 134510 (2006). A. J. Millis and H. Monien, Phys. Rev. Lett. **70**, 2810 (1993). D. P. Arovas and A. Auerbach, Phys. Rev. B **38**, 316 (1988). S. Sanna, S. Agrestini, K. Zheng, N. L. Saini, and A. Bianconi, in preparation. Muon Science: Muons in Physics, Chemistry and Materials, eds. S. L. Lee, S. H. Kilcoyne, and R. Cywinski (Institute of Physics, London, 1999). J. E. Sonier, Rep. Prog. Phys. **70** (2007). S. Sanna, F. Coneri, A. Rigoldi, G. Concas, and R. De Renzi, Phys. Rev. B **77**, 224511 (2008). J. Zaanen and G. A. Sawatzky, Can. J. Phys. **65**, 1262 (1987). W. A. Harrison, Electronic Structure and the Properties of Solids, W. H. Freeman and Company, San Francisco, 1980. S. Petit and M.-B. Lepetit, in preparation. E. Amit and A. Keren, in preparation. S. Sanna, private communication. G. Khaliullin and P. Horsch, Phys. Rev. B **47**, 463 (1993); A. Belkasri and J. L. Richard, Phys. Rev. B **50**, 12896 (1994); J. L. 
Richard and V. Yu. Yushankhai, Phys. Rev. B **50**, 12927 (1994). D. Yamamoto and S. Kurihara, Phys. Rev. B **75**, 134520 (2007). M. Havilio and A. Auerbach, Phys. Rev. Lett. **83**, 4848 (1999). T. Timusk and B. Statt, Rep. Prog. Phys. **62**, 61 (1999); M. R. Norman, D. Pines, and C. Kallin, Adv. Phys. **54**, 715 (2005); P. A. Lee, N. Nagaosa, and X.-G. Wen, Rev. Mod. Phys. **78**, 17 (2006). D. C. Johnston *et al.*, Chemistry of High-Temperature Superconductors, American Chemical Society, Washington, DC, 1987, p. 149. D. C. Johnston, Phys. Rev. Lett. **62**, 957 (1989). T. Nakano, M. Oda, C. Manabe, N. Momono, Y. Miura, and M. Ido, Phys. Rev. B **49**, 16000 (1994). J. Rossat-Mignod *et al.*, Physica (Amsterdam) **180B–181B**, 383 (1992); L. P. Regnault *et al.*, Physica (Amsterdam) **235C**, 59 (1995); M.-H. Julien *et al.*, Phys. Rev. Lett. **76**, 4238 (1996); A. Kanigel *et al.*, Nature Physics **2**, 447 (2006). O. Chmaissem, J. D. Jorgensen, S. Short, A. Knizhnik, Y. Eckstein, and H. Shaked, Nature **397**, 45 (1999).
--- author: - | Tuhina Mukherjee[^1]\ Tata Institute of Fundamental Research (TIFR)\ Centre of Applicable Mathematics,\ Bangalore, India.\ K. Sreenadh[^2]\ Department of Mathematics\ Indian Institute of Technology Delhi,\ Hauz Khas, New Delhi-110016, India title: Critical growth elliptic problems with Choquard type nonlinearity ---

A Brief Survey {#sec:1}
==============

We devote our first section to a brief survey of the results that have already been proved concerning the existence and multiplicity of solutions of Choquard equations. Consider the problem $$\label{BS-1} -\Delta u +u = (I_\alpha \ast |u|^p)|u|^{p-2}u \; \text{in} \; \mathbb{R}^n$$ where $u: \mathbb R^n \to \mathbb{R}$ and $I_\alpha: \mathbb R^n \to \mathbb R$ is the Riesz potential defined by $$I_\alpha(x) = \frac{\Gamma\left(\frac{n-\alpha}{2}\right)}{\Gamma\left(\frac{\alpha}{2}\right)\pi^{\frac{n}{2}} 2^\alpha |x|^{n-\alpha}}$$ for $\alpha \in (0,n)$, where $\Gamma$ denotes the Gamma function. Equation \[BS-1\] is generally termed the Choquard equation or the Hartree type equation. It has various physical interpretations. In the case $n=3$, $p=2$ and $\alpha=2$, equation \[BS-1\] finds its origin in a work by S. I. Pekar describing the quantum mechanics of a polaron at rest [@Pekar]. Under the same assumptions, in $1976$ P. Choquard used it to describe an electron trapped in its own hole, in a certain approximation to the Hartree-Fock theory of the one-component plasma [@Choquard]. Following standard critical point theory, we expect that solutions of \[BS-1\] can be viewed as critical points of the energy functional $$J(u)= \frac12\int_{{\mathbb}R^n}(|\nabla u|^2+ u^2)- \frac{1}{2p}\int_{{\mathbb}R^n}(I_\alpha*|u|^p)|u|^p.$$ It is clear from the first term that we naturally have to take $u \in H^1({\mathbb}R^n)$, which makes the first and second terms well defined. Now the question is whether the third term is well defined and sufficiently smooth over $H^1(\mathbb{R}^n)$. For this, we recall the following Hardy-Littlewood-Sobolev inequality. 
\[HLS\] Let $t,r>1$ and $0<\mu<n$ with $1/t+\mu/n+1/r=2$, $f \in L^t(\mathbb R^n)$ and $h \in L^r(\mathbb R^n)$. Then there exists a constant $C(t,n,\mu,r)$, independent of $f,h$, such that $$\label{har-lit} \int_{{\mathbb}R^n}\int_{{\mathbb}R^n} \frac{f(x)h(y)}{|x-y|^{\mu}}\mathrm{d}x\mathrm{d}y \leq C(t,n,\mu,r)\|f\|_{L^t}\|h\|_{L^r}.$$ [ If $t =r = \textstyle\frac{2n}{2n-\mu}$ then $$C(t,n,\mu,r)= C(n,\mu)= \pi^{\frac{\mu}{2}} \frac{\Gamma\left(\frac{n}{2}-\frac{\mu}{2}\right)}{\Gamma\left(n-\frac{\mu}{2}\right)} \left\{ \frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma(n)} \right\}^{-1+\frac{\mu}{n}}.$$ Equality in \[har-lit\] is achieved if and only if $f\equiv (constant)h$ and $$h(x)= A(\gamma^2+ |x-a|^2)^{\frac{-(2n-\mu)}{2}}$$ for some $A \in \mathbb C$, $0 \neq \gamma \in \mathbb R$ and $a \in \mathbb R^n$.]{} For $u \in H^1({\mathbb}R^n)$, let $f = h= |u|^p$ and $\mu=n-\alpha$; then by Theorem \[HLS\], $$\int_{{\mathbb}R^n}\int_{{\mathbb}R^n} \frac{|u(x)|^p|u(y)|^p}{|x-y|^{n-\alpha}}\mathrm{d}x\mathrm{d}y \leq C(n,\alpha,p)\left(\int_{{\mathbb}R^n}|u|^{\frac{2np}{n+\alpha}}\right)^{1+\frac{\alpha}{n}}.$$ The right-hand side is well defined if $u \in L^{\frac{2np}{n+\alpha}}({\mathbb}R^n)$. By the classical Sobolev embedding theorem, the embedding $H^1({\mathbb}R^n) \hookrightarrow L^r({\mathbb}R^n)$ is continuous for $r \in [2, 2^*]$, where $2^*= \frac{2n}{n-2}$. This implies $u \in L^{\frac{2np}{n+\alpha}}({\mathbb}R^n)$ if and only if $$\label{cond-on-p} 2_\alpha:=\frac{n+\alpha}{n}\leq p \leq \frac{n+\alpha}{n-2}:= 2^*_\alpha.$$ The constant $2_\alpha$ is termed the lower critical exponent and $2_\alpha^*$ the upper critical exponent in the sense of the Hardy-Littlewood-Sobolev inequality. Then we have the following result. If $p \in (1,\infty)$ satisfies \[cond-on-p\], then the functional $J$ is well-defined and continuously Fréchet differentiable on the Sobolev space $H^1({\mathbb}R^n)$. Moreover, if $p\geq 2$, then the functional $J$ is twice continuously Fréchet differentiable. 
This suggests that it makes sense to define solutions of \[BS-1\] as critical points of $J$. A remarkable feature of the Choquard nonlinearity is the appearance of a lower nonlinear restriction: the lower critical exponent $2_\alpha >1$. That is, the nonlinearity is superlinear.

Existence and multiplicity results
----------------------------------

A function $u \in H^1({\mathbb}R^n)$ is said to be a weak solution of \[BS-1\] if it satisfies $$\int_{{\mathbb}R^n }(\nabla u \cdot \nabla v + uv)~dx - \int_{{\mathbb}R^n} \left(\int_{{\mathbb}R^n}\frac{|u(y)|^p}{|x-y|^{n-\alpha}}~dy \right)|u|^{p-2}uv ~dx=0$$ for each $v \in H^1({\mathbb}R^n)$. We define a solution $u \in H^1({\mathbb}R^n)$ to be a groundstate of the Choquard equation whenever it is a solution that minimizes the functional $J$ among all nontrivial solutions. In [@vs], V. Moroz and J. Van Schaftingen studied the existence of groundstate solutions and their asymptotic behavior using the concentration-compactness lemma. The groundstate solution has been identified as the infimum of $J$ on the Nehari manifold $${\mathcal}N =\{u \in H^1({\mathbb}R^n)\setminus\{0\}: \; \langle J^\prime(u),u\rangle =0\},$$ which is equivalent to proving that the mountain pass minimax level $\displaystyle \inf_{\gamma \in \Gamma}\sup_{[0,1]}J\circ\gamma$ is a critical value. Here the class of paths $\Gamma$ is defined by $\Gamma = \{\gamma \in C([0,1];H^1({\mathbb}R^n)):\; \gamma(0)=0 \text{ and } J(\gamma(1))<0\}$. Precisely, they proved the following existence result. If $2_\alpha<p<2^*_\alpha$, then there exists a nonzero weak solution $u \in W^{1,2}({\mathbb}R^n)$ of \[BS-1\] which is a groundstate solution. 
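As a numerical illustration of the exponent bookkeeping above, the sketch below evaluates the critical exponents $2_\alpha$ and $2^*_\alpha$, the sharp diagonal HLS constant $C(n,\mu)$, and checks that the Pekar case $n=3$, $\alpha=2$, $p=2$ lies strictly inside the existence range of the groundstate theorem; the parameter choices are illustrative.

```python
import math

# Sketch: HLS critical exponents, the sharp diagonal constant C(n, mu)
# from Theorem [HLS] with t = r = 2n/(2n - mu), and the strict range
# 2_alpha < p < 2*_alpha of the groundstate existence theorem.

def exponents(n, alpha):
    """Lower and upper critical exponents 2_alpha and 2*_alpha."""
    return (n + alpha) / n, (n + alpha) / (n - 2)

def sharp_constant(n, mu):
    """Sharp HLS constant C(n, mu) in the diagonal case."""
    g = math.gamma
    return (math.pi ** (mu / 2) * g((n - mu) / 2) / g(n - mu / 2)
            * (g(n / 2) / g(n)) ** (-1 + mu / n))

def in_existence_range(n, alpha, p):
    lower, upper = exponents(n, alpha)
    return lower < p < upper

# Pekar case n = 3, alpha = 2, so mu = n - alpha = 1: exponents 5/3 and 5,
# and p = 2 lies strictly between them.
lower, upper = exponents(3, 2)
print(lower, upper)
print(sharp_constant(3, 1))
print(in_existence_range(3, 2, 2))   # True
print(in_existence_range(3, 2, 5))   # False: the upper critical exponent
```

The endpoint cases $p=2_\alpha$ and $p=2^*_\alpha$, excluded here, are exactly the ones handled by the Pohozaev-type nonexistence result.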
They have also proved the following Pohozaev identity: \[vs-poho\] Let $u \in H^{2}_{loc}({\mathbb}R^n)\cap W^{1,\frac{2np}{n+\alpha}}{({\mathbb}R^n)}$ be a weak solution of the equation $$-{\Delta}u + u = (I_\alpha*|u|^p)|u|^{p-2}u\; \text{in}\; \mathbb{R}^n.$$ Then $$\frac{n-2}{2}\int_{{\mathbb}R^n}|\nabla u|^2+\frac{n}{2}\int_{{\mathbb}R^n}|u|^2= \frac{n+\alpha}{2p}\int_{{\mathbb}R^n}(I_\alpha*|u|^p)|u|^p.$$ Pohozaev identities for some Choquard type nonlinear equations have also been studied in [@menzala; @css]. Using Proposition \[vs-poho\], they proved the following nonexistence result: \[nonext\] If $p \leq 2_\alpha$ or $p\geq 2^*_\alpha$ and $u \in H^{1}({\mathbb}R^n)\cap L^{\frac{2np}{n+\alpha}}({\mathbb}R^n)$ such that $\nabla u \in H^{1}_{loc}({\mathbb}R^n)\cap L^{\frac{2np}{n+\alpha}}_{loc}({\mathbb}R^n)$ satisfies \[BS-1\] weakly, then $u \equiv 0$. The next important thing to note is the following counterpart of the Brezis-Lieb lemma:\ If the sequence $\{u_k\}$ converges weakly to $u$ in $H^1({\mathbb}R^n)$, then $$\label{BS-2} \lim_{k \to \infty} \int_{{\mathbb}R^n}(I_\alpha*|u_k|^p)|u_k|^p- (I_\alpha*|u-u_k|^p)|u-u_k|^p= \int_{{\mathbb}R^n}(I_\alpha*|u|^p)|u|^p.$$ One can find its proof in [@mms; @vs]. Identity \[BS-2\] plays a crucial role in obtaining a solution when there is a lack of compactness.\ Next, coming to positive solutions, in [@AFY] the authors studied the existence of solutions for the following equation $$\label{BS-3} -\Delta u +V(x)u = (|x|^{-\mu}\ast F(u))f(u), \; u>0\; \text{in}\; {\mathbb}R^n, \; u \in D^{1,2}({\mathbb}R^n)$$ where $F$ denotes the primitive of $f$, $n\geq 3$ and $\mu \in (0,n)$. The assumptions on the potential function $V$ and the function $f$ are as follows: 1. $\displaystyle \lim_{s \to 0^+} \frac{sf(s)}{s^q}<+\infty$ for $q \geq 2^* =\frac{2n}{n-2}$, 2. $\displaystyle \lim_{s \to \infty} \frac{sf(s)}{s^p}=0$ for some $p \in \left( 1, \frac{2(n-\mu)}{n-2}\right)$ when $\mu \in (1,\frac{n+2}{2})$, 3. 
There exists $\theta>2$ such that $1<\theta F(s)<2f(s)s$ for all $s>0$, 4. $V$ is a nonnegative continuous function. Define the function ${\mathcal}V : [1,+\infty) \to [0,\infty)$ as $${\mathcal}V(R) = \frac{1}{R^{(q-2)(n-2)}}\inf_{|x|\geq R} |x|^{(q-2)(n-2)}V(x).$$ Motivated by the articles [@BL; @BGM1; @BGM2], the authors proved the following result in [@AFY]. Assume that $0<\mu< \frac{n+2}{2}$ and $(i)-(iv)$ hold. Then there exists a constant ${\mathcal}V_0>0$ such that if ${\mathcal}V(R)>{\mathcal}V_0$ for some $R>1$, then admits a positive solution. Taking $p=2$ in , Ghimenti, Moroz and Van Schaftingen [@nodal] established the existence of a least action nodal solution, which appears as the limit of minimal action nodal solutions for as $p \searrow 2$. They proved the following theorem by constructing a Nehari nodal set and minimizing the corresponding energy functional over this set. If $\alpha \in ((n-4)^+, n)$ and $p=2$ then admits a least action nodal solution. In [@ZHZ], Zhang et al. proved the existence of infinitely many distinct solutions for the following generalized Choquard equation using index theory $$\label{BS-5} -\Delta u +V(x)u = \left( \int_{{\mathbb}R^n} \frac{Q(y)F(u(y))}{|x-y|^{\mu}}~dy\right)Q(x)f(u(x)) \; \text{in}\; {\mathbb}R^n$$ where $\mu \in (0, n)$, $V$ is periodic, $f$ is either odd or even, and some additional assumptions hold. Although Theorem \[nonext\] rules out the autonomous case $p=2_\alpha$ in , Moroz and Van Schaftingen in [@vs2] proved some existence and nonexistence results for the problem $$\label{BS-4} -\Delta u + V(x)u = (I_\alpha \ast |u|^{2_\alpha})|u|^{2_\alpha-2}u \; \text{in} \; \mathbb{R}^n$$ where the potential $V \in L^\infty({\mathbb}R^n)$ is not constant. They proved the existence of a nontrivial solution if $$\liminf_{|x|\to \infty}(1-V(x))|x|^2 > \frac{n^2(n-2)}{4(n+1)}$$ and gave necessary conditions for the existence of solutions of .
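To indicate how the nonexistence in Theorem \[nonext\] follows from Proposition \[vs-poho\], we sketch the standard argument (the regularity justifications are carried out in [@vs]). Testing the equation weakly against $u$ itself gives the Nehari identity $\int_{{\mathbb}R^n}(|\nabla u|^2+|u|^2) = \int_{{\mathbb}R^n}(I_\alpha*|u|^p)|u|^p$; eliminating the nonlocal term between this and the Pohozaev identity yields $$\left(\frac{n-2}{2}-\frac{n+\alpha}{2p}\right)\int_{{\mathbb}R^n}|\nabla u|^2 + \left(\frac{n}{2}-\frac{n+\alpha}{2p}\right)\int_{{\mathbb}R^n}|u|^2 = 0.$$ The first coefficient is nonnegative precisely when $p \geq \frac{n+\alpha}{n-2} = 2^*_\alpha$ and the second precisely when $p \geq \frac{n+\alpha}{n} = 2_\alpha$. Hence if $p \leq 2_\alpha$ both terms are nonpositive and the gradient coefficient is strictly negative, forcing $\int_{{\mathbb}R^n}|\nabla u|^2 = 0$ and so $u \equiv 0$ in $H^1({\mathbb}R^n)$; if $p \geq 2^*_\alpha$ both terms are nonnegative and the second coefficient is strictly positive, forcing $\int_{{\mathbb}R^n}|u|^2 = 0$. In either case $u \equiv 0$.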
Because $2_\alpha$ is the lower critical exponent in the sense of Theorem \[HLS\], a lack of compactness occurs in the minimization technique, so the concentration-compactness lemma and a Brezis-Lieb type lemma play an important role. Equation was reconsidered by Cassani, Van Schaftingen and Zhang in [@CSZ], where they gave necessary and sufficient conditions for the existence of a positive ground state solution depending on the potential $V$.\ Very recently, in [@divsree], the authors studied some existence and multiplicity results for the following [critical]{} growth Kirchhoff-Choquard equations $$-M(\|u\|^2)\Delta u = {\lambda}u + (I_\alpha \ast |u|^{2_{\mu}^{*}})|u|^{2_{\mu}^{*}-2}u \; \text{in} \; {\Omega},\; u=0 \; \text{on}\; {\partial}{\Omega}$$ where $M(t)\sim at+bt^\theta$, $\theta\ge 1$, for some constants $a$ and $b$. Now let us consider the critical dimension case, that is $n=2$, commonly known as the Trudinger-Moser case. When $n=2$, the critical Sobolev exponent becomes infinity and the embedding reads $W^{1,2}({\mathbb}R^2) \hookrightarrow L^q({\mathbb}R^2)$ for $q \in [2,\infty)$, whereas $W^{1,2}({\mathbb}R^2)\not\hookrightarrow L^\infty({\mathbb}R^2)$. The following *Trudinger-Moser inequality* plays a crucial role when $n=2$. \[TM-ineq\] For $u \in W^{1,2}_0({\mathbb}R^2)$ and any $\alpha>0$, $$\int_{{\mathbb}R^2} [\exp(\alpha|u|^{2})-1]~dx < \infty.$$ Moreover if $\|\nabla u\|_2\leq 1$, $\|u\|_2 \leq M$ and $\alpha < 4\pi$ then there exists a $C(\alpha,M)>0$ such that $$\int_{{\mathbb}R^2} [\exp(\alpha|u|^{2})-1]~dx <C(M,\alpha).$$ Motivated by this, the natural nonlinearity in this case is an appropriate exponential function. The following singularly perturbed Choquard equation $$\label{BS-8} -\epsilon^2 \Delta u +V(x)u = {\epsilon}^{\mu-2} \left( |x|^{-\mu}\ast F(u)\right)f(u) \;\text{in}\; {\mathbb}R^2$$ was studied by Alves et al. in [@yang-JDE].
Here $\mu\in (0,2)$, $V$ is a continuous potential, ${\epsilon}$ is a positive parameter, $f$ has critical exponential growth in the sense of Trudinger-Moser and $F$ denotes its primitive. Under appropriate growth assumptions on $f$, the authors in [@yang-JDE] proved the existence of a ground state solution to when ${\epsilon}=1$ and $V$ is periodic, and also established the existence and concentration of semiclassical ground state solutions of with respect to ${\epsilon}$. An existence result for the Choquard equation with exponential nonlinearity in ${\mathbb}R^2$ has also been proved in [@yang-JCA]. Kirchhoff-Choquard problems in this setting are studied in [@arora].\ Now let us consider Choquard equations on bounded domains. In particular, consider the Brezis-Nirenberg type problem for the Choquard equation $$\label{BS-6} -{\Delta}u = {\lambda}u + \left( \int_{{\Omega}}\frac{|u|^{2^*_\mu}(y)}{|x-y|^\mu}~dy\right) |u|^{2^*_\mu-2}u\; \text{in}\; {\Omega}, \; u = 0 \; \text{on}\; \partial{\Omega}$$ where ${\Omega}$ is a bounded domain in ${\mathbb}R^n$ with Lipschitz boundary, ${\lambda}\in {\mathbb}R$ and $2^*_\mu = \displaystyle\frac{2n-\mu}{n-2}$ is the critical exponent in the sense of the Hardy-Littlewood-Sobolev inequality. These kinds of problems are motivated by the celebrated paper of Brezis and Nirenberg [@breniren]. Gao and Yang in [@gy1] proved the existence of a nontrivial solution to for $n \geq 4$ when ${\lambda}$ is not an eigenvalue of $-{\Delta}$ with Dirichlet boundary condition, and for a suitable range of ${\lambda}$ when $n=3$. They also proved a nonexistence result when ${\Omega}$ is a star-shaped region with respect to the origin.
Here, the best constant for the embedding is defined as $$S_{H,L}:= \inf\left\{\int_{{\mathbb}R^n}|\nabla u|^2: \; u\in H^1({\mathbb}R^n), \; \int_{{\mathbb}R^n}(|x|^{-\mu}*|u|^{2^*_\mu})|u|^{2^*_\mu}dx=1\right\}.$$ They showed that the minimizers of $S_{H,L}$ are of the form $U(x)= \left( \frac{b}{b^2+|x-a|^2}\right)^{\frac{n-2}{2}}$ where $a,b$ are appropriate constants. We remark that $U(x)$ is the Talenti function, which is also a minimizer for $S$, the best constant in the embedding of $H^1_0({\Omega})$ into $L^{2^*}({\Omega})$. Let us consider the family $U_{{\epsilon}}(x)= \epsilon^{\frac{2-n}{2}}U(\frac{x}{\epsilon})$. Using the Brezis-Lieb lemma, in [@gy1] it was shown that every Palais-Smale sequence is bounded and that compactness holds below the first critical level $$c < \frac{n+2-\mu}{4n-2\mu}S_{H,L}^{\frac{2n-\mu}{n+2-\mu}}.$$ If $Q_{\lambda}:= \inf\limits_{u \in H^1_0({\Omega})\setminus \{0\}}\frac{\int_{{\Omega}}(|\nabla u|^2- {\lambda}|u|^2)}{\left(\int_{{\Omega}}(|x|^{-\mu}*|u|^{2^*_\mu})|u|^{2^*_\mu}dx\right)^{\frac{1}{2^*_\mu}}}$, then $Q_{\lambda}< S_{H,L}$, which can be shown using the $U_\epsilon$’s. Then, using the Mountain Pass Lemma or the Linking Theorem depending on the dimension $n$, the existence of a first solution to is shown. The nonexistence result was proved after establishing a Pohozaev type identity. Gao and Yang also studied Choquard equations with concave-convex power nonlinearities with Dirichlet boundary condition in [@gy2].\ Very recently, the effect of the topology of the domain on solutions of Choquard equations has been studied by several researchers. Ghimenti and Pagliardini [@GP] proved that the number of positive solutions of the following Choquard equation $$\label{BS-7} -\Delta u -{\lambda}u= \left( \int_{\Omega}\frac{|u|^{p_{\epsilon}}(y)}{|x-y|^{\mu}}~dy\right)|u|^{p_{\epsilon}-2}u,\;u>0 \; \text{in}\;{\Omega}, \; \; u=0 \;\text{in}\; {\mathbb}R^n \setminus {\Omega}$$ depends on the topology of the domain when the exponent $p_{\epsilon}$ is very close to the critical one.
Precisely, they proved: There exists $\bar {\epsilon}> 0$ such that for every ${\epsilon}\in (0, \bar {\epsilon}]$, Problem has at least $cat_{\Omega}({\Omega})$ low energy solutions. Moreover, if ${\Omega}$ is not contractible, then there exists another solution with higher energy. Here $cat_{\Omega}({\Omega})$ denotes the Lusternik-Schnirelmann category of ${\Omega}$. They used variational methods to look for critical points of a suitable functional and proved a multiplicity result through category methods. This type of result was first obtained by Coron for local problems in [@Bahri]. Another significant result in this regard has recently been obtained by the authors in [@divya]. There they showed the existence of a high energy solution for $$-\Delta u = \left( \int_{\Omega}\frac{|u|^{2^*_\mu}(y)}{|x-y|^{\mu}}~dy\right)|u|^{2^*_\mu-2}u\; \text{in}\; {\Omega},\;u=0\;\text{on}\; \partial {\Omega},$$ where ${\Omega}$ is an annular type domain with a sufficiently small inner hole.

Radial symmetry and regularity of solutions
-------------------------------------------

Here we survey some of the literature on radially symmetric solutions and on the regularity of weak solutions of Choquard equations constructed variationally. First we come to the question of radial symmetry: are all positive solutions of the equation $$\Delta u -\omega u + (|x|^{-\mu} \ast |u|^{p})|u|^{p-2}u=0, \; \omega >0,\; u \in H^1({\mathbb}R^n)$$ radially symmetric and monotone decreasing about some fixed point? This was an open problem which was settled by Ma and Zhao [@MZ] in the case $2\leq p < \frac{2n-\mu}{n-2}$, under some additional assumptions. The radial symmetry and uniqueness of minimizers corresponding to some Hartree equations have also been investigated in [@JV]. Recently, Wang and Yi [@WY] proved that if $u \in C^2({\mathbb}R^n)\cap H^1({\mathbb}R^n)$ is a positive radial solution of with $p=2$ and $\alpha =2$ then $u$ must be unique.
Using Ma and Zhao’s result, they also concluded that the positive solution of in this case is uniquely determined, up to translations, in dimensions $n=3,4,5$. Huang et al. in [@HYY] proved that with $n=3$ has at least one radially symmetric solution changing sign exactly $k$ times for each $k$ when $p \in \left(2.5,5 \right)$. Taking $V \equiv 1$ in and assuming $f$ satisfies almost necessary growth conditions up to the upper critical exponent in the spirit of Berestycki and Lions, Li and Tang [@LT] very recently proved that has a ground state solution, which has constant sign and is radially symmetric with respect to some point in ${\mathbb}R^n$. They used the Pohozaev manifold and a compactness lemma of Strauss to conclude their main result. For further results regarding Choquard equations, we refer the reader to [@survey], which extensively covers the existing literature on the topic. Very recently, in [@divya] the authors studied the classification problem and proved that all positive solutions of the following equation are radially symmetric: for $p=2_{\mu}^{*}$, $$\label{co2} -{\Delta}u= (I_{\mu}*|u|^{p}) |u|^{p-2} u \; \text{in} \; \mathbb{R}^n.$$ They observed that the solutions of this problem satisfy the integral system of equations $$\begin{aligned} & u(x)=\int_{\mathbb{R}^n}\frac{u^{p-1}(y)v(y)}{|x-y|^{N-2}}~dy, \quad u\geq 0 \text{ in } \mathbb{R}^n, \\ & v(x)=\int_{\mathbb{R}^n}\frac{u^p(y)}{|x-y|^{N-\mu}}~dy, \quad v\geq 0 \text{ in } \mathbb{R}^n. \end{aligned}$$ By obtaining regularity estimates and using the moving plane method they proved the following result: \[cothm5\] Every non-negative solution $u \in D^{1,2}(\mathbb{R}^N)$ of equation is radially symmetric, monotone decreasing and of the form $$\begin{aligned} u(x)= \left(\frac{c_1}{c_2+|x-x_0|^2}\right)^{\frac{N-2}{2}} \end{aligned}$$ for some constants $c_1,c_2>0$ and some $x_0 \in \mathbb{R}^N$. Next we recall some regularity results for the problem .
Fix $\alpha \in (0,n)$ and consider the problem ; then in [@vs] the authors showed the following: \[reg-1\] If $u\in H^1({\mathbb}R^n)$ solves weakly for $p \in (2_\alpha, 2_\alpha^*)$ then $u \in L^1({\mathbb}R^n) \cap C^2({\mathbb}R^n)$, $u \in W^{2,s}({\mathbb}R^n)$ for every $s>1$ and $u \in C^\infty({\mathbb}R^n \setminus u^{-1}\{0\})$. The classical bootstrap method for subcritical semilinear elliptic problems, combined with estimates for Riesz potentials, allows them to prove this result. Precisely, they first proved that $I_\alpha \ast |u|^p \in L^\infty({\mathbb}R^n)$, and using Calderón-Zygmund theory they obtained $u \in W^{2,r}({\mathbb}R^n)$ for every $r>1$. The proof of Theorem \[reg-1\] then follows from an application of the Morrey–Sobolev embedding and classical Schauder regularity estimates. In [@MS-ams], the author extended a special case of the regularity result of Brezis and Kato [@BK] to Choquard equations. They proved the following: If $H, K \in L^{\frac{2n}{\alpha}}({\mathbb}R^n) \cap L^{\frac{2n}{\alpha+2}}({\mathbb}R^n)$ and $u \in H^1({\mathbb}R^n)$ solves $$-\Delta u + u= (I_\alpha \ast Hu)K\; \text{in}\; \mathbb{R}^n$$ then $u \in L^p({\mathbb}R^n)$ for every $p \in \left[2, \frac{2n^2}{\alpha(n-2)}\right)$. They proved this by establishing a nonlocal counterpart of Lemma $2.1$ of [@BK] in terms of Riesz potentials. After this, they showed that the convolution term is a bounded function, that is, $I_\alpha \ast |u|^p \in L^\infty({\mathbb}R^n)$. Therefore, $$|-\Delta u + u| \leq C (|u|^{\frac{\alpha}{n}}+ |u|^{\frac{\alpha+2}{n-2}}),$$ so the right-hand side now has subcritical growth with respect to the Sobolev embedding. Thus, by the classical bootstrap method for subcritical local problems in bounded domains, it is deduced that $u \in W^{2,p}_{loc}({\mathbb}R^n)$ for every $p \geq 1$. Moreover, it holds that if admits a positive solution and $p$ is an even integer then $u \in C^\infty$; see [@Lei1; @Lei2; @MSS1].
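A minimal sketch of one step of the bootstrap just described, under the simplifying assumption $u \in L^{p_0}({\mathbb}R^n)$ for some $p_0 \geq 2$: by the mapping properties of the Riesz potential, $I_\alpha: L^q({\mathbb}R^n) \to L^r({\mathbb}R^n)$ with $\frac{1}{r} = \frac{1}{q} - \frac{\alpha}{n}$ (for $1<q<r<\infty$), so $$I_\alpha \ast |u|^p \in L^{r}({\mathbb}R^n), \qquad \frac{1}{r} = \frac{p}{p_0} - \frac{\alpha}{n},$$ and Hölder's inequality then places the right-hand side $(I_\alpha\ast|u|^p)|u|^{p-2}u$ in $L^t({\mathbb}R^n)$ with $\frac{1}{t} = \frac{1}{r} + \frac{p-1}{p_0}$, provided these exponents fall in the admissible range. Calderón-Zygmund estimates upgrade this to $u \in W^{2,t}({\mathbb}R^n)$, and the Sobolev embedding yields a strictly larger integrability exponent $p_1 > p_0$ in the subcritical regime; iterating gives the integrability claimed in Theorem \[reg-1\].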
Using appropriate test functions and results from [@MS-ams], Gao and Yang in [@gy2] established the following regularity and $L^\infty$ estimate for problems on bounded domains: Let $u$ be the solution of the problem $$-\Delta u = g(x,u)\;\text{in}\; {\Omega},\;\; u \in H^{1}_0({\Omega}),$$ where $g$ satisfies $|g(x,u)| \leq C(1+|u|^p)+ \left(\displaystyle\int_{\Omega}\frac{|u|^{2^*_\mu}}{|x-y|^{\mu}}dy\right)|u|^{2^*_\mu-1}$, $\mu \in (0,n)$, $1<p<2^*-1$ and $C>0$; then $u \in L^\infty({\Omega})$. As a consequence of this lemma we can obtain $u \in C^2(\bar {\Omega})$ by adopting the classical $L^p$ regularity theory of elliptic equations.

Choquard equations involving the $p(.)$-Laplacian {#sec-1.2}
-------------------------------------------------

Firstly, let us consider the quasilinear generalization of the Laplace operator, namely the $p$-Laplace operator, defined as $$-{\Delta}_p u:=-\nabla\cdot(|\nabla u|^{p-2}\nabla u), \; 1<p<\infty.$$ The Choquard equation involving $-{\Delta}_p$ has been studied in [@ay1; @ay2; @ay3]. In [@ay1], Alves and Yang studied the concentration behavior of solutions for the following quasilinear Choquard equation $$\label{BS-9} -\epsilon^p \Delta_p u +V(x)|u|^{p-2}u = {\epsilon}^{\mu-n} \left( \int_{{\mathbb}R^n}\frac{Q(y)F(u(y))}{|x-y|^{\mu}}\right)Q(x)f(u) \;\text{in}\; {\mathbb}R^n$$ where $1<p<n$, $n \geq 3$, $0 < \mu < n$, $V$ and $Q$ are two continuous real valued functions on ${\mathbb}R^n$, $F(s)$ is the primitive of $f(s)$ and ${\epsilon}$ is a positive parameter. Taking $Q \equiv 1$, Alves and Yang also studied in [@ay2]. Recently, Alves and Tavares proved a variable-exponent version of the Hardy-Littlewood-Sobolev inequality in [@p(x)-choq], in the spirit of variable exponent Lebesgue and Sobolev spaces.
Precisely, for $p(x), q(x)\in C^+(\mathbb{R}^n)$ with $p^-:=\displaystyle\inf_{x \in {\mathbb}R^n} p(x)>1$ and $ q^{-}>1$, the following holds: \[var-exp\] Let $h \in L^{p^+}({\mathbb}R^n) \cap L^{p^-}({\mathbb}R^n)$, $g \in L^{q^+}({\mathbb}R^n) \cap L^{q^-}({\mathbb}R^n)$ and ${\lambda}:{{\mathbb}R^n} \times {\mathbb}R^n \to {\mathbb}R$ be a continuous function such that $0\leq {\lambda}^-\leq {\lambda}^+<n$ and $$\frac{1}{p(x)}+ \frac{{\lambda}(x,y)}{n}+ \frac{1}{q(y)}=2,\; \forall x,y\in {\mathbb}R^n.$$ Then there exists a constant $C$ independent of $h$ and $g$ such that $$\left|\int_{{\mathbb}R^n}\int_{{\mathbb}R^n} \frac{h(x)g(y)}{|x-y|^{{\lambda}(x,y)}}~dxdy\right| \leq C(\|h\|_{p^+}\|g\|_{q^+}+ \|h\|_{p^-}\|g\|_{q^-}).$$ In the spirit of Theorem \[var-exp\], the authors in [@p(x)-choq] proved the existence of a solution $u \in W^{1,p(x)}({\mathbb}R^n)$ to the following quasilinear Choquard equation using variational methods, under subcritical growth conditions on $f(x,u)$: $$\label{BS-12} -\Delta_{p(x)}u + V(x)|u|^{p(x)-2}u=\left(\int_{{\mathbb}R^n}\frac{F(x,u(x))}{|x-y|^{{\lambda}(x,y)}}~dx \right)f(y,u(y))\; \text{in}\; {\mathbb}R^n,$$ where $\Delta_{p(x)}$ denotes the $p(x)$-Laplacian defined as $-{\Delta}_{p(x)} u:=-div(|\nabla u|^{p(x)-2}\nabla u)$, $V$, $p$, $f$ are real valued continuous functions and $F$ denotes the primitive of $f$ with respect to the second variable.

Choquard equations involving the fractional Laplacian
=====================================================

In this section, we summarize our contributions related to existence and multiplicity results for different Choquard equations, in separate subsections. We employ variational methods and use some asymptotic estimates to achieve our goal. While dealing with the critical exponent in the sense of the Hardy-Littlewood-Sobolev inequality, we always consider the upper critical exponent. We denote by $\|\cdot\|_r$ the $L^r({\Omega})$ norm.
Brezis-Nirenberg type existence results
---------------------------------------

The fractional Laplacian operator $(-{\Delta})^s$ on the Schwartz class of functions is defined as $$(-{\Delta})^s u(x) = \mathrm{P.V.}\int_{{\mathbb}R^n} \frac{u(x)-u(y)}{\vert x-y\vert^{n+2s}}~{d}y$$ ([up to a normalizing constant]{}), where $\mathrm{P.V.}$ denotes the Cauchy principal value, $s \in (0,1)$ and $n>2s$. The operator $(-{\Delta})^s$ is the infinitesimal generator of a Lévy stable diffusion process. Equations involving this operator arise in the modelling of anomalous diffusion in plasma, population dynamics, geophysical fluid dynamics, flame propagation, chemical reactions in liquids and American options in finance. Motivated by , in [@TS-1] we considered the following doubly nonlocal equation involving the fractional Laplacian with noncompact nonlinearity $$\label{BS-10} (-{\Delta})^su = \left( \int_{{\Omega}}\frac{|u|^{2^*_{\mu,s}}}{|x-y|^{\mu}}\mathrm{d}y \right)|u|^{2^*_{\mu,s}-2}u +{\lambda}u \; \text{ in } {\Omega}, \; \; u =0 \; \text{ in } \mathbb R^n\setminus {\Omega},$$ where ${\Omega}$ is a bounded domain in $\mathbb R^n$ with Lipschitz boundary, ${\lambda}$ is a real parameter, $s\in (0,1)$, $2^*_{\mu,s}= \displaystyle\frac{2n-\mu}{n-2s}$, [$0<\mu<n$]{} and $n>2s$. Here, $2^*_{\mu,s}$ appears as the upper critical exponent in the sense of the Hardy-Littlewood-Sobolev inequality when the function is taken in the fractional Sobolev space $H^s({\mathbb}R^n):= \{u \in L^2({\mathbb}R^n): \|(-{\Delta})^{\frac{s}{2}}u\|_2 < \infty\}$, which is continuously embedded in $L^{2^*_s}({\mathbb}R^n)$ where $2^*_s= \displaystyle\frac{2n}{n-2s}$. For more details regarding fractional Sobolev spaces and their embeddings, we refer to [@hitch].
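To see why $2^*_{\mu,s}$ plays the role of upper critical exponent here, apply the Hardy-Littlewood-Sobolev inequality with both functions equal to $|u|^q$: $$\int_{{\mathbb}R^n}\int_{{\mathbb}R^n}\frac{|u(x)|^{q}|u(y)|^{q}}{|x-y|^{\mu}}~dx dy \leq C(n,\mu)\,\|u\|_{L^{\frac{2nq}{2n-\mu}}({\mathbb}R^n)}^{2q}.$$ The right-hand side is controlled by the $H^s({\mathbb}R^n)$ norm through the embedding $H^s({\mathbb}R^n)\hookrightarrow L^{\frac{2nq}{2n-\mu}}({\mathbb}R^n)$ precisely when $2 \leq \frac{2nq}{2n-\mu} \leq 2^*_s$, that is, when $\frac{2n-\mu}{n} \leq q \leq \frac{2n-\mu}{n-2s} = 2^*_{\mu,s}$.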
Following are the main results that we proved: \[thrm1\] [Let $n \geq 4s$ for $s \in (0,1)$; then has a positive weak solution for every ${\lambda}>0$ such that ${\lambda}$ is not an eigenvalue of $(-{\Delta})^s$ with homogeneous Dirichlet boundary condition in ${\mathbb}R^n \setminus {\Omega}$.]{} \[newthrm\] Let $s \in (0,1)$ and $ 2s<n<4s $; then there exists $\bar{{\lambda}}>0$ such that for any ${\lambda}> \bar {\lambda}$ different from the eigenvalues of $(-{\Delta})^s$ with homogeneous Dirichlet boundary condition in ${\mathbb}R^n \setminus {\Omega}$, has a nontrivial weak solution. \[thrm3\] Let ${\lambda}<0$ and let ${\Omega}\not\equiv {\mathbb}R^n$ be a strictly star shaped bounded domain (with respect to the origin) with $C^{1,1}$ boundary; then cannot have a nonnegative nontrivial solution. Consider the space $X$ defined as $$X= \left\{u|\;u:{\mathbb}R^n \to {\mathbb}R \;\text{is measurable},\; u|_{{\Omega}} \in L^2({\Omega})\; \text{and}\; \frac{(u(x)- u(y))}{ |x-y|^{\frac{n}{2}+s}}\in L^2(Q)\right\},$$ where $Q={\mathbb}R^{2n}\setminus({\mathcal}C{\Omega}\times {\mathcal}C{\Omega})$ and ${\mathcal}C{\Omega}:= {\mathbb}R^n\setminus{\Omega}$, endowed with the norm $$\|u\|_X = \|u\|_{L^2({\Omega})} + \left[u\right]_X,$$ where $$\left[u\right]_X= \left( \int_{Q}\frac{|u(x)-u(y)|^{2}}{|x-y|^{n+2s}}\,\mathrm{d}x\mathrm{d}y\right)^{\frac12}.$$ Then we define $ X_0 = \{u\in X : u = 0 \;\text{a.e. in}\; {\mathbb}R^n\setminus {\Omega}\}$ and we have the Poincaré type inequality: there exists a constant $C>0$ such that $\|u\|_{L^{2}({\Omega})} \le C [u]_X$, for all $u\in X_0$. Hence, $\|u\|=[u]_X$ is a norm on ${X_0}$. Moreover, $X_0$ is a Hilbert space and $C_c^{\infty}({\Omega})$ is dense in $X_0$. For details on these spaces and the variational setup we refer to [@bn-serv].
We say that $u \in X_0$ is a weak solution of if $$\begin{split} \int_{Q}\frac{(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{n+2s}}~dx dy =& \int_{{\Omega}}\int_{{\Omega}}\frac{|u(x)|^{2^*_{\mu,s}}|u(y)|^{2^*_{\mu,s}-2}u(y)\varphi(y)}{|x-y|^{\mu}}~dx dy \\ &+ {\lambda}\int_{{\Omega}}u\varphi ~dx,\; \text{for every}\; \varphi \in {X_0}. \end{split}$$ The corresponding energy functional associated to the problem is given by $$I_{{\lambda}}(u)= I(u) := \frac{\|u\|^2}{2} - \frac{1}{2\cdot 2^*_{\mu,s}}\int_{{\Omega}}\int_{{\Omega}}\frac{|u(x)|^{2^*_{\mu,s}}|u(y)|^{2^*_{\mu,s}}}{|x-y|^{\mu}}~ dx dy -\frac{{\lambda}}{2}\int_{{\Omega}}|u|^2 dx.$$ Using the Hardy-Littlewood-Sobolev inequality, we can show that $I \in C^1(X_0,{\mathbb}R)$ and that the critical points of $I$ correspond to weak solutions of . We define $$S^H_s := \displaystyle \inf\limits_{H^s({\mathbb}R^n)\setminus \{0\}} \frac{\displaystyle\int_{{\mathbb}R^n}\int_{{\mathbb}R^n} \frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}{\,dxdy}}{\displaystyle\left(\int_{{\mathbb}R^n} \int_{{\mathbb}R^n}\frac{|u(x)|^{2^*_{\mu,s}}|u(y)|^{2^*_{\mu,s}}}{|x-y|^{\mu}} dx dy\right)^{\frac{1}{2^*_{\mu,s}}}}$$ as the best constant, which is achieved if and only if $u$ is of the form $$C\left( \frac{t}{t^2+|x-x_0|^2}\right)^{\frac{n-2s}{2}}, \; \; \text{for all}\; x \in {\mathbb}R^n,$$ for some $x_0 \in {\mathbb}R^n$, $C>0$ and $t>0$.
Moreover, $ S^H_s \, C(n,\mu)^{\frac{1}{2^*_{\mu,s}}}= S_s,$ where $S_s$ is the best constant of the Sobolev embedding of $H^s(\mathbb R^n)$ into $L^{2^*_s}(\mathbb R^n).$ Using suitable translations and dilations of the minimizing sequence, we proved: Let $$S^H_s({\Omega}) := \inf\limits_{X_0\setminus\{0\}} \frac{\displaystyle\int_{Q} \frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}{\,\mathrm{d}x\mathrm{d}y}}{\displaystyle\left(\int_{{\Omega}} \int_{{\Omega}}\frac{|u(x)|^{2^*_{\mu,s}}|u(y)|^{2^*_{\mu,s}}}{|x-y|^{\mu}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{2^*_{\mu,s}}}}.$$ Then $S^H_s({\Omega})= S^H_s$, and $S^H_s({\Omega})$ is never achieved except when ${\Omega}= {\mathbb}R^n$. Since has a lack of compactness due to the presence of the critical exponent, we needed a Brezis-Lieb type lemma, which can be proved in the spirit of as follows: $$\begin{split} \int_{{\mathbb}R^n} \int_{{\mathbb}R^n} \frac{|u_k(x)|^{2^*_{\mu,s}}|u_k(y)|^{2^*_{\mu,s}}}{|x-y|^{\mu}}~\mathrm{d}x\mathrm{d}y &- \int_{{\mathbb}R^n} \int_{{\mathbb}R^n} \frac{|(u_k-u)(x)|^{2^*_{\mu,s}}|(u_k-u)(y)|^{2^*_{\mu,s}}}{|x-y|^{\mu}}~\mathrm{d}x\mathrm{d}y\\ & \rightarrow \int_{{\mathbb}R^n} \int_{{\mathbb}R^n} \frac{|u(x)|^{2^*_{\mu,s}}|u(y)|^{2^*_{\mu,s}}}{|x-y|^{\mu}}~\mathrm{d}x \mathrm{d}y \; \text{ as}\; k \rightarrow \infty \end{split}$$ where $\{u_k\}$ is a bounded sequence in $L^{2^*_s}({\mathbb}R^n)$ such that $u_k \rightarrow u$ almost everywhere in ${\mathbb}R^n$ as $k \rightarrow \infty$. Next, we prove the following properties concerning the compactness of Palais-Smale sequences. Let $\{u_k\}$ be a Palais-Smale sequence of $I$ at level $c$. Then 1. $\{u_k\}$ must be bounded in $X_0$ and its weak limit is a weak solution of , 2.
$\{u_k\}$ has a convergent subsequence if $$c < \frac{n+2s-\mu}{2(2n-\mu)}(S^H_s)^{\frac{2n-\mu}{n+2s-\mu}}.$$ Let us consider the sequence of eigenvalues of the operator $(-{\Delta})^s$ with homogeneous Dirichlet boundary condition in ${\mathbb}R^n \setminus {\Omega}$, denoted by $$0 < {\lambda}_1 < {\lambda}_2 \leq {\lambda}_3 \leq \ldots \leq {\lambda}_{j} \leq {\lambda}_{j+1}\leq \ldots,$$ and let $\{e_j\}_{j \in {\mathbb}N} \subset L^{\infty}({\Omega})$ be the corresponding sequence of eigenfunctions. We take the $e_j$’s to form an orthonormal basis of $L^2({\Omega})$ and an orthogonal basis of $X_0$. We then deal with the cases ${\lambda}\in (0,{\lambda}_{1})$ and ${\lambda}\in ({\lambda}_{r}, {\lambda}_{r+1})$ separately. We assume $0 \in {\Omega}$ and fix $\delta>0$ such that $B_{\delta}\subset {\Omega}\subset B_{{\hat k}\delta}$, [for some $\hat k>1$]{}. Let $\eta \in C^{\infty}({\mathbb}R^n)$ be such that $0\leq \eta \leq 1$ in ${\mathbb}R^n$, $\eta \equiv 1$ in $B_{\delta}$ and $\eta \equiv 0$ in ${\mathbb}R^n \setminus {\Omega}$. For $\epsilon >0$, we define the function $u_\epsilon$ as follows: $$u_\epsilon(x) := \eta(x)U_{\epsilon}(x),$$ for $x \in {\mathbb}R^n$, where $ \displaystyle U_{\epsilon}(x) = \epsilon^{-\frac{(n-2s)}{2}}\; {\left(\frac{u^*\left(\frac{x}{\epsilon}\right)}{\|u^*\|_{{2^*_s}} }\right)}$ and $u^*(x)= {\alpha\left(\beta^2 + \left|\frac{x}{ S_s^{\frac{1}{2s}} }\right|^2\right)^{-\frac{n-2s}{2}}}$ with $\alpha \in {\mathbb}R \setminus \{0\}$, $ \beta >0$.
We obtained the following important asymptotic estimates. \[estimates1\] The following estimates hold: $$\label{esti-new} \int_{{\mathbb}R^{2n}}\frac{|u_\epsilon(x)- u_\epsilon(y)|^2}{|x-y|^{n+2s}}~\mathrm{d}x\mathrm{d}y \leq \left((C(n,\mu))^{\frac{n-2s}{2n-\mu}}S^H_s\right)^{\frac{n}{2s}}+ O(\epsilon^{n-2s}),$$ $$\left(\int_{{\Omega}}\int_{{\Omega}}\frac{|u_\epsilon(x)|^{2^*_{\mu,s}}|u_\epsilon(y)|^{2^*_{\mu,s}}}{|x-y|^{\mu}}~\mathrm{d}x\mathrm{d}y \right)^{\frac{n-2s}{2n-\mu}}\leq (C(n,\mu))^{\frac{n(n-2s)}{2s(2n-\mu)}} (S^H_s)^{\frac{n-2s}{2s}}+ {O(\epsilon^{n})},$$ and $$\left(\int_{{\Omega}}\int_{{\Omega}}\frac{|u_\epsilon(x)|^{2^*_{\mu,s}}|u_\epsilon(y)|^{2^*_{\mu,s}}}{|x-y|^{\mu}}~\mathrm{d}x\mathrm{d}y \right)^{\frac{n-2s}{2n-\mu}}\geq {\left((C(n,\mu))^{\frac{n}{2s}} (S^H_s)^{\frac{2n-\mu}{2s}}- O\left(\epsilon^{n}\right)\right)^{\frac{n-2s}{2n-\mu}}.}$$ When $n \geq 4s$, we proved that the energy functional $I_{\lambda}$ satisfies the mountain pass geometry if ${\lambda}\in (0,{\lambda}_1)$ and the linking geometry if ${\lambda}\in ({\lambda}_r,{\lambda}_{r+1})$. In both cases, Proposition \[estimates1\] helped us to show that for small enough ${\epsilon}>0$ $$\label{elem1} \frac{\|u_{\epsilon}\|^2 - {\lambda}\int_{{\Omega}}{|u_{\epsilon}|^2}\mathrm{d}x}{\left(\int_{{\Omega}}\int_{{\Omega}}\frac{|u_{\epsilon}(x)|^{2^*_{\mu,s}}{|u_{\epsilon}(y)|^{2^*_{\mu,s}}}}{|x-y|^{\mu}}~\mathrm{d}x\mathrm{d}y \right)^{\frac{n-2s}{2n-\mu}}} < S^H_s.$$ Then the proof of Theorem \[thrm1\] follows by applying the Mountain Pass Lemma and the Linking Theorem. On the other hand, when $2s<n<4s$, could be proved only when ${\lambda}> \bar {\lambda}$ for some suitable $\bar {\lambda}>0$ and ${\epsilon}>0$ sufficiently small. Hence, again applying the Mountain Pass Lemma and the Linking Theorem in this case, we prove Theorem \[newthrm\].
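For the reader's convenience, we indicate where the compactness threshold in the Palais-Smale condition above comes from; this is the standard computation of the mountain pass level of the limiting functional. Assuming, purely formally, that $u$ attains $S^H_s$ and is normalized so that the nonlocal term equals $1$ (hence $\|u\|^2 = S^H_s$), one has $$\sup_{t>0}\left(\frac{t^2}{2}\,S^H_s - \frac{t^{2\cdot 2^*_{\mu,s}}}{2\cdot 2^*_{\mu,s}}\right) = \frac{2^*_{\mu,s}-1}{2\cdot 2^*_{\mu,s}}\left(S^H_s\right)^{\frac{2^*_{\mu,s}}{2^*_{\mu,s}-1}} = \frac{n+2s-\mu}{2(2n-\mu)}\left(S^H_s\right)^{\frac{2n-\mu}{n+2s-\mu}},$$ using $2^*_{\mu,s} = \frac{2n-\mu}{n-2s}$ in the last equality; this is exactly the level below which Palais-Smale sequences are compact.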
To prove Theorem \[thrm3\], we first prove that if ${\lambda}<0$ then any solution $u \in X_0$ of belongs to $L^\infty({\Omega})$, which implies that when ${\Omega}$ is a $C^{1,1}$ domain then $u/\delta^s \in C^\alpha(\bar {\Omega})$ for some $\alpha >0$ (depending on ${\Omega}$ and $s$) satisfying $\alpha < \min\{s,1-s\}$, where $\delta(x) = \text{dist}(x, \partial {\Omega})$ for $x\in {\Omega}$. Then, using $(x\cdot\nabla u)$ as a test function in , we proved the following Pohozaev type identity: \[Poho\] If ${\lambda}<0$, ${\Omega}$ is a bounded $C^{1,1}$ domain and $u \in L^{\infty}({\Omega})$ solves , then $$\begin{split} \frac{2s-n}{2}&\int_{{\Omega}}u(-{\Delta})^su~\mathrm{d}x - \frac{\Gamma(1+s)^2}{2}\int_{\partial {\Omega}}\left(\frac{u}{\delta^s}\right)^2(x\cdot\nu)\mathrm{d}\sigma\\ &={-}\left(\frac{2n-\mu}{2\cdot 2^*_{\mu,s}}\int_{{\Omega}}\int_{{\Omega}}\frac{|u(x)|^{2^*_{\mu,s}}|u(y)|^{2^*_{\mu,s}}}{|x-y|^{\mu}}~\mathrm{d}x\mathrm{d}y+ {\frac{{\lambda}n}{2}} \int_{{\Omega}}|u|^2\mathrm{d}x\right), \end{split}$$ where $\nu$ denotes the unit outward normal to $\partial {\Omega}$ at $x$ and $\Gamma$ is the Gamma function. Using Proposition \[Poho\], Theorem \[thrm3\] follows easily.

Magnetic Choquard equations
---------------------------

Very recently, Lü [@Lu] studied the problem $$\label{intro4} (-i \nabla +A(x))^2u + (g_0+\mu g)(x) u = (|x|^{-\alpha}* |u|^p)|u|^{p-2}u, \; u \in H^1({\mathbb}R^n, {\mathbb}C),$$ where $n\geq 3$, $\alpha \in (0,n)$, $p \in \left( \frac{2n-\alpha}{n}, \frac{2n-\alpha}{n-2}\right)$, $A = (A_1, A_2, \ldots, A_n): {\mathbb}R^n \rightarrow {{\mathbb}R^n}$ is a vector (or magnetic) potential such that [$A \in L^n_{\text{loc}}({\mathbb}R^n, {\mathbb}R^n)$]{} and [$A$ is continuous at ${0}$]{}, $g_0$ and $g$ are real valued functions on ${\mathbb}R^n$ satisfying some [necessary]{} conditions, and $\mu >0$.
He proved the existence of a ground state solution [when]{} $\mu \geq \mu^*$, for some $\mu^*>0$, and the concentration behavior of solutions as $\mu \rightarrow \infty$. Salazar [in]{} [@szr] showed the existence of vortex type solutions for the stationary nonlinear magnetic Choquard equation $$(-i \nabla +A(x))^2u + W(x)u = (|x|^{-\alpha}* |u|^p)|u|^{p-2}u \; \text{in} \; {\mathbb}R^n,$$ where $p \in \left[2, 2^*_\alpha \right)$ and $W: {\mathbb}R^n \to {\mathbb}R$ is a bounded electric potential. Under some assumptions on the decay of $A$ and $W$ at infinity, Cingolani, Secchi and Squassina in [@css] showed the existence of a family of solutions. Schrödinger equations with magnetic field and Choquard type nonlinearity have also been studied in [@MSS; @MSS1]. But the critical case in was still open, which motivated us to study the problem $(P_{{\lambda},\mu})$ in [@TS-2]: $$(P_{{\lambda},\mu}) \left\{ \begin{array}{ll} (-i \nabla+A(x))^2u + \mu g(x)u = {\lambda}u + (|x|^{-\alpha} * |u|^{2^*_\alpha})|u|^{2^*_\alpha-2}u & \mbox{in } {\mathbb}R^n \\ u \in H^1({\mathbb}R^n, {\mathbb}C) \end{array} \right.$$ where $n \geq 4, 2^{*}_{\alpha}= \frac{2n-\alpha}{n-2}, \alpha \in (0,n), \mu>0, {\lambda}>0, A = (A_1, A_2, \ldots, A_n): \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a vector (or magnetic) potential such that $A\in L^{n}_{loc}(\mathbb{R}^n, \mathbb{R}^n)$ and $A$ is continuous at ${0}$, and $g(x)$ satisfies the following assumptions: 1. $g \in C({\mathbb}R^n,{\mathbb}R)$, $g \geq 0$ and ${\Omega}:= \text{interior of}\;g^{-1}(0)$ is a nonempty bounded set with smooth boundary and $\overline{{\Omega}}= g^{-1}(0)$. 2. There exists $M>0$ such that ${{\mathcal}L}\{x \in {\mathbb}R^n:\; g(x)\leq M\} < {+\infty}$, where ${{\mathcal}L}$ denotes the Lebesgue measure in ${\mathbb}R^n$.
Let us define $\nabla_A := (-i\nabla +A)$ and $$H^1_A({\mathbb}R^n, {\mathbb}C)= \left\{u \in L^2({\mathbb}R^n,{\mathbb}C) \;: \; \nabla_A u \in L^2({\mathbb}R^n, {\mathbb}C^n)\right\}.$$ Then $H^1_A({\mathbb}R^n, {\mathbb}C)$ is a Hilbert space with the inner product $$\langle u,v\rangle_A = \text{Re} \left(\int_{{\mathbb}R^n} (\nabla_A u \overline{\nabla_A v} + u \overline v)~\mathrm{d}x \right),$$ where $\text{Re}(w)$ denotes the real part of $w \in {\mathbb}C$ and $\bar w$ denotes its complex conjugate. The associated norm $\|\cdot\|_A$ on the space $H^1_A({\mathbb}R^n, {\mathbb}C)$ is given by $$\|u\|_A= \left(\int_{{\mathbb}R^n}(|\nabla_A u|^2+|u|^2)~\mathrm{d}x\right)^{\frac{1}{2}}.$$ We denote $H^1_A({\mathbb}R^n, {\mathbb}C)$ simply by $H^1_A({\mathbb}R^n)$. Let $H^{0,1}_A({\Omega}, {\mathbb}C)$ (denoted by $H^{0,1}_A({\Omega})$ for simplicity) be the Hilbert space defined by the closure of $C_c^{\infty}({\Omega}, {\mathbb}C)$ under the scalar product [$\langle u,v \rangle_A= \textstyle\text{Re}\left(\int_{{\Omega}}(\nabla_A u \overline{\nabla_A v}+u \overline v)~\mathrm{d}x\right)$]{}, where ${\Omega}= \text{interior of } g^{-1}(0)$. Thus [the norm on $H^{0,1}_A({\Omega})$ is given by]{} $$\|u\|_{H^{0,1}_A({\Omega})}= \left(\int_{\Omega}(|\nabla_A u|^2+|u|^2)~\mathrm{d}x\right)^{\frac{1}{2}}.$$ Let $E = \left\{u \in H^1_A({\mathbb}R^n): \int_{{\mathbb}R^n}g(x)|u|^2~\mathrm{d}x < +\infty\right\}$ be the Hilbert space equipped with the inner product $$\langle u,v\rangle = \text{Re} \left(\int_{{\mathbb}R^n}\left(\nabla_A u \overline{\nabla_A v}~\mathrm{d}x + g(x)u\bar v \right)~\mathrm{d}x\right)$$ and the associated norm $\|u\|_E^2= \int_{{\mathbb}R^n}\left(|\nabla_A u|^2+ g(x)|u|^2\right)~\mathrm{d}x. $ Then $\|\cdot\|_E$ is clearly equivalent to each of the norms $\|u\|_\mu^2= \int_{{\mathbb}R^n}\left(|\nabla_A u|^2+ \mu g(x)|u|^2\right)~\mathrm{d}x$ for $\mu >0$.
We have the following well known *diamagnetic inequality* (for a detailed proof, see [@LL], Theorem $7.21$). \[dia\_eq\] If $u \in H^1_A({\mathbb}R^n)$, then $|u| \in H^1({\mathbb}R^n,{\mathbb}R)$ and $$|\nabla |u|(x)| \leq |\nabla u(x)+ i A(x)u(x)| \; \text{for a.e.}\; x \in {\mathbb}R^n.$$ So for each $q \in [2,2^*]$, there exists a constant $b_q>0$ (independent of $\mu$) such that $$\label{mg-eq1} |u|_q \leq b_q\|u\|_\mu, \; \text{for any}\; u \in E,$$ where $|\cdot|_q$ denotes the norm in $L^q({\mathbb}R^n,{\mathbb}C)$ and $2^*= \textstyle\frac{2n}{n-2}$ is the Sobolev critical exponent. Also $H^1_A({\Omega}) \hookrightarrow L^q({\Omega}, {\mathbb}C)$ is continuous for each $1\leq q \leq 2^*$ and compact when $1\leq q < 2^*$. We denote by ${\lambda}_1({\Omega})>0$ the best constant of the embedding of $H^{0,1}_A({\Omega})$ into $L^2({\Omega}, {\mathbb}C)$, given by $${\lambda}_1({\Omega}) = \inf\limits_{u \in H^{0,1}_A({\Omega})}\left\{\int_{{\Omega}}|\nabla_A u|^2 ~\mathrm{d}x : \; {\int_{{\Omega}}|u|^2~\mathrm{d}x}=1\right\},$$ which is also the first eigenvalue of $-\Delta_A := (-i\nabla +A)^2$ on ${\Omega}$ with boundary condition $u=0$. In [@TS-2], we considered the problem $$(P_{\lambda})\;\; (-i\nabla +A(x))^2u = {\lambda}u + (|x|^{-\alpha}*|u|^{2^*_\alpha})|u|^{2^*_\alpha-2}u \; \mbox{in } \;{\Omega}, \;\; u=0 \; \mbox{on }\; \partial {\Omega}$$ and proved the following main results: \[MT1\] For every ${\lambda}\in (0, {\lambda}_1({\Omega}))$ there exists a $\mu({\lambda})>0$ such that $(P_{{\lambda},\mu})$ has a least energy solution $u_\mu$ for each $\mu\geq \mu({\lambda})$. \[MT2\] Let $\{u_m\}$ be a sequence of non-trivial solutions of $(P_{{\lambda},\mu_m})$ with $\mu_m \rightarrow \infty$ and $I_{{\lambda},\mu_m}(u_m) \rightarrow c< \frac{n+2-\alpha}{2(2n-\alpha)}S_A^{\frac{2n-\alpha}{n+2-\alpha}}$ as $m \rightarrow \infty$. Then $u_{m}$ concentrates at a solution of $(P_{\lambda})$.
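A sketch of where the $\mu$-independent constant $b_q$ comes from: combining the diamagnetic inequality with the classical scalar Sobolev embedding $H^1({\mathbb}R^n)\hookrightarrow L^q({\mathbb}R^n)$ gives

```latex
|u|_q \;=\; \bigl\| |u| \bigr\|_{L^q(\mathbb{R}^n)}
 \;\le\; C\,\bigl\| |u| \bigr\|_{H^1(\mathbb{R}^n)}
 \;=\; C\Bigl(\int_{\mathbb{R}^n}\bigl(|\nabla |u||^{2}+|u|^{2}\bigr)\,\mathrm{d}x\Bigr)^{1/2}
 \;\le\; C\Bigl(\int_{\mathbb{R}^n}\bigl(|\nabla_A u|^{2}+|u|^{2}\bigr)\,\mathrm{d}x\Bigr)^{1/2}.
```

The remaining step, controlling $\int_{{\mathbb}R^n}|u|^2$ by $\|u\|_\mu^2$ uniformly in $\mu\geq 1$, is where the measure condition on the sublevel set $\{g\leq M\}$ enters; this outline follows the standard argument and is not the full proof in [@TS-2].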
We now give some definitions. We say that a function $u \in {E}$ is a weak solution of $(P_{{\lambda},\mu})$ if $$\text{Re}\left( \int_{{\mathbb}R^n} \nabla_A u \overline{\nabla_A v}~\mathrm{d}x + \int_{{\mathbb}R^n}(\mu g(x)-{\lambda})u\overline{v}~\mathrm{d}x- \int_{{\mathbb}R^n} (|x|^{-\alpha}* |u|^{2^*_\alpha})|u|^{2^*_\alpha-2}u \overline{v}~\mathrm{d}x \right)=0$$ for all $v \in {E}$. A solution $u$ of $(P_{{\lambda},\mu})$ is said to be a least energy solution if the energy functional $$I_{{\lambda},\mu}(u)= \int_{{\mathbb}R^n}\left( \frac12\left(|\nabla_A u|^2+ (\mu g(x)-{\lambda})|u|^2\right) - \frac{1}{22^*_\alpha}(|x|^{-\alpha}* |u|^{2^*_\alpha})|u|^{2^*_\alpha}\right) ~\mathrm{d}x$$ achieves its minimum at $u$ over all the nontrivial solutions of $(P_{{\lambda},\mu})$. A sequence of solutions $\{u_k\}$ of $(P_{{\lambda},\mu_k})$ is said to concentrate at a solution $u$ of $(P_{\lambda})$ if a subsequence converges strongly to $u$ in $H^1_A({\mathbb}R^n)$ as $\mu_k \rightarrow \infty$. We first proved the following Lemma. \[comp\_lem1\] Suppose $\mu_m \geq 1$ and $u_m \in E$ are such that $\mu_m \rightarrow \infty$ as $m \rightarrow \infty$ and there exists a $K>0$ such that $\|u_m\|_{\mu_m} < K$ for all $m \in {\mathbb}N$. Then there exists a $u \in H^{0,1}_A({\Omega})$ such that (up to a subsequence) $u_m \rightharpoonup u$ weakly in $E$ and $u_m \rightarrow u$ strongly in $L^2({\mathbb}R^n)$ as $m \rightarrow \infty$. Then we define an operator $T_\mu := -\Delta_A + \mu g(x)$ on $E$ given by $$\big(T_\mu(u),v\big)= \text{Re}\left(\int_{{\mathbb}R^n}(\nabla_A u \overline{\nabla_A v}+ \mu g(x)u\overline v)~\mathrm{d}x\right).$$ Clearly $T_\mu$ is a self-adjoint operator and if $a_\mu := \inf \sigma(T_\mu)$, i.e.
the infimum of the spectrum of $T_\mu$, then $a_\mu$ can be characterized as $$0 \leq a_\mu = \inf \{\big(T_\mu(u),u\big): \; u \in E,\; \|u\|_{L^2}=1\}= \inf \{\|u\|_\mu^2:\; u\in E,\;\|u\|_{L^2}=1\}.$$ Then, considering a minimizing sequence for $a_\mu$, we were able to prove that for each ${\lambda}\in (0, {\lambda}_1({\Omega}))$, there exists a $\mu({\lambda})>0$ such that $a_\mu \geq ({\lambda}+{\lambda}_1({\Omega}))/2$ whenever $\mu \geq \mu({\lambda})$. As a consequence, $$\big((T_\mu-{\lambda})u,u) \geq \beta_{\lambda}\|u\|_\mu^2$$ for all $u \in E$, $\mu \geq \mu({\lambda})$, where $\beta_{\lambda}:= ({\lambda}_1({\Omega})-{\lambda})/({\lambda}_1({\Omega})+{\lambda})$. We fix ${\lambda}\in (0, {\lambda}_1({\Omega}))$ and $\mu \geq \mu({\lambda})$. Using standard techniques, we established the following concerning any Palais-Smale sequence $\{u_m\}$ of $I_{{\lambda},\mu}$ at level $c$: 1. $\{u_m\}$ must be bounded in $E$ and its weak limit is a solution of $(P_{{\lambda},\mu})$, 2. $\{u_m\}$ has a convergent subsequence when $c$ satisfies $$c \in \left(-\infty, \frac{n+2-\alpha}{2(2n-\alpha)} S_A^{\frac{2n-\alpha}{n+2-\alpha}}\right),$$ where $S_A$ is defined as follows: $$S_A = {\inf_{u \in H^1_A({\mathbb}R^n) \setminus \{0\}} \frac{\displaystyle \int_{{\mathbb}R^n}|\nabla_Au|^2~\mathrm{d}x}{\displaystyle \int_{{\mathbb}R^n}(|x|^{-\alpha}* |u|^{2^*_\alpha})|u|^{2^*_\alpha}~\mathrm{d}x}}.$$ Using asymptotic estimates for the family $U_\epsilon (x)= (n(n-2))^{\frac{n-2}{4}}\left(\frac{\epsilon}{\epsilon^2+|x|^2}\right)^{\frac{n-2}{2}}$, we showed the following. \[S\_Aattain\] If $g\geq 0$ and $A \in L^n_{\text{loc}}({\mathbb}R^n, {\mathbb}R^n)$, then the infimum $S_A$ is attained if and only if $\text{curl }A \equiv 0$.
Our next step was to introduce the Nehari manifold $${\mathcal}N_{{\lambda},\mu}= \left\{u \in E\setminus \{0\}:\; \langle I^\prime_{{\lambda},\mu}(u),u\rangle=0\right\}$$ and consider the minimization problem $k_{{\lambda},\mu}:= \inf_{u \in {\mathcal}N_{{\lambda},\mu}} I_{{\lambda},\mu}(u)$. Using the family $\{U_{\epsilon}\}$, we showed that $$k_{{\lambda},\mu} < \frac{n+2-\alpha}{2(2n-\alpha)} S_A^{\frac{2n-\alpha}{n+2-\alpha}}.$$ The proof of Theorem \[MT1\] then followed by applying the Ekeland variational principle on ${\mathcal}N_{{\lambda},\mu}$, and the proof of Theorem \[MT2\] followed from Lemma \[comp\_lem1\] and a Brezis-Lieb type lemma for the Riesz potentials. These results can be generalized to problems involving fractional magnetic operators: $$(P_{{\lambda},\mu}^s)\left\{ \begin{array}{rlll} & (-{\Delta})^s_A u + \mu g(x)u = {\lambda}u + (|x|^{-\alpha} * |u|^{2^*_{\alpha,s}})|u|^{2^*_{\alpha,s}-2}u \;\text{in} \; {\mathbb}R^n ,\\ & u \in H^s_A({\mathbb}R^n, {\mathbb}C) \end{array} \right.$$ where $n \geq 4s$, $s \in (0,1)$ and $\alpha\in (0,n)$. Here $2^*_{\alpha,s}=\textstyle \frac{2n-\alpha}{n-2s}$ is the critical exponent in the sense of the Hardy-Littlewood-Sobolev inequality. We assume the same conditions on $A$ and $g$ as before. For $u \in C_c^\infty({\Omega})$, the fractional magnetic operator $(-{\Delta})^s_A$, up to a normalization constant, is defined by $$(-{\Delta})^s_A u (x) = 2\lim_{{\epsilon}\to 0^+} \int_{{\mathbb}R^n \setminus B_{{\epsilon}}(x)} \frac{u(x)-e^{i(x-y)\cdot A\left(\frac{x+y}{2}\right)}u(y) }{|x-y|^{n+2s}}\mathrm{d}y$$ for all $x \in {\mathbb}R^n$. With the proper functional setting, we can prove the existence and concentration results for the problem $(P_{{\lambda},\mu}^s)$ by employing the same arguments as in the local magnetic operator case.
Singular problems involving Choquard nonlinearity
-------------------------------------------------
The paper by Crandall, Rabinowitz and Tartar [@crt] is the starting point for semilinear problems with singular nonlinearity. A lot of work has been done on existence and multiplicity results for singular problems; see [@haitao; @hirano2; @hirano1]. Using a splitting Nehari manifold technique, the authors in [@hirano1] studied the existence of multiple solutions of the problem: $$\label{hr} -{\Delta}u = {\lambda}u^{-q}+ u^p,\; u>0 \; \text{in}\; {\Omega}, \quad u = 0 \; \mbox{on}\; \partial{\Omega},$$ where ${\Omega}$ is a smooth bounded domain in ${\mathbb}R^n$, $n\geq 1$, $p=2^*-1$, ${\lambda}>0$ and $0<q<1$. In [@haitao], Haitao studied the equation for $n\geq 3$, $1<p\leq 2^{*}-1$ and showed the existence of two positive solutions for a maximal interval of the parameter ${\lambda}$ using monotone iterations and the mountain pass lemma. But the singular problem involving Choquard nonlinearity was completely open until we studied the following problem in [@TS-3]: $$(P_{{\lambda}}): \quad \quad -{\Delta}u = {\lambda}u^{-q} + \left( \int_{{\Omega}}\frac{|u(y)|^{2^*_{\mu}}}{|x-y|^{\mu}}\mathrm{d}y \right)|u|^{2^*_{\mu}-2}u, \; u>0 \; \text{in}\; {\Omega}, \; u = 0 \; \mbox{on}\; \partial{\Omega},$$ where ${\Omega}\subset {\mathbb}R^n$, $n>2$, is a bounded domain with smooth boundary $\partial {\Omega}$, ${\lambda}>0,\; 0 < q < 1, $ $ 0<\mu<n$ and $2^*_\mu=\frac{2n-\mu}{n-2}$. The main difficulty in treating $(P_{\lambda})$ is the presence of the singular nonlinearity along with the critical exponent in the sense of the Hardy-Littlewood-Sobolev inequality, which is nonlocal in nature. The energy functional is no longer differentiable due to the presence of the singular nonlinearity, so the usual minimax theorems are not applicable. Also, the critical exponent term being nonlocal adds to the difficulty of studying the Palais-Smale level around a nontrivial critical point.
We say that $u\in H^1_0({\Omega})$ is a positive weak solution of $(P_{\lambda})$ if $u>0$ in ${\Omega}$ and $$\label{BS-11} \int_{{\Omega}} (\nabla u \nabla \psi -{\lambda}u^{-q}\psi)~\mathrm{d}x - \int_{{\Omega}}\int_{{\Omega}}\frac{|u(x)|^{2^*_{\mu}}|u(y)|^{2^*_{\mu}-2}u(y)\psi(y)}{|x-y|^{\mu}}~\mathrm{d}x\mathrm{d}y = 0$$ for all $\psi \in C^{\infty}_c({\Omega})$. We define the functional associated to $(P_{\lambda})$ as $I : H^1_{0}({\Omega}) \rightarrow (-\infty, \infty]$ by $$I(u) = \frac12 \int_{{\Omega}}|\nabla u|^2~ \mathrm{d}x- \frac{{\lambda}}{1-q} \int_{\Omega}|u|^{1-q} \mathrm{d}x - \frac{1}{22^*_{\mu}}\int_{{\Omega}}\int_{{\Omega}}\frac{|u(x)|^{2^*_{\mu}}|u(y)|^{2^*_{\mu}}}{|x-y|^{\mu}}~\mathrm{d}x\mathrm{d}y,$$ for $u \in H^1_{0}({\Omega})$. For each $0 < q <1$, we set $ H_+ = \{ u \in H^1_0({\Omega}) : u \geq 0\}$ and $$H_{+,q} = \{ u \in H_+ : u \not\equiv 0,\; |u|^{1-q} \in L^1({\Omega})\} = H_+ \setminus \{0\} .$$ For each $u\in H_{+,q}$ we define the fiber map $\phi_u:{\mathbb}R^+ \rightarrow {\mathbb}R$ by $\phi_u(t)=I(tu)$. Then we proved the following: \[mainthrm1\] Assume $0<q < 1$ and let $\Lambda$ be the constant defined by $$\begin{split} \Lambda = & \sup \left\{{\lambda}>0:\text{ for each} \; u\in H_{+,q}\backslash\{0\}, ~\phi_u(t)~ \text{has two critical points in} ~(0, \infty)\right.\\ &\left.\text{and}\; \sup\left\{ \int_{{\Omega}}|\nabla u|^2~\mathrm{d}x\; : \; u \in H_{+,q}, \phi^{\prime}_u(1)=0,\;\phi^{\prime\prime}_u(1)>0 \right\} \leq (2^*_\mu S_{H,L}^{2^*_\mu})^{\frac{1}{2^*_\mu-1}} \right\}. \end{split}$$ Then ${\Lambda}>0$. Using variational methods on the Nehari manifold, we proved the following multiplicity result. \[mainthrm2\] For all ${\lambda}\in (0, \Lambda)$, $(P_{\lambda})$ has two positive weak solutions $u_{\lambda}$ and $v_{\lambda}$ in $C^\infty({\Omega})\cap L^{\infty}({\Omega})$.
We also have that if $u$ is a positive weak solution of $(P_{\lambda})$, then $u$ is a classical solution in the sense that $u \in C^\infty({\Omega}) \cap C(\bar {\Omega})$. We define $\delta : {\Omega}\rightarrow [0,\infty)$ by $\delta(x)=\inf\{|x-y|: y \in \partial {\Omega}\}$, for each $x \in {\Omega}$. \[mainthrm3\] Let $u$ be a positive weak solution of $(P_{\lambda})$; then there exist $K,\;L>0$ such that $L\delta \leq u \leq K\delta$ in ${\Omega}$. We define the Nehari manifold $${\mathcal}N_{{\lambda}} = \{ u \in H_{+,q} | \left\langle I^{\prime}(u),u\right\rangle = 0 \}$$ and show that $I$ is coercive and bounded below on ${\mathcal}N_{{\lambda}}$. It is easy to see that the points in ${\mathcal}N_{{\lambda}}$ correspond to critical points of $\phi_{u}$ at $t=1$. So we divided ${\mathcal}N_{{\lambda}}$ into three sets corresponding to local minima, local maxima and points of inflexion: $$\begin{aligned} {\mathcal}N_{{\lambda}}^{+} &= \{ t_0u \in {\mathcal}N_{{\lambda}} |\; t_0 > 0,~ \phi^{\prime}_u (t_0) = 0,~ \phi^{\prime \prime}_u(t_0) > 0\},\\ {\mathcal}N_{{\lambda}}^{-} = & \{ t_0u \in {\mathcal}N_{{\lambda}} |\; t_0 > 0, ~ \phi^{\prime}_u (t_0) = 0, ~\phi^{\prime \prime}_u(t_0) < 0\}\end{aligned}$$ and $ {\mathcal}N_{\lambda}^{0}= \{ u \in {\mathcal}N_{{\lambda}} | \phi^{\prime}_{u}(1)=0,\; \phi^{\prime \prime}_{u}(1)=0 \}$. We aimed at showing that the minimizers of $I$ over ${\mathcal}N_{\lambda}^{+}$ and ${\mathcal}N_{\lambda}^{-}$ form weak solutions of $(P_{\lambda})$. We briefly describe the key steps. Using the fibering map analysis, we proved that there exists ${\lambda}_*>0$ such that for each $u\in H_{+,q}\backslash\{0\}$ and all ${\lambda}\in (0,{\lambda}_*)$, there are unique $t_1$ and $t_2$ with $t_1<t_2$, $t_1 u\in {\mathcal}N_{{\lambda}}^{+}$ and $ t_2 u\in {\mathcal}N_{{\lambda}}^{-}$. This implied Theorem \[mainthrm1\]. Also ${\mathcal}N_{{\lambda}}^{0} = \{0\}$ for all $ {\lambda}\in (0, {\lambda}_*)$.
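The two critical points can be read off from the explicit form of the fiber map; writing it out directly from the definition of $I$ (a sketch of the standard fibering argument, not the full proof in [@TS-3]):

```latex
\phi_u(t) \;=\; I(tu)
  \;=\; \frac{t^{2}}{2}\int_{\Omega}|\nabla u|^{2}\,\mathrm{d}x
  \;-\; \frac{\lambda\, t^{1-q}}{1-q}\int_{\Omega}|u|^{1-q}\,\mathrm{d}x
  \;-\; \frac{t^{2\cdot 2^{*}_{\mu}}}{2\cdot 2^{*}_{\mu}}
        \int_{\Omega}\int_{\Omega}
        \frac{|u(x)|^{2^{*}_{\mu}}|u(y)|^{2^{*}_{\mu}}}{|x-y|^{\mu}}
        \,\mathrm{d}x\,\mathrm{d}y .
```

Since $0<1-q<2<2\cdot 2^*_\mu$, the singular term forces $\phi_u$ to decrease near $t=0$, the quadratic term can lift it, and the critical term drives it to $-\infty$; for ${\lambda}$ small this yields exactly one local minimum $t_1$ (so $t_1u\in {\mathcal}N_{\lambda}^{+}$) followed by one local maximum $t_2$ (so $t_2u\in {\mathcal}N_{\lambda}^{-}$).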
Then it is easy to see that $\sup \{ \|u\|: u \in {\mathcal}N_{{\lambda}}^{+}\} < \infty $ and $\inf \{ \|v\|: v \in {\mathcal}N_{{\lambda}}^{-} \} >0$. Suppose $u$ and $v$ are minimizers of $I$ on ${\mathcal}N_{{\lambda}}^{+}$ and ${\mathcal}N_{{\lambda}}^{-}$ respectively. Then for each $w \in H_{+}$, we showed $u^{-q}w, v^{-q} w \in L^{1}({\Omega})$ and $$\begin{aligned} &\int_{{\Omega}} (\nabla u \nabla w-{\lambda}u^{-q}w)~\mathrm{d}x - \int_{{\Omega}}\int_{{\Omega}}\frac{|u(y)|^{2^*_{\mu}}|u(x)|^{2^*_{\mu}-2}u(x)w(x)}{|x-y|^{\mu}}~\mathrm{d}y\mathrm{d}x \geq 0 , \label{upos}\\ &\int_{{\Omega}} (\nabla v \nabla w -{\lambda}v^{-q}w)~\mathrm{d}x - \int_{{\Omega}}\int_{{\Omega}}\frac{|v(y)|^{2^*_{\mu}}|v(x)|^{2^*_{\mu}-2}v(x)w(x)}{|x-y|^{\mu}}~\mathrm{d}y\mathrm{d}x \geq 0.\label{vpos}\end{aligned}$$ In particular, $u,\; v >0$ almost everywhere in ${\Omega}$. Then the claim followed using the Gâteaux differentiability of $I$. Lastly, the proof of Theorem \[mainthrm2\] followed by proving that $I$ achieves its minimum over the sets ${\mathcal}N^+_{\lambda}$ and ${\mathcal}N^-_{\lambda}$.\ In the regularity section, we first showed that the weak formulation holds for all $\psi \in H_0^1({\Omega})$ and that each positive weak solution $u$ of $(P_{\lambda})$ belongs to $L^\infty({\Omega})$. Under the assumption that there exist $a\geq 0$, $R \geq 0$ and $q \leq s <1$ such that ${\Delta}\delta \leq R\delta^{-s} \; \text{in} \; {\Omega}_a:=\{x\in {\Omega}, {\delta}(x)\le a\}$, using appropriate test functions, we proved that there exists $K>0$ such that $u \leq K\delta$ in ${\Omega}$. To get the lower bound on $u$ with respect to $\delta$, the following result from [@brenir] plays a crucial role. Let ${\Omega}$ be a bounded domain in ${\mathbb}R^n$ with smooth boundary $\partial {\Omega}$.
Let $u \in L^1_{\text{loc}}({\Omega})$ and assume that for some $k \geq 0$, $u$ satisfies, in the sense of distributions, $$-{\Delta}u + ku \geq 0 \; \text{in} \; {\Omega},\quad u \geq 0 \; \text{in}\; {\Omega}.$$ Then either $u \equiv 0$, or there exists ${\epsilon}>0$ such that $u(x) \geq {\epsilon}\delta(x), \; x \in {\Omega}.$ Additionally, we proved that the solution can be more regular in a restricted range of $q$. Let $q\in (0,\frac{1}{n})$ and let $u \in H^1_0({\Omega})$ be a positive weak solution of $(P_{\lambda})$; then $u \in C^{1+\alpha}(\bar {\Omega})$ for some $0<\alpha<1$.
System of equations with Choquard type nonlinearity
===================================================
In this section, we briefly illustrate some existence and multiplicity results concerning systems of Choquard equations with nonhomogeneous terms. We consider the fractional Laplacian, a nonlocal operator, and since the Choquard nonlinearity is also nonlocal, such problems are often called ’doubly nonlocal problems’. We employ the method of Nehari manifold to achieve our goal.
Doubly nonlocal $p$-fractional Coupled elliptic system
------------------------------------------------------
The $p$-fractional Laplace operator is defined as $$(-{\Delta})^s_pu(x)= 2 \lim_{{\epsilon}\searrow 0} \int_{|x-y|>{\epsilon}} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{n+sp}}~dy, \; \forall x\in {\mathbb}R^n,$$ which is nonlinear and nonlocal in nature. This definition coincides with the linear fractional Laplacian $(-{\Delta})^s$, up to a normalizing constant depending on $n$ and $s$, when $p=2$. The operator $(-{\Delta})^s_p$ is degenerate when $p>2$ and singular when $1<p<2$. For details, we refer to [@hitch]. Our interest lies in nonhomogeneous Choquard equations and systems of equations. Recently, the authors in [@XXW] and [@yang-zamp] showed multiplicity of positive solutions for a nonhomogeneous Choquard equation using the Nehari manifold.
The motivation behind such problems lies in the famous article by Tarantello [@tarantello], where the author used the structure of the associated Nehari manifold to obtain the multiplicity of solutions for the following nonhomogeneous Dirichlet problem on a bounded domain ${\Omega}$: $$-{\Delta}u = |u|^{2^*-2}u+f \;\text{in}\; {\Omega},\; u=0 \;\text{on}\; \partial {\Omega}.$$ Systems of elliptic equations involving the $p$-fractional Laplacian have been studied in [@CD; @CS] using Nehari manifold techniques. Very recently, Guo et al. [@guo] studied a nonlocal system involving the fractional Sobolev critical exponent and the fractional Laplacian. There are not many results on elliptic systems with non-homogeneous nonlinearities in the literature, but we cite [@choi; @faria; @ww] as some very recent works on the study of fractional elliptic systems. Motivated by these articles, we considered the following nonhomogeneous quasilinear system of equations with perturbations involving the $p$-fractional Laplacian in [@TS-4]:\ Let $p\geq 2, s\in (0,1), n>sp$, $\mu \in (0,n)$, $\frac{p}{2}\left( 2-\frac{\mu}{n}\right) < q <\frac{p^*_s}{2}\left( 2-\frac{\mu}{n}\right)$, $\alpha,\beta,\gamma >0$, $$(P)\left\{ \begin{array}{rlll} (-{\Delta})^s_p u+ a_1(x)u|u|^{p-2} &= \alpha(|x|^{-\mu}*|u|^q)|u|^{q-2}u+ \beta (|x|^{-\mu}*|v|^q)|u|^{q-2}u\\ & \quad \quad + f_1(x)\; \text{in}\; {\mathbb}R^n,\\ (-{\Delta})^s_p v+ a_2(x)v|v|^{p-2} &= \gamma(|x|^{-\mu}*|v|^q)|v|^{q-2}v+ \beta (|x|^{-\mu}*|u|^q)|v|^{q-2}v\\ &\quad \quad + f_2(x)\; \text{in}\; {\mathbb}R^n, \end{array} \right.$$ where $0< a_i \in C^1({\mathbb}R^n, {\mathbb}R)$, $i=1,2$ and $f_1,f_2: {\mathbb}R^n \to {\mathbb}R$ are perturbations. Here $p^*_s = \frac{np}{n-sp}$ is the critical exponent associated with the embedding of the fractional Sobolev space $W^{s,p}({\mathbb}R^n)$ into $L^{p_s^*}({\mathbb}R^n).$ Wang et al. in [@wangetal] studied the problem $(P)$ in the local case $s=1$ and obtained partial multiplicity results.
We improved their results and showed multiplicity under a weaker assumption on $f_1$ and $f_2$, stated below. For $i=1,2$ we introduce the spaces $$Y_i:= \left\{u \in W^{s,p}({\mathbb}R^n): \; \int_{{\mathbb}R^n}a_i(x)|u|^p~dx < +\infty \right\};$$ then $Y_i$ are Banach spaces equipped with the norm $$\|u\|_{Y_i}^p = \int_{{\mathbb}R^n}\int_{{\mathbb}R^n}\frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}}dxdy+ \int_{{\mathbb}R^n}a_i(x)|u|^pdx.$$ We define the product space $Y= Y_1 \times Y_2$, which is a reflexive Banach space with the norm $$\|(u,v)\|^p := \|u\|_{Y_1}^p+ \|v\|_{Y_2}^p,$$ for all $(u,v)\in Y$. We assume the following condition on $a_i$, for $i=1,2$: $$\label{cond-on-lambda} (A)\;\; a_i \in C({\mathbb}R^n),\; a_i >0 \; \text{and there exists}\; M_i>0 \; \text{such that}\; {{\mathcal}L}(\{x\in {\mathbb}R^n: a_i(x) \leq M_i\})< \infty, $$ where ${{\mathcal}L}$ denotes the Lebesgue measure. Then under the condition (A) on $a_i$, for $i=1,2$, we get that $Y_i$ is continuously embedded into $ L^r({\mathbb}R^n)$ for $r \in [p,p^*_s]$. To obtain our results, we assumed the following condition on the perturbation terms: $$\label{star0} \int_{{\mathbb}R^n} (f_1u+ f_2 v)< C_{p,q}\left(\frac{2q+p-1}{4pq}\right)\|(u,v)\|^{\frac{p(2q-1)}{2q-p}}$$ for all $(u,v)\in Y$ such that $$\int_{{\mathbb}R^n}\left(\alpha(|x|^{-\mu}*|u|^q)|u|^q +2\beta (|x|^{-\mu}*|u|^q)|v|^q +\gamma (|x|^{-\mu}*|v|^q)|v|^q \right)dx= 1$$ and $$C_{p,q}= \left(\frac{p-1}{2q-1}\right)^{\frac{2q-1}{2q-p}}\left(\frac{2q-p}{p-1}\right).$$ It is easy to see that $2q > p\left(\frac{2n-\mu}{n}\right) > p-1> \frac{p-1}{2p-1}$, which implies $\frac{2q+p-1}{4pq}<1.$ So the assumption implies that $$\label{star00} \int_{{\mathbb}R^n} (f_1u+ f_2 v)< C_{p,q}\|(u,v)\|^{\frac{p(2q-1)}{2q-p}},$$ which we used more frequently than our actual assumption. Now, the main result goes as follows. \[mainthrm\] Suppose $\displaystyle\frac{p}{2}\left(\frac{2n-\mu}{n}\right)< q < \displaystyle\frac{p}{2}\left(\frac{2n-\mu}{n-sp}\right)$, $\mu \in (0,n)$ and $(A)$ holds true.
Let $0 \not \equiv f_1,f_2 \in L^{\frac{p}{p-1}}({\mathbb}R^n)$ satisfy the assumption above; then $(P)$ has at least two weak solutions, one of which is a local minimum of $J$ on $Y$. Moreover, if $f_1,f_2 \geq 0$ then this solution is a nonnegative weak solution. If $u,\phi \in W^{s,p}({\mathbb}R^n)$, we use the notation $\langle u,\phi\rangle $ to denote $$\langle u,\phi\rangle := \int_{{\mathbb}R^n}\int_{{\mathbb}R^n} \frac{(u(x)-u(y))|u(x)-u(y)|^{p-2}(\phi(x)-\phi(y))}{|x-y|^{n+sp}}dxdy.$$ A pair of functions $(u,v)\in Y$ is said to be a weak solution to $(P)$ if $$\label{def-weak-sol} \begin{split} \langle u,\phi_1\rangle &+ \int_{{\mathbb}R^n}a_1(x)u|u|^{p-2}\phi_1~dx+ \langle v,\phi_2\rangle + \int_{{\mathbb}R^n}a_2(x)v|v|^{p-2}\phi_2~dx\\ & -\alpha \int_{{\mathbb}R^n}(|x|^{-\mu}*|u|^q)u|u|^{q-2}\phi_1 ~dx-\gamma \int_{{\mathbb}R^n}(|x|^{-\mu}*|v|^q)v|v|^{q-2}\phi_2 ~dx\\ & -\beta \int_{{\mathbb}R^n}(|x|^{-\mu}*|v|^q)u|u|^{q-2}\phi_1~ dx-\beta \int_{{\mathbb}R^n}(|x|^{-\mu}*|u|^q)v|v|^{q-2}\phi_2 ~dx\\ & - \int_{{\mathbb}R^n}(f_1\phi_1 +f_2\phi_2)~dx=0,\; \forall \;(\phi_1,\phi_2) \in Y. \end{split}$$ Thus we define the energy functional corresponding to $(P)$ as $$\begin{split} J(u,v) &= \frac{1}{p}\|(u,v)\|^p - \frac{1}{2q}\int_{{\mathbb}R^n}\left(\alpha(|x|^{-\mu}*|u|^q)|u|^q +\beta (|x|^{-\mu}*|u|^q)|v|^q \right)dx\\ &\quad - \frac{1}{2q}\int_{{\mathbb}R^n}\left( \beta (|x|^{-\mu}*|v|^q)|u|^q+ \gamma (|x|^{-\mu}*|v|^q)|v|^q \right)dx -\int_{{\mathbb}R^n } (f_1u+f_2v)dx\\ &= \frac{1}{p}\|(u,v)\|^p - \frac{1}{2q}\int_{{\mathbb}R^n}\left(\alpha(|x|^{-\mu}*|u|^q)|u|^q +2\beta (|x|^{-\mu}*|u|^q)|v|^q\right.\\ & \quad \left.+ \gamma (|x|^{-\mu}*|v|^q)|v|^q \right)dx -\int_{{\mathbb}R^n } (f_1u+f_2v)dx. \end{split}$$ Clearly, weak solutions to $(P)$ correspond to the critical points of $J$.
To find the critical points of $J$, we constrain our functional $J$ on the Nehari manifold $${\mathcal}N = \{(u,v)\in Y: \; (J^\prime(u,v),(u,v)) =0 \},$$ where $$\begin{split} (J^\prime(u,v),(u,v)) =& \|(u,v)\|^p - \int_{{\mathbb}R^n}\left(\alpha(|x|^{-\mu}*|u|^q)|u|^q +2\beta (|x|^{-\mu}*|u|^q)|v|^q\right.\\ &\left. + \gamma (|x|^{-\mu}*|v|^q)|v|^q \right)dx -\int_{{\mathbb}R^n } (f_1u+f_2v)dx. \end{split}$$ Clearly, every nontrivial weak solution to $(P)$ belongs to ${\mathcal}N$. Denote $I(u,v)= (J^\prime (u,v),(u,v))$ and subdivide the set ${\mathcal}N$ into three sets as follows: $${{\mathcal}N^\pm = \{(u,v)\in {\mathcal}N: \; \pm(I^\prime (u,v),(u,v))> 0\}},$$ $${\mathcal}N^0 = \{(u,v)\in {\mathcal}N: \; (I^\prime (u,v),(u,v))=0\}. $$ Then ${\mathcal}N^0$ contains the element $(0,0)$, and ${\mathcal}N^+ \cup {\mathcal}N^0$ and ${\mathcal}N^- \cup {\mathcal}N^0$ are closed subsets of $Y$. For $(u,v)\in Y$, we define the fibering map $\varphi: (0,\infty) \to {\mathbb}R $ as $\varphi(t) = J(tu,tv)$. One can easily check that $(tu,tv)\in {\mathcal}N$ if and only if $\varphi^\prime(t)=0$, for $t>0$, and ${\mathcal}N^+$, ${\mathcal}N^-$ and ${\mathcal}N^0$ can also be written as $${\mathcal}N^\pm = \{ (tu,tv)\in {\mathcal}N:\; \varphi^{\prime\prime}(t)\gtrless 0 \},\\ \text{ and }{\mathcal}N^0 = \{ (tu,tv)\in {\mathcal}N:\; \varphi^{\prime\prime}(t)=0 \}.$$ We showed that $J$ is coercive and bounded from below on ${\mathcal}N$. By analyzing the fiber maps $\varphi_{u,v}(t)$, we proved that if the assumption on $f_1,f_2$ holds, then ${\mathcal}N^0 =\{(0,0)\}$ and ${\mathcal}N^-$ is a closed set. By the Lagrange multiplier method, we showed that minimizers of $J$ over ${\mathcal}N^+$ and ${\mathcal}N^-$ are weak solutions of $(P)$.
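The structure of ${\mathcal}N^{\pm}$ is easiest to see from the explicit form of the fiber map. Writing $L(u,v)$ for the combined Choquard term (the same quantity normalized to $1$ in the constraint above), the homogeneities in the definition of $J$ give the following sketch:

```latex
L(u,v) := \int_{\mathbb{R}^n}\Bigl(\alpha(|x|^{-\mu}*|u|^q)|u|^q
          + 2\beta(|x|^{-\mu}*|u|^q)|v|^q
          + \gamma(|x|^{-\mu}*|v|^q)|v|^q\Bigr)\,dx,
\qquad
\varphi(t) \;=\; J(tu,tv)
  \;=\; \frac{t^{p}}{p}\,\|(u,v)\|^{p}
  \;-\; \frac{t^{2q}}{2q}\,L(u,v)
  \;-\; t\int_{\mathbb{R}^n}(f_1u+f_2v)\,dx .
```

Here the convolution term scales like $t^{2q}$ because each factor contributes $t^{q}$. Since $2q>p>1$, without the perturbation $\varphi$ has a single interior maximum; the linear term bends the graph near $t=0$, and the smallness assumption on $f_1,f_2$ ensures exactly two critical points survive, one in ${\mathcal}N^+$ and one in ${\mathcal}N^-$.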
So our problem reduces to the minimization problems given below: $$\label{min-N+} \Upsilon^+ := \inf\limits_{(u,v) \in {\mathcal}N^+}J(u,v),\; \text{and}\; \Upsilon^- := \inf\limits_{(u,v) \in {\mathcal}N^-}J(u,v).$$ Using again the map $\varphi_{u,v}$, we could show that $\Upsilon^+<0$ whereas $\Upsilon^->0$. Our next task was to consider $$\Upsilon := \inf_{(u,v)\in {\mathcal}N}J(u,v)$$ and show that there exists a constant $C_1>0$ such that $\Upsilon \leq - \frac{(2q-p)(2qp-2q-p)}{4pq^2}C_1. $ Our next result was a crucial one, which concerns another minimization problem. \[inf-achvd\] For $0\neq f_1,f_2 \in L^{\frac{p}{p-1}}({\mathbb}R^n)$, $$\inf_Q \left( C_{p,q}\|(u,v)\|^{\frac{p(2q-1)}{2q-p}}- \int_{{\mathbb}R^n}(f_1u+f_2v)~dx\right):= \delta$$ is achieved, where $ Q = \{(u,v)\in Y : L(u,v)=1\}$. Also, if $f_1,f_2$ satisfy the assumption, then $\delta >0$. After this, using the Ekeland variational principle we proved the existence of a Palais-Smale sequence for $J$ at the levels $\Upsilon$ and $\Upsilon^-$. Putting this all together, we could prove that $\Upsilon$ and $\Upsilon^-$ are achieved by some functions $(u_0,v_0)$ and $(u_1,v_1)$, where $(u_0,v_0)$ lies in ${\mathcal}N^+$ and is a local minimum of $J$. The nonnegativity of $(u_i,v_i)$ for $i=0,1$ was shown using the modulus functions $(|u_i|,|v_i|)$ and their corresponding fiber maps. Hence we conclude our main result, Theorem \[mainthrm\].
Doubly nonlocal system with critical nonlinearity
-------------------------------------------------
In this section, we illustrate our results concerning a system of Choquard equations with Hardy-Littlewood-Sobolev critical nonlinearity which involves the fractional Laplacian.
Precisely, we consider the following problem in [@TS-5]: $$(P_{{\lambda},\delta})\left\{ \begin{array}{rlll} (-{\Delta})^su &= {\lambda}|u|^{q-2}u + \left(\int_{{\Omega}}\frac{|v(y)|^{2^*_\mu}}{|x-y|^\mu}~\mathrm{d}y\right) |u|^{2^*_\mu-2}u\; \text{in}\; {\Omega}\\ (-{\Delta})^sv &= \delta |v|^{q-2}v + \left(\int_{{\Omega}}\frac{|u(y)|^{2^*_\mu}}{|x-y|^\mu}~\mathrm{d}y \right) |v|^{2^*_\mu-2}v \; \text{in}\; {\Omega}\\ u &=v=0\; \text{in}\; {\mathbb}R^n\setminus\Omega, \end{array} \right.$$ where ${\Omega}$ is a smooth bounded domain in ${\mathbb}R^n$, $n >2s$, $s \in (0,1)$, $\mu \in (0,n)$, $2^*_\mu = \displaystyle\frac{2n-\mu}{n-2s}$ is the upper critical exponent in the Hardy-Littlewood-Sobolev inequality, $1<q<2$, and ${\lambda},\delta >0$ are real parameters. While we illustrated some literature on systems of elliptic equations involving the fractional Laplacian in the last subsection, the existence and multiplicity of solutions for systems of Choquard equations with Hardy-Littlewood-Sobolev critical nonlinearity was an open question, even in the local case $s=1$. Using the Nehari manifold technique, we proved the following main result. \[MT\] Assume $1<q<2$ and $0<\mu<n$; then there exist positive constants $\Theta$ and $\Theta_0$ such that 1. if $\mu\leq 4s$ and $ 0< {\lambda}^{\frac{2}{2-q}}+ \delta^{\frac{2}{2-q}}< \Theta$, the system $(P_{{\lambda},\delta})$ admits at least two nontrivial solutions, 2. if $\mu> 4s$ and $ 0< {\lambda}^{\frac{2}{2-q}}+ \delta^{\frac{2}{2-q}}< \Theta_0$, the system $(P_{{\lambda},\delta})$ admits at least two nontrivial solutions. Moreover, there exists a positive solution of $(P_{{\lambda},\delta})$. Consider the product space $Y:= X_0\times X_0$ endowed with the norm $\|(u,v)\|^2:=\|u\|^2+\|v\|^2$.
For notational convenience, if $u, v \in X_0$ we set $$B(u,v):= \int_{{\Omega}}(|x|^{-\mu}\ast|u|^{2^*_\mu})|v|^{2^*_\mu}.$$ We say that $(u,v)\in Y$ is a weak solution to $(P_{{\lambda},\delta})$ if for every $(\phi,\psi)\in Y$, it satisfies $$\begin{split}\label{weak-sol} (\langle u, \phi\rangle + \langle v,\psi\rangle) &= \int_{{\Omega}}({\lambda}|u|^{q-2}u\phi+\delta |v|^{q-2}v\psi)\mathrm{d}x\\ &+ \int_{{\Omega}}(|x|^{-\mu}\ast|v|^{2^*_\mu})|u|^{2^*_\mu-2}u\phi~\mathrm{d}x + \int_{{\Omega}}(|x|^{-\mu}\ast|u|^{2^*_\mu})|v|^{2^*_\mu-2}v\psi~\mathrm{d}x. \end{split}$$ Equivalently, if we define the functional $I_{{\lambda},\delta}:Y\to {\mathbb}R$ as $$I_{{\lambda},\delta}(u):= \frac{1 }{2} \|(u,v)\|^2-\frac{1}{q}\int_{{\Omega}}({\lambda}|u|^q+\delta |v|^q)-\frac{2}{22^*_\mu}B(u,v)$$ then the critical points of $I_{{\lambda},\delta}$ correspond to the weak solutions of $(P_{{\lambda},\delta})$. We set $$\tilde S_s^H = \inf_{(u,v) \in Y\setminus \{(0,0)\}} \frac{\|(u,v)\|^2 }{ \left( \int_{{\Omega}}(|x|^{-\mu}\ast |u|^{2^*_\mu})|v|^{2^*_\mu}~\mathrm{d}x\right)^{\frac{1}{2^*_\mu}}} = \inf_{(u,v) \in Y\setminus \{(0,0)\}} \frac{\|(u,v)\|^2}{B(u,v)^{\frac{1}{2^*_\mu}}}$$ and show that $\tilde S_s^H = 2 S_s^H$. We define the set $${\mathcal}N_{{\lambda},\delta}:= \{(u,v)\in Y\setminus \{0\}:\; (I_{{\lambda},\delta}^\prime(u,v),(u,v))=0\}$$ and find that the functional $I_{{\lambda},\delta}$ is coercive and bounded below on ${\mathcal}N_{{\lambda},\delta}$. Consider the fibering map $\varphi_{u,v}:{\mathbb}R^+ \to {\mathbb}R$ as $\varphi_{u,v}(t)= I_{{\lambda},\delta}(tu,tv)$ which gives another characterization of ${\mathcal}N_{{\lambda},\delta}$ as follows $${\mathcal}N_{{\lambda},\delta}=\{(tu,tv)\in Y\setminus\{(0,0)\}:\; \varphi_{u,v}^\prime (t)=0\}$$ because $\varphi_{u,v}^\prime(t)= (I_{{\lambda},\delta}^\prime(tu,tv),(u,v))$. 
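As in the previous subsection, the Nehari decomposition is governed by the homogeneities of $I_{{\lambda},\delta}$; substituting $(tu,tv)$ directly into the functional gives the following sketch (writing $2\cdot 2^*_\mu$ for the exponent denoted $22^*_\mu$ in the text):

```latex
\varphi_{u,v}(t) \;=\; I_{\lambda,\delta}(tu,tv)
  \;=\; \frac{t^{2}}{2}\,\|(u,v)\|^{2}
  \;-\; \frac{t^{q}}{q}\int_{\Omega}\bigl(\lambda|u|^{q}+\delta|v|^{q}\bigr)\,\mathrm{d}x
  \;-\; \frac{2\,t^{2\cdot 2^{*}_{\mu}}}{2\cdot 2^{*}_{\mu}}\,B(u,v),
```

since $B(tu,tv)=t^{2\cdot 2^*_\mu}B(u,v)$. With $1<q<2<2\cdot 2^*_\mu$, the sublinear, quadratic and critical terms compete exactly as in the singular problem, so for ${\lambda},\delta$ small one expects two critical points $t_1<t_{\text{max}}<t_2$; Lemma \[Theta-def-lem\] makes this precise.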
Naturally, our next step is to divide ${\mathcal}N_{{\lambda},\delta}$ into three subsets corresponding to local minima, local maxima and points of inflexion of $\varphi_{u,v}$, namely $$\begin{aligned} {\mathcal}N_{{\lambda},\delta}^\pm := \{(u,v)\in {\mathcal}N_{{\lambda},\delta}:\; \varphi_{u,v}^{\prime\prime}(1)\gtrless 0\}\;\; \text{and}\;\;{\mathcal}N_{{\lambda},\delta}^0 := \{(u,v)\in {\mathcal}N_{{\lambda},\delta}:\; \varphi_{u,v}^{\prime\prime}(1)=0\}.\end{aligned}$$ As earlier, the minimizers of $I_{{\lambda},\delta}$ on ${\mathcal}N_{{\lambda},\delta}^+$ and ${\mathcal}N_{{\lambda},\delta}^-$ form nontrivial weak solutions of $(P_{{\lambda},\delta})$. Then we found a threshold on the range of ${\lambda}$ and $\delta$ so that ${\mathcal}N_{{\lambda},\delta}$ forms a manifold. Precisely, we proved: \[Theta-def-lem\] For every $(u,v)\in Y \setminus \{(0,0)\}$ and ${\lambda},\delta $ satisfying $0<{\lambda}^{\frac{2}{2-q}}+ \delta^{\frac{2}{2-q}} < \Theta$, where $\Theta$ is equal to $$\label{Theta-def} \left[ \frac{2^{2^*_\mu-1}{(C_s^n)^{\frac{22^*_\mu-q}{2-q}}}}{C(n,\mu)}\left(\frac{2-q}{22^*_\mu-q}\right) \left( \frac{22^*_\mu-2}{22^*_\mu-q}\right)^{\frac{22^*_\mu-2}{2-q}} S_s^{\frac{q(2^*_\mu-1)}{2-q}+2^*_\mu}|{\Omega}|^{-\frac{(2^*_s-q)(22^*_\mu-2)}{2^*_s(2-q)}}\right]^{\frac{1}{2^*_\mu-1}},$$ there exist unique $t_1,t_2>0$ such that $t_1<t_{\text{max}}(u,v)<t_2$, $(t_1u,t_1v) \in {\mathcal}N_{{\lambda},\delta}^+$ and $(t_2u,t_2v)\in {\mathcal}N_{{\lambda},\delta}^-$. Moreover, ${\mathcal}N_{{\lambda},\delta}^0= \emptyset$.
As a consequence, we infer that for any ${\lambda}, \delta$ satisfying $0 < {\lambda}^{\frac{2}{2-q}} + \delta^{\frac{2}{2-q} }< \Theta$, $${\mathcal}N_{{\lambda},\delta}= {\mathcal}N_{{\lambda},\delta}^+ \cup {\mathcal}N_{{\lambda},\delta}^-.$$ After this, we proved that any Palais-Smale sequence $\{(u_k,v_k)\}$ for $I_{{\lambda},\delta}$ must be bounded in $Y$ and its weak limit forms a weak solution of $(P_{{\lambda},\delta})$. We define the following levels: $$l_{{\lambda},\delta}= \inf_{{\mathcal}N_{{\lambda},\delta}} I_{{\lambda},\delta} \; \text{and} \;l_{{\lambda},\delta}^\pm = \inf_{{\mathcal}N_{{\lambda},\delta}^\pm} I_{{\lambda},\delta}.$$ We fixed $0 < {\lambda}^{\frac{2}{2-q}} + \delta^{\frac{2}{2-q}}< \Theta$ and showed that $l_{{\lambda},\delta} \leq l_{{\lambda},\delta}^+<0$ and $\inf \{\|(u,v)\|:\; (u,v)\in {\mathcal}N_{{\lambda},\delta}^- \}>0$.\ To prove the existence of the first solution, we first showed that there exists a $(PS)_{l_{{\lambda},\delta}}$ sequence $\{(u_k,v_k)\} \subset {\mathcal}N_{{\lambda},\delta}$ for $I_{{\lambda},\delta}$ using the Ekeland variational principle, and then proved that $l_{{\lambda},\delta}^+$ is achieved by some function $(u_1,v_1) \in {\mathcal}N_{{\lambda},\delta}^+$. Moreover, $u_1,v_1>0$ in ${\Omega}$ and for each compact subset $K$ of ${\Omega}$, there exists $m_K>0$ such that $u_1,v_1 \geq m_K$ in $K$. Thus we obtain a positive weak solution $(u_1,v_1)$ of $(P_{{\lambda},\delta})$.\ On the other hand, the proof of existence of the second solution is divided into two parts: $\mu\leq 4s$ and $\mu>4s$. In the case $\mu \leq 4s$, using the estimates in Proposition \[estimates1\], we could reach the first critical level as follows: $$\sup_{t \geq 0} I_{{\lambda},\delta}((u_1,v_1)+ t(w_0,z_0) )< c_1:=I_{{\lambda},\delta}(u_1,v_1)+ \frac{n-\mu+2s}{2n-\mu} \left(\frac{{C_s^n} \tilde S_s^H}{2} \right)^{\frac{2n-\mu}{n-\mu+2s}}$$ for some nonnegative $(w_0,z_0) \in Y\setminus \{(0,0)\}$. This implied $l_{{\lambda},\delta}^- < c_1$.
In the case $\mu>4s$, the same conclusion requires a further constant $\Theta_0 \leq \Theta$ together with the estimates of Proposition \[estimates1\]. Consequently, we prove that there exists a $(u_2,v_2) \in {\mathcal}N_{{\lambda},\delta}^-$ at which $l_{{\lambda},\delta}^-$ is achieved, which yields the second solution and concludes the proof of Theorem \[MT\]. Some open questions =================== Here we state some open problems in this direction. 1. $H^1$ versus $C^1$ local minimizers and global multiplicity result: Consider the energy functional on $H^1_0({\Omega})$ given by $\Phi(u)=\frac{\|u\|^2}{2}- {\lambda}\int_{\Omega}F(x,u)$, where $F$ is the primitive of $f$. When $|f(u)|\leq C(1+|u|^{p})$ for $p \in [1,2^*]$, Brezis and Nirenberg in [@brenir] showed that a local minimum of $\Phi$ in the $C^1({\Omega})$-topology is also a local minimum in the $H^1_0({\Omega})$-topology. The analogous property for functionals with Choquard-type nonlinearities and singular terms has not yet been addressed. 2. Variable exponent problems: As pointed out in section \[sec-1.2\], the existence of a solution for the problem has been studied in [@p(x)-choq], but the question of multiplicity of solutions for variable exponent Choquard equations is still open. 3. $p$-Laplacian critical problems: The critical exponent problem involving the $p$-Laplacian and Choquard terms is an important question. This requires the study of minimizers of $S_{H,L}$. The regularity of solutions and global multiplicity results for convex-concave nonlinearities are also worth exploring. 4. Hardy-Sobolev operators and nonlocal problems: Doubly critical problems arise due to the presence of two noncompact terms. The Hardy-Sobolev operator is defined as $-{\Delta}_p u -\frac{\mu |u|^{p-2}u}{|x|^2}$. Here the critical growth Choquard terms in the equations require knowledge of the minimizers and asymptotic estimates in order to study the compactness of minimizing sequences.
The existence and multiplicity results are good questions to explore in this case. Alves, C.O., Cassani, D., Tarsi, C., Yang, M.: Existence and concentration of ground state solutions for a critical nonlocal Schrödinger equation in ${\mathbb}R^n$. J. Differential Equations. **261**, 1933–1972 (2016). Alves, C.O., Figueiredo, M.G., Yang, M.: Existence of solutions for a nonlinear Choquard equation with potential vanishing at infinity. Adv. Nonlinear Anal. **5**, 331–345 (2016). Alves, C.O., Tavares, L.S.: A Hardy-Littlewood-Sobolev type inequality for variable exponents and applications to quasilinear Choquard equations involving variable exponent. Available at: https://arxiv.org/pdf/1609.09558.pdf. Alves, C.O., Yang, M.: Existence of semiclassical ground state solutions for a generalized Choquard equation. J. Differential Equations. **257**, 4133–4164 (2014). Alves, C.O., Yang, M.: Multiplicity and concentration of solutions for a quasilinear Choquard equation. J. Math. Phys. **55**, 061502, 21 pp. (2014). Alves, C.O., Yang, M.: Investigating the multiplicity and concentration behaviour of solutions for a quasi-linear Choquard equation via the penalization method. Proc. Roy. Soc. Edinburgh Sect. A. **146**, 23–58 (2016). Alves, C.O., Yang, M.: Existence of solutions for a nonlocal variational problem in ${\mathbb}R^2$ with exponential critical growth. Journal of Convex Analysis. **24**, 1197–1215 (2017). Arora, R., Giacomoni, J., Mukherjee, T., Sreenadh, K.: n-Kirchhoff Choquard equation with exponential nonlinearity. Available via https://arxiv.org/pdf/1810.00583.pdf. Bahri, A., Coron, J.-M.: On a nonlinear elliptic equation involving the critical Sobolev exponent: the effect of the topology of the domain. Comm. Pure Appl. Math. **41**, 253–294 (1988). Benci, V., Grisanti, C.R., Micheletti, A.M.: Existence and non existence of the ground state solution for the nonlinear Schrödinger equations with $V(\infty)=0$. Topol. Methods Nonlinear Anal.
**26**, 203–219 (2005). Benci, V., Grisanti, C.R., Micheletti, A.M.: Existence of solutions for the nonlinear Schrödinger equation with $V(\infty)=0$. Progr. Nonlinear Differential Equations Appl. **66**, 53–65 (2005). Berestycki, H., Lions, P.L.: Nonlinear scalar field equations. I. Existence of a ground state. Arch. Ration. Mech. Anal. **82**, 313–346 (1983). Brézis, H., Kato, T.: Remarks on the Schrödinger operator with singular complex potentials. J. Math. Pures Appl. **9**, 137–151 (1979). Brézis, H., Nirenberg, L.: Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents. Comm. Pure Appl. Math. **36**, 437–477 (1983). Brézis, H., Nirenberg, L.: $H^1$ versus $C^1$ local minimizers. C. R. Acad. Sci. Paris. **317**, 465–472 (1993). Cassani, D., Schaftingen, J.V., Zhang, J.: Groundstates for Choquard type equations with Hardy-Littlewood-Sobolev lower critical exponent. Available via https://arxiv.org/pdf/1709.09448.pdf. Chen, W., Deng, S.: The Nehari manifold for a fractional p-Laplacian system involving concave-convex nonlinearities. Nonlinear Anal. Real World Appl. **27**, 80–92 (2016). Chen, W., Squassina, M.: Critical nonlocal systems with concave-convex powers. Adv. Nonlinear Stud. **16**, 821–842 (2016). Choi, W.: On strongly indefinite systems involving the fractional Laplacian. Nonlinear Anal. **120**, 127–153 (2015). Cingolani, S., Clapp, M., Secchi, S.: Multiple solutions to a magnetic nonlinear Choquard equation. Z. Angew. Math. Phys. **63**, 233–248 (2012). Cingolani, S., Clapp, M., Secchi, S.: Intertwining semiclassical solutions to a Schrödinger–Newton system. Discrete Contin. Dyn. Syst. Ser. S. **6**, 891–908 (2013). Cingolani, S., Secchi, S., Squassina, M.: Semi-classical limit for Schrödinger equations with magnetic field and Hartree-type nonlinearities. Proc. Roy. Soc. Edinburgh Sect. A. **140**, 973–1009 (2010). Crandall, M.G., Rabinowitz, P.
H., Tartar, L.: On a Dirichlet problem with a singular nonlinearity. Communications in Partial Differential Equations. **2**, 193–222 (1977). Faria, L.F.O., Miyagaki, O.H., Pereira, F.R., Squassina, M., Zhang, C.: The Brezis-Nirenberg problem for nonlocal systems. Adv. Nonlinear Anal. **5**, 85–103 (2016). Gao, F., Yang, M.: On nonlocal Choquard equations with Hardy–Littlewood–Sobolev critical exponents. Journal of Mathematical Analysis and Applications. **448**, 1006–1041 (2017). Gao, F., Yang, M.: On the Brezis–Nirenberg type critical problem for nonlinear Choquard equation. Sci. China Math. **61** (2018). doi: 10.1007/s11425-016-9067-5. Georgiev, V., Venkov, G.: Symmetry and uniqueness of minimizers of Hartree type equations with external Coulomb potential. J. Differential Equations. **251**, 420–438 (2011). Ghimenti, M., Pagliardini, D.: Multiple positive solutions for a slightly subcritical Choquard problem on bounded domains. Available via https://arxiv.org/pdf/1804.03448.pdf. Giacomoni, J., Mukherjee, T., Sreenadh, K.: Doubly nonlocal system with Hardy-Littlewood-Sobolev critical nonlinearity. Journal of Mathematical Analysis and Applications. **467**, 638–672 (2018). Goel, D., Sreenadh, K.: Critical Kirchhoff-Choquard problems. Preprint. Goel, D., Radulescu, V., Sreenadh, K.: Coron problem for nonlocal equations involving Choquard nonlinearity. Available via https://arxiv.org/pdf/1804.08084.pdf. Guo, Z., Luo, S., Zou, W.: On critical systems involving fractional Laplacian. J. Math. Anal. Appl. **446**, 681–706 (2017). Haitao, Y.: Multiplicity and asymptotic behavior of positive solutions for a singular semilinear elliptic problem. J. Differential Equations. **189**, 487–512 (2003). Hirano, N., Saccon, C., Shioji, N.: Existence of multiple positive solutions for singular elliptic problems with concave and convex nonlinearities. Advances in Differential Equations. **9**, 197–220 (2004).
Hirano, N., Saccon, C., Shioji, N.: Brezis-Nirenberg type theorems and multiplicity of positive solutions for a singular elliptic problem. J. Differential Equations. **245**, 1997–2037 (2008). Huang, Z., Yang, J., Yu, W.: Multiple nodal solutions of nonlinear Choquard equations. Electronic Journal of Differential Equations. **268**, 1–18 (2017). Lei, Y.: On the regularity of positive solutions of a class of Choquard type equations. Math. Z. **273**, 883–905 (2013). Lei, Y.: Qualitative analysis for the static Hartree-type equations. SIAM J. Math. Anal. **45**, 388–406 (2013). Li, G.-D., Tang, C.-L.: Existence of ground state solutions for Choquard equation involving the general upper critical Hardy-Littlewood-Sobolev nonlinear term. Communications on Pure and Applied Analysis. **18**, 285–300 (2019). Lieb, E.H.: Existence and uniqueness of the minimizing solution of Choquard’s nonlinear equation. Studies in Appl. Math. **57**, 93–105 (1976/77). Lieb, E.H., Loss, M.: Analysis. 2nd Edition. AMS, 2001. Lü, D.: Existence and concentration behavior of ground state solutions for magnetic nonlinear Choquard equations. Communications on Pure and Applied Analysis. **15**, 1781–1795 (2016). Ma, L., Zhao, L.: Classification of positive solitary solutions of the nonlinear Choquard equation. Arch. Rational Mech. Anal. **195**, 455–467 (2010). Menzala, G.P.: On the nonexistence of solutions for an elliptic problem in unbounded domains. Funkcial. Ekvac. **26**, 231–235 (1983). Mercuri, C., Moroz, V., Schaftingen, J.V.: Groundstates and radial solutions to nonlinear Schrödinger–Poisson–Slater equations at the critical frequency. Calc. Var. Partial Differential Equations. **55: 146** (2016). Moroz, V., Schaftingen, J.V.: Groundstates of nonlinear Choquard equations: existence, qualitative properties and decay asymptotics. J. Funct. Anal. **265**, 153–184 (2013). Moroz, V., Schaftingen, J.V.: Groundstates of nonlinear Choquard equations: Hardy–Littlewood–Sobolev critical exponent. Commun. Contemp. Math.
**17**, 1550005 (12 pages) (2015). Moroz, V., Schaftingen, J.V.: Existence of groundstates for a class of nonlinear Choquard equations. Trans. Amer. Math. Soc. **367**, 6557–6579 (2015). Moroz, V., Schaftingen, J.V.: Least action nodal solutions for the quadratic Choquard equation. Proc. Amer. Math. Soc. **145**, 737–747 (2017). Moroz, V., Schaftingen, J.V.: A guide to the Choquard equation. Journal of Fixed Point Theory and Applications. **19**, 773–813 (2017). Mukherjee, T., Sreenadh, K.: Positive solutions for nonlinear Choquard equation with singular nonlinearity. Complex Variables and Elliptic Equations. **62**, 1044–1071 (2017). Mukherjee, T., Sreenadh, K.: Fractional Choquard equation with critical nonlinearities. Nonlinear Differ. Equ. Appl. **24: 63** (2017). Mukherjee, T., Sreenadh, K.: On concentration of least energy solutions for magnetic critical Choquard equations. Journal of Mathematical Analysis and Applications. **464**, 402–420 (2018). Mukherjee, T., Sreenadh, K.: On doubly nonlocal p-fractional coupled elliptic system. Topological Methods in Nonlinear Analysis. **51**, 609–636 (2018). Nezza, E.D., Palatucci, G., Valdinoci, E.: Hitchhiker’s guide to the fractional Sobolev spaces. Bull. Sci. Math. **136**, 521–573 (2012). Pekar, S.: Untersuchung über die Elektronentheorie der Kristalle. Akademie Verlag, Berlin (1954). Salazar, D.: Vortex-type solutions to a magnetic nonlinear Choquard equation. Z. Angew. Math. Phys. **66**, 663–675 (2015). Servadei, R., Valdinoci, E.: The Brezis-Nirenberg result for the fractional Laplacian. Trans. Amer. Math. Soc. **367**, 67–102 (2015). Shen, Z., Gao, F., Yang, M.: Multiple solutions for nonhomogeneous Choquard equation involving Hardy–Littlewood–Sobolev critical exponent. Z. Angew. Math. Phys. **68:61** (2017). Tarantello, G.: On nonhomogeneous elliptic equations involving critical Sobolev exponent. Ann. Inst. H. Poincaré Anal. Non Linéaire. **9**, 281–304 (1992).
Zhang, H., Xu, J., Zhang, F.: Existence and multiplicity of solutions for a generalized Choquard equation. Computers and Mathematics with Applications. **73**, 1803–1814 (2017). Wang, J., Dong, Y., He, Q., Xiao, L.: Multiple positive solutions for coupled nonlinear Hartree type equations with perturbations. J. Math. Anal. Appl. **450**, 780–794 (2017). Wang, K., Wei, J.: On the uniqueness of solutions of a nonlocal elliptic system. Math. Ann. **365**, 105–153 (2016). Wang, T., Yi, T.: Uniqueness of positive solutions of the Choquard type equations. Applicable Analysis. **96**, 409–417 (2017). Xie, T., Xiao, L., Wang, J.: Existence of multiple positive solutions for Choquard equation with perturbation. Adv. Math. Phys. **2015**, 760157 (2015). [^1]: tulimukh@gmail.com [^2]: sreenadh@maths.iitd.ac.in
--- abstract: | Cylindrical probability measures are finitely additive measures on Banach spaces that have sigma-additive projections to Euclidean spaces of all dimensions. They are naturally associated to notions of weak (cylindrical) random variable and hence weak (cylindrical) stochastic processes. In this paper we focus on cylindrical Lévy processes. These have (weak) Lévy-Itô decompositions and an associated Lévy-Khintchine formula. If the process is weakly square integrable, its covariance operator can be used to construct a reproducing kernel Hilbert space in which the process has a decomposition as an infinite series built from a sequence of uncorrelated bona fide one-dimensional Lévy processes. This series is used to define cylindrical stochastic integrals from which cylindrical Ornstein-Uhlenbeck processes may be constructed as unique solutions of the associated Cauchy problem. We demonstrate that such processes are cylindrical Markov processes and study their (cylindrical) invariant measures. [*Keywords and phrases*]{}: cylindrical probability measure, cylindrical Lévy process, reproducing kernel Hilbert space, Cauchy problem, cylindrical Ornstein-Uhlenbeck process, cylindrical invariant measure. MSC 2000: [*primary*]{} 60B11, [*secondary*]{} 60G51, 60H05, 28C20. author: - | David Applebaum[^1]\ Probability and Statistics Department\ University of Sheffield\ Sheffield\ United Kingdom - | Markus Riedle[^2]\ The University of Manchester\ Oxford Road\ Manchester M13 9PL\ United Kingdom title: Cylindrical Lévy processes in Banach spaces --- Introduction ============ Probability theory in Banach spaces has been extensively studied since the 1960s and there are several monographs dedicated to various themes within the subject - see e.g. Heyer [@heyer], Linde [@Linde], Vakhania et al [@Vaketal], Ledoux and Talagrand [@LedTal]. 
In general, the theory is more complicated than in Euclidean space (or even in an infinite-dimensional Hilbert space) and much of this additional complexity arises from the interaction between probabilistic ideas and Banach space geometry. The theory of type and cotype Banach spaces (see e.g. Schwartz [@SchwartzLNM]) is a well-known example of this phenomenon. From the outset of work in this area, there was already interest in cylindrical probability measures (cpms), i.e. finitely additive set functions whose “projections” to Euclidean space are always bona fide probability measures. These arise naturally in trying to generalise a mean zero normal distribution to an infinite-dimensional Banach space. It is clear that the covariance $Q$ should be a bounded linear operator that is positive and symmetric but conversely it is not the case that all such operators give rise to a sigma-additive probability measure. Indeed in a Hilbert space, it is necessary and sufficient for $Q$ to be trace-class (see e.g. Schwartz [@SchwartzLNM], p.28) and if we drop this requirement (and one very natural example is when $Q$ is the identity operator) then we get a cpm. Cpms give rise to cylindrical stochastic processes and these appear naturally as the driving noise in stochastic partial differential equations (SPDEs). An introduction to this theme from the point of view of cylindrical Wiener processes can be found in Da Prato and Zabczyk [@DaPratoZab]. In recent years there has been increasing interest in SPDEs driven by Lévy processes and Peszat and Zabczyk [@PesZab] is a monograph treatment of this topic. Some specific examples of cylindrical Lévy processes appear in this work and Priola and Zabczyk [@PriolaZab] makes an in-depth study of a specific class of SPDEs driven by cylindrical stable processes. 
In Brzeźniak and Zabczyk [@BrzZab] the authors study the path-regularity of an Ornstein-Uhlenbeck process driven by a cylindrical Lévy process obtained by subordinating a cylindrical Wiener process. The purpose of this paper is to begin a systematic study of cylindrical Lévy processes in Banach spaces with particular emphasis on stochastic integration and applications to SPDEs. It can be seen as a successor to an earlier paper by one of us (see Riedle [@riedle]) in which some aspects of this programme were carried out for cylindrical Wiener processes. The organisation of the paper is as follows. In section 2 we review key concepts of cylindrical probability, introduce the cylindrical version of infinite divisibility and obtain the corresponding Lévy-Khintchine formula. In section 3 we introduce cylindrical Lévy processes and describe their Lévy-Itô decomposition. An impediment to developing the theory along standard lines is that the noise terms in this formula depend non-linearly on vectors in the dual space to our Banach space. In particular this makes the “large jumps” term difficult to handle. To overcome these problems we restrict ourselves to the case where the cylindrical Lévy process is square-integrable with a well-behaved covariance operator. This enables us to develop the theory along similar lines to that used for cylindrical Wiener processes as in Riedle [@riedle] and to find a series representation for the cylindrical Lévy process in a reproducing kernel Hilbert space that is determined by the covariance operator. This is described in section 4 of this paper where we also utilise this series expansion to define stochastic integrals of suitable predictable processes. Finally, in section 5 we consider SPDEs driven by additive cylindrical Lévy noise.
In the more familiar context of SPDEs driven by legitimate Lévy processes in Hilbert space, it is well known that the weak solution of this equation is an Ornstein-Uhlenbeck process and the investigation of these processes has received a lot of attention in the literature (see e.g. Chojnowska-Michalik [@ChoMich87], Applebaum [@Dave] and references therein). In our case we require that the initial condition is a cylindrical random variable and so we are able to construct cylindrical Ornstein-Uhlenbeck processes as weak solutions to our SPDE. We study the Markov property (in the cylindrical sense) of the solution and also find conditions for there to be a unique invariant cylindrical measure. Finally, we give a condition under which the Ornstein-Uhlenbeck process is “radonified”, i.e. it is a stochastic process in the usual sense.\ Notation and Terminology: $\operatorname{{{\mathbbm}R}_+}:=[0,\infty)$. The Borel $\sigma$-algebra of a topological space $T$ is denoted by $\operatorname{{\mathcal B}}(T)$. By a Lévy process in a Banach space we will always mean a stochastic process starting at zero (almost surely) that has stationary and independent increments and is stochastically continuous. We do not require that almost all paths are necessarily [càdlàg]{} i.e. right continuous with left limits. Cylindrical measures ==================== Let $U$ be a Banach space with dual $U^\ast$. The dual pairing is denoted by ${\langle u,a\rangle}$ for $u\in U$ and $a\in U^\ast$. For each $n\in\operatorname{{{\mathbbm}N}}$, let $U^{\ast n}$ denote the set of all $n$-tuples of vectors from $U^\ast$. It is a real vector space under pointwise addition and scalar multiplication and a Banach space with respect to the “Euclidean-type” norm ${\left\lVert \operatorname{a_{(n)}}\right\rVert}^2:= \sum_{k=1}^n {\left\lVert a_k \right\rVert}^2$, where $\operatorname{a_{(n)}}=(a_1,\dots, a_n)\in U^{\ast n}$. Clearly $U^{\ast n}$ is separable if $U^\ast$ is. 
For each $\operatorname{a_{(n)}}=(a_1,\dots, a_n)\in U^{\ast n}$ we define a linear map $$\begin{aligned} \pi_{a_1,\dots, a_n}:U\to \operatorname{{{\mathbbm}R}}^n,\qquad \pi_{a_1,\dots, a_n}(u)=({\langle u,a_1\rangle},\dots,{\langle u,a_n\rangle}).\end{aligned}$$ We often use the notation $\pi_{\operatorname{a_{(n)}}}:=\pi_{a_1,\dots, a_n}$; in particular, when $n=1$ and $\operatorname{a_{(1)}}=a \in U^\ast$ we simply write $\pi_a$, identifying the map $\pi_a$ with the functional $a$ itself, so that for instance $\mu\circ a^{-1}$ denotes $\mu\circ \pi_a^{-1}$. It is easily verified that for each $\operatorname{a_{(n)}}=(a_1,\dots, a_n)\in U^{\ast n}$ the map $\pi_{\operatorname{a_{(n)}}}$ is bounded with ${\left\lVert \pi_{\operatorname{a_{(n)}}} \right\rVert}{\leqslant}{\left\lVert \operatorname{a_{(n)}}\right\rVert}$. The Borel $\sigma$-algebra in $U$ is denoted by $\operatorname{{\mathcal B}}(U)$. Let $\Gamma$ be a subset of $U^\ast$. Sets of the form $$\begin{aligned} Z(a_1,\dots ,a_n;B)&:= \{u\in U:\, ({\langle u,a_1\rangle},\dots, {\langle u,a_n\rangle})\in B\}\\ &= \pi^{-1}_{a_1,\dots, a_n}(B),\end{aligned}$$ where $a_1,\dots, a_n\in \Gamma$ and $B\in \operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$ are called [*cylindrical sets*]{}. The set of all cylindrical sets is denoted by $\operatorname{{\mathcal Z}}(U,\Gamma)$ and it is an algebra. The generated $\sigma$-algebra is denoted by $\operatorname{{\mathcal C}}(U,\Gamma)$ and it is called the [*cylindrical $\sigma$-algebra with respect to $(U,\Gamma)$*]{}. If $\Gamma=U^\ast$ we write $\operatorname{{\mathcal Z}}(U):=\operatorname{{\mathcal Z}}(U,\Gamma)$ and $\operatorname{{\mathcal C}}(U):=\operatorname{{\mathcal C}}(U,\Gamma)$. From now on we will assume that $U$ is separable and note that in this case, the Borel $\sigma$-algebra $\operatorname{{\mathcal B}}(U)$ and the cylindrical $\sigma$-algebra $\operatorname{{\mathcal C}}(U)$ coincide.
The following lemma shows that for a finite subset $\Gamma\subseteq U^\ast$ the algebra $\operatorname{{\mathcal Z}}(U,\Gamma)$ is a $\sigma$-algebra and it gives a generator in terms of a generator of the Borel $\sigma$-algebra $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$; here we recall that a generator of a $\sigma$-algebra ${\mathfrak E}$ in a space $X$ is a set $E$ in the power set of $X$ such that the smallest $\sigma$-algebra containing $E$ is ${\mathfrak E}$. \[le.generatorcyl\] If $\Gamma=\{a_1,\dots, a_n\}\subseteq U^\ast$ is finite we have $$\begin{aligned} \operatorname{{\mathcal C}}(U,\Gamma)=\operatorname{{\mathcal Z}}(U,\Gamma)=\sigma(\{Z(a_1,\dots, a_n;B):\, B\in {\mathcal F}\}), \end{aligned}$$ where ${\mathcal F}$ is an arbitrary generator of $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$. Since for any $a_{i_1},\dots, a_{i_k}\in \Gamma$, $k\in\{1,\dots, n\}$, and $B\in \operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^k)$ we have $$\begin{aligned} Z(a_{i_1},\dots, a_{i_k};B) =Z(a_1,\dots, a_n; \tilde{B})\end{aligned}$$ for a suitable extension $\tilde{B}\in\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$ of $B$, it follows that $$\begin{aligned} \operatorname{{\mathcal Z}}(U,\Gamma)&=\{Z(a_{i_1},\dots, a_{i_k};B):\, a_{i_1},\dots, a_{i_k}\in \Gamma, B\in \operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^k), k\in \{1,\dots, n\}\}\\ &=\{Z(a_1,\dots, a_n; \tilde{B}):\, \tilde{B}\in \operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)\}\\ &=\{\pi_{a_1,\dots, a_n}^{-1}(\tilde{B}):\,\tilde{B}\in \operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)\}\\ &=\pi_{a_1,\dots, a_n}^{-1}(\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)).\end{aligned}$$ The last family of sets is known to be a $\sigma$-algebra, which verifies that $\operatorname{{\mathcal C}}(U,\Gamma)=\operatorname{{\mathcal Z}}(U,\Gamma)$.
Moreover, we have for every generator ${\mathcal F}$ of $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$ that $$\begin{aligned} \pi_{a_1,\dots, a_n}^{-1}(\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)) =\pi_{a_1,\dots, a_n}^{-1}(\sigma({\mathcal F})) =\sigma(\pi_{a_1,\dots, a_n}^{-1}({\mathcal F})),\end{aligned}$$ which completes the proof. A function $\mu:\operatorname{{\mathcal Z}}(U)\to [0,\infty]$ is called a [*cylindrical measure on $\operatorname{{\mathcal Z}}(U)$*]{}, if for each finite subset $\Gamma\subseteq U^\ast$ the restriction of $\mu$ to the $\sigma$-algebra $\operatorname{{\mathcal C}}(U,\Gamma)$ is a measure. A cylindrical measure is called finite if $\mu(U)<\infty$ and a cylindrical probability measure if $\mu(U)=1$. For every function $f:U\to\operatorname{{{\mathbbm}C}}$ which is measurable with respect to $\operatorname{{\mathcal C}}(U,\Gamma)$ for a finite subset $\Gamma\subseteq U^\ast$ the integral $\int f(u)\,\mu(du)$ is well defined as a complex valued Lebesgue integral if it exists. In particular, the characteristic function ${\varphi}_\mu:U^\ast\to\operatorname{{{\mathbbm}C}}$ of a finite cylindrical measure $\mu$ is defined by $$\begin{aligned} {\varphi}_{\mu}(a):=\int_U e^{i{\langle u,a\rangle}}\,\mu(du)\qquad\text{for all }a\in U^\ast.\end{aligned}$$ For each $\operatorname{a_{(n)}}=(a_1,\dots, a_n)\in U^{\ast n}$ we obtain an image measure $\mu\circ\pi_{\operatorname{a_{(n)}}}^{-1}$ on $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$. Its characteristic function $ {\varphi}_{\mu\circ \pi_{\operatorname{a_{(n)}}}^{-1}}$ is determined by that of $\mu$: $$\begin{aligned} \label{eq.charimcyl} {\varphi}_{\mu\circ\pi_{\operatorname{a_{(n)}}}^{-1}}(\beta)= {\varphi}_{\mu}(\beta_1a_1+\cdots + \beta_n a_n)\end{aligned}$$ for all $\beta=(\beta_1,\dots, \beta_n)\in \operatorname{{{\mathbbm}R}}^n$. 
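As a concrete illustration of the projection formula for characteristic functions (the standard Gaussian example, recalled here and not specific to this paper): let $U$ be a separable Hilbert space, identified with its dual, and let $\gamma$ be the canonical Gaussian cylindrical measure, i.e. the cylindrical probability measure with ${\varphi}_\gamma(a)=\exp(-\tfrac12\|a\|^2)$. Then every finite-dimensional image of $\gamma$ is a bona fide Gaussian measure:

```latex
% Canonical Gaussian cylindrical measure on a separable Hilbert space U:
%   \varphi_\gamma(a) = \exp(-\|a\|^2/2), \quad a \in U.
% By the projection formula,
\varphi_{\gamma\circ\pi_{a_{(n)}}^{-1}}(\beta)
  = \varphi_{\gamma}(\beta_1 a_1+\cdots+\beta_n a_n)
  = \exp\Big(-\tfrac{1}{2}\,\beta^{T} G \beta\Big),
\qquad G_{ij} = \langle a_i, a_j\rangle,
% so each image measure is the Gaussian law N(0,G) on R^n. If dim U is infinite,
% gamma itself is nevertheless not sigma-additive, since its covariance
% (the identity operator) is not trace-class.
```

This is exactly the situation described in the introduction: all projections are genuine probability measures even though the cylindrical measure itself is only finitely additive.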
If $\mu_{1}$ and $\mu_{2}$ are cylindrical probability measures on $U$ their convolution is the cylindrical probability measure defined by $$(\mu_{1} * \mu_{2})(A) = \int_{U} 1_{A}(x + y)\mu_{1}(dx)\mu_{2}(dy),$$ for each $A \in \operatorname{{\mathcal Z}}(U)$. Indeed if $A = \pi_{a_{(n)}}^{-1}(B)$ for some ${n \in {\mathbbm}{N}}, a_{(n)} \in U^{\ast n}, B \in {\cal B}({{\mathbbm}{R}^{n}})$, then it is easily verified that $$\label{conv} (\mu_{1} * \mu_{2})(A) = (\mu_{1} \circ \pi_{a_{(n)}}^{-1}) * (\mu_{2} \circ \pi_{a_{(n)}}^{-1})(B).$$ A standard calculation yields ${\varphi}_{\mu_{1} * \mu_{2}} = {\varphi}_{\mu_{1}}{\varphi}_{\mu_{2}}$. For more information about convolution of cylindrical probability measures, see [@Ros]. The $n$-times convolution of a cylindrical probability measure $\mu$ with itself is denoted by $\mu^{\ast n}$. \[de.infdiv\] A cylindrical probability measure $\mu$ on $\operatorname{{\mathcal Z}}(U)$ is called [ *infinitely divisible*]{} if for all $n\in\operatorname{{{\mathbbm}N}}$ there exists a cylindrical probability measure $\mu^{1/n}$ such that $\mu=\left(\mu^{1/n}\right)^{\ast n}$. 
It follows that a cylindrical probability measure $\mu$ with characteristic function ${\varphi}_\mu$ is infinitely divisible if and only if for all $n\in\operatorname{{{\mathbbm}N}}$ there exists a characteristic function ${\varphi}_{\mu^{1/n}}$ of a cylindrical probability measure $\mu^{1/n}$ such that $$\begin{aligned} {\varphi}_\mu(a)=\left({\varphi}_{\mu^{1/n}}(a)\right)^n\qquad\text{for all }a\in U^\ast.\end{aligned}$$ Combined with (\[eq.charimcyl\]), this relation implies that for every $\operatorname{a_{(n)}}\in U^{\ast n}$ and $\beta=(\beta_1,\dots, \beta_n)\in\operatorname{{{\mathbbm}R}}^n$ we have $$\begin{aligned} {\varphi}_{\mu\circ \pi_{\operatorname{a_{(n)}}}^{-1}}(\beta) &={\varphi}_\mu(\beta_1 a_1+\dots +\beta_n a_n)\\ &= \left({\varphi}_{\mu^{1/n}}(\beta_1 a_1+\dots +\beta_n a_n)\right)^n\\ &= \left({\varphi}_{\mu^{1/n}\circ \pi_{\operatorname{a_{(n)}}}^{-1}}(\beta)\right)^n.\end{aligned}$$ Thus, every image measure $\mu\circ \pi_{\operatorname{a_{(n)}}}^{-1}$ of an infinitely divisible cylindrical measure $\mu$ is an infinitely divisible probability measure on $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$. A probability measure $\mu$ on $\operatorname{{\mathcal B}}(U)$ is called infinitely divisible if for each $n\in\operatorname{{{\mathbbm}N}}$ there exists a measure $\mu^{1/n}$ on $\operatorname{{\mathcal B}}(U)$ such that $\mu=(\mu^{1/n})^{\ast n}$ (see e.g. Linde [@Linde], section 5.1). Consequently, every infinitely divisible probability measure on $\operatorname{{\mathcal B}}(U)$ is also an infinitely divisible cylindrical probability measure on $\operatorname{{\mathcal Z}}(U)$.
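For instance (a standard fact, stated only for illustration), a Gaussian cylindrical measure $\gamma_Q$ with characteristic function $\varphi_{\gamma_Q}(a)=\exp(-\tfrac12\langle Qa,a\rangle)$, where $Q$ is a positive, symmetric, bounded operator, is infinitely divisible, with $n$-th convolution root again Gaussian:

```latex
% Infinite divisibility of the Gaussian cylindrical measure gamma_Q:
\varphi_{\gamma_Q}(a)
  = \exp\left(-\tfrac{1}{2}\langle Qa,a\rangle\right)
  = \left(\exp\left(-\tfrac{1}{2n}\langle Qa,a\rangle\right)\right)^{n}
  = \left(\varphi_{\gamma_{Q/n}}(a)\right)^{n},
% hence gamma_Q = (gamma_{Q/n})^{*n}, i.e. gamma_Q^{1/n} = gamma_{Q/n}.
```

Note that this argument uses only characteristic functions, so it applies whether or not $\gamma_Q$ is $\sigma$-additive.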
Because $\mu\circ a^{-1}$ is an infinitely divisible probability measure on $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}})$ the Lévy-Khintchine formula in $\operatorname{{{\mathbbm}R}}$ implies that for every $a\in U^\ast$ there exist some constants $\beta_a\in\operatorname{{{\mathbbm}R}}$ and $\sigma_a\in\operatorname{{{\mathbbm}R}_+}$ and a Lévy measure $\nu_a$ on $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}})$ such that $$\begin{aligned} \label{eq.Levy-Khin-one} {\varphi}_{\mu}(a)={\varphi}_{\mu\circ a^{-1}}(1) = \exp\left(i\beta_a -\tfrac{1}{2} \sigma_a^2 + \int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}}\left(e^{i\gamma }-1-i\gamma \operatorname{\mathbbm 1}_{B_1}(\gamma)\right) \,\nu_a(d\gamma)\right),\end{aligned}$$ where $B_1:=\{\beta\in\operatorname{{{\mathbbm}R}}:\, {\left\lvert \beta \right\rvert}{\leqslant}1\}$. A priori all parameters in the characteristics of the image measure $\mu\circ a^{-1}$ depend on the functional $a\in U^\ast$. The following result sharpens this representation. \[th.cyllevymeasure\] Let $\mu$ be a cylindrical probability measure on $\operatorname{{\mathcal Z}}(U)$. If $\mu$ is infinitely divisible then there exists a cylindrical measure $\nu$ on $\operatorname{{\mathcal Z}}(U)$ such that the representation (\[eq.Levy-Khin-one\]) is satisfied with $$\begin{aligned} \nu_a=\nu\circ a^{-1} \qquad\text{for all }a\in U^\ast.\end{aligned}$$ Fix $\operatorname{a_{(n)}}=(a_1,\dots ,a_n)\in U^{\ast n}$ and let $ \nu_{a_1,\dots,a_n}$ denote the Lévy measure on $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$ of the infinitely divisible measure $\mu\circ\pi_{a_1,\dots, a_n}^{-1}$.
Define the family of cylindrical sets $$\begin{aligned} {\mathcal G}:=\{Z(a_1,\dots,a_n;B):\,a_1,\dots, a_n\in U^\ast, n\in\operatorname{{{\mathbbm}N}}, B\in {\mathcal F}_{\operatorname{a_{(n)}}}\},\end{aligned}$$ where $$\begin{aligned} {\mathcal F}_{\operatorname{a_{(n)}}}:=\{(\alpha,\beta]\subseteq \operatorname{{{\mathbbm}R}}^n: \,\nu_{a_1,\dots, a_n}(\partial (\alpha,\beta])= 0,\, 0\notin [\alpha,\beta]\}\end{aligned}$$ and $\partial(\alpha,\beta]$ denotes the boundary of the $n$-dimensional interval $$\begin{aligned} (\alpha,\beta]:=\{v=(v_1,\dots, v_n)\in \operatorname{{{\mathbbm}R}}^n: \alpha_i <v_i{\leqslant}\beta_i,\; i=1,\dots, n\}\end{aligned}$$ for $\alpha=(\alpha_1,\dots, \alpha_n)\in\operatorname{{{\mathbbm}R}}^n, \beta=(\beta_1,\dots, \beta_n)\in\operatorname{{{\mathbbm}R}}^n$. Our proof relies on the relation $$\begin{aligned} \label{eq.limmunu} \lim_{t_k\to 0}\frac{1}{t_k}\int_{\operatorname{{{\mathbbm}R}}^n} \operatorname{\mathbbm 1}_B(\gamma)\,(\mu\circ\pi_{a_1,\dots, a_n}^{-1})^{\ast t_k}(d\gamma) =\int_{\operatorname{{{\mathbbm}R}}^n}\operatorname{\mathbbm 1}_B(\gamma)\,\nu_{a_1,\dots, a_n}(d\gamma).\end{aligned}$$ for all sets $B\in {\mathcal F}_{\operatorname{a_{(n)}}}$. This can be deduced from Corollary 2.8.9. in [@sato] which states that $$\begin{aligned} \label{eq.sato} \lim_{t_k\to 0}\frac{1}{t_k}\int_{\operatorname{{{\mathbbm}R}}^n} f(\gamma)\,(\mu\circ\pi_{a_1,\dots, a_n}^{-1})^{\ast t_k}(d\gamma) =\int_{\operatorname{{{\mathbbm}R}}^n}f(\gamma)\,\nu_{a_1,\dots, a_n}(d\gamma)\end{aligned}$$ for all bounded and continuous functions $f:\operatorname{{{\mathbbm}R}}^n\to\operatorname{{{\mathbbm}R}}$ which vanish on a neighborhood of $0$. The relation (\[eq.limmunu\]) can be seen in the following way: let $B=(\alpha,\beta]$ be a set in ${\mathcal F}_{\operatorname{a_{(n)}}}$ for $\alpha,\beta\in \operatorname{{{\mathbbm}R}}^n$. 
Because $0\notin \bar{B}$ there exists ${\varepsilon}>0$ such that $0\notin [\alpha-{\varepsilon},\beta+{\varepsilon}]$ where $\alpha-{\varepsilon}:=(\alpha_1-{\varepsilon}, \dots, \alpha_n-{\varepsilon})$ and $\beta+{\varepsilon}:=(\beta_1+{\varepsilon}, \dots, \beta_n+{\varepsilon})$. Define for $i=1,\dots, n$ the functions $g_i:\operatorname{{{\mathbbm}R}}\to [0,1]$ by $$\begin{aligned} g_i(c)=\left(1-\tfrac{(\alpha_i-c)}{{\varepsilon}}\right)\operatorname{\mathbbm 1}_{(\alpha_i-{\varepsilon},\alpha_i]}(c) + \operatorname{\mathbbm 1}_{(\alpha_i,\beta_i]}(c)+ \left(1-\tfrac{(c-\beta_i)}{{\varepsilon}}\right)\operatorname{\mathbbm 1}_{(\beta_i,\beta_i+{\varepsilon}]}(c),\end{aligned}$$ and interpolate the function $\gamma\mapsto \operatorname{\mathbbm 1}_{(\alpha,\beta]}(\gamma)$ for $\gamma=(\gamma_1,\dots, \gamma_n)$ by $$\begin{aligned} f((\gamma_1,\dots, \gamma_n)):= g_1(\gamma_1)\cdot \ldots \cdot g_n (\gamma_n).\end{aligned}$$ Because $\operatorname{\mathbbm 1}_B {\leqslant}f {\leqslant}\operatorname{\mathbbm 1}_{(\alpha-{\varepsilon},\beta+{\varepsilon}]}$ we have $$\begin{aligned} \frac{1}{t_k}\int_{\operatorname{{{\mathbbm}R}}^n} \operatorname{\mathbbm 1}_B(\gamma)\,(\mu\circ\pi_{a_1,\dots, a_n}^{-1})^{\ast t_k}(d\gamma) {\leqslant}\frac{1}{t_k}\int_{\operatorname{{{\mathbbm}R}}^n} f(\gamma)\,(\mu\circ\pi_{a_1,\dots, a_n}^{-1})^{\ast t_k}(d\gamma)\end{aligned}$$ and $$\begin{aligned} \int_{\operatorname{{{\mathbbm}R}}^n} f(\gamma)\,\nu_{a_1,\dots, a_n}(d\gamma) {\leqslant}\int_{\operatorname{{{\mathbbm}R}}^n} \operatorname{\mathbbm 1}_{(\alpha-{\varepsilon},\beta+{\varepsilon}]}(\gamma)\,\nu_{a_1,\dots, a_n}(d\gamma) =\nu_{a_1,\dots, a_n}((\alpha-{\varepsilon},\beta+{\varepsilon}]).\end{aligned}$$ Since $f$ is bounded, continuous and vanishes on a neighborhood of $0$, it follows from (\[eq.sato\]) that $$\begin{aligned} \label{eq.limsupapp} \limsup_{t_k\to 0} \frac{1}{t_k}\int_{\operatorname{{{\mathbbm}R}}^n} \operatorname{\mathbbm 1}_{B}(\gamma)\,(\mu\circ\pi_{a_1,\dots, a_n}^{-1})^{\ast t_k}(d\gamma) {\leqslant}\nu_{a_1,\dots, a_n}((\alpha-{\varepsilon},\beta+{\varepsilon}]).\end{aligned}$$ By considering $(\alpha+{\varepsilon}, \beta-{\varepsilon}]\subseteq (\alpha,\beta]$ we obtain similarly that $$\begin{aligned} \label{eq.liminfapp} \nu_{a_1,\dots, a_n}((\alpha+{\varepsilon},\beta-{\varepsilon}]) {\leqslant}\liminf_{t_k\to 0} \frac{1}{t_k}\int_{\operatorname{{{\mathbbm}R}}^n} \operatorname{\mathbbm 1}_{B}(\gamma)\,(\mu\circ\pi_{a_1,\dots, a_n}^{-1})^{\ast t_k}(d\gamma).\end{aligned}$$ Letting ${\varepsilon}\to 0$ and using $\nu_{a_1,\dots,a_n}(\partial B)=0$, the inequalities (\[eq.limsupapp\]) and (\[eq.liminfapp\]) imply (\[eq.limmunu\]). Now we define a set function $$\begin{aligned} \nu:\operatorname{{\mathcal Z}}(U) \to [0,\infty], \qquad \nu(Z(a_1,\dots,a_n;B)):=\nu_{a_1,\dots, a_n}(B).\end{aligned}$$ First, we show that $\nu$ is well defined. For $Z(a_1,\dots,a_n;B)\in {\mathcal G}$, equation (\[eq.limmunu\]) allows us to conclude that $$\begin{aligned} \nu(Z(a_1,\dots,a_n;B)) &= \lim_{t_k\to 0} \frac{1}{t_k} \int_{\operatorname{{{\mathbbm}R}}^n} \operatorname{\mathbbm 1}_{B}(\gamma)\,(\mu\circ \pi_{a_1,\dots, a_n}^{-1})^{\ast t_k} (d\gamma)\\ &= \lim_{t_k\to 0} \frac{1}{t_k} \int_{\operatorname{{{\mathbbm}R}}^n} \operatorname{\mathbbm 1}_{B}(\gamma)\,(\mu^{\ast t_k}\circ \pi_{a_1,\dots, a_n}^{-1}) (d\gamma)\\ &= \lim_{t_k\to 0} \frac{1}{t_k} \int_{U} \operatorname{\mathbbm 1}_{B}(\pi_{a_1,\dots, a_n}(u))\, \mu^{\ast t_k}(du)\\ &= \lim_{t_k\to 0} \frac{1}{t_k} \mu^{\ast t_k}(Z(a_1,\dots, a_n;B)).\end{aligned}$$ It follows for two sets in ${\mathcal G}$ with $Z(a_1,\dots,a_n;B)=Z(b_1,\dots, b_m;C)$ that $$\begin{aligned} \nu(Z(a_1,\dots,a_n;B))=\nu(Z(b_1,\dots, b_m;C)),\end{aligned}$$ which verifies that $\nu$ is well defined on ${\mathcal G}$.
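As a numerical aside (our own sketch, not part of the formal argument), the interpolation functions $g_i$ and the sandwich $\operatorname{\mathbbm 1}_B {\leqslant}f {\leqslant}\operatorname{\mathbbm 1}_{(\alpha-{\varepsilon},\beta+{\varepsilon}]}$ can be checked pointwise on a grid; the names `g`, `f_interp`, `ind_box` and the toy box are our own choices:

```python
import numpy as np

def g(c, a_i, b_i, eps):
    """Piecewise-linear interpolation g_i of the indicator of (a_i, b_i]:
    ramps up on (a_i-eps, a_i], equals 1 on (a_i, b_i], ramps down on
    (b_i, b_i+eps], and vanishes elsewhere."""
    if a_i - eps < c <= a_i:
        return 1.0 - (a_i - c) / eps
    if a_i < c <= b_i:
        return 1.0
    if b_i < c <= b_i + eps:
        return 1.0 - (c - b_i) / eps
    return 0.0

def f_interp(gamma, alpha, beta, eps):
    """f(gamma) = g_1(gamma_1) * ... * g_n(gamma_n)."""
    return float(np.prod([g(c, a, b, eps) for c, a, b in zip(gamma, alpha, beta)]))

def ind_box(gamma, alpha, beta):
    """Indicator of the half-open box (alpha, beta]."""
    return float(all(a < c <= b for c, a, b in zip(gamma, alpha, beta)))

# sandwich check on a grid: 1_B <= f <= 1_{(alpha-eps, beta+eps]}
alpha, beta, eps = (0.5, 1.0), (2.0, 3.0), 0.25
for x in np.linspace(-1.0, 4.0, 41):
    for y in np.linspace(-1.0, 4.0, 41):
        lo = ind_box((x, y), alpha, beta)
        hi = ind_box((x, y), tuple(a - eps for a in alpha), tuple(b + eps for b in beta))
        assert lo <= f_interp((x, y), alpha, beta, eps) <= hi
```

Since $f$ is continuous and supported in the enlarged box, it is an admissible test function for (\[eq.sato\]) whenever the enlarged box avoids $0$.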
Having shown that $\nu$ is well defined on ${\mathcal G}$, we now demonstrate for fixed $\operatorname{a_{(n)}}=(a_1,\dots, a_n)\in U^{\ast n}$ that its restriction to the $\sigma$-algebra $\operatorname{{\mathcal Z}}(U,\{a_1,\dots, a_n\})$ is a measure, so that it yields a cylindrical measure on $\operatorname{{\mathcal Z}}(U)$. Define a set of $n$-dimensional intervals by $$\begin{aligned} {\mathcal H}:=\{(\alpha,\beta]\subseteq \operatorname{{{\mathbbm}R}}^n:\, 0\notin [\alpha,\beta]\}.\end{aligned}$$ Because $\nu_{a_1,\dots, a_n}$ is a $\sigma$-finite measure the set $$\begin{aligned} {\mathcal H}\setminus {\mathcal F}_{\operatorname{a_{(n)}}}= \{(\alpha,\beta]\in {\mathcal H}:\, \nu_{a_1,\dots, a_n}(\partial (\alpha, \beta])\neq 0\}\end{aligned}$$ is countable. Thus, the set ${\mathcal F}_{\operatorname{a_{(n)}}}$ generates the same $\sigma$-algebra as ${\mathcal H}$ because the countably many sets missing from ${\mathcal F}_{\operatorname{a_{(n)}}}$ can easily be approximated by sets in ${\mathcal F}_{\operatorname{a_{(n)}}}$. But ${\mathcal H}$ is known to be a generator of the Borel $\sigma$-algebra $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$ and so Lemma \[le.generatorcyl\] yields that $$\begin{aligned} {\mathcal G}_{\operatorname{a_{(n)}}}:=\{Z(a_1,\dots,a_n;B):\, B\in {\mathcal F}_{\operatorname{a_{(n)}}}\}\end{aligned}$$ generates $\operatorname{{\mathcal Z}}(U,\{a_1,\dots, a_n\})$. Furthermore, ${\mathcal G}_{\operatorname{a_{(n)}}}$ is a semi-ring because ${\mathcal F}_{\operatorname{a_{(n)}}}$ is a semi-ring. Secondly, $\nu$ restricted to ${\mathcal G}_{\operatorname{a_{(n)}}}$ is well defined and is a pre-measure.
Indeed, if $\{Z_k:=Z(a_1,\dots, a_n;B_k):\,k\in\operatorname{{{\mathbbm}N}}\}$ is a countable collection of disjoint sets in ${\mathcal G}_{\operatorname{a_{(n)}}}$ with $\cup Z_k\in {\mathcal G}_{\operatorname{a_{(n)}}}$ then the Borel sets $B_k$ are disjoint and it follows that $$\begin{aligned} \nu\left(\bigcup_{k{\geqslant}1} Z_k\right)&= \nu\left(\bigcup_{k{\geqslant}1} \pi_{a_1,\dots, a_n}^{-1}(B_k)\right) =\nu\left(\pi_{a_1,\dots, a_n}^{-1}\left(\bigcup_{k{\geqslant}1} B_k\right)\right)\\ & = \nu_{a_1,\dots, a_n}\left(\bigcup_{k{\geqslant}1} B_k\right) =\sum_{k=1}^\infty \nu_{a_1,\dots, a_n} (B_k) =\sum_{k=1}^\infty \nu(Z_k).\end{aligned}$$ Thus, $\nu$ restricted to ${\mathcal G}_{\operatorname{a_{(n)}}}$ is a pre-measure and because it is $\sigma$-finite it can be extended uniquely to a measure on $\operatorname{{\mathcal Z}}(U,\{a_1,\dots,a_n\})$ by Carathéodory’s extension theorem, which verifies that $\nu$ is a cylindrical measure on $\operatorname{{\mathcal Z}}(U)$. By the construction of the cylindrical measure $\nu$ in Theorem \[th.cyllevymeasure\] it follows that every image measure $\nu\circ \pi_{\operatorname{a_{(n)}}}^{-1}$ is a Lévy measure on $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$ for all $\operatorname{a_{(n)}}\in U^{\ast n}$. This motivates the following definition: \[de.cyllevy\] A cylindrical measure $\nu$ on $\operatorname{{\mathcal Z}}(U)$ is called a [ *cylindrical Lévy measure*]{} if for all $a_1,\dots, a_n\in U^ \ast$ and $n\in\operatorname{{{\mathbbm}N}}$ the measure $\nu\circ \pi_{a_1,\dots, a_n}^{-1}$ is a Lévy measure on $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$. Let $\nu$ be a Lévy measure on $\operatorname{{\mathcal B}}(U)$ (see [@Linde] for a definition). Then, if Definition \[de.cyllevy\] is sensible, $\nu$ should also be a [*cylindrical*]{} Lévy measure. We explain in the following why this is true.
According to Proposition 5.4.5 in [@Linde] the Lévy measure $\nu$ satisfies $$\begin{aligned} \label{eq.Lindeorg} \sup_{{\left\lVert a \right\rVert}{\leqslant}1}\int_{{\left\lVert u \right\rVert}{\leqslant}1} {\left\lvert {\langle u,a\rangle} \right\rvert}^2\, \nu(du)<\infty.\end{aligned}$$ This result can be generalised to $$\begin{aligned} \label{eq.Lindemod1} \sup_{{\left\lVert a \right\rVert}{\leqslant}1}\int_{\{u:{\left\lvert {\langle u,a\rangle} \right\rvert}{\leqslant}1\}} {\left\lvert {\langle u,a\rangle} \right\rvert}^2\, \nu(du)<\infty.\end{aligned}$$ Indeed, the result relies on Proposition 5.4.1 in [@Linde] which is based on Lemma 5.3.10 therein. In the latter the set $\{u: {\left\lVert u \right\rVert}{\leqslant}1\}$ can be replaced by the larger set $\{u: {\left\lvert {\langle u,a\rangle} \right\rvert}{\leqslant}1\}$ for $a\in U^\ast$ with ${\left\lVert a \right\rVert}{\leqslant}1$ because in the proof the inequality (line -10, page 72 in [@Linde]) $$\begin{aligned} 1-\cos t {\geqslant}\tfrac{t^2}{3}\qquad \text{for all }{\left\lvert t \right\rvert}{\leqslant}1,\end{aligned}$$ is applied for $t={\left\lVert u \right\rVert}$ while we apply it for $t={\left\lvert {\langle u,a\rangle} \right\rvert}$. Then we can follow the original proof in [@Linde] to obtain (\[eq.Lindemod1\]). From (\[eq.Lindemod1\]) it is easy to derive $$\begin{aligned} \label{eq.Lindemod2} \sup_{{\left\lVert a \right\rVert}{\leqslant}M}\int_{\{u:{\left\lvert {\langle u,a\rangle} \right\rvert}{\leqslant}N\}} {\left\lvert {\langle u,a\rangle} \right\rvert}^2\, \nu(du)<\infty\end{aligned}$$ for all $M,N{\geqslant}0$.
For arbitrary $\operatorname{a_{(n)}}=(a_1,\dots, a_n)\in U^{\ast n}$ and $B_n:=\{\beta\in\operatorname{{{\mathbbm}R}}^n:{\left\lvert \beta \right\rvert}{\leqslant}1\}$ we have that $$\begin{aligned} \pi^{-1}_{\operatorname{a_{(n)}}}(B_{n})= \{u: {\langle u,a_1\rangle}^2+\dots +{\langle u,a_n\rangle}^2{\leqslant}1\} &\subseteq \{u: \tfrac{1}{n}({\langle u,a_1\rangle}+\dots +{\langle u,a_n\rangle})^2{\leqslant}1\}\\ &= \{u: {\left\lvert {\langle u,(a_1+\dots +a_n)\rangle} \right\rvert}{\leqslant}\sqrt{n}\}=:D,\end{aligned}$$ where we used the inequality $(\gamma_1+\dots +\gamma_n)^2{\leqslant}n (\gamma_1^2+\dots +\gamma_n^2)$ for $\gamma_1,\dots, \gamma_n\in\operatorname{{{\mathbbm}R}}$. It follows from (\[eq.Lindemod2\]) that $$\begin{aligned} \int_{B_{n}}{\left\lvert \beta \right\rvert}^2\, (\nu\circ \pi_{\operatorname{a_{(n)}}}^{-1})(d\beta) =\sum_{k=1}^n \int_{\pi^{-1}_{\operatorname{a_{(n)}}}(B_{n})}{\left\lvert {\langle u,a_k\rangle} \right\rvert}^2\,\nu(du) {\leqslant}\sum_{k=1}^n \int_{D} {\left\lvert {\langle u,a_k\rangle} \right\rvert}^2\,\nu(du) <\infty.\end{aligned}$$ As a result, $\nu\circ \pi_{\operatorname{a_{(n)}}}^{-1}$ is a Lévy measure on $\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$ for every $\operatorname{a_{(n)}}\in U^{\ast n}$, that is, $\nu$ is a cylindrical Lévy measure. In the next section we will sharpen the structure of the Lévy-Khintchine formula for infinitely divisible cylindrical measures. It is appropriate to state the result at this juncture: \[co.leykhint\] Let $\mu$ be an infinitely divisible cylindrical probability measure. Then there exist a map ${r}:U^\ast\to\operatorname{{{\mathbbm}R}}$, a quadratic form $s:U^\ast\to \operatorname{{{\mathbbm}R}}$ and a cylindrical Lévy measure $\nu$ on $\operatorname{{\mathcal Z}}(U)$ such that: $$\begin{aligned} {\varphi}_{\mu}(a)= \exp\left( i {r}(a) -\tfrac{1}{2}s(a) +\int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}}\left(e^{i \gamma}-1-i\gamma \operatorname{\mathbbm 1}_{B_{1}}(\gamma) \right)(\nu\circ a^{-1})(d\gamma) \right)\end{aligned}$$ for all $a\in U^\ast$.
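Two elementary inequalities carry this passage: $1-\cos t\geqslant t^2/3$ for ${\left\lvert t \right\rvert}{\leqslant}1$ (used in the modification of [@Linde]) and $(\gamma_1+\dots+\gamma_n)^2\leqslant n(\gamma_1^2+\dots+\gamma_n^2)$, which forces the inclusion into $D$. A quick numerical sanity check (our own sketch, with arbitrary sample data):

```python
import math

import numpy as np

# 1 - cos(t) >= t^2 / 3 for all |t| <= 1
for t in np.linspace(-1.0, 1.0, 201):
    assert 1.0 - math.cos(t) >= t * t / 3.0 - 1e-15

# (g_1 + ... + g_n)^2 <= n * (g_1^2 + ... + g_n^2)  (Cauchy-Schwarz)
rng = np.random.default_rng(0)
for _ in range(1000):
    gam = rng.normal(size=5)
    assert gam.sum() ** 2 <= 5 * np.sum(gam * gam) + 1e-12

# hence: sum of squares <= 1 forces |g_1 + ... + g_n| <= sqrt(n)
gam = rng.normal(size=5)
gam = gam / np.linalg.norm(gam)          # now the sum of squares equals 1
assert abs(gam.sum()) <= math.sqrt(5) + 1e-12
```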
Cylindrical stochastic processes ================================ Let $(\Omega, {{\mathcal F}},P)$ be a probability space that is equipped with a filtration $\{{{\mathcal F}}_t\}_{t{\geqslant}0}$. Similarly to the correspondence between measures and random variables there is an analogous random object associated to cylindrical measures: \[de.cylrv\] A [*cylindrical random variable $Y$ in $U$*]{} is a linear map $$\begin{aligned} Y:U^\ast \to L^0(\Omega,{{\mathcal F}},P).\end{aligned}$$ A cylindrical process $X$ in $U$ is a family $(X(t):\,t{\geqslant}0)$ of cylindrical random variables in $U$. The characteristic function of a cylindrical random variable $X$ is defined by $$\begin{aligned} {\varphi}_X:U^\ast\to\operatorname{{{\mathbbm}C}}, \qquad {\varphi}_X(a)=E[\exp(i Xa)].\end{aligned}$$ The concepts of cylindrical measures and cylindrical random variables match perfectly. Indeed, if $Z=Z(a_1,\dots, a_n;B)$ is a cylindrical set for $a_1,\dots, a_n\in U^\ast$ and $B\in \operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$ we obtain a cylindrical probability measure $\mu$ by the prescription $$\begin{aligned} \label{eq.relcylmeas} \mu(Z):=P((Xa_1,\dots, Xa_n)\in B).\end{aligned}$$ We call $\mu$ the [*cylindrical distribution of $X$*]{} and the characteristic functions ${\varphi}_\mu$ and ${\varphi}_X$ of $\mu$ and $X$ coincide. Conversely for every cylindrical measure $\mu$ on $\operatorname{{\mathcal Z}}(U)$ there exists a probability space $(\Omega,{{\mathcal F}},P)$ and a cylindrical random variable $X:U^\ast\to L^0(\Omega,{{\mathcal F}},P)$ such that $\mu$ is the cylindrical distribution of $X$, see [@Vaketal VI.3.2]. 
By some abuse of notation we define for a cylindrical process $X=(X(t):\,t{\geqslant}0)$: $$\begin{aligned} X(t):U^{\ast n}\to L^0(\Omega,{{\mathcal F}},P; \operatorname{{{\mathbbm}R}}^n),\qquad X(t)(a_1,\dots, a_n):=(X(t)a_1,\dots,X(t)a_n).\end{aligned}$$ In this way, one obtains for fixed $(a_1,\dots, a_n)\in U^{\ast n}$ an $n$-dimensional stochastic process $$\begin{aligned} (X(t)(a_1,\dots, a_n):\, t{\geqslant}0).\end{aligned}$$ It follows from (\[eq.relcylmeas\]) that its marginal distribution is given by the image measure of the cylindrical distribution $\mu_t$ of $X(t)$: $$\begin{aligned} \label{eq.distmulticyl} P_{X(t)(a_1,\dots, a_n)}=\mu_t\circ \pi_{a_1,\dots, a_n}^{-1} \end{aligned}$$ for all $a_1,\dots, a_n\in U^{\ast}$. Combining (\[eq.distmulticyl\]) with the linearity of $X(t)$ shows that $$\begin{aligned} \label{eq.charmulti} {\varphi}_{X(t)(a_1,\dots, a_n)}(\beta_1,\dots, \beta_n) = {\varphi}_{X(t)(\beta_1a_1+ \dots +\beta_na_n)}(1)\end{aligned}$$ for all $\beta_1,\dots, \beta_n\in\operatorname{{{\mathbbm}R}}$ and $a_1,\dots, a_n\in U^\ast$.\ We now give the proof of Theorem \[co.leykhint\]. (of Theorem \[co.leykhint\]).\ Because of the one-dimensional Lévy-Khintchine formula, i.e. $$\begin{aligned} {\varphi}_{\mu}(a)={\varphi}_{\mu\circ a^{-1}}(1) = \exp\left(i\beta_a -\tfrac{1}{2} \sigma_a^2 + \int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}}\left(e^{i\gamma }-1-i\gamma \operatorname{\mathbbm 1}_{B_1}(\gamma)\right) \,\nu_a(d\gamma)\right),\end{aligned}$$ we have to show that ${\varphi}_{\mu\circ a^{-1}}$ is in the claimed form. Theorem \[th.cyllevymeasure\] implies that there exists a cylindrical Lévy measure $\nu$ such that $\nu_a=\nu\circ a^{-1}$ for each $a\in U^\ast$. By defining ${r}(a):=\beta_a$ it remains to show that the function $$\begin{aligned} s:U^\ast\to \operatorname{{{\mathbbm}R}_+},\qquad s(a):=\sigma_a^2 \end{aligned}$$ is a quadratic form. Let $X$ be a cylindrical random variable with distribution $\mu$. By the Lévy-Itô decomposition in $\operatorname{{{\mathbbm}R}}$ (see e.g.
Chapter 2 in [@Dave04]) it follows that $$\begin{aligned} \label{eq.li} Xa={r}(a) + \sigma_a W_a + \int_{0<{\left\lvert \beta \right\rvert}<1} \beta \,\tilde{N}_a(d\beta)+ \int_{{\left\lvert \beta \right\rvert}{\geqslant}1} \beta \,N_a(d\beta)\qquad\text{$P$-a.s.},\end{aligned}$$ where $W_a$ is a real valued centred Gaussian random variable with $EW_a^2=1$, $N_a$ is an independent Poisson random measure on $\operatorname{{{\mathbbm}R}}\setminus\{0\}$ and $\tilde{N}_a$ is the compensated Poisson random measure. By applying (\[eq.li\]) to $Xa$, $Xb$ and $X(a+b)$ for arbitrary $a,b\in U^\ast$ we obtain $$\begin{aligned} \sigma_{a+b}W_{a+b}&= \sigma_a W_a+ \sigma_b W_b\quad\text{$P$-a.s.}\label{eq.aux1}\end{aligned}$$ Similarly, for $\beta \in \operatorname{{{\mathbbm}R}}$ we have $$\begin{aligned} \sigma_{\beta a}W_{\beta a}&=\beta \sigma_{a} W_a\quad\text{$P$-a.s.}\label{eq.aux2}\end{aligned}$$ By squaring both sides of (\[eq.aux2\]) and then taking expectations it follows that the function $s$ satisfies $s(\beta a)=\beta^2 s(a)$. Similarly, one derives from (\[eq.aux1\]) that $\sigma_{a+b}^2=\sigma^2_a+\sigma_b^2 +2\rho(a,b)$, where $\rho(a,b):=\operatorname{Cov}(\sigma_a W_a,\sigma_bW_b)$. Equation (\[eq.aux1\]) yields for $c\in U^\ast$ $$\begin{aligned} \rho(a+c,b)&=\operatorname{Cov}( \sigma_{a+c} W_{a+c},\, \sigma_b W_b)\\ & =\operatorname{Cov}(\sigma_a W_a+ \sigma_c W_c,\,\sigma_b W_b)\\ & = \rho(a,b)+\rho(c,b),\end{aligned}$$ which implies together with properties of the covariance that $\rho$ is a bilinear form. Thus the function $$\begin{aligned} \label{eq.quadform1} Q:U^\ast\times U^\ast\to \operatorname{{{\mathbbm}R}},\qquad Q(a,b):=s(a+b)-s(a)-s(b)= 2\rho(a,b)\end{aligned}$$ is a bilinear form and $s$ is thus a quadratic form. The cylindrical process $X=(X(t):\,t{\geqslant}0)$ is called [*adapted to a given filtration $\{{{\mathcal F}}_t\}_{t{\geqslant}0}$*]{}, if $X(t)a$ is ${{\mathcal F}}_t$-measurable for all $t{\geqslant}0$ and all $a\in U^\ast$.
The cylindrical process $X$ is said to have [*weakly independent increments*]{} if for all $0{\leqslant}t_0<t_1<\dots <t_n$ and all $a_1,\dots, a_n\in U^\ast$ the random variables $$\begin{aligned} (X(t_1)-X(t_0))a_1,\dots , (X(t_n)-X(t_{n-1}))a_n\end{aligned}$$ are independent. \[de.cylLevy\] An adapted cylindrical process $(L(t):\,t{\geqslant}0)$ is called a [*weakly cylindrical Lévy process*]{} if 1. for all $a_1,\dots, a_n\in U^\ast$ and $n\in\operatorname{{{\mathbbm}N}}$ the stochastic process $\big(L(t)(a_1,\dots, a_n):\, t{\geqslant}0\big)$ is a Lévy process in $\operatorname{{{\mathbbm}R}}^n$. By Definition \[de.cylLevy\] the random variable $L(1)(a_1,\dots, a_n)$ is infinitely divisible for all $a_1,\dots, a_n\in U^\ast$ and equation (\[eq.distmulticyl\]) implies that the cylindrical distribution of $L(1)$ is an infinitely divisible cylindrical measure. \[ex.cylWiener\] An adapted cylindrical process $(W(t):\,t{\geqslant}0)$ in $U$ is called a [*weakly cylindrical Wiener process*]{}, if for all $a_1,\dots, a_n\in U^\ast$ and $n\in \operatorname{{{\mathbbm}N}}$ the $\operatorname{{{\mathbbm}R}}^n$-valued stochastic process $$\begin{aligned} \big(W(t)(a_1,\dots,a_{n}):\,t{\geqslant}0\big)\end{aligned}$$ is a Wiener process in $\operatorname{{{\mathbbm}R}}^n$. Here we call an adapted stochastic process $(X(t):\,t{\geqslant}0)$ in $\operatorname{{{\mathbbm}R}}^n$ a Wiener process if the increments $X(t)-X(s)$ are independent, stationary and normally distributed with expectation $E[X(t)-X(s)]=0$ and covariance $\operatorname{Cov}[X(t)-X(s),X(t)-X(s)]={\left\lvert t-s \right\rvert}C$ for a non-negative definite symmetric matrix $C$. If $C=\operatorname{Id}$ we call $X$ a [*standard*]{} Wiener process. Obviously, a weakly cylindrical Wiener process is an example of a weakly cylindrical Lévy process.
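Relation (\[eq.charmulti\]) is nothing but linearity of the cylindrical random variable. For a Gaussian model $Xa={\langle Z,a\rangle}$ with $Z\sim N(0,\Sigma)$ both sides are available in closed form, so the identity can be checked numerically (a sketch with toy data, not taken from the text):

```python
import numpy as np

Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])   # covariance of Z ~ N(0, Sigma)

def phi_multi(a_list, beta):
    """Left side of the relation: characteristic function of (Xa_1,...,Xa_n)
    at beta, via the Gaussian formula E exp(i c^T Z) = exp(-c^T Sigma c / 2)."""
    c = sum(bk * ak for bk, ak in zip(beta, a_list))
    return np.exp(-0.5 * c @ Sigma @ c)

def phi_single(a, s=1.0):
    """Right side: characteristic function of the scalar X(sum_k beta_k a_k) at s."""
    return np.exp(-0.5 * s * s * (a @ Sigma @ a))

a1, a2 = np.array([1.0, -1.0]), np.array([0.5, 2.0])
beta = (0.7, -0.4)
lhs = phi_multi([a1, a2], beta)
rhs = phi_single(beta[0] * a1 + beta[1] * a2)
assert np.isclose(lhs, rhs)
```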
The characteristic function of $W$ is given by $$\begin{aligned} {\varphi}_{W(t)}(a)=\exp\left(-\tfrac{1}{2}t s(a)\right),\end{aligned}$$ where $s:U^\ast\to\operatorname{{{\mathbbm}R}_+}$ is a quadratic form, see [@riedle] for more details on cylindrical Wiener processes. \[ex.cylpois\] Let $\zeta$ be an element in the algebraic dual $U^{\ast\prime}$, i.e. a linear function $\zeta:U^\ast\to \operatorname{{{\mathbbm}R}}$ which is not necessarily continuous. Then $$\begin{aligned} X:U^\ast\to L^0(\Omega,{{\mathcal F}},P),\qquad Xa:=\zeta(a)\end{aligned}$$ defines a cylindrical random variable. We call its cylindrical distribution $\mu$ a [*cylindrical Dirac measure in $\zeta$*]{}. It follows that $$\begin{aligned} {\varphi}_X(a)={\varphi}_\mu(a)=e^{i \zeta(a)}\qquad\text{for all }a \in U^\ast.\end{aligned}$$ We define the [*cylindrical Poisson process $(L(t):\,t{\geqslant}0)$*]{} by $$\begin{aligned} L(t)a:=\zeta(a)\, n(t) \qquad\text{for all }t{\geqslant}0,\end{aligned}$$ where $(n(t):\, t{\geqslant}0)$ is a real valued Poisson process with intensity $\lambda>0$. It turns out that the cylindrical Poisson process is another example of a weakly cylindrical Lévy process with characteristic function $$\begin{aligned} {\varphi}_{L(t)}(a)=\exp\left( \lambda t \left(e^{i\zeta(a)}-1\right)\right).\end{aligned}$$ \[ex.compcylpoisson\] Let $(Y_k:\,k\in\operatorname{{{\mathbbm}N}})$ be a sequence of cylindrical random variables each having cylindrical distribution $\rho$ and such that $\{Y_k a:\,k\in\operatorname{{{\mathbbm}N}}\}$ is independent for all $a\in U^\ast$. If $(n(t):\,t{\geqslant}0)$ is a real valued Poisson process of intensity $\lambda>0$ which is independent of $\{Y_k a:\,k\in\operatorname{{{\mathbbm}N}},\, a\in U^\ast\}$ then the [*cylindrical compound Poisson process*]{} $(L(t):\,t{\geqslant}0)$ is defined by $$\begin{aligned} L(t)a:=\begin{cases} 0, &\text{if }t=0,\\ Y_1a+\dots +Y_{n(t)}a, &\text{else,} \end{cases} \qquad\text{for all }a\in U^\ast. 
\end{aligned}$$ The cylindrical compound Poisson process is a weakly cylindrical Lévy process with $$\begin{aligned} {\varphi}_{L(t)}(a)=\exp\left(t\lambda\int_{U}\left(e^{i{\langle u,a\rangle}}-1\right) \,\rho(du)\right).\end{aligned}$$ Let $\rho$ be a L[é]{}vy measure on $\operatorname{{{\mathbbm}R}}$ and $\lambda$ be a positive measure on a set $O\subseteq \operatorname{{{\mathbbm}R}}^d$. In the monograph [@PesZab] by Peszat and Zabczyk an [*impulsive cylindrical process on $L^2(O,\operatorname{{\mathcal B}}(O),\lambda)$*]{} is introduced in the following way: let $\pi$ be the Poisson random measure on $[0,\infty)\times O\times \operatorname{{{\mathbbm}R}}$ with intensity measure $ds\,\lambda(d\xi)\,\rho(d\beta)$. Then for all measurable functions $f:O\to\operatorname{{{\mathbbm}R}}$ with compact support a random variable is defined by $$\begin{aligned} Z(t)f:=\int_0^t \int_O\int_{\operatorname{{{\mathbbm}R}}} f(\xi)\beta \,\tilde{\pi}(ds,d\xi,d\beta)\end{aligned}$$ in $L^2(\Omega,{{\mathcal F}},P)$ under the simplifying assumption that $$\begin{aligned} \int_{\operatorname{{{\mathbbm}R}}} \beta^2\, \rho(d\beta)<\infty.\end{aligned}$$ It turns out that the definition of $Z(t)$ can be extended to all $f$ in $L^2(O,\operatorname{{\mathcal B}}(O),\lambda)$ so that $Z=(Z(t):\,t {\geqslant}0)$ is a cylindrical process in the Hilbert space $L^2(O,\operatorname{{\mathcal B}}(O),\lambda)$. Moreover, $(Z(t)f:\, t{\geqslant}0)$ is a L[é]{}vy process for every $f\in L^2(O,\operatorname{{\mathcal B}}(O),\lambda)$ and $Z$ has the characteristic function $$\begin{aligned} \label{eq.charPes} {\varphi}_{Z(t)}(f)=\exp\left(t \int_O\int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \left(e^{i f(\xi)\beta} -1-i f(\xi)\beta\right) \,\rho(d\beta) \,\lambda(d\xi)\right),\end{aligned}$$ see Prop. 7.4 in [@PesZab]. To consider this example in our setting we set $U=L^2(O,\operatorname{{\mathcal B}}(O),\lambda)$ and identify $U^\ast$ with $U$.
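The characteristic functions of the cylindrical Poisson and compound Poisson processes in Examples \[ex.cylpois\] and \[ex.compcylpoisson\] can be recovered by conditioning on the number of jumps $n(t)$ and summing against the Poisson probability mass function; a sketch with toy parameters of our own choosing:

```python
import cmath
import math

lam, t = 0.7, 2.0

def poisson_pmf(k, mean):
    """P(n(t) = k) for a Poisson random variable with the given mean."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

# Example [ex.cylpois]: L(t)a = zeta(a) * n(t)
zeta_a = 1.3
phi_closed = cmath.exp(lam * t * (cmath.exp(1j * zeta_a) - 1))
phi_series = sum(poisson_pmf(k, lam * t) * cmath.exp(1j * zeta_a * k) for k in range(80))
assert abs(phi_closed - phi_series) < 1e-12

# Example [ex.compcylpoisson]: L(t)a = Y_1 a + ... + Y_{n(t)} a, where
# <Y_k, a> takes the values -1 or 2 (a toy stand-in for rho o a^{-1})
vals, probs = (-1.0, 2.0), (0.4, 0.6)
phi_Y = sum(p * cmath.exp(1j * v) for v, p in zip(vals, probs))
phi_formula = cmath.exp(t * lam * (phi_Y - 1))          # claimed formula
phi_cond = sum(poisson_pmf(k, lam * t) * phi_Y ** k for k in range(80))
assert abs(phi_formula - phi_cond) < 1e-12
```

Conditioning turns the sum over jumps into the generating-function identity $\sum_k e^{-\lambda t}(\lambda t)^k \varphi_Y^k/k! = e^{\lambda t(\varphi_Y-1)}$, which is exactly the stated formula.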
By the results mentioned above and if we assume weakly independent increments, Lemma \[le.weaklyind\] tells us that the cylindrical process $Z$ is a weakly cylindrical L[é]{}vy process in accordance with our Definition \[de.cylLevy\]. By Corollary \[co.leykhint\] it follows that there exists a cylindrical L[é]{}vy measure $\nu$ on $\operatorname{{\mathcal Z}}(U)$ such that $\nu\circ f^{-1}$ is the L[é]{}vy measure of $(Z(t)f:\, t{\geqslant}0)$ for all $f\in U^\ast$. But on the other hand, if we define a measure by $$\begin{aligned} \nu_f:\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}})\to [0,\infty], \qquad \nu_f(B):=\int_O\int_{\operatorname{{{\mathbbm}R}}} \operatorname{\mathbbm 1}_B(\beta f(\xi))\,\rho(d\beta) \lambda(d\xi)\end{aligned}$$ we can rewrite (\[eq.charPes\]) as $$\begin{aligned} {\varphi}_{Z(t)}(f) &= \exp\left(t \int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \left(e^{i \beta} -1-i \beta\right) \, \nu_f(d\beta) \right)\end{aligned}$$ and by the uniqueness of the characteristics of a Lévy process we see that $\nu_f=\nu\circ f^{-1}$ for all $f\in U^\ast$. A cylindrical process $(L(t):\,t{\geqslant}0)$ is induced by a stochastic process $(X(t):\,t{\geqslant}0)$ on $U$ if $$\begin{aligned} L(t)a={\langle X(t),a\rangle} \qquad\text{for all }a \in U^\ast. \end{aligned}$$ If $X$ is a Lévy process on $U$ then the induced process $L$ is a weakly cylindrical Lévy process with the same characteristic function as $X$. Our definition of a weakly cylindrical Lévy process is an obvious extension of the definition of finite-dimensional Lévy processes and is exactly in the spirit of cylindrical processes. The multidimensional formulation in Definition \[de.cylLevy\] would already be necessary to define a finite-dimensional Lévy process by this approach and it allows us to conclude that a weakly cylindrical Lévy process has weakly independent increments.
The latter property is exactly what is needed in addition to a one-dimensional formulation: \[le.weaklyind\] For an adapted cylindrical process $L=(L(t):\,t{\geqslant}0)$ the following are equivalent: (a) $L$ is a weakly cylindrical Lévy process; (b) (i) $L$ has weakly independent increments; (ii) $(L(t)a:\,t{\geqslant}0)$ is a Lévy process for all $a\in U^\ast$. We have only to show that (b) implies (a) for which we fix some $a_1,\dots, a_n\in U^\ast$. Because (\[eq.charmulti\]) implies that the characteristic functions satisfy $$\begin{aligned} {\varphi}_{(L(t)-L(s))(a_1,\dots, a_n)}(\beta) = {\varphi}_{(L(t)-L(s))(\beta_1a_1+\dots +\beta_na_n)}(1)\end{aligned}$$ for all $\beta=(\beta_1,\dots, \beta_n)\in\operatorname{{{\mathbbm}R}}^n$ the condition (ii) implies that the increments of $((L(t)a_1,\dots, L(t)a_n):\, t{\geqslant}0)$ are stationary. The assumption (i) implies that $$\begin{aligned} (L(t_1)-L(t_0))a_{k_1},\dots, (L(t_n)-L(t_{n-1}))a_{k_n}\end{aligned}$$ are independent for all $k_1,\dots, k_n\in\{1,\dots, n\}$ and all $0{\leqslant}t_0 < \dots < t_n$. It follows that the $n$-dimensional random variables $$\begin{aligned} (L(t_1)-L(t_0))(a_1,\dots, a_n), \dots, (L(t_n)-L(t_{n-1}))(a_1,\dots, a_n)\end{aligned}$$ are independent which shows the independence of the increments of $(L(t)(a_1,\dots, a_n):\,t{\geqslant}0)$. The stochastic continuity follows by the following estimate, where we use $|\cdot|_{n}$ to denote the Euclidean norm in $\operatorname{{{\mathbbm}R}}^{n}$ and $c>0$: $$\begin{aligned} P(|(L(t)a_{1}, \ldots, L(t)a_{n})|_{n} > c)=P\left(|L(t)a_1|^{2}+\cdots + |L(t)a_n|^{2}>c^{2}\right) {\leqslant}\sum_{k=1}^n P\left( |L(t)a_k| >\tfrac{c}{\sqrt{n}}\right),\end{aligned}$$ which completes the proof. Because $(L(t)a:\, t{\geqslant}0)$ is a one-dimensional Lévy process, we may take a [càdlàg]{} version (see e.g. Chapter 2 of [@Dave04]). Then for every $a\in U^\ast$ the one-dimensional Lévy-Itô decomposition implies $P$-a.s.
$$\begin{aligned} \label{eq.levy-ito} L(t)a=\zeta_a t + \sigma_a W_a(t) + \int_{0<{\left\lvert \beta \right\rvert}{\leqslant}1} \beta \,\tilde{N}_a(t,d\beta)+ \int_{{\left\lvert \beta \right\rvert}> 1} \beta \,N_a(t,d\beta), \end{aligned}$$ where $\zeta_a\in\operatorname{{{\mathbbm}R}}$, $\sigma_a{\geqslant}0$, $(W_a(t):\,t{\geqslant}0)$ is a real valued standard Wiener process and $N_a$ is the Poisson random measure defined by $$\begin{aligned} N_a(t,B)= \sum_{0{\leqslant}s{\leqslant}t} \operatorname{\mathbbm 1}_B(\Delta L(s)a)\qquad\text{for }B\in \operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}\setminus\{0\}),\end{aligned}$$ where $\Delta f(s):=f(s)-f(s-)$ for any [càdlàg]{} function $f:\operatorname{{{\mathbbm}R}}\to\operatorname{{{\mathbbm}R}}$. The Poisson random measure $N_a$ gives rise to the Lévy measure $\nu_a$ by $$\begin{aligned} \nu_a(B):=E[N_a(1,B)] \qquad\text{for }B\in \operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}\setminus\{0\}).\end{aligned}$$ The compensated Poisson random measure $\tilde{N}_a$ is then defined by $$\begin{aligned} \tilde{N}_a(t,B):=N_a(t,B)-t\nu_a(B).\end{aligned}$$ Note that all terms in the sum on the right hand side of (\[eq.levy-ito\]) are independent for each fixed $a\in U^\ast$. Combining (\[eq.levy-ito\]) with the Lévy-Khintchine formula in Theorem \[co.leykhint\] yields that $$\begin{aligned} \zeta_a&={r}(a), \qquad \sigma_a^2=s(a) \qquad\text{and} \quad\nu_a=\nu\circ a^{-1}\end{aligned}$$ for all $a\in U^\ast$, where ${r}$, $s$ and $\nu$ are the characteristics associated to the infinitely divisible cylindrical distribution of $L(1)$.
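The stochastic-continuity estimate in the proof of Lemma \[le.weaklyind\] above rests on a pointwise fact: if the Euclidean norm of $x\in\operatorname{{{\mathbbm}R}}^n$ exceeds $c$, then some coordinate exceeds $c/\sqrt{n}$ in modulus. A quick randomized check (our own sketch):

```python
import numpy as np

def implication_holds(x, c):
    """If |x|_n > c then max_k |x_k| > c/sqrt(n) -- the pointwise fact behind
    P(|(L(t)a_1,...,L(t)a_n)|_n > c) <= sum_k P(|L(t)a_k| > c/sqrt(n))."""
    n = len(x)
    if np.linalg.norm(x) > c:
        return np.max(np.abs(x)) > c / np.sqrt(n)
    return True  # hypothesis is empty, nothing to check

rng = np.random.default_rng(1)
assert all(implication_holds(rng.normal(size=4), 1.0) for _ in range(2000))
assert all(implication_holds(rng.uniform(-3, 3, size=7), 2.5) for _ in range(2000))
```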
By using the Lévy-Itô decomposition for the one-dimensional projections we define for each $t{\geqslant}0$ $$\begin{aligned} & W(t):U^\ast \to L^2(\Omega,{{\mathcal F}},P),\qquad W(t)a:= \sigma_a W_a(t),\\ & M(t):U^\ast \to L^2(\Omega,{{\mathcal F}},P),\qquad M(t)a :=\int_{0<{\left\lvert \beta \right\rvert}{\leqslant}1 } \beta\, \tilde{N}_a(t,d\beta), \\ & P(t):U^\ast \to L^0(\Omega,{{\mathcal F}},P),\qquad P(t)a :=\int_{{\left\lvert \beta \right\rvert}> 1} \beta\, N_a(t,d\beta).\end{aligned}$$ The one-dimensional Lévy-Itô decomposition is now of the form $$\begin{aligned} \label{eq.levyito} L(t)a={r}(a)t+ W(t)a+ M(t)a + P(t)a\qquad\text{for all }a\in U^\ast.\end{aligned}$$ Let $L=(L(t):\,t{\geqslant}0)$ be a weakly cylindrical Lévy process in $U$. Then $L$ satisfies (\[eq.levyito\]) (almost surely), where $$\begin{aligned} &(W(t):\,t{\geqslant}0)\quad \text{is a weakly cylindrical Wiener process},\\ &({r}(\cdot)t+M(t)+P(t):\, t{\geqslant}0)\quad\text{is a cylindrical process}. \end{aligned}$$ By (\[eq.levyito\]) we know that $$\begin{aligned} L(t)a={r}(a)t + W(t)a + R(t)a \qquad\text{for all }a\in U^\ast,\end{aligned}$$ where $R(t)a=M(t)a+P(t)a$. By applying this representation to every component of the $n$-dimensional stochastic process $(L(t)(a_1,\dots, a_n):\,t{\geqslant}0)$ for $a_1,\dots, a_n\in U^{\ast}$ we obtain $$\begin{aligned} L(t)(a_1,\dots, a_n)=({r}(a_1),\dots, {r}(a_n))t + (W(t)a_1,\dots, W(t)a_n) + (R(t)a_1,\dots,R(t)a_n).\end{aligned}$$ But on the other hand the $n$-dimensional Lévy process $(L(t)(a_1,\dots, a_n):\, t{\geqslant}0)$ also has a Lévy-Itô decomposition where the Gaussian part is an $\operatorname{{{\mathbbm}R}}^n$-valued Wiener process. By uniqueness of the decomposition it follows that the Gaussian part equals $((W(t)a_1,\dots,W(t) a_n):\,t{\geqslant}0)$ (a.s.) which ensures that the latter is indeed a weakly cylindrical Wiener process (see the definition in Example \[ex.cylWiener\]).
Because $L$ and $W$ are cylindrical processes it follows that $a\mapsto {r}(a)t+M(t)a+P(t)a$ is also linear, which completes the proof. One might expect that the random functions $P$ and $M$ are also cylindrical processes, i.e. linear mappings. But the following example shows that this is not true in general: \[ex.Poissonnotlinear\] Let $(L(t):\,t{\geqslant}0)$ be the cylindrical Poisson process from Example \[ex.cylpois\]. We obtain $$\begin{aligned} N_{a}(t,B)&=\sum_{s\in [0,t]}\operatorname{\mathbbm 1}_B(\zeta(a)\Delta n(s))\\ &=\operatorname{\mathbbm 1}_B(\zeta(a))\,n(t)\end{aligned}$$ for all $a\in U^{\ast}$ and $B\in\operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}\setminus\{0\})$. The image measures $\nu\circ a^{-1}$ of the cylindrical Lévy measure $\nu$ of $L$ are given by $$\begin{aligned} \nu\circ {a}^{-1}(B)=E[N_a(1,B)]=\operatorname{\mathbbm 1}_{B}(\zeta(a))\,\lambda.\end{aligned}$$ Then we have $$\begin{aligned} P(t)a=\int_{{\left\lvert \beta \right\rvert}>1} \beta\, N_a(t,d\beta) &=\sum_{s\in [0,t]} \Delta L(s)a\,\operatorname{\mathbbm 1}_{\{{\left\lvert \beta \right\rvert}>1\}}(\Delta L(s)a)\\ &= \zeta(a)\sum_{s\in [0,t]} \Delta n(s) \operatorname{\mathbbm 1}_{\{{\left\lvert \beta \right\rvert}>1\}}(\zeta(a)\Delta n(s))\\ &= \zeta(a)n(t)\operatorname{\mathbbm 1}_{\{{\left\lvert \beta \right\rvert}>1\}}(\zeta(a)).\end{aligned}$$ We obtain analogously that $$\begin{aligned} M(t)a=\int_{{\left\lvert \beta \right\rvert}{\leqslant}1} \beta\, \tilde{N}_a(t,d\beta) &=\int_{{\left\lvert \beta \right\rvert}{\leqslant}1} \beta\, N_a(t,d\beta)- t\int_{{\left\lvert \beta \right\rvert}{\leqslant}1}\beta \, (\nu\circ a^{-1})(d\beta)\\ &= \zeta(a)(n(t)-t\lambda)\operatorname{\mathbbm 1}_{\{{\left\lvert \beta \right\rvert}{\leqslant}1\}}(\zeta(a)).\end{aligned}$$ Defining the term ${r}$ by $$\begin{aligned} {r}(a)=\lambda\, \zeta(a)\operatorname{\mathbbm 1}_{\{{\left\lvert \beta \right\rvert}{\leqslant}1\}}(\zeta(a))\end{aligned}$$ gives the Lévy-Itô decomposition (\[eq.levyito\]).
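With the drift ${r}(a)=\lambda\,\zeta(a)\operatorname{\mathbbm 1}_{\{{\left\lvert \beta \right\rvert}{\leqslant}1\}}(\zeta(a))$ the three terms indeed recombine to $L(t)a=\zeta(a)n(t)$, while the truncation indicator is visibly non-additive; a minimal sketch with toy numbers of our own:

```python
def indicator_small(z):
    """Truncation indicator 1_{ |beta| <= 1 } evaluated at z = zeta(a)."""
    return 1.0 if abs(z) <= 1.0 else 0.0

def levy_ito_sum(zeta_a, n_t, t, lam):
    """r(a)*t + M(t)a + P(t)a for the cylindrical Poisson process (W = 0)."""
    small = indicator_small(zeta_a)
    r = lam * zeta_a * small                      # drift
    M = zeta_a * (n_t - t * lam) * small          # compensated small jumps
    P = zeta_a * n_t * (1.0 - small)              # large jumps
    return r * t + M + P

lam, t, n_t = 0.8, 2.0, 3                         # three jumps up to time t
for zeta_a in (0.4, -0.9, 1.7, -2.5):             # small- and large-jump regimes
    assert abs(levy_ito_sum(zeta_a, n_t, t, lam) - zeta_a * n_t) < 1e-12

# non-linearity of the truncation: additivity fails
assert indicator_small(0.6) + indicator_small(0.6) != indicator_small(1.2)
```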
But it is easy to see that none of the terms $P(t), M(t)$ and ${r}$ is linear because the truncation function $$\begin{aligned} a\mapsto \operatorname{\mathbbm 1}_{\{{\left\lvert \beta \right\rvert}{\leqslant}1\}}(\zeta(a))\end{aligned}$$ is not linear. For an arbitrary truncation function $h_a:\operatorname{{{\mathbbm}R}}\to\operatorname{{{\mathbbm}R}_+}$ which might even depend on $a\in U^\ast$ a similar calculation shows the non-linearity of the analogous terms. Let $(L(t):\,t{\geqslant}0)$ be the cylindrical compound Poisson process introduced in Example \[ex.compcylpoisson\]. If we define for $a\in U^\ast$ a sequence of stopping times recursively by $T_0^a:=0$ and $T_n^a:=\inf\{t>T_{n-1}^a:\, {\left\lvert \Delta L(t)a \right\rvert}>1\}$ then it follows that $$\begin{aligned} \int_{{\left\lvert \beta \right\rvert}>1}\beta\, N_a(t,d\beta)=J_1(a)+\dots + J_{N_a(t,B_1^c)}(a), \end{aligned}$$ where $B_1^c=\{\beta\in\operatorname{{{\mathbbm}R}}:\,{\left\lvert \beta \right\rvert}>1\}$ and $$\begin{aligned} J_n(a):=\int_{{\left\lvert \beta \right\rvert}>1} \beta\,N_a(T_n^a,d\beta)-\int_{{\left\lvert \beta \right\rvert}>1}\beta\, N_a(T_{n-1}^a,d\beta).\end{aligned}$$ We say that a cylindrical Lévy process $(L(t), t \geq 0)$ is of [*weak order $2$*]{} if $E{\left\lvert L(t)a \right\rvert}^2<\infty$ for all $a\in U^\ast$ and $t{\geqslant}0$. In this case, we can decompose $L$ according to $$\begin{aligned} \label{eq.levyito2} L(t)a= {r}_2(a)t+ W(t)a + M_2(t)a \qquad\text{for all }a\in U^\ast,\end{aligned}$$ where ${r}_2(a) = r(a) + \int_{|\beta| > 1}\beta \, \nu_{a}(d\beta)$ and $$\begin{aligned} & M_2(t):U^\ast \to L^2(\Omega,{{\mathcal F}},P),\qquad M_2(t)a :=\int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \beta \,\tilde{N}_a(t,d\beta).\end{aligned}$$ In this representation it turns out that all terms are linear: \[co.linear\] Let $L=(L(t):\,t{\geqslant}0)$ be a weakly cylindrical Lévy process of weak order 2 on $U$.
Then $L$ satisfies (\[eq.levyito2\]) with $$\begin{aligned} &{r}_2:U^\ast\to\operatorname{{{\mathbbm}R}}\quad\text{linear}, \\ &(W(t):\,t{\geqslant}0)\quad \text{is a weakly cylindrical Wiener process},\\ &(M_2(t):\, t{\geqslant}0)\quad\text{is a cylindrical process}. \end{aligned}$$ Let $a,b\in U^\ast$ and $\gamma\in \operatorname{{{\mathbbm}R}}$. Taking expectations in (\[eq.levyito2\]) yields $$\begin{aligned} {r}_2(\gamma a+b)t=E[L(t)(\gamma a+b)] =\gamma E[L(t)a]+E[L(t)b]= \gamma {r}_2( a)t+ r_2(b)t.\end{aligned}$$ Thus, ${r}_2$ is linear and since also $W$ and $L$ in (\[eq.levyito2\]) are linear it follows that $M_2$ is a cylindrical process. But our next example shows that the assumption of finite second moments is not necessary for a “cylindrical” version of the Lévy-Itô decomposition: \[ex.LevyItoind\] Let $(L(t):\, t{\geqslant}0)$ be a weakly cylindrical Lévy process which is induced by a Lévy process $(X(t):\, t{\geqslant}0)$ on $U$, i.e. $$\begin{aligned} L(t)a={\langle X(t),a\rangle}\qquad\text{for all }a\in U^\ast, t{\geqslant}0. \end{aligned}$$ The Lévy process $X$ can be decomposed according to $$\begin{aligned} X(t)={r}t + W(t) + \int_{0<{\left\lVert u \right\rVert}{\leqslant}1} u\,\tilde{Y}(t,du) + \int_{{\left\lVert u \right\rVert}> 1} u\, Y(t,du),\end{aligned}$$ where ${r}\in U$, $(W(t):\, t{\geqslant}0)$ is a $U$-valued Wiener process and $$\begin{aligned} Y(t,C)=\sum_{s\in [0,t]}\operatorname{\mathbbm 1}_{C}(\Delta X(s)) \qquad \text{for } C\in \operatorname{{\mathcal B}}(U),\end{aligned}$$ see [@OnnoMarkus]. Obviously, the cylindrical Lévy process $L$ is decomposed according to $$\begin{aligned} L(t)a={\langle {r},a\rangle}t + {\langle W(t),a\rangle} +{\langle \int_{0<{\left\lVert u \right\rVert}{\leqslant}1} u\,\tilde{Y}(t,du),a\rangle} + {\langle \int_{{\left\lVert u \right\rVert}> 1} u\, Y(t,du),a\rangle},\end{aligned}$$ for all $a\in U^\ast$. All terms appearing in this decomposition are linear even for a Lévy process $X$ without existing weak second moments, i.e.
with $E{\langle X(1),a\rangle}^2=\infty$. More specifically, and for comparison with Example \[ex.Poissonnotlinear\], let $(X(t):\,t {\geqslant}0)$ be a Poisson process on $U$, i.e. $X(t)=u_0 n(t)$ where $u_0\in U$ and $(n(t):\,t{\geqslant}0)$ is a real valued Poisson process with intensity $\lambda>0$. Then we obtain $$\begin{aligned} \int_{0<{\left\lVert u \right\rVert}{\leqslant}1} u \,\tilde{Y}(t,du)= \begin{cases} 0, & {\left\lVert u_0 \right\rVert}>1,\\ (n(t)-\lambda t)u_0,& {\left\lVert u_0 \right\rVert}{\leqslant}1. \end{cases}\end{aligned}$$ Integration =========== For the rest of this paper we will always assume that our cylindrical Lévy process $(L(t), t \geq 0)$ is [*weakly [càdlàg]{}*]{}, i.e. the one-dimensional Lévy processes $(L(t)a, t \geq 0)$ are [càdlàg]{} for all $a \in U^\ast$. Covariance operator {#se.covariance} ------------------- Let $L$ be a weakly cylindrical Lévy process of weak order 2 with decomposition \eqref{eq.levyito2}. Then the prescription $$\begin{aligned} \label{eq.M2} M_2(t):U^\ast\to L^2(\Omega, {{\mathcal F}},P), \qquad M_2(t)a=\int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \beta\,\tilde{N}_a(t,d\beta)\end{aligned}$$ defines a cylindrical process $(M_2(t):\, t{\geqslant}0)$ which has weak second moments. Thus, we can define the covariance operators: $$\begin{aligned} Q_2(t):U^\ast\to U^{\ast\prime},\qquad (Q_2(t)a)(b) &= E\left[(M_2(t)a)(M_2(t)b)\right]\\ &=E\left[\left(\int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \beta \,\tilde{N}_a(t,d\beta)\right)\left( \int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \beta \,\tilde{N}_b(t,d\beta)\right)\right],\end{aligned}$$ where $ U^{\ast\prime}$ denotes the algebraic dual of $U^\ast$. In general one cannot assume that the image $Q_2(t)a$ is in the bidual space $U^{\ast\ast}$ or even in $U$, as one might expect for ordinary $U$-valued stochastic processes with weak second moments.
We give a counterexample for this after first showing that there is no need to consider all times $t$: \[le.Qt\] We have $Q_2(t)=tQ_2(1)$ for all $t{\geqslant}0$. The characteristic function of the 2-dimensional random variable $(M_2(t)a,M_2(t)b)$ satisfies for all $\beta_1,\beta_2\in\operatorname{{{\mathbbm}R}}$: $$\begin{aligned} {\varphi}_{M_2(t)a,M_2(t)b}(\beta_1,\beta_2) &= E\left[\exp\left(i(\beta_1 M_2(t)a+\beta_2 M_2(t)b)\right)\right]\\ &=E\left[\exp (iM_2(t)(\beta_1a+\beta_2b))\right]\\ &=\left(E\left[\exp (iM_2(1)(\beta_1a+\beta_2b))\right]\right)^t\\ &= \left( {\varphi}_{M_2(1)a,M_2(1)b}(\beta_1,\beta_2)\right)^t.\end{aligned}$$ This relation enables us to calculate $$\begin{aligned} & \frac{\partial}{\partial \beta_2}\frac{\partial}{\partial\beta_1} {\varphi}_{M_2(t)a,M_2(t)b}(\beta_1,\beta_2)\\ &\qquad= \frac{\partial}{\partial \beta_2}\frac{\partial}{\partial\beta_1} \left({\varphi}_{M_2(1)a,M_2(1)b}(\beta_1,\beta_2)\right)^t\\ &\qquad=t(t-1)\left({\varphi}_{M_2(1)a,M_2(1)b}(\beta_1,\beta_2)\right)^{t-2}\frac{\partial}{\partial \beta_2} {\varphi}_{M_2(1)a,M_2(1)b}(\beta_1,\beta_2) \frac{\partial}{\partial \beta_1} {\varphi}_{M_2(1)a,M_2(1)b}(\beta_1,\beta_2)\\ &\qquad \qquad + t\left({\varphi}_{M_2(1)a,M_2(1)b}(\beta_1,\beta_2)\right)^{t-1} \frac{\partial}{\partial \beta_2}\frac{\partial}{\partial\beta_1} {\varphi}_{M_2(1)a,M_2(1)b}(\beta_1,\beta_2).\end{aligned}$$ By recalling that $$\begin{aligned} \frac{\partial}{\partial \beta_1} {\varphi}_{M_2(1)a,M_2(1)b}(\beta_1,\beta_2)|_{\beta_1=0,\beta_2=0}=i\,E[M_2(1)a]=0,\end{aligned}$$ the above representation of the derivative can be used to obtain $$\begin{aligned} -E[(M_2(t)a)(M_2(t)b)] &= \frac{\partial}{\partial \beta_2}\frac{\partial}{\partial\beta_1} {\varphi}_{M_2(t)a,M_2(t)b}(\beta_1,\beta_2)|_{\beta_1=0,\beta_2=0}\\ &= t \frac{\partial}{\partial \beta_2}\frac{\partial}{\partial\beta_1} {\varphi}_{M_2(1)a,M_2(1)b}(\beta_1,\beta_2)|_{\beta_1=0,\beta_2=0}\\ &=-tE[(M_2(1)a)(M_2(1)b)],\end{aligned}$$
which completes our proof. Because of Lemma \[le.Qt\] we can simplify our notation and write $Q_2$ for $Q_2(1)$. \[ex.Qdiscon\] For the cylindrical Poisson process in Example \[ex.Poissonnotlinear\] we have $$\begin{aligned} M_2(t)a=\int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}}\beta\, \tilde{N}_a(t,d\beta) = \zeta(a)(n(t)-\lambda t) \qquad\text{for all } a\in U^\ast.\end{aligned}$$ It follows that $$\begin{aligned} (Q_2a)(b)&=E\left[(M_2(1)a)( M_2(1)b)\right]\\ &= \zeta(a)\zeta(b) E\left[{\left\lvert n(1)-\lambda \right\rvert}^2\right]\\ &= \zeta(a)\zeta(b) \lambda .\end{aligned}$$ If we choose $\zeta$ discontinuous then the functional $Q_2a$ is discontinuous and thus $Q_2a\notin U^{\ast\ast}$. \[de.strong\] The cylindrical process $M_2$ is called [*strong*]{} if the covariance operator $$\begin{aligned} Q_2:U^\ast\to U^{\ast\prime},\qquad (Q_2a)(b) =E\left[\left(\int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \beta \,\tilde{N}_a(1,d\beta)\right)\left( \int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \beta \,\tilde{N}_b(1,d\beta)\right)\right], \end{aligned}$$ maps to $U$. \[le.strong\] If the cylindrical Lévy measure $\nu$ of the cylindrical Lévy process $M_2$ extends to a Radon measure then $M_2$ is strong. It is easily seen that the operator $$\begin{aligned} G:U^\ast\to L^2(U,\operatorname{{\mathcal B}}(U),\nu),\qquad Ga={\langle \cdot,a\rangle}\operatorname{\mathbbm 1}_{U}(\cdot)\end{aligned}$$ is a closed operator and therefore, by the closed graph theorem, $G$ is continuous.
Thus, by the Cauchy-Schwarz inequality we have that $$\begin{aligned} \Big((Q_2a)(b)\Big)^2&{\leqslant}E{\left\lvert M_2(1)a \right\rvert}^2 E{\left\lvert M_2(1)b \right\rvert}^2\\ &= E{\left\lvert M_2(1)a \right\rvert}^2 \int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \beta^2\, (\nu\circ b^{-1})(d\beta)\\ &= E{\left\lvert M_2(1)a \right\rvert}^2 \int_{U} {\left\lvert {\langle u,b\rangle} \right\rvert}^2\, \nu(du)\\ &{\leqslant}E{\left\lvert M_2(1)a \right\rvert}^2 {\left\lVert G \right\rVert}^2{\left\lVert b \right\rVert}^2, \end{aligned}$$ which completes the proof. If $M_2$ is strong then the covariance operator $Q_2$ is a symmetric positive linear operator which maps $U^\ast$ to $U$. A factorisation lemma (see e.g. Proposition III.1.6 (p.152) in [@Vaketal]) implies that there exists a Hilbert subspace $(H_{Q_2}, [\cdot,\cdot]_{H_{Q_2}})$ of $U$ such that 1. $Q_2(U^\ast)$ is dense in $H_{Q_2}$; 2. for all $a,b\in U^\ast$ we have: $\;[Q_2a, Q_2b]_{H_{Q_2}}={\langle Q_2a,b\rangle}$. Moreover, if $i_{Q_2}$ denotes the natural embedding of $H_{Q_2}$ into $U$ we have 3. $Q_2=i_{Q_2} i^\ast_{Q_2}$. The Hilbert space $H_{Q_2}$ is called the [*reproducing kernel Hilbert space associated with $Q_2$*]{}. We have the following useful formulae: $$\begin{aligned} \operatorname{Cov}(M_2(1)a,\,M_2(1)b)= {\langle Q_2a,b\rangle}=[i^\ast_{Q_2} a, i^\ast_{Q_2} b]_{H_{Q_2}}.\end{aligned}$$ In particular, we have $$\begin{aligned} \label{eq.Covandnorm} E{\left\lvert M_2(1)a \right\rvert}^2 = {\left\lVert i^\ast_{Q_2} a \right\rVert}_{H_{Q_2}}^2.\end{aligned}$$ \[re.WandMcov\] Assume that $(L(t):\,t{\geqslant}0)$ is a weakly cylindrical Lévy process of weak order 2 in $U$ with $E[L(t)a]=0$ for all $a\in U^\ast$. Then its decomposition according to Corollary \[co.linear\] is given by $$\begin{aligned} L(t)a=W(t)a+ M_2(t)a\qquad\text{for all }a\in U^\ast, \end{aligned}$$ where $W=(W(t):\,t{\geqslant}0)$ is a weakly cylindrical Wiener process and $M_2$ is of the form \eqref{eq.M2} with covariance operator $Q_{2}$.
The covariance operator $Q_1$ of $W$, $$\begin{aligned} Q_1:U^\ast\to U^{\ast \prime}, \qquad (Q_1(a))(b)=E[(W(1)a)(W(1)b)],\end{aligned}$$ may exhibit similar behaviour to $Q_2$ in that it might be discontinuous, see [@riedle] for an example. Consequently, we call $L$ a [*strongly cylindrical Lévy process of weak order 2*]{} if both $Q_1$ and $Q_2$ map to $U$. By independence of $W$ and $M_2$ it follows that $$\begin{aligned} Q:U^\ast \to U,\qquad (Qa)(b):=(Q_1a)(b)+(Q_2 a)(b)\end{aligned}$$ is the covariance operator of $L$. As before the operator $Q$ can be factorised through a Hilbert space $H_Q$. Representation as a Series -------------------------- \[th.cylsum\] If the cylindrical process $M_2$ of the form \eqref{eq.M2} is strong then there exist a Hilbert space $H$ with an orthonormal basis $(e_k)_{k\in\operatorname{{{\mathbbm}N}}}$, an operator $F\in L(H,U)$ and uncorrelated real valued [càdlàg]{} Lévy processes $(m_k)_{k\in\operatorname{{{\mathbbm}N}}}$ such that $$\begin{aligned} \label{eq.cylsum} M_2(t)a=\sum_{k=1}^\infty {\langle Fe_k,a\rangle} m_k(t) \qquad \text{in }L^2(\Omega,{{\mathcal F}},P)\text{ for all $a\in U^\ast$}.\end{aligned}$$ Let $Q_2:U^\ast\to U$ be the covariance operator of $M_2(1)$ and $H=H_{Q_2}$ its reproducing kernel Hilbert space with the inclusion mapping $i_{Q_2}:H\to U$ (see the comments after Lemma \[le.strong\]). Because the range of $i_{Q_2}^\ast$ is dense in $H$ and $H$ is separable there exists an orthonormal basis $(e_k)_{k\in\operatorname{{{\mathbbm}N}}}\subseteq\operatorname{range}(i_{Q_2}^\ast)$ of $H$. We choose $a_k\in U^\ast$ such that $i_{Q_2}^\ast a_k=e_k$ for all $k\in\operatorname{{{\mathbbm}N}}$ and define $m_k(t):=M_2(t)a_k$.
Then by using the equation we obtain that $$\begin{aligned} E{\left\lvert \sum_{k=1}^n {\langle i_{Q_2}e_k,a\rangle}m_k(t) - M_2(t)a \right\rvert}^2 &=E{\left\lvert M_2(t)\left(\sum_{k=1}^n {\langle i_{Q_2}e_k,a\rangle} a_k -a\right) \right\rvert}^2\\ &=t{\left\lVert i_{Q_2}^\ast\left(\sum_{k=1}^n {\langle i_{Q_2}e_k,a\rangle}a_k -a\right) \right\rVert}^2_H\\ &=t{\left\lVert \sum_{k=1}^n [e_k,i_{Q_2}^\ast a]_{H}e_k -i_{Q_2}^\ast a \right\rVert}^2_H\\ &\to 0 \qquad\text{for }n\to\infty.\end{aligned}$$ Thus, $M_2$ has the required representation and it remains to establish that the Lévy processes $m_k:=(m_k(t):\,t{\geqslant}0)$ are uncorrelated. For any $s{\leqslant}t$ and $k,l\in\operatorname{{{\mathbbm}N}}$ we have: $$\begin{aligned} E[m_k(s)m_l(t)]&= E[M_2(s)a_k M_2(t)a_l]\\ & = E[M_2(s)a_k (M_2(t)a_l- M_2(s)a_l)] + E[M_2(s)a_k M_2(s)a_l]. \intertext{The first term is zero by Lemma \ref{le.weaklyind} and for the second term we obtain} E[M_2(s)a_k M_2(s)a_l] &=s{\langle Q_2a_k,a_l\rangle} = s [i_{Q_2}^\ast a_k,i_{Q_2}^\ast a_l]_{H} =s [e_k,e_l]_{H} =s \delta_{k,l}.\end{aligned}$$ Hence, $m_k(s)$ and $m_l(t)$ are uncorrelated. \[re.choicemk\] The proof of Theorem \[th.cylsum\] shows that the real valued Lévy processes $m_k$ can be chosen as $$\begin{aligned} m_k(t)=\int_{\operatorname{{{\mathbbm}R}}{\!{}\!}\{0\}} \beta \,\tilde{N}_{a_k}(t,d\beta)\qquad\text{for all }t{\geqslant}0,\end{aligned}$$ where $\tilde{N}_{a_k}$ is the compensated Poisson random measure. 
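The series representation just proved can be illustrated numerically. The following sketch is an assumption-laden toy example, not part of the paper: it takes $U=\operatorname{{{\mathbbm}R}}^3$ and $H=\operatorname{{{\mathbbm}R}}^2$, uses compensated Poisson processes $m_k$ normalised so that $E{\left\lvert m_k(t) \right\rvert}^2=t$, and checks by Monte Carlo that $E[(M_2(1)a)(M_2(1)b)]={\langle Q_2a,b\rangle}$ with $Q_2=FF^{\mathsf{T}}$. The matrix $F$, the intensity $\lambda$ and the functionals $a,b$ are illustrative choices.

```python
import numpy as np

# Monte Carlo check of M_2(t)a = sum_k <F e_k, a> m_k(t) in a
# finite-dimensional toy setting (all numerical values are illustrative).
rng = np.random.default_rng(0)

lam = 2.0                      # intensity of the driving Poisson processes
n_paths = 200_000
F = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [0.3, -0.2]])    # F in L(H, U), here a 3x2 matrix
a = np.array([1.0, 0.0, 0.0])  # functionals a, b in U* = R^3
b = np.array([0.0, 1.0, 1.0])

# m_k(1) = (N_k(1) - lam) / sqrt(lam), so that E|m_k(1)|^2 = 1
m = (rng.poisson(lam, size=(n_paths, 2)) - lam) / np.sqrt(lam)

# M_2(1)a = sum_k <F e_k, a> m_k(1); the coefficients form the vector F^T a
Ma = m @ (F.T @ a)
Mb = m @ (F.T @ b)

# covariance operator Q_2 = F F^T, so E[(M_2(1)a)(M_2(1)b)] = <Q_2 a, b>
exact = a @ F @ F.T @ b
estimate = np.mean(Ma * Mb)
print(exact, round(estimate, 3))
assert abs(estimate - exact) < 0.05
```

With the seed fixed the sample covariance agrees with ${\langle Q_2a,b\rangle}$ up to the expected Monte Carlo error; the uncorrelatedness of the $m_k$ holds here by construction, since independent Poisson processes are used.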
Because of the choice of $a_k$, the relation \eqref{eq.Covandnorm} yields that $$\begin{aligned} \label{eq.M2=1} E{\left\lvert m_k(t) \right\rvert}^2 = t E{\left\lvert M_2(1)a_k \right\rvert}^2 = t {\left\lVert i^\ast_{Q_2} a_k \right\rVert}_{H_{Q_2}}^2 = t {\left\lVert e_k \right\rVert}_{H_{Q_2}}^2 = t\end{aligned}$$ for all $k\in\operatorname{{{\mathbbm}N}}$ implying that $$\begin{aligned} \label{eq.int=1} \int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}} \beta^2\, (\nu\circ a_k^{-1})(d\beta)=1.\end{aligned}$$ An interesting question is the reverse implication of Theorem \[th.cylsum\]. Under which condition on a family $(m_k)_{k\in \operatorname{{{\mathbbm}N}}}$ of real valued Lévy processes can we construct a cylindrical Lévy process via the sum \eqref{eq.cylsum}? \[re.WandMseries\] Let $(L(t):\,t{\geqslant}0)$ be a strongly cylindrical Lévy process with decomposition $L(t)=W(t)+M_2(t)$. By Remark \[re.WandMcov\] the covariance operator $Q$ of $L$ can be factorised through a Hilbert space $H_Q$ and so Theorem \[th.cylsum\] can be generalised as follows. There exist an orthonormal basis $(e_k)_{k\in\operatorname{{{\mathbbm}N}}}$ of $H_Q$, an operator $F\in L(H_Q,U)$ and uncorrelated real valued Lévy processes $(m_k)_{k\in\operatorname{{{\mathbbm}N}}}$ such that $$\begin{aligned} L(t)a=\sum_{k=1}^\infty {\langle Fe_k,a\rangle} m_k(t) \qquad \text{in }L^2(\Omega,{{\mathcal F}},P)\text{ for all $a\in U^\ast$}.\end{aligned}$$ As the stochastic processes $m_k$ can be chosen as $m_k(t)=L(t)a_k$ for some $a_k\in U^\ast$ it follows that for all $t \geq 0$ and $k \in \operatorname{{{\mathbbm}N}}$ $$\begin{aligned} m_k(t)=W(t)a_k + \int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}}\beta\,\tilde{N}_{a_k}(t,d\beta).\end{aligned}$$ Integration ----------- In this section we introduce a cylindrical integral with respect to the cylindrical process $M_2=(M_2(t):\,t{\geqslant}0)$ in $U$.
Because $M_2$ has weakly independent increments and is of weak order 2 we can closely follow the analysis for a cylindrical Wiener process as was considered in [@riedle]. The integrand is a stochastic process with values in $L(U,V)$, the set of bounded linear operators from $U$ to $V$, where $V$ denotes a separable Banach space. For that purpose we assume for $M_2$ the representation according to Theorem \[th.cylsum\]: $$\begin{aligned} M_2(t)a=\sum_{k=1}^\infty {\langle i_{Q_2}e_k,a\rangle} m_k(t) \qquad \text{in }L^2(\Omega,{{\mathcal F}},P)\text{ for all $a\in U^\ast$},\end{aligned}$$ where $H_{Q_2}$ is the reproducing kernel Hilbert space of the covariance operator $Q_2$ with the inclusion mapping $i_{Q_2}:H_{Q_2}\to U$ and an orthonormal basis $(e_k)_{k\in\operatorname{{{\mathbbm}N}}}$ of $H_{Q_2}$. The real valued Lévy processes $(m_k(t):\,t{\geqslant}0)$ are defined by $m_k(t)=M_2(t)a_k$ for some $a_k\in U^\ast$ with $i_{Q_2}^\ast a_k=e_k$, see Remark \[re.choicemk\]. \[de.integrablefunc\] The set $C(U,V)$ contains all random variables $\Phi:[0,T]\times \Omega\to L(U,V)$ such that: 1. $(t,\omega)\mapsto \Phi^\ast(t,\omega)f$ is $\operatorname{{\mathcal B}}[0,T]\otimes {{\mathcal F}}$ measurable for all $f\in V^\ast$; 2. $(t,\omega)\mapsto {\langle \Phi(t,\omega)u,f\rangle}$ is predictable for all $u\in U$ and $f\in V^\ast$; 3. $\displaystyle \int_{0}^T E{\left\lVert \Phi^\ast(s,\cdot)f \right\rVert}_{U^\ast}^2\, ds<\infty \;$ for all $f\in V^\ast$. As usual we neglect the dependence of $\Phi\in C(U,V)$ on $\omega$ and write $\Phi(s)$ for $\Phi(s,\cdot)$ as well as for the dual process $\Phi^\ast(s):=\Phi^\ast(s,\cdot)$, where $\Phi^\ast(s,\omega) \in L(V^\ast,U^\ast)$ denotes the dual (or adjoint) operator of $\Phi(s,\omega)\in L(U,V)$.
We define the candidate for a stochastic integral: \[de.I\_t\] For $\Phi\in C(U,V)$ we define $$\begin{aligned} I_t(\Phi)f:= \sum_{k=1}^\infty \int_{0}^t {\langle \Phi(s)i_{Q_2} e_k,f\rangle}\, m_k(ds) \qquad \text{in }L^2(\Omega,{{\mathcal F}},P)\end{aligned}$$ for all $f\in V^\ast$ and $t \in [0,T]$. For a predictable mapping $h:[0,t]\times\operatorname{{{\mathbbm}R}}\times\Omega\to\operatorname{{{\mathbbm}R}}$ the stochastic integral $\int_{[0,t]\times\operatorname{{{\mathbbm}R}}\setminus\{0\}} h(s,\beta)\,\tilde{N}_a(ds,d\beta)$ exists if $$\begin{aligned} \int_{[0,t]\times\operatorname{{{\mathbbm}R}}\setminus\{0\}}E\left[(h(s,\beta))^2\right]\,\nu_a(d\beta)\,ds<\infty,\end{aligned}$$ see for example Chapter 4 in [@Dave04]. Thus, the stochastic integral $$\begin{aligned} \int_{0}^t {\langle \Phi(s)i_{Q_2} e_k,f\rangle}\, m_k(ds) =\int_{[0,t]\times \operatorname{{{\mathbbm}R}}\setminus\{0\}} {\langle \Phi(s)i_{Q_2} e_k,f\rangle} \,\beta \,\tilde{N}_{a_k}(ds,d\beta)\end{aligned}$$ exists because property 3 in Definition \[de.integrablefunc\] together with \eqref{eq.int=1} implies $$\begin{aligned} &\int_{[0,t]\times\operatorname{{{\mathbbm}R}}\setminus\{0\}} E\left[\big({\langle \Phi(s)i_{Q_2} e_k,f\rangle} \,\beta\big)^2\right]\,(\nu\circ a_k^{-1})(d\beta)\,ds \\ &\qquad\qquad = \int_{[0,t]}E\left[\big({\langle i_{Q_2} e_k,\Phi^\ast(s) f\rangle}\big)^2\right]\,ds \int_{\operatorname{{{\mathbbm}R}}\setminus\{0\}}\beta^2\,(\nu\circ a_k^{-1})(d\beta)\\ &\qquad\qquad{\leqslant}{\left\lVert i_{Q_2} e_k \right\rVert}^2\int_0^t E{\left\lVert \Phi^\ast(s)f \right\rVert}^2\,ds <\infty.\end{aligned}$$ Before we establish that the sum of these integrals in Definition \[de.I\_t\] converges we derive a simple generalisation of Itô’s isometry for stochastic integrals with respect to compensated Poisson random measures.
\[le.crossexpectation\] Let $(h_i(t):\,t\in [0,T])$ for $i=1,2$ be two predictable real valued processes with $$\begin{aligned} \int_0^T E{\left\lvert h_i(s) \right\rvert}^2\,ds<\infty\end{aligned}$$ and let $m_1:=(M_2(t)a:\,t\in [0,T])$ and $m_2:=(M_2(t)b:\,t\in [0,T])$ for $a,b\in U^\ast$. Then we have $$\begin{aligned} E\left[\left(\int_0^T h_1(s)\,m_1(ds)\right)\left(\int_0^T h_2(s)\,m_2(ds)\right)\right] =\operatorname{Cov}(m_1(1),m_2(1))\,E\left[\int_0^T h_1(s)h_2(s)\,ds\right].\end{aligned}$$ Let $g_i$, $i=1,2$, be simple processes of the form $$\begin{aligned} \label{eq.simplelemma} g_i(s)=\xi_{i,0}\operatorname{\mathbbm 1}_{\{0\}}(s)+\sum_{k=1}^{n-1} \xi_{i,k}\operatorname{\mathbbm 1}_{(t_k,t_{k+1}]}(s)\end{aligned}$$ for $0=t_1{\leqslant}t_2{\leqslant}\dots {\leqslant}t_n=T$ and a sequence of random variables $\{\xi_{i,k}\}_{k=0,\dots, n-1}$ such that $\xi_{i,k}$ is ${{\mathcal F}}_{t_k}$-measurable and $\sup_{k=0,\dots, n-1}{\left\lvert \xi_{i,k} \right\rvert}<C$ $P$-a.s. We obtain $$\begin{aligned} E\left[\left(\int_0^T g_1(s)\,m_1(ds)\right)\left(\int_0^T g_2(s)\,m_2(ds)\right)\right] & = Cov(m_1(1),m_2(1))\sum_{k=1}^{n-1} E[ \xi_{1,k}\xi_{2,k}] (t_{k+1}-t_k)\\ & = Cov(m_1(1),m_2(1)) E\left[\int_0^T g_1(s)g_2(s)\,ds \right] .\end{aligned}$$ For the processes $h_i$ there exist simple processes $(g_i^{(n)})$ of the form such that $$\begin{aligned} \label{eq.approxsimple} E\left[\int_0^T (g_i^{(n)}(s)-h_i(s))^2 \,ds \right]\to 0 \qquad\text{for }n\to\infty.\end{aligned}$$ Itô’s isometry implies that there exists a subsequence $(n_k)_{k\in\operatorname{{{\mathbbm}N}}}$ such that $$\begin{aligned} \int_0^T g_i^{(n_k)}(s)\,m_i(ds)\to \int_0^T h_i(s)\,m_i(ds) \qquad\text{ $P$-a.s. for $k\to\infty$}\end{aligned}$$ for $i=1,2$. 
By applying Lebesgue’s dominated convergence theorem we obtain $$\begin{aligned} E\left[\left(\int_0^T g_1^{(n_k)}(s)\,m_1(ds)\right)\left(\int_0^T g_2^{(n_k)}(s)\,m_2(ds)\right)\right]\to E\left[\left(\int_0^T h_1(s)\,m_1(ds)\right)\left(\int_0^T h_2(s)\,m_2(ds)\right)\right].\end{aligned}$$ On the other hand, \eqref{eq.approxsimple} implies that there exists a subsequence $(n_k)_{k\in\operatorname{{{\mathbbm}N}}}$ such that $$\begin{aligned} E\left[\big(g_i^{(n_k)}(s)-h_i(s)\big)^2\right]\to 0 \qquad\text{Lebesgue almost everywhere for $k\to\infty$.}\end{aligned}$$ Lebesgue’s dominated convergence theorem again implies that $$\begin{aligned} \int_0^T E\left[ g_1^{(n_k)}(s)g_2^{(n_k)}(s)\right]\,ds \to \int_0^T E\left[ h_1(s)h_2(s)\right]\,ds\qquad\text{for }k\to\infty,\end{aligned}$$ which completes the proof. \[le.cylintwell\] $I_t(\Phi):V^\ast \to L^2(\Omega,{{\mathcal F}},P)$ is a well-defined cylindrical random variable in $V$ which is independent of the representation of $M_2$, i.e. of $(e_n)_{n\in\operatorname{{{\mathbbm}N}}}$ and $(a_n)_{n\in\operatorname{{{\mathbbm}N}}}$. We begin by establishing the convergence in $L^2(\Omega,{{\mathcal F}},P)$. For that, let $m,n\in \operatorname{{{\mathbbm}N}}$ and define for simplicity $h(s):=i_{Q_2}^\ast\Phi^\ast(s)f$.
Doob’s maximal inequality and Lemma \[le.crossexpectation\] imply $$\begin{aligned} & E{\left\lvert \sup_{0{\leqslant}t{\leqslant}T} \sum_{k=m+1}^{n}\int_{0}^t {\langle \Phi(s)i_{Q_2} e_k,f\rangle}\,m_k(ds) \right\rvert}^2\\ &\qquad {\leqslant}4\sum_{k=m+1}^{n}\left(\int_{\operatorname{{{\mathbbm}R}}{\!{}\!}\{0\}}\beta^2\,(\nu\circ a_k^{-1})(d\beta)\right)\int_{0}^T E{\left[ e_k,h(s)\right]_{H_{Q_2}}}^2\,ds\\ &\qquad {\leqslant}4 \sum_{k=m+1}^{\infty}\int_0^T E{\left[ {\left[ e_k,h(s)\right]_{H_{Q_2}}}e_k,h(s)\right]_{H_{Q_2}}}\,ds\\ &\qquad = 4 \sum_{k=m+1}^{\infty}\sum_{l=m+1}^\infty\int_0^T E{\left[ {\left[ e_k,h(s)\right]_{H_{Q_2}}}e_k,{\left[ e_l,h(s)\right]_{H_{Q_2}}}e_l\right]_{H_{Q_2}}}\,ds\\ &\qquad =4 \int_0^T E{\left\lVert (\operatorname{ Id}-p_m)h(s) \right\rVert}_{H_{Q_2}}^2\,ds, \end{aligned}$$ where $p_m:H_{Q_2}\to H_{Q_2}$ denotes the projection onto the span of $\{e_1,\dots, e_m\}$. Because ${\left\lVert (\operatorname{ Id}-p_m)h(s) \right\rVert}_{H_{Q_2}}^2\to 0$ $P$-a.s. for $m\to \infty$ and $$\begin{aligned} \int_0^T E{\left\lVert (\operatorname{ Id}-p_m)h(s) \right\rVert}_{H_{Q_2}}^2\,ds {\leqslant}{\left\lVert i_{Q_2}^\ast \right\rVert}^2_{U^\ast\to H_{Q_2}} \int_0^T E{\left\lVert \Phi^\ast(s,\cdot)f \right\rVert}^2_{U^\ast}\,ds<\infty\end{aligned}$$ we obtain by Lebesgue’s dominated convergence theorem the convergence in $L^2(\Omega,{{\mathcal F}},P)$. 
Because the processes $\{m_k\}_{k\in\operatorname{{{\mathbbm}N}}}$ are uncorrelated, Lemma \[le.crossexpectation\] enables us to derive an analogue of Itô’s isometry: $$\begin{aligned} \label{eq.itoiso0} E{\left\lvert \sum_{k=1}^{\infty}\int_{0}^t {\langle \Phi(s)i_{Q_2} e_k,f\rangle}\,m_k(ds) \right\rvert}^2 &= \sum_{k=1}^{\infty} E{\left\lvert \int_{0}^t {\langle \Phi(s)i_{Q_2} e_k,f\rangle}\,m_k(ds) \right\rvert}^2\notag\\ &= \sum_{k=1}^\infty E{\left\lvert m_k(1) \right\rvert}^2 \int_0^t E{\left\lvert {\langle \Phi(s)i_{Q_2} e_k,f\rangle} \right\rvert}^2\,ds\notag\\ &= \sum_{k=1}^\infty \int_0^t E\left[{\left[ e_k,i_{Q_2}^\ast\Phi^\ast(s)f\right]_{H_{Q_2}}}^2\right]\,ds\notag\\ &=\int_0^t E{\left\lVert i_{Q_2}^\ast\Phi^\ast(s)f \right\rVert}^2_{H_{Q_2}}\,ds,\end{aligned}$$ where we used \eqref{eq.M2=1} to obtain $$\begin{aligned} E{\left\lvert m_k(1) \right\rvert}^2={\left\lVert i_{Q_2}^\ast a_k \right\rVert}^2={\left\lVert e_k \right\rVert}^2=1.\end{aligned}$$ To prove the independence of the given representation of $M_2$ let $(d_l)_{l\in\operatorname{{{\mathbbm}N}}}$ be another orthonormal basis of $H_{Q_2}$, let $w_l \in U^\ast$ be such that $i_{Q_2}^\ast w_l=d_l$, and let $(n_l(t):\,t{\geqslant}0)$ be the Lévy processes defined by $n_l(t)=M_2(t)w_l$.
As before we define in $L^2(\Omega,{{\mathcal F}},P)$: $$\begin{aligned} \tilde{I}_t(\Phi)f:=\sum_{l=1}^\infty \int_0^t {\langle \Phi(s)i_{Q_2} d_l,f\rangle}\,n_l(ds) \qquad\text{for all }f\in V^\ast.\end{aligned}$$ Lemma \[le.crossexpectation\] enables us to compute the covariance: $$\begin{aligned} &E\left[ \big(I_t(\Phi)f\big)\big( \tilde{I}_t(\Phi)f\big)\right]\\ &\qquad =\sum_{k=1}^\infty \sum_{l=1}^\infty E\left[\left(\int_0^t{\langle \Phi(s)i_{Q_2} e_k,f\rangle}\, m_k(ds)\right) \left(\int_0^t {\langle \Phi(s)i_{Q_2} d_l,f\rangle}\, n_l(ds)\right)\right] \\ &\qquad =\sum_{k=1}^\infty \sum_{l=1}^\infty \operatorname{Cov}(m_k(1),n_l(1))E\left[\int_0^t {\langle \Phi(s)i_{Q_2} e_k,f\rangle} {\langle \Phi(s)i_{Q_2} d_l,f\rangle} \,ds \right]\\ &\qquad = \int_0^t E\left[ \sum_{k=1}^\infty \sum_{l=1}^\infty {\left[ e_k,d_l\right]_{H_{Q_2}}}{\left[ e_k,i_{Q_2}^\ast\Phi^\ast (s)f\right]_{H_{Q_2}}} {\left[ d_l,i_{Q_2}^\ast \Phi^\ast(s)f\right]_{H_{Q_2}}}\,ds \right]\\ &\qquad =\int_0^t E{\left\lVert i_{Q_2}^\ast\Phi^\ast (s)f \right\rVert}_{H_{Q_2}}^2\,ds.\end{aligned}$$ By using Itô’s isometry we obtain $$\begin{aligned} & E\left[{\left\lvert I_t(\Phi)f-\tilde{I}_t(\Phi)f \right\rvert}^2\right]\\ &\qquad= E\Big[{\left\lvert I_t(\Phi)f \right\rvert}^2\Big]+ E\Big[{\left\lvert \tilde{I}_t(\Phi)f \right\rvert}^2\Big] -2 E\Big[ \big(I_t(\Phi)f\big)\big( \tilde{I}_t(\Phi)f\big)\Big]\\ &\qquad =0,\end{aligned}$$ which proves the independence of $I_t(\Phi)$ on $(e_k)_{k\in\operatorname{{{\mathbbm}N}}}$ and $(a_k)_{k\in\operatorname{{{\mathbbm}N}}}$. The linearity of $I_t(\Phi)$ is obvious and hence the proof is complete. Our next definition is not very surprising: For $\Phi\in C(U,V)$ we call the cylindrical random variable $$\begin{aligned} \int_0^t \Phi(s)\, dM_2(s):=I_t(\Phi) \end{aligned}$$ a [*cylindrical stochastic integral with respect to $M_2$*]{}. 
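For a deterministic, piecewise-constant integrand the cylindrical stochastic integral is simply a finite sum of weighted increments of the $m_k$, and the isometry obtained in the proof of Lemma \[le.cylintwell\] can be verified numerically. The following sketch is an illustrative finite-dimensional toy example (it assumes $U=V=H_{Q_2}=\operatorname{{{\mathbbm}R}}^2$ with $i_{Q_2}=\operatorname{Id}$, compensated Poisson processes $m_k$ with $E{\left\lvert m_k(t) \right\rvert}^2=t$, and arbitrarily chosen matrices for $\Phi$); none of the numerical values come from the paper.

```python
import numpy as np

# Monte Carlo check of E|I_1(Phi)f|^2 = int_0^1 ||Phi*(s) f||^2 ds
# for a deterministic integrand that is constant on [0, 0.5] and (0.5, 1].
rng = np.random.default_rng(1)

lam, dt, n_paths = 2.0, 0.5, 200_000
phis = [np.array([[1.0, 0.0], [0.5, 1.0]]),   # Phi(s) on [0, 0.5]
        np.array([[0.0, 1.0], [1.0, 0.0]])]   # Phi(s) on (0.5, 1]
f = np.array([1.0, 1.0])                       # functional f in V*

# coefficients h[j, k] = <Phi(s) e_k, f> on subinterval j
h = np.array([phi.T @ f for phi in phis])

# independent increments m_k(t_{j+1}) - m_k(t_j), each with variance dt
dm = (rng.poisson(lam * dt, size=(n_paths, 2, 2)) - lam * dt) / np.sqrt(lam)

# I_1(Phi)f = sum_{j,k} h[j, k] * (m_k(t_{j+1}) - m_k(t_j))
integral = np.einsum('jk,njk->n', h, dm)

exact = dt * np.sum(h ** 2)                    # int_0^1 ||Phi*(s) f||^2 ds
estimate = np.mean(integral ** 2)
print(round(exact, 3), round(estimate, 3))
assert abs(estimate - exact) < 0.1
```

With the seed fixed the empirical second moment matches the deterministic right-hand side of the isometry up to Monte Carlo error, illustrating why the series defining $I_t(\Phi)f$ converges in $L^2(\Omega,{{\mathcal F}},P)$.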
In the proof of Lemma \[le.cylintwell\] we already derived Itô’s isometry: $$\begin{aligned} E{\left\lvert \left(\int_0^t \Phi(s)\,dM_2(s)\right)f \right\rvert}^2= \int_0^t E{\left\lVert i_{Q_2}^\ast\Phi^\ast(s)f \right\rVert}^2_{H_{Q_2}}\,ds\end{aligned}$$ for all $f\in V^\ast$. \[re.WandMint\] If a strongly cylindrical Lévy process $L$ is of the form $L(t)=W(t)+M_2(t)$, one can utilise the series representation in Remark \[re.WandMseries\] to define a stochastic integral with respect to $L$ by the same approach as in this subsection. But on the other hand we can follow [@Dave] and define $$\begin{aligned} \int \Phi(s)\,dL(s):=\int \Phi(s)\,dW(s)+ \int \Phi(s)\,dM_2(s), \end{aligned}$$ where the stochastic integral with respect to the cylindrical Wiener process $W$ is defined analogously, see [@riedle] for details. This approach allows even more flexibility because one can choose different integrands $\Phi_1$ and $\Phi_2$ for the two different integrals on the right hand side. Cylindrical Ornstein-Uhlenbeck process ====================================== Let $V$ be a separable Banach space and let $(M_2(t):\,t{\geqslant}0)$ be a strongly cylindrical Lévy process of the form \eqref{eq.M2} on a separable Banach space $U$ with covariance operator ${Q_2}$ and cylindrical Lévy measure $\nu$. We consider the Cauchy problem $$\begin{aligned} \label{eq.cauchy} \begin{split} dY(t)&=AY(t)\,dt + C\,dM_2(t)\qquad\text{for all }t{\geqslant}0,\\ Y(0)&=Y_0, \end{split}\end{aligned}$$ where $A:\text{dom}(A)\subseteq V\to V$ is the infinitesimal generator of a strongly continuous semigroup $(S(t))_{t{\geqslant}0}$ on $V$ and $C:U\to V$ is a linear, bounded operator. The initial condition is given by a cylindrical random variable $Y_0:V^\ast\to L^0(\Omega, {{\mathcal F}},P)$. In addition, we assume that $Y_0$ is continuous when $L^0(\Omega, {{\mathcal F}},P)$ is equipped with the topology of convergence in probability. In this section we focus on the random noise $M_2$ for simplicity.
But because of Remark \[re.WandMint\] our results in this section on the Cauchy problem \eqref{eq.cauchy} can easily be generalised to the Cauchy problem of the form $$\begin{aligned} dY(t)=AY(t)\,dt + C_1\, dW(t)+ C_2\,dM_2(t), \end{aligned}$$ where $(W(t):\,t{\geqslant}0)$ is a strongly cylindrical Wiener process. To find an appropriate meaning of a solution of \eqref{eq.cauchy}, let $T:{\text{dom}}(T)\subseteq U \to V$ be a closed, densely defined linear operator with dual operator $T^\ast: {\text{dom}}(T^\ast)\subseteq V^\ast \to U^\ast$. If $X$ is a cylindrical random variable in $U$ then we obtain a linear map $TX$ with domain ${\text{dom}}(T^\ast)$ by the prescription $$\begin{aligned} TX:{\text{dom}}(T^\ast)\subseteq V^\ast\to L^0(\Omega,{{\mathcal F}},P),\qquad (TX)a:=X(T^\ast a).\end{aligned}$$ If ${\text{dom}}(T^\ast)=V^\ast$ then $TX$ defines a new cylindrical random variable in $V$. If $\mu_X$ denotes the cylindrical distribution of $X$ then the cylindrical distribution $\mu_{TX}$ of $TX$ is given by $$\begin{aligned} \mu_{TX}(Z(a_1,\dots, a_n;B))=\mu_X(Z(T^\ast a_1,\dots, T^\ast a_n;B)),\end{aligned}$$ for all $a_1,\dots, a_n\in V^\ast$, $B\in \operatorname{{\mathcal B}}(\operatorname{{{\mathbbm}R}}^n)$ and $n\in\operatorname{{{\mathbbm}N}}$.
By applying this definition, the operator $C$ appearing in the Cauchy problem \eqref{eq.cauchy} defines a new cylindrical process $CM_2:=(CM_2(t):\,t{\geqslant}0)$ in $V$ by $$\begin{aligned} CM_2(t)a=M_2(t)(C^\ast a) \qquad\text{for all }a \in V^\ast.\end{aligned}$$ The cylindrical process $CM_2$ is a cylindrical Lévy process in $V$ with covariance operator $CQ_2C^\ast$ and cylindrical Lévy measure $\nu_{CM_2}$ given by $$\begin{aligned} \nu_{CM_2}(Z(a_1,\dots, a_n;B))=\nu_{M_2}(Z(C^\ast a_1,\dots,C^\ast a_n;B)).\end{aligned}$$ \[de.sol\] An adapted, cylindrical process $(Y(t):\,t{\geqslant}0)$ in $V$ is called a [*weak cylindrical solution of*]{} \eqref{eq.cauchy} if $$\begin{aligned} Y(t)a=Y_0a +\int_0^t AY(s)a \,ds + (CM_2(t))a \qquad\text{for all }a\in{\text{dom}}(A^\ast).\end{aligned}$$ Definition \[de.sol\] extends the concept of a solution of stochastic Cauchy problems on a Hilbert space or a Banach space driven by a Lévy process to the cylindrical situation, see [@DaPratoZab] for the case of a Hilbert space and [@OnnoMarkus] for the case of a Banach space. The following example illustrates this generalisation. \[ex.solind\] Let $\tilde{N}$ be a compensated Poisson random measure in $U$. Then a weak solution of $$\begin{aligned} \label{eq.cauchyradon} \begin{split} dZ(t)&=AZ(t)\,dt + \int_{0<{\left\lVert u \right\rVert}} Cu\, \tilde{N}(dt,du)\qquad \text{for all }t{\geqslant}0,\\ Z(0)&=Z_0 \end{split} \end{aligned}$$ is a stochastic process $Z=(Z(t):\,t{\geqslant}0)$ in $V$ such that $P$-a.s. $$\begin{aligned} \label{eq.RadonCauchy} {\langle Z(t),a\rangle}= {\langle Z(0),a\rangle}+\int_0^t {\langle Z(s),A^\ast a\rangle}\,ds + \int_{[0,t]\times U}{\langle Cu,a\rangle} \, \tilde{N}(ds,du)\end{aligned}$$ for all $a\in {\text{dom}}(A^\ast)$ and $t{\geqslant}0$. These kinds of equations in Hilbert spaces are considered in [@Dave] and [@PesZab] and in Banach spaces in [@OnnoMarkus].
If we define a cylindrical Lévy process $(M_2(t):\,t{\geqslant}0)$ by $$\begin{aligned} M_2(t)a:=\int_{U}{\langle u,a\rangle}\, \tilde{N}(t,du),\end{aligned}$$ then it follows that the induced cylindrical process $(Y(t):\,t{\geqslant}0)$ with $Y(t)a={\langle Z(t),a\rangle}$, where $Z$ is a weak solution of \eqref{eq.cauchyradon}, is a weak cylindrical solution of $$\begin{aligned} dY(t)&=AY(t)\,dt + C\, dM_2(t), \\ Y(0)&=Y_0 \end{aligned}$$ in the sense of Definition \[de.sol\] with $Y_0a:={\langle Z_0,a\rangle}$. A Cauchy problem of the form \eqref{eq.cauchy} might not have a solution in the traditional sense. But a cylindrical solution always exists: \[th.sol\] For every Cauchy problem of the form \eqref{eq.cauchy} there exists a unique weak cylindrical solution $(Y(t):\,t{\geqslant}0)$ which is given by $$\begin{aligned} Y(t)= S(t)Y_0 + \int_0^t S(t-s)C\, dM_2(s)\qquad\text{for all }t{\geqslant}0.\end{aligned}$$ We define the stochastic convolution by the cylindrical random variable $$\begin{aligned} X(t):=\int_0^t S(t-v)C\, dM_2(v)\qquad\text{for all }t{\geqslant}0.\end{aligned}$$ To ensure that the cylindrical stochastic integral exists we only need to check that the integrand satisfies condition 3 in Definition \[de.integrablefunc\], which follows from $$\begin{aligned} \int_0^t {\left\lVert S^\ast(t-v)a \right\rVert}_{V^\ast}^2\,dv = \int_0^t {\left\lVert S^\ast(v)a \right\rVert}_{V^\ast}^2\,dv <\infty,\end{aligned}$$ because of the exponential estimate of the growth of semigroups, i.e. $$\begin{aligned} \label{eq.expgrowth} {\left\lVert S(t) \right\rVert}{\leqslant}Ce^{\gamma t}\qquad \text{for all }t{\geqslant}0,\end{aligned}$$ where $C\in (0,\infty)$ and $\gamma\in\operatorname{{{\mathbbm}R}}$ are some constants.
By using standard properties of strongly continuous semigroups we calculate for $a\in V^\ast$ that $$\begin{aligned} \label{eq.proofOS} \int_0^t AX(r)a\,dr &= \int_0^t X(r)(A^\ast a)\,dr \notag\\ &= \int_0^t \left( \int_0^r S(r-v)C\,dM_2(v)\right) (A^\ast a)\,dr \notag\\ &= \sum_{k=1}^\infty \int_0^t \int_0^r {\langle S(r-v)Ci_{Q_2}e_k,A^\ast a\rangle}\,m_k(dv)\,dr\notag\\ &= \sum_{k=1}^\infty \int_0^t \int_v^t {\langle S(r-v)Ci_{Q_2}e_k,A^\ast a\rangle}\,dr\, m_k(dv)\notag\\ &= \sum_{k=1}^\infty \int_0^t {\langle Ci_{Q_2}e_k,S^\ast (t-v)a-a\rangle}\,m_k(dv)\notag\\ &= X(t)a - M_2(t)(C^\ast a),\end{aligned}$$ where we have used the stochastic Fubini theorem for Poisson stochastic integrals (see Theorem 5 in [@Dave]), the application of which is justified by the estimate \eqref{eq.expgrowth}. For convenience we define $$\begin{aligned} Z(t):=S(t)Y_0\qquad\text{for all }t{\geqslant}0.\end{aligned}$$ Proposition 1.2.2 in [@Jan] guarantees that the adjoint semigroup satisfies $$\begin{aligned} \int_0^t S^\ast (r)A^\ast a\,dr = S^\ast (t)a-a \qquad\text{for all }a\in {\text{dom}}(A^\ast),\end{aligned}$$ in the sense of Bochner integrals. Thus, we have $$\begin{aligned} \int_0^t AZ(r)a\,dr =\int_0^t Y_0(S^\ast (r)A^\ast a)\,dr =Y_0\int_0^t S^\ast (r)A^\ast a\, dr = Z(t)a-Y_0a.\end{aligned}$$ The continuity assumption on the initial condition $Y_0$ justifies interchanging the integration with the application of $Y_0$. Together with \eqref{eq.proofOS} this completes our proof. The cylindrical process $(Y(t):\,t{\geqslant}0)$ given in Theorem \[th.sol\] is called a [*cylindrical Ornstein-Uhlenbeck process*]{}. For all $t{\geqslant}0$, let $C_t(\Omega,V)$ be the linear space of all adapted cylindrical random variables in $V$ which are ${{\mathcal F}}_t$-measurable.
A family $\{Z_{s,t}:\,0{\leqslant}s{\leqslant}t\}$ of mappings $$\begin{aligned} Z_{s,t}:C_s(\Omega,V)\to C_t(\Omega,V)\end{aligned}$$ is called a [*cylindrical flow*]{} if $Z_{t,t}=\operatorname{ Id}$ and for each $0{\leqslant}r{\leqslant}s{\leqslant}t$ $$\begin{aligned} Z_{r,t}=Z_{s,t}\circ Z_{r,s} \quad\text{$P$-a.s.}\end{aligned}$$ In relation to the cylindrical Ornstein-Uhlenbeck process in Theorem \[th.sol\] we define $$\begin{aligned} \label{eq.flow} Z_{s,t}X:=S(t-s)X+\int_s^t S(t-r)C\, dM_2(r) \qquad\text{for }X\in C_s(\Omega,V)\end{aligned}$$ and for all $0{\leqslant}s{\leqslant}t$. 1. The family $\{Z_{s,t}:\,0{\leqslant}s{\leqslant}t\}$ as given by is a cylindrical flow. 2. For all $a_1,\dots, a_n\in V^{\ast}$ the stochastic process $(Y(t)(a_1,\dots,a_n):\,t{\geqslant}0)$ in $\operatorname{{{\mathbbm}R}}^n$ is a time-homogeneous Markov process. \(a) This is established by essentially the same argument as that given in the proof of Proposition 4.1 of [@Dave]. \(b) For each $0 \leq s \leq t$, $\operatorname{a_{(n)}}=(a_1,\dots, a_n) \in V^{*n}$, $f \in B_{b}(\operatorname{{{\mathbbm}R}}^n), n\in\operatorname{{{\mathbbm}N}}$, we have $$\begin{aligned} & E\Big[f(Y(t)(a_1,\dots, a_n))|{\cal F}_{s}\Big] \\ &\qquad\qquad = E\Big[f(Z_{0,t}Y(0)a_{1}, \ldots, Z_{0,t}Y(0)a_{n})|{\cal F}_{s}\Big]\\ &\qquad\qquad = E\Big[f\big((Z_{s,t}\circ Z_{0,s})Y(0)a_{1}, \ldots, (Z_{s,t}\circ Z_{0,s})Y(0)a_{n}\big)|{\cal F}_{s}\Big]\\ &\qquad\qquad = E\Big[f(S(t-s)Z_{0,s}Y(0)(a_1,\dots,a_n) + \left(\int_{s}^{t}S(t-u)C\,dM_2(u)\right)(a_1,\dots, a_n))|{\cal F}_{s}\Big].\end{aligned}$$ Now since the random vector $\left(\int_{s}^{t}S(t-u)C\, dM_2(u)\right)\operatorname{a_{(n)}}$ is measurable with respect to $\sigma\left(\{M_2(v)a - M_2(u)a; s \leq u \leq v \leq t,\,a\in V^\ast \}\right)$ we can use standard arguments for proving the Markov property for SDEs driven by $\operatorname{{{\mathbbm}R}}^n$-valued Lévy processes (see e.g.
section 6.4.2 in [@Dave04]) to deduce that $$\begin{aligned} E\Big[f(Y(t)(a_1,\dots, a_n))|{\cal F}_{s}\Big] = E\Big[f(Y(t)(a_1,\dots, a_n))|Y(s)(a_1,\dots, a_n)\Big],\end{aligned}$$ which completes the proof. Although the Markov process $(Y(t)\operatorname{a_{(n)}}:\,t{\geqslant}0)$ is a projection of a cylindrical Ornstein-Uhlenbeck process, it is not in general an Ornstein-Uhlenbeck process in $\operatorname{{{\mathbbm}R}}^n$ in its own right. Indeed, if this were the case we would expect to be able to find for every $\operatorname{a_{(n)}}\in V^{\ast n}$ a matrix ${Q}_{\operatorname{a_{(n)}}}\in\operatorname{{{\mathbbm}R}}^{n\times n}$ and a Lévy process $(l_{\operatorname{a_{(n)}}}(t):\,t{\geqslant}0)$ in $\operatorname{{{\mathbbm}R}}^n$ such that $$\begin{aligned} Y(t)\operatorname{a_{(n)}}= e^{tQ_{\operatorname{a_{(n)}}}} Y(0)\operatorname{a_{(n)}}+ \left(\int_0^t e^{(t-s)Q_{\operatorname{a_{(n)}}}} C\, dl_{\operatorname{a_{(n)}}}(s)\right) .\end{aligned}$$ That this does not hold in general is shown by the following example: on the Banach space $V=L^p(\operatorname{{{\mathbbm}R}})$, $p>1$, we define the translation semigroup $(S(t))_{t{\geqslant}0}$ by $(S(t)f)(x)=f(x+t)$ for $f\in V$. For an arbitrary real-valued random variable $\xi\in L^0(\Omega,{{\mathcal F}},P)$ we define the initial condition by $Y_0g:=g(\xi)$ for all $g\in L^q(\operatorname{{{\mathbbm}R}})$, where $q^{-1}+p^{-1}=1$. Then we obtain $$\begin{aligned} (S(t)Y_0)g= Y_0S^\ast(t)g= g(\xi-t) \qquad\text{for every }g\in L^q(\operatorname{{{\mathbbm}R}}).\end{aligned}$$ If $(Y(t)g:\,t{\geqslant}0)$ were an Ornstein-Uhlenbeck process, it would follow that there exist $\lambda_g\in\operatorname{{{\mathbbm}R}}$ and a random variable $\zeta_g$ such that $$\begin{aligned} \label{eq.countexou} g(\xi-t)=e^{\lambda_g t}\zeta_g\qquad\text{$P$-a.s.}\end{aligned}$$ To see that the last line cannot be satisfied, take $g=\operatorname{\mathbbm 1}_{(0,1)}$ and take $\xi$ to be a Bernoulli random variable.
Then we have $$\begin{aligned} g(\xi-t)=\operatorname{\mathbbm 1}_{(0,1)}(\xi-t)=\xi \operatorname{\mathbbm 1}_{(0,1)}(t),\end{aligned}$$ which cannot be of the form \eqref{eq.countexou}. It follows from the Markov property that for each $\operatorname{a_{(n)}}\in V^{\ast n}$ there exists a semigroup of linear operators $(T_{\operatorname{a_{(n)}}}(t):\,t{\geqslant}0)$ defined for each $f\in B_b(\operatorname{{{\mathbbm}R}}^n)$ by $$\begin{aligned} T_{\operatorname{a_{(n)}}}(t)f(\beta)=E[f(Y(t)\operatorname{a_{(n)}})|Y(0)\operatorname{a_{(n)}}=\beta].\end{aligned}$$ The semigroup is of [*cylindrical Mehler type*]{} in that for all $b\in V$, $$\begin{aligned} \label{eq.mehler} T_{\operatorname{a_{(n)}}}(t)f(\pi_{\operatorname{a_{(n)}}} b)=\int_V f(\pi_{S^\ast (t)\operatorname{a_{(n)}}} b + \pi_{\operatorname{a_{(n)}}} y)\,\rho_t(dy),\end{aligned}$$ where $\rho_t$ is the cylindrical law of $\int_0^t S(t-s)C\,dM_2(s)$. We say that the cylindrical Ornstein-Uhlenbeck process $Y$ has an [*invariant cylindrical measure $\mu$*]{} if for all $\operatorname{a_{(n)}}=(a_1,\dots, a_n)\in V^{\ast n}$ and $f\in B_b(\operatorname{{{\mathbbm}R}}^n)$ we have $$\begin{aligned} \label{eq.cylinv} \int_{\operatorname{{{\mathbbm}R}}^n} T_{\operatorname{a_{(n)}}}(t)f(\beta)\,(\mu\circ\pi_{\operatorname{a_{(n)}}}^{-1})(d\beta)= \int_{\operatorname{{{\mathbbm}R}}^n} f(\beta)\,(\mu\circ \pi_{\operatorname{a_{(n)}}}^{-1})(d\beta) \qquad\text{for all }t{\geqslant}0,\end{aligned}$$ or equivalently $$\begin{aligned} \int_{V} T_{\operatorname{a_{(n)}}}(t)f(\pi_{\operatorname{a_{(n)}}} b)\,\mu(db)= \int_{V} f(\pi_{\operatorname{a_{(n)}}} b)\,\mu(db)\qquad\text{for all }t{\geqslant}0.\end{aligned}$$ By combining \eqref{eq.mehler} with \eqref{eq.cylinv} we deduce that a cylindrical measure $\mu$ is an invariant measure for $(Y(t):\,t{\geqslant}0)$ if and only if it is [*self-decomposable*]{} in the sense that $$\begin{aligned} \mu\circ \pi_{\operatorname{a_{(n)}}}^{-1}=\mu\circ\pi^{-1}_{S^\ast (t)\operatorname{a_{(n)}}} \ast \rho_t\circ
\pi_{\operatorname{a_{(n)}}}^{-1}\end{aligned}$$ for all $t{\geqslant}0$, $\operatorname{a_{(n)}}\in V^{\ast n}$. \[pro.stat\] (a) For each $a\in V^{ \ast}$ the following are equivalent: (i) $\rho_t\circ a^{-1}$ converges weakly as $t\to\infty$; (ii) $\displaystyle \left(\int_0^t S(r)C\,dM_2(r)\right)a\,$ converges in distribution as $t\to\infty$. (b) If $\rho_t\circ a^{-1}$ converges weakly for every $a\in V^\ast$, then the prescription $$\begin{aligned} \rho_\infty:\operatorname{{\mathcal Z}}(V)\to [0,1],\qquad \rho_\infty(Z(a_1,\dots, a_n;B)):= \text{wk-}\lim_{t\to\infty} \rho_t\circ \pi_{a_1,\dots, a_n}^{-1}(B) \end{aligned}$$ defines an invariant cylindrical measure $\rho_\infty$ for $Y$. Moreover, if $\mu$ is another such cylindrical measure then $$\begin{aligned} \mu\circ \pi_{\operatorname{a_{(n)}}}^{-1}=\left(\rho_\infty\circ\pi_{\operatorname{a_{(n)}}}^{-1} \right)\ast \left(\gamma\circ\pi_{\operatorname{a_{(n)}}}^{-1}\right), \end{aligned}$$ where $\gamma$ is a cylindrical measure such that $\gamma\circ \pi_{\operatorname{a_{(n)}}}^{-1} =\gamma \circ \pi_{S^\ast(t)\operatorname{a_{(n)}}}^{-1}$ for all $t{\geqslant}0$. (c) If an invariant measure exists then it is unique provided $(S(t):\,t{\geqslant}0)$ is stable, i.e. $\lim_{t\to\infty }S(t)x=0$ for all $x\in V$. The arguments of Lemma 3.1, Proposition 3.2 and Corollary 6.2 in [@ChoMich87] can easily be adapted to our situation. In order to derive a simple sufficient condition implying the existence of a unique invariant cylindrical measure we assume that the semigroup $(S(t):\, t{\geqslant}0)$ is exponentially stable, i.e. there exist $R>1$ and $\lambda>0$ such that ${\left\lVert S(t) \right\rVert}{\leqslant}Re^{-\lambda t}$ for all $t{\geqslant}0$. If $(S(t):\, t{\geqslant}0)$ is exponentially stable then there exists a unique invariant cylindrical measure.
For every $t_1>t_2>0$ and $a\in V^{\ast }$, Itô’s isometry implies that $$\begin{aligned} & E{\left\lvert \left(\int_0^{t_1} S(r)C\, dM_2(r)\right)a - \left(\int_0^{t_2} S(r)C\, dM_2(r)\right)a \right\rvert}^2\\ &\qquad\qquad= \int_{t_2}^{t_1} {\left\lVert i_{Q_2}^{\ast} C^{\ast} S^{\ast}(r) a \right\rVert}_{H_{Q_2}}^2\,dr \\ &\qquad\qquad{\leqslant}{\left\lVert i_{Q_2} \right\rVert}^2 {\left\lVert C \right\rVert}^2{\left\lVert a \right\rVert}^2\int_{t_2}^{t_1} {\left\lVert S(r) \right\rVert}^2\,dr\\ &\qquad\qquad\to 0 \qquad\text{as }t_1,t_2\to \infty, \end{aligned}$$ because of the exponential stability. Consequently, the integral $\left(\int_0^t S(r)C\, dM_2(r)\right)a$ converges in mean square, and Proposition \[pro.stat\] completes the proof. An obvious and important question is whether a cylindrical Ornstein-Uhlenbeck process is induced by a stochastic process in $V$. This will be the objective of forthcoming work, but here we give a straightforward result in this direction within the Hilbert space setting: Let $V$ be a separable Hilbert space and assume that $$\begin{aligned} \sum_{k=1}^\infty \int_0^t {\left\lVert S(r)Ci_{Q_2}e_k \right\rVert}^2\,dr<\infty \qquad\text{for all }t{\geqslant}0. \end{aligned}$$ If the initial condition $Y_0$ is induced by a random variable in $V$ then the cylindrical weak solution $Y$ of is induced by a stochastic process in $V$. For all $m < n$ $$\begin{aligned} E{\left\lVert \sum_{k=m+1}^n \int_0^t S(t-r)Ci_{Q_2}e_k\,m_k(dr) \right\rVert}^2 =\sum_{k=m+1}^n \int_0^t {\left\lVert S(r)Ci_{Q_2}e_k \right\rVert}^2\,dr \rightarrow 0~\mbox{as}~m,n \rightarrow \infty, \end{aligned}$$ and it follows by completeness that there exists a $V$-valued random variable $Z$ in $L^2(\Omega,{{\mathcal F}},P;V)$ such that $$\begin{aligned} Z= \sum_{k=1}^\infty \int_0^t S(t-r)Ci_{Q_2}e_k\,m_k(dr) \qquad\text{in }L^2(\Omega,{{\mathcal F}},P;V),\end{aligned}$$ which completes the proof by Theorem \[th.sol\]. [10]{} D. Applebaum. , 2004. D. Applebaum.
, 2006. Z. Brzeźniak and J. Zabczyk. . , 2008. A. Chojnowska-Michalik. , 21:251–286, 1987. H. Heyer. . , 2005. M. Ledoux and M. Talagrand. , 1991. W. Linde. , 1986. S. Peszat and J. Zabczyk. , 2007. G. Da Prato and J. Zabczyk. , 1992. E. Priola and J. Zabczyk. . , 2009. M. Riedle. . MIMS EPrint 2008.24, Manchester Institute for Mathematical Sciences, University of Manchester, 2008. M. Riedle and O. van Gaans. . , 119(6):1952–1974, 2009. J. Rosiński. . , 30:379–383, 1982. K.-I. Sato. , 1999. L. Schwartz. , 1981. N. N. Vakhaniya, V. I. Tarieladze, and S. A. Chobanyan. , 1987. J. van Neerven. , 1992. [^1]: D.Applebaum@sheffield.ac.uk [^2]: markus.riedle@manchester.ac.uk
**THE WITNESS OF SUDDEN CHANGE OF GEOMETRIC QUANTUM CORRELATION** Chang-shui Yu$^1$[^1], Bo Li$^2$, and Heng Fan$^{2}$ *$^1$School of Physics and Optoelectronic Technology,* *Dalian University of Technology, Dalian 116024, P. R. China* *$^2$Beijing National Laboratory for Condensed Matter Physics,* *Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China* Introduction ============ When quantum correlation is mentioned, entanglement immediately comes to mind: it has attracted attention for many years, plays a very important role in quantum information processing, and is recognized as a necessary physical resource for quantum communication and computation \[1\]. However, entanglement cannot capture all the quantumness of correlations in a quantum system, since some quantum tasks display a quantum advantage without entanglement. For example, quantum discord, which has been proposed as a measure of quantum correlation, may be related to the speedup of some quantum computations \[2-6\]. It is very interesting that quantum discord includes quantum entanglement but goes beyond it, due to its potential presence in separable states \[7\]. In recent years, quantum discord has received great attention. Many authors have studied its behavior under dynamical processes \[8,9\] and its operational meaning, by connecting it with Maxwell's demon \[10-12\] or with quantum information processes such as broadcasting of quantum states \[13,14\], quantum state merging \[15,16\], quantum entanglement distillation \[17,18\], and entanglement of formation \[19\]. In particular, due to the unavoidable interaction between a quantum system and its environment, it has been found that quantum correlation is in some cases \[20,21\] more robust against decoherence than quantum entanglement \[22,23\].
Even the frozen behavior of quantum correlation under some decoherence channels has been reported \[20,21,24,25\]. However, as for entanglement measures, different definitions of quantum discord can give different orderings of two quantum states \[26-28\]. This implies that the behavior of quantum correlation under decoherence could depend on the choice of quantum correlation measure, and there is strong evidence that the frozen behavior vanishes if one employs a different measure \[20\]. In addition, the sudden change of quantum correlation has also been found in some dynamical processes (see \[20-25\] and the references therein). This sudden-change phenomenon is important and physically useful, because for some quantum systems it is connected with quantum phase transitions (QPTs) \[28,29\]. In particular, there are physical situations where entanglement is not able to detect a QPT but the sudden change is, and this detection can even be done at finite temperature. In addition, unlike the sudden death of quantum entanglement, the sudden change of quantum discord seems to depend on the choice of the measure of quantum correlation. Therefore, the interesting question that we focus on is not only to find models that exhibit the sudden change of quantum discord, but also to find out what, mathematically, leads to the sudden change for a given measure. In this paper, we study this question for general two-qubit quantum systems. We mainly employ the geometric quantum discord \[30\] as the measure of quantum correlation, due to its analytic solvability. We give a mathematical definition of sudden change and find a simple witness of the sudden change of geometric quantum discord, which also serves as a necessary and sufficient condition for its presence.
Meanwhile, we find that the sudden change is of only one type. Based on our witness, we study various quantum systems undergoing decoherence channels and find many interesting sudden-change phenomena. For the usual (Markovian) quantum channels, we find that two critical points of sudden change can be present, in contrast with previous similar work. In particular, we can accurately locate the critical points even when the sudden change is not obvious in the graphical representation. In the non-Markovian case, one can find plenty of sudden changes by properly adjusting the corresponding parameters. For a state passing through amplitude damping channels, we show that the critical points of sudden change for different quantum correlation measures appear at different positions. For collective decoherence, we demonstrate the inconsistency of the sudden changes, in both number and position, between different correlation measures. This paper is organized as follows. In Sec. II, we introduce the mathematical definition and the witness of the sudden change of quantum correlation. In Sec. III, we study various sudden changes of geometric quantum discord, compared with the information-theoretic quantum discord, in different quantum models. In Sec. IV, we draw our conclusions. Witness on sudden change of geometric quantum discord ===================================================== In order to effectively understand the sudden change of quantum correlation, we first give an explicit definition of it.
Consider a function $Q[\rho _{AB}(\xi )]$ serving as a measure of quantum correlation, where the quantum state $\rho _{AB}(\xi )$ depends on some parameter $\xi $, such as $\xi =\gamma t$ for a decoherence process with $\gamma $ the decoherence rate and $t$ the evolution time. We say that $Q[\rho _{AB}(\xi )]$ *has a sudden change (is non-smooth) at some* $\xi ^{\ast }$ *if* $\frac{dQ[\rho _{AB}(\xi )]}{d\xi }$ *is not continuous at* $\xi ^{\ast }$. On the contrary, we say $Q[\rho _{AB}(\xi )]$ is smooth if it has no sudden change; this should be distinguished from the corresponding definition of a smooth function in mathematical analysis, which requires $Q[\rho _{AB}(\xi )]$ to be of class $C^{\infty }$ \[31\]. Now we restrict our study to processes of decoherence. Under decoherence the entries of the density matrix decay exponentially in general cases, so a reasonable hypothesis, denoted by (H), is that the evolution of the entries of the density matrix is smooth. In order to study the interesting behavior of quantum correlation, we would like to employ the analytically solvable quantum correlation measure, the geometric quantum discord, which is defined for a general bipartite state of two qubits as $$D(\rho _{AB})=\frac{1}{4}\left(\left\Vert \vec{x}\right\Vert ^{2}+\left\Vert T\right\Vert ^{2}-\lambda _{\max }\right),$$where $\vec{x}=[x_{1},x_{2},x_{3}]^{T}$, $x_{i}=$Tr$\left[ \rho _{AB}\left( \sigma _{i}\otimes \mathbf{1}\right) \right] $ with $\sigma _{i},i=1,2,3$, corresponding to the three Pauli matrices, $T_{ij}=$Tr$\left[ \rho _{AB}\left( \sigma _{i}\otimes \sigma _{j}\right) \right] $, and $\lambda _{\max }$ is the maximal eigenvalue of the matrix $$A=\vec{x}\vec{x}^{T}+TT^{T}.$$In addition, $\mathbf{1}$ is the $(2\times 2)$-dimensional identity, “$T$” in the superscript denotes the transpose of a matrix, and $\left\Vert \cdot \right\Vert $ is the Frobenius norm.
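As a numerical illustration (not part of the original text), the quantities above can be evaluated directly; for a two-qubit Werner state $p\left\vert \Phi ^{+}\right\rangle \left\langle \Phi ^{+}\right\vert +(1-p)\mathbf{1}/4$ one has $\vec{x}=0$ and $T=\mathrm{diag}(p,-p,p)$, so the formula gives $D=p^{2}/2$:

```python
import numpy as np

I2 = np.eye(2)
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
          np.array([[0, -1j], [1j, 0]]),                 # sigma_2
          np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

def geometric_discord(rho):
    """D = (||x||^2 + ||T||^2 - lambda_max(x x^T + T T^T)) / 4."""
    x = np.array([np.trace(rho @ np.kron(s, I2)).real for s in paulis])
    T = np.array([[np.trace(rho @ np.kron(si, sj)).real for sj in paulis]
                  for si in paulis])
    A = np.outer(x, x) + T @ T.T
    lam_max = np.linalg.eigvalsh(A)[-1]       # eigvalsh sorts ascending
    return 0.25 * (x @ x + np.sum(T * T) - lam_max)

# Werner state p |Phi+><Phi+| + (1 - p) 1/4, for which D = p^2 / 2.
p = 0.5
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = p * np.outer(phi, phi.conj()) + (1 - p) * np.eye(4) / 4
```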
With our hypothesis (H), it is obvious that $$f(\rho _{AB})=\left\Vert \vec{x}\right\Vert ^{2}+\left\Vert T\right\Vert ^{2}$$ is a simple function that directly depends on the entries of the density matrix $\rho _{AB}$. So $f(\rho _{AB})$ is a smooth function, and any sudden change of $D(\rho _{AB})$ must be attributed to $\lambda _{\max }$. Therefore a quite simple and direct conclusion that witnesses the sudden change can be stated in the following rigorous way. **Theorem 1.**-Sudden change happens for the geometric quantum discord under decoherence if and only if $\lambda _{\max }$ is non-smooth. In order to strengthen our understanding of the non-smooth behavior of quantum correlation, we expand on the theorem. Considering the characteristic equation of $A$, we have \[32\] $$\lambda ^{3}+a_{2}\lambda ^{2}+a_{1}\lambda +a_{0}=0,$$where $a_{0}=-\det A$, $a_{2}=-$Tr$A$ and $a_{1}=\sum_{k=1}^{3}\det \mathcal A_{k}$ with $\mathcal A_{k}=\left( \begin{array}{cc} A_{kk} & A_{k,k\oplus 1} \\ A_{k\oplus 1,k} & A_{k\oplus 1,k\oplus 1}\end{array}\right) $ and $"\oplus "$ denoting addition modulo 3; the eigenvalues of $A$ are then given by$$\begin{aligned} \lambda _{1} &=&M_{+}^{1/3}+M_{-}^{1/3}-\frac{1}{3}a_{2}, \nonumber \\ \lambda _{2} &=&-\frac{M_{+}^{1/3}+M_{-}^{1/3}}{2}+i\sqrt{3}\frac{M_{+}^{1/3}-M_{-}^{1/3}}{2}-\frac{1}{3}a_{2}, \\ \lambda _{3} &=&-\frac{M_{+}^{1/3}+M_{-}^{1/3}}{2}-i\sqrt{3}\frac{M_{+}^{1/3}-M_{-}^{1/3}}{2}-\frac{1}{3}a_{2}, \nonumber\end{aligned}$$where $M_{\pm }=-\frac{q}{2}\pm \sqrt{\Delta }$, $\ \Delta =\frac{q^{2}}{4}+\frac{p^{3}}{27}$ with $p=a_{1}-\frac{1}{3}a_{2}^{2}$ and $q=a_{0}-\frac{1}{3}a_{1}a_{2}+\frac{2}{27}a_{2}^{3}$. The derivative $\frac{d\lambda _{i}}{dt}$ can be formally written as $$\frac{d\lambda _{i}}{dt}=F(M_{\pm }^{-2/3},\Delta ^{-1/2},\cdots ),$$where we omit the smooth arguments.
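Since $A$ is real symmetric, all three roots are real ($\Delta \leq 0$) and the Cardano expressions above reduce to a trigonometric form; the following sketch (the matrix is an arbitrary illustrative choice with distinct eigenvalues) cross-checks this closed form against a standard eigensolver:

```python
import numpy as np

def cubic_eigs(A):
    """Roots of lambda^3 + a2 lambda^2 + a1 lambda + a0 = 0 for a real
    symmetric 3x3 matrix A, via the trigonometric form of Cardano's
    formula (valid for three real roots, i.e. Delta <= 0, p < 0)."""
    a2 = -np.trace(A)
    a1 = sum(np.linalg.det(A[np.ix_([k, (k + 1) % 3], [k, (k + 1) % 3])])
             for k in range(3))                 # sum of principal 2x2 minors
    a0 = -np.linalg.det(A)
    p = a1 - a2**2 / 3.0
    q = a0 - a1 * a2 / 3.0 + 2.0 * a2**3 / 27.0
    # Depressed cubic u^3 + p u + q = 0; clip guards against rounding.
    theta = np.arccos(np.clip(1.5 * q / p * np.sqrt(-3.0 / p), -1.0, 1.0))
    u = 2.0 * np.sqrt(-p / 3.0) * np.cos((theta - 2.0 * np.pi * np.arange(3)) / 3.0)
    return np.sort(u - a2 / 3.0)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
```

The closed form is what makes the dependence of $\lambda _{\max }$ on the matrix entries explicit, while `eigvalsh` serves only as an independent check.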
So one might imagine that a discontinuity of $\frac{d\lambda _{\max }}{dt}$ could happen when $M_{\pm }=0$ or $\Delta =0$, due to a possibly unbounded derivative. However, for an infinitesimal evolution of the density matrix, $A(\delta t)$ can always be understood as an infinitesimal symmetric perturbation $E\delta t$ of the original $A$. Thus, based on perturbation theory \[33\], $\left\vert \lambda_i(A(\delta t))-\lambda_i(A)\right\vert\leq\left\Vert E\right\Vert\delta t$, which guarantees that no unbounded derivative can occur: the eigenvalues are Lipschitz continuous in $t$ and their derivatives stay bounded. Since the geometric discord requires the maximal eigenvalue, one draws the conclusion that the sudden change happens exactly under the condition of the following corollary. **Corollary 1**.-Sudden change happens at $t^{\ast }$ if and only if there exists an eigenvalue $\lambda _{i}(t^{\ast })$ such that $\lambda _{i}$ and $\lambda _{\max }$ cross at $t^{\ast }$. That is, if $\lambda _{\max }(t^{\ast }-\varepsilon )=\lambda _{m}(t^{\ast }-\varepsilon )$ $\ $and $\lambda _{\max }(t^{\ast }+\varepsilon )=\lambda _{n}(t^{\ast }+\varepsilon )$ for any small $\varepsilon >0$, then $m\neq n$. Here the word ‘crossing’ should be distinguished from ‘degenerate’. Finally, we would like to emphasize that the same idea can also be applied to other types of quantum correlation measures, as demonstrated in the next section. Various sudden changes of geometric quantum discord =================================================== Using the witness proposed in the previous section, we can easily find various sudden changes of the geometric quantum discord for states undergoing certain decoherence channels. In particular, even when the critical points of sudden change are not obvious in the graphical representation, we can still locate them accurately. Next, we consider the sudden changes in both the Markovian and the non-Markovian cases.
*Markovian case.*-Let us first consider the initial state given by $$\rho _{AB}=\frac{1}{4}\left[ \mathbf{1}_{AB}+\sum\limits_{i}(c_{i0}\sigma _{i}\otimes \sigma _{i})\right] ,$$where the $\sigma _{i}$ are defined as in Eq. (1), $\left\vert c_{i0}\right\vert \leq 1$ with $\sum\limits_{i}\left\vert c_{i0}\right\vert \leq 1$, and the subscript $0$ denotes the initial state. Suppose subsystem A undergoes a phase damping quantum channel, given in the Kraus representation \[34\] as$$A_{1}=\sqrt{1-p/2}\mathbf{1},A_{2}=\sqrt{p/2}\sigma _{3},$$and subsystem B goes through a bit flip quantum channel given by$$B_{1}=\sqrt{1-q/2}\mathbf{1},B_{2}=\sqrt{q/2}\sigma _{1},$$where $p=1-e^{-\gamma _{1}t}$ and $\ q=1-e^{-\gamma _{2}t}$, with $\gamma _{1,2}$ denoting the decoherence rates; the evolution of $\rho _{AB}$ can therefore be expressed as$$\$(\rho _{AB})=\sum\limits_{i,j=1}^{2}\left( A_{i}\otimes B_{j}\right) \rho _{AB}\left( A_{i}^{\dag }\otimes B_{j}^{\dag }\right) .$$In the Bloch representation, $\$(\rho _{AB})$ can be written in the same form as Eq. (7) with $$c_{1}=c_{10}e^{-\gamma _{1}t},c_{2}=c_{20}e^{-(\gamma _{1}+\gamma _{2})t},c_{3}=c_{30}e^{-\gamma _{2}t}.$$One should note that the $c_{i}^{2}(t)$ serve as the eigenvalues of the matrix $A$ mentioned in Eq. (2) for $\$(\rho _{AB})$. Thus the geometric quantum discord of $\$(\rho _{AB})$ can be easily calculated based on Eq. (1). Since each $c_{i}(t)$ is obviously a smooth function of time $t$, the critical points of sudden change are completely determined by the corollary. To demonstrate the sudden change, let $\left\vert c_{20}\right\vert >\left\vert c_{10}\right\vert >\left\vert c_{30}\right\vert >0$ and $\gamma _{1}<\gamma _{2}$. In this case, one can find that $c_{1}$ and $c_{2}$ cross at $t_{1}=\frac{1}{\gamma _{2}}\ln \left\vert \frac{c_{20}}{c_{10}}\right\vert $, so a sudden change will happen there.
In addition, we emphasize that no sudden change other than the one at $t_{1}$ occurs in the whole evolution, as in the case of Ref. \[20\]. However, if we instead suppose $\left\vert c_{20}\right\vert >\left\vert c_{10}\right\vert >\left\vert c_{30}\right\vert >0$ and $\gamma _{1}>\gamma _{2}$, it is surprising that one finds two sudden changes in the evolution: one at $t_{1}$ and the other at $t_{2}=\frac{1}{(\gamma _{1}-\gamma _{2})}\ln \left\vert \frac{c_{10}}{c_{30}}\right\vert $, where $c_{1}$ and $c_{3}$ cross. In order to illustrate the sudden changes explicitly, we plot the geometric quantum discord of $\$(\rho _{AB})$ in Fig. 1, where we let $c_{10}=0.12$, $c_{20}=0.13$, $c_{30}=0.08$ and $\gamma _{1}=0.035$, $\gamma _{2}=0.015$. One can find that the figure is consistent with our prediction; that is, the points where the eigenvalues of $A$ cross witness the sudden changes. As a comparison, we also plot the information-theoretic quantum discord, defined as the discrepancy between the quantum versions of the two equivalent expressions for the mutual information \[2,4\]. It is interesting that, for the state $\$(\rho _{AB})$, the information-theoretic and the geometric quantum discord have the same critical points of sudden change. Now let the two qubits A and B of the state given in Eq. (7) simultaneously undergo the phase damping channel given by Eq. (8); that is, the state through the channels is formally given by Eq. (10) with $B_{j}$ replaced by $A_{j}$ and $\gamma _{1}$ replaced, for the second qubit, by a new parameter $\gamma _{2}$. Thus the final state $\$(\rho _{AB})$ can again be written as Eq. (7) with $$c_{1}=c_{10}e^{-(\gamma _{1}+\gamma _{2})t},c_{2}=c_{20}e^{-(\gamma _{1}+\gamma _{2})t},c_{3}=c_{30}.$$ We plot the geometric and the information-theoretic quantum discord in Fig. 2, where we let $c_{10}=0.5$, $c_{20}=0.3$, $c_{30}=0.4$, $\gamma _{1}=0.45$ and $\gamma _{2}=0.15$.
One can find that, in the given range, there is only one critical point of sudden change, at $t=\frac{1}{\gamma_1+\gamma_2}\ln{\frac{5}{4}}$ s, and this sudden change is consistent with the information-theoretic quantum discord. *Non-Markovian case.*-The above examples demonstrate Markovian processes. Now we would like to consider the sudden-change behavior under non-Markovian decoherence. Similarly to the above, the two qubits separately undergo single-direction quantum channels: subsystem A goes through a colored-noise phase flip channel and subsystem B undergoes a colored-noise bit flip channel \[21\]. This can be realized in the Kraus representation by replacing $p$ and $q$ in Eqs. (10) and (11) by$$x_{i}=1-e^{-\upsilon _{i}}\left[ \cos \left( \mu _{i}\upsilon _{i}\right) +\sin \left( \mu _{i}\upsilon _{i}\right) /\mu _{i}\right] ,$$where $\mu _{i}=\sqrt{(4a_{i}\tau _{i})^{2}-1}$, $a_{i}$ is a coin-flip random variable and $\upsilon _{i}=t/(2\tau _{i})$ is the dimensionless time, with $i=1,2 $ corresponding to $p$ and $q$, respectively. Due to the smooth dependence on the dimensionless time $\upsilon _{i}$, one can conclude that the sudden change of the geometric quantum discord is again determined by the corollary. We plot the geometric quantum discord of the state $\rho _{AB}$ through the two non-Markovian quantum channels in Fig. 4, where we set $\tau _{1}=\tau _{2}=5$ s and $a_{1}=2/3$, $a_{2}=1/3$. One can find more sudden changes in this case. It is interesting that we can produce as many sudden changes as we wish in this case, because based on our witness we can adjust the parameters to produce various crossing points of the eigenvalues of $A$. It is obvious that the critical points of sudden change of the geometric quantum discord are the same as those of the information-theoretic quantum discord, which is also plotted in Fig. 4 as a comparison.
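As a numerical check of the corollary for the Markovian example of Fig. 1 (a sketch; the grid resolution is an arbitrary choice), one can track which $c_{i}^{2}(t)$ is maximal and compare the detected switching times with the analytic crossing times, $t_{1}$ for the $c_{2}$-$c_{1}$ crossing and $t_{2}$ for the $c_{1}$-$c_{3}$ crossing:

```python
import numpy as np

def switch_times(t, branches):
    """Times where the index of the maximal branch changes
    (midpoint of the grid step on which the argmax switches)."""
    idx = np.argmax(branches, axis=0)
    jumps = np.nonzero(np.diff(idx))[0]
    return 0.5 * (t[jumps] + t[jumps + 1])

# Parameters quoted for Fig. 1: phase damping on A, bit flip on B.
c0 = np.array([0.12, 0.13, 0.08])
g1, g2 = 0.035, 0.015
t = np.linspace(0.0, 40.0, 400001)
c = np.array([c0[0] * np.exp(-g1 * t),
              c0[1] * np.exp(-(g1 + g2) * t),
              c0[2] * np.exp(-g2 * t)])
detected = switch_times(t, c**2)

t1 = np.log(c0[1] / c0[0]) / g2          # c2 -- c1 crossing
t2 = np.log(c0[0] / c0[2]) / (g1 - g2)   # c1 -- c3 crossing
```

With these parameters the maximal branch switches twice, matching the two sudden changes visible in Fig. 1.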
*Inconsistency of sudden changes between different measures.*-We know that the frozen quantum discord depends strongly on the selected quantum correlation measure: even if the frozen phenomenon is found for some measure, it vanishes when one changes to another quantum correlation measure. This means that the frozen phenomenon is not a property of the quantum state alone, but a property of the selected quantum correlation measure subject to some states. From the above examples, however, one can find that the critical points are the same for the two selected quantum correlation measures. One could therefore ask the natural question whether the critical points of sudden change are independent of the quantum correlation measure, just as the sudden death of quantum entanglement does not depend on the entanglement measure \[35,36\]. To answer this question, let us consider two examples, one with amplitude damping decoherence \[37\] and one with collective decoherence \[38-40\]. First, consider the case of amplitude damping decoherence. Suppose the two qubits A and B of the state in Eq. (7) pass through amplitude damping channels, respectively. The amplitude damping channel in the Kraus representation is written as \[34\]$$\tilde{A}_{k1}=\left( \begin{array}{cc} 1 & 0 \\ 0 & \sqrt{1-p_{k}}\end{array}\right) ,\tilde{A}_{k2}=\left( \begin{array}{cc} 0 & \sqrt{p_{k}} \\ 0 & 0\end{array}\right) ,$$with $k=A,B$ corresponding to the two subsystems and $p_{k}=1-e^{-\gamma _{k}t}$. However, unlike the previous cases, the final state via the channels becomes a general “X”-type state instead of the state given in Eq. (7). Thus, even though we can analytically calculate the geometric discord, the information-theoretic discord can only be found numerically. Both discords are plotted in Fig.
3, from which one can find that the geometric discord has one sudden change at $t=0.732$ s, but the information-theoretic discord has one sudden change at about $t=0.542$ s. This example shows that the critical points of the information-theoretic and the geometric discord are at different positions. The system in our second example consists of two identical atoms, with $\left\vert g\right\rangle $ and $\left\vert e\right\rangle $ denoting the ground state and the excited state and $\varpi $ the transition frequency. We assume the two atoms are coupled to a multimode vacuum electromagnetic field. The master equation governing the evolution is given by$$\begin{aligned} \dot{\rho} =-i\varpi \sum_{i=1}^{2}\left[ \sigma _{i}^{z},\rho \right] -i\sum_{i\neq j}^{2}\Omega _{ij}[\sigma _{i}^{+}\sigma _{j}^{-},\rho ] +\frac{1}{2}\sum_{i,j=1}^{2}\gamma _{ij}\left( 2\sigma _{j}^{-}\rho \sigma _{i}^{+}-\left\{ \sigma _{i}^{+}\sigma _{j}^{-},\rho \right\} \right) ,\end{aligned}$$ where $$\gamma _{ij}=\frac{3}{2}\gamma \left[ \frac{\sin (kr_{ij})}{kr_{ij}}+\frac{\cos (kr_{ij})}{\left( kr_{ij}\right) ^{2}}-\frac{\sin (kr_{ij})}{\left( kr_{ij}\right) ^{3}}\right]$$ denotes the collective damping, with $\gamma $ the spontaneous emission rate due to the interaction between one atom and its own environment, and $$\Omega _{ij}=\frac{3}{4}\gamma \left[ \frac{\sin (kr_{ij})}{\left( kr_{ij}\right) ^{2}}+\frac{\cos (kr_{ij})}{\left( kr_{ij}\right) ^{3}}-\frac{\cos (kr_{ij})}{kr_{ij}}\right]$$represents the dipole-dipole interaction potential, with $r_{ij}=\left\vert \mathbf{r}_{i}-\mathbf{r}_{j}\right\vert $ being the interatomic distance. In addition, $k$ in Eqs. (16) and (17) is the wave vector.
Suppose the initial state of the two atoms is given by $$\left\vert \Psi \right\rangle _{AB}=\alpha \left\vert e\right\rangle _{A}\left\vert e\right\rangle _{B}+\sqrt{1-\alpha ^{2}}\left\vert g\right\rangle _{A}\left\vert g\right\rangle _{B};$$then the state at time $t$ under the evolution governed by Eq. (15) is $\rho _{AB}(t)=[\rho _{ij}(t)]$ with $$\begin{aligned} \rho _{11}(t) &=&\alpha ^{2}e^{-2\gamma t}, \end{aligned}$$ $$\begin{aligned} \rho _{14}(t) &=&\rho _{41}^{\ast }(t)=\alpha \sqrt{1-\alpha ^{2}}e^{-\left( \gamma +2i\varpi \right) t}, \end{aligned}$$ $$\begin{aligned} \rho _{22}(t) &=&\rho _{33}(t)=a_{1}\left[ e^{-\gamma _{12}^{+}t}-e^{-\gamma t}\right] +a_{2}[e^{-\gamma _{12}^{-}t}-e^{-\gamma t}], \end{aligned}$$ $$\begin{aligned} \rho _{23}(t) &=&\rho _{32}(t)=a_{1}\left[ e^{-\gamma _{12}^{+}t}-e^{-\gamma t}\right] -a_{2}[e^{-\gamma _{12}^{-}t}-e^{-\gamma t}], \end{aligned}$$ $$\begin{aligned} \rho _{44} &=&1-\rho _{11}(t)-\rho _{22}(t)-\rho _{33}(t),\end{aligned}$$where $\gamma _{12}^{\pm }=\gamma \pm \gamma _{12}$ and $a_{1,2}=\alpha ^{2}\gamma _{12}^{\pm }/\left( 2\gamma _{12}^{\mp }\right) $. If we calculate $f(\rho _{AB}(t))$ defined by Eq. (3), we find that $f$ depends smoothly on $t$, so we focus on the matrix $A$ defined by Eq. (2). After a suitable local unitary transformation, the eigenvalues of the matrix $A$ can be given by$$\begin{aligned} \lambda _{\pm }(A) &=&4(\rho _{23}\mp \left\vert \rho _{14}\right\vert )^{2}, \\ \lambda _{0}(A) &=&C^{2}+R^{2},\end{aligned}$$with $C=1-4\rho _{22}$ and $R=2\rho _{11}+2\rho _{22}-1$. Thus we can directly check where the eigenvalues cross in order to find the sudden changes, since the eigenvalues above obviously depend smoothly on time $t$. The evolution of the eigenvalues and of the geometric quantum discord is plotted in Fig. 6 and Fig. 5, respectively, where we set $\alpha =\sqrt{0.9}$ and $r_{12}=0.6737\lambda $, with $\lambda $ the wavelength. From Figs. 5 and 6, one can find that the critical points of sudden change are consistent with the crossing points of the maximal eigenvalues.
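The crossing structure just described can be explored numerically. The sketch below evaluates the density-matrix entries and the eigenvalues $\lambda _{\pm }$, $\lambda _{0}$ above for the quoted parameters $\alpha =\sqrt{0.9}$ and $kr_{12}=2\pi \cdot 0.6737$; setting $\gamma =1$ fixes the time unit and is an assumption made purely for illustration:

```python
import numpy as np

gamma = 1.0                      # spontaneous emission rate (sets time unit)
alpha2 = 0.9                     # alpha^2
kr = 2.0 * np.pi * 0.6737        # k r_12 for r_12 = 0.6737 * wavelength

# Collective damping gamma_12 from Eq. (16) with i != j.
g12 = 1.5 * gamma * (np.sin(kr) / kr + np.cos(kr) / kr**2
                     - np.sin(kr) / kr**3)
gp, gm = gamma + g12, gamma - g12
a1, a2 = alpha2 * gp / (2 * gm), alpha2 * gm / (2 * gp)

def eigs_A(t):
    """Eigenvalues (lam_plus, lam_minus, lam_0) of the matrix A
    for the two-atom state at time t, using the entries above."""
    r11 = alpha2 * np.exp(-2 * gamma * t)
    r14 = np.sqrt(alpha2 * (1 - alpha2)) * np.exp(-gamma * t)  # |rho_14|
    r22 = (a1 * (np.exp(-gp * t) - np.exp(-gamma * t))
           + a2 * (np.exp(-gm * t) - np.exp(-gamma * t)))
    r23 = (a1 * (np.exp(-gp * t) - np.exp(-gamma * t))
           - a2 * (np.exp(-gm * t) - np.exp(-gamma * t)))
    C, R = 1 - 4 * r22, 2 * r11 + 2 * r22 - 1
    return 4 * (r23 - r14) ** 2, 4 * (r23 + r14) ** 2, C**2 + R**2

ts = np.linspace(0.0, 5.0, 501)
branches = np.array([eigs_A(t) for t in ts]).T
```

Scanning `branches` for changes of the maximal index locates the crossing points that witness the sudden changes in Figs. 5 and 6.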
For comparison with the information theoretic quantum discord, we give its explicit expression as $$D^{\prime }(\rho _{AB})=1+H(R)+\min_{i=0,\pm }\left\{ s_{i}\right\} -\sum\limits_{i=\pm }\left( u_{i}\log _{2}u_{i}+v_{i}\log _{2}v_{i}\right) ,$$ where $$H(x)=-(1-x)\log _{2}(1-x)-(1+x)\log _{2}(1+x),$$ $$\begin{aligned} u_{\pm } &=&\frac{1}{4}(1-C\pm 4\rho _{23}), \\ v_{\pm } &=&\frac{1}{4}(1+C\pm 2\sqrt{R^{2}+\left\vert \rho _{14}\right\vert ^{2}}),\end{aligned}$$ $$\begin{aligned} s_{\pm } &=&1+H(\sqrt{R^{2}+\lambda _{\pm }}), \\ s_{0} &=&-\sum\limits_{i=\pm }\left[ \frac{m_{i}}{4}\left( \log _{2}\frac{m_{i}}{n_{i}}\right) +\frac{1-C}{4}\log _{2}\frac{1-C}{n_{i}}\right] ,\end{aligned}$$ and $$\begin{aligned} m_{\pm } &=&\frac{1+C\pm 2R}{4}, \\ n_{\pm } &=&2\left( 1\pm R\right) .\end{aligned}$$ Here the two atoms are initially prepared in the state $$\left\vert \Psi \right\rangle _{AB}=\alpha \left\vert e\right\rangle _{A}\left\vert e\right\rangle _{B}+\sqrt{1-\alpha ^{2}}\left\vert g\right\rangle _{A}\left\vert g\right\rangle _{B},$$ and the state at time $t$, evolved subject to Eq. (15), is the density matrix given above. From Eq. (26), we can draw the conclusion that the sudden changes are determined only by the crossings of the $s_{i}$, because all the other terms in Eq. (26) are smooth in $t$, as can be verified from the continuity of their derivatives. Therefore, we can find the critical points of sudden changes for the information theoretic quantum discord by looking for the crossing points of the $s_{i}$, which are shown in the inset of Fig. 5. However, according to our calculation as well as the illustration in Fig. 5, one can easily find that the critical points of sudden change of the geometric and information theoretic quantum discords are not consistent. There are obviously two critical points for the geometric quantum discord, but only one for the information theoretic quantum discord.
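Eq. (26) can be transcribed directly into code. The helper below is only a sketch of that evaluation (our own names, taking the X-state matrix elements as inputs and adopting the convention $0\log 0 = 0$):

```python
import numpy as np

def xlog2(y):
    """y * log2(y) with the convention 0*log(0) = 0."""
    return y * np.log2(y) if y > 0 else 0.0

def H(x):
    # H(x) exactly as defined below Eq. (26)
    return -xlog2(1 - x) - xlog2(1 + x)

def discord(r11, r22, r23, r14_abs):
    """Information theoretic discord D'(rho_AB) of Eq. (26) for the X state
    with rho_22 = rho_33 (generic, non-degenerate parameter values)."""
    C = 1 - 4 * r22
    R = 2 * r11 + 2 * r22 - 1
    root = np.sqrt(R**2 + r14_abs**2)
    u = [(1 - C + 4 * r23) / 4, (1 - C - 4 * r23) / 4]
    v = [(1 + C + 2 * root) / 4, (1 + C - 2 * root) / 4]
    lam = [4 * (r23 - r14_abs)**2, 4 * (r23 + r14_abs)**2]
    s = [1 + H(np.sqrt(R**2 + l)) for l in lam]
    m = [(1 + C + 2 * R) / 4, (1 + C - 2 * R) / 4]
    n = [2 * (1 + R), 2 * (1 - R)]
    s0 = -sum(mi / 4 * np.log2(mi / ni) + (1 - C) / 4 * np.log2((1 - C) / ni)
              for mi, ni in zip(m, n))
    return 1 + H(R) + min(s + [s0]) - sum(xlog2(p) for p in u + v)
```

A useful sanity check is that the four weights $u_\pm, v_\pm$ always sum to one; degenerate points such as $C=1$, where $s_0$ involves $\log_2 0$, need separate treatment.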
In addition, one can also find that the sudden changes given by different quantum correlation measures do not happen at the same time. From the previous examples, we can safely say that the sudden change of quantum correlation, like the frozen quantum correlation, strongly depends on the choice of quantum correlation measure. Conclusion and discussions ========================== We have introduced a definition of sudden change of quantum correlations, based on which we present a simple witness of sudden change in terms of the geometric quantum discord. It is shown that there is only one mechanism leading to sudden change, namely the crossing of the two largest eigenvalues of the matrix $A$. As applications, we demonstrate the sudden changes of quantum correlation by considering several quantum systems under various decoherence processes. Our witness locates all critical points of sudden change, even when the sudden changes are not obvious in the graphical representation. As a comparison, we simultaneously consider the information theoretic quantum discord. It is interesting that the sudden changes need not coincide if different quantum correlation measures are chosen. This implies that sudden change, like frozen quantum correlation but unlike the sudden death of quantum entanglement, strongly depends on the choice of quantum correlation measure. In other words, sudden change of quantum correlation should not be regarded as a property of the quantum state, but as a property of the quantum correlation measure applied to a given state. From a different angle, it is interesting that different measures of entanglement may give different orderings of two bipartite states; this only means that the two states are incomparable, i.e., they cannot be converted into each other by local operations and classical communication.
For measures of quantum correlation beyond entanglement, however, even though they show many such inconsistencies, we cannot yet obtain any clear information on the conversion between quantum states analogous to that for entanglement. Acknowledgements ================ This work was supported by the National Natural Science Foundation of China, under Grant No. 11175033 and ‘973’ program No. 2010CB922904 and the Fundamental Research Funds of the Central Universities, under Grant No. DUT12LK42.
[^1]: quaninformation@sina.com; ycs@dlut.edu.cn
--- abstract: 'Graphene on silicon carbide (SiC) bears great potential for future graphene electronic applications [@gaskill2009-power-electronics; @avouris-GHz; @kubatkin-QHE; @hertel-monolithic; @bianco2015-THz] because it is available on the wafer-scale [@emtsev2009-towards; @deHeer2011-confined-growth; @lin-waferscale] and its properties can be custom-tailored by inserting various atoms into the graphene/SiC interface [@riedl2009quasi; @emtsev-Ge-intercalation; @nandkishore2012-SC-doping; @li2013-TI-intercalation; @baringhaus2015-ballistic-Ge; @anderson2017-Eu-intercalation; @speck2017growth]. It remains unclear, however, how atoms can cross the impermeable graphene layer during this widely used intercalation process [@riedl2009quasi; @berry2013-impermeability; @Hu2014-proton-transport]. Here we demonstrate that, in contrast to the current consensus, graphene layers on SiC are not homogeneous, but instead composed of domains of different crystallographic stacking [@hibino2009stacking; @alden2013strain; @butz2014dislocations]. We show that these domains are intrinsically formed during growth and that dislocations between domains dominate the (de)intercalation dynamics. Tailoring these dislocation networks, e.g. through substrate engineering, will increase the control over the intercalation process and could open a playground for topological and correlated electron phenomena in two-dimensional superstructures [@ju2015-TI-transport; @hunt2013-butterfly; @herrero2018-magic-Mott; @herrero2018-superconductivity].' author: - 'T.A. de Jong' - 'E.E. Krasovskii' - 'C. Ott' - 'R.M. Tromp' - 'S.J. van der Molen' - 'J. Jobst' bibliography: - 'bilayer.bib' title: 'Stacking domains in graphene on silicon carbide: a pathway for intercalation' --- Graphene can routinely be produced on the wafer scale by thermal decomposition of silicon carbide (SiC) [@emtsev2009-towards; @deHeer2011-confined-growth; @lin-waferscale]. 
Due to the direct growth on SiC(0001) wafers, epitaxial graphene (EG) naturally forms on a wide band gap semiconductor, providing a doped or insulating substrate compatible with standard CMOS fabrication methods. Hence, EG is a contender for future graphene electronic applications such as power electronics [@gaskill2009-power-electronics; @hertel-monolithic], high-speed transistors [@avouris-GHz], quantum resistance standards [@kubatkin-QHE] and terahertz detection [@bianco2015-THz]. In EG, the first hexagonal graphene layer resides on an electrically insulating monolayer of carbon atoms that are sp$^3$ bonded to silicon atoms of the SiC(0001) surface [@emtsev2009-towards; @deHeer2011-confined-growth; @lin-waferscale; @tanaka2010anisotropic]. The presence of this so-called buffer layer strongly affects the graphene on top, e.g. by pinning the Fermi level. Consequently, the graphene properties can be tuned via intercalation of atoms into the buffer layer/SiC interface. The intercalation of hydrogen is most widely used and results in the conversion of the buffer layer to a quasi-freestanding graphene (QFG) layer by cutting the silicon-carbon bonds and saturating silicon dangling bonds with hydrogen. This treatment reverses the graphene doping from n-type to p-type and improves the mobility [@riedl2009quasi; @speck-QFMLG]. Intercalation of heavier atoms is used to further tailor the graphene properties, e.g. to form pn-junctions [@emtsev-Ge-intercalation; @baringhaus2015-ballistic-Ge], magnetic moments [@anderson2017-Eu-intercalation] or potentially superconducting [@nandkishore2012-SC-doping] and topologically non-trivial states [@li2013-TI-intercalation]. ![image](figure1){width="90.00000%"} Graphene on SiC (EG and QFG) appears homogeneous with low defect concentration in most techniques [@emtsev2009-towards; @deHeer2011-confined-growth; @lin-waferscale; @riedl2009quasi]. 
Together with the fact that layers span virtually unperturbed over SiC substrate steps [@lauffer-STM; @ross-steps; @kautz2015-LEEP], this has led to the consensus of perfectly crystalline graphene. On the other hand, two observations point to a less perfect sheet. First, the charge carrier mobility is generally low, even at cryogenic temperatures [@speck-QFMLG; @emtsev2009-towards]. Second, an ideal graphene sheet is impermeable even to hydrogen [@berry2013-impermeability; @Hu2014-proton-transport], whereas a wide variety of atomic and molecular species has been intercalated into EG [@riedl2009quasi; @nandkishore2012-SC-doping; @li2013-TI-intercalation; @baringhaus2015-ballistic-Ge; @anderson2017-Eu-intercalation; @speck2017growth]. In this Report, we demonstrate that graphene on SiC is less homogeneous than widely believed and is, in fact, fractured into domains of different crystallographic stacking order. We use advanced low-energy electron microscopy (LEEM) methods and *ab initio* calculations to show that those domains are naturally formed during growth due to nucleation dynamics and built-in strain. They are thus present in all graphene-on-SiC materials. Figure \[fig:domains\](a) and (b) show bright-field LEEM images of two QFG samples (see Methods section for details on sample growth and hydrogen intercalation) with areas of different graphene thickness. Bright-field images are recorded using specularly reflected electrons that leave the sample perpendicular to the surface (see Fig. \[fig:domains\](c)). The main contrast mechanism in this mode is the interaction of the imaging electrons with the thickness-dependent, unoccupied band structure of the material, which is used to unambiguously determine the number of graphene layers [@hibino2008-thickness; @feenstra2013-QFG-IV; @jobst2015-ARRES]. Large, homogeneous areas of bilayer, trilayer and four-layer graphene can thus be distinguished in Fig. 
\[fig:domains\](a,b), supporting the notion of perfect crystallinity. In stark opposition to this generally accepted view, the dark-field images in Fig. \[fig:domains\](d,e) clearly reveal that all areas are actually fractured into domains of alternating contrast. The symmetry breaking introduced in dark-field imaging, where the image is formed from one diffracted beam only (cf. Fig. \[fig:domains\](f) and Methods), leads to strong contrast between different stacking types of the graphene layers [@hibino2009stacking; @speck2017growth]. In fact, the contrast between different domains inverts (Fig. \[fig:domains\](d,e) versus (g,h)) when dark-field images are recorded from non-equivalent diffracted beams (cf. Fig. \[fig:domains\](f) and (i)). At first glance, the observation of different stacking orders is surprising, as it is known that graphene layers grown on SiC(0001) are arranged in Bernal stacking [@deHeer2011-confined-growth; @hibino2009stacking]. However, two energetically equivalent versions of Bernal stacking exist, AB and AC. The AC stacking order can be thought of either as AB bilayer where the top layer is translated by one bond length, or alternatively, as a full AB bilayer rotated by 60 degrees (Fig. \[fig:spectra\](a,b)). Consequently, AB and AC stacking are indistinguishable in bright-field imaging. Subsequent layers can be added in either orientation, generating more complicated stacking orders for trilayer and beyond. ![Low-energy electron reflectivity spectra reveal precise stacking order. (a) Sketched top view of AC (orange) and AB (blue) stacking orders. Inequivalent atoms of the unit cell of the top layer (orange or blue) sit in the center of the hexagon of the bottom layer (black). (b) Side view of the stacking along the dashed line in A. Open and closed circles denote the inequivalent atoms of the graphene unit cell. 
(c, d) Experimental dark-field reflectivity spectra recorded on different stacking domains on bilayer and trilayer graphene, respectively. The areas from which the spectra are recorded are indicated by circles in Fig. \[fig:domains\](e). (e, f) Theoretical dark-field spectra for AB and AC as well as ABA, ABC, ACA and ACB stacking orders obtained by *ab initio* calculations. A Gaussian broadening of is applied to account for experimental losses. The vertical lines in (c) to (f) indicate the landing energy at which Fig. \[fig:domains\](e,h) are recorded. []{data-label="fig:spectra"}](Figure2){width="\columnwidth"} In order to identify the exact stacking in each area, we simulate bilayer and trilayer graphene slabs in different stacking orders and compare their reflectivity with measured low-energy electron reflectivity spectra. The latter are extracted from the intensity of an area in a series of spectroscopic LEEM images recorded at different electron landing energies (see Supplementary Movie 1 and 2 for such measurements of the area in Fig. \[fig:domains\](b) in bright-field and dark-field geometry, respectively). While different domains show identical bright-field reflectivity (cf. Supplementary Figure 1), dark-field spectra extracted from different bilayer domains (marked blue and orange in Fig. \[fig:spectra\](c) and \[fig:domains\](e)) are clearly distinguishable. Moreover, four distinct reflectivity curves are observed for trilayer graphene (Fig. \[fig:spectra\](d)). Figure \[fig:spectra\](e,f) shows theoretical dark-field spectra, obtained by *ab initio* calculations (see Methods section for computational details), of different bilayer and trilayer stacking orders, respectively. The excellent agreement of theoretical and experimental data in Fig. \[fig:spectra\](c,e) is clear evidence that the assignment of Bernal AB and AC stacking orders for different bilayer domains is correct. Moreover, the comparison of Fig.
\[fig:spectra\](d) and (f) shows that using these dark-field LEEM methods, we can distinguish the more complicated trilayer stacking orders: Bernal, ABA (cyan) and ACA (pink), versus rhombohedral ABC (purple) and ACB (brown). Due to the small electron penetration depth in LEEM, however, the spectra fall into two families (ABA and ABC vs. ACA and ACB) dominated by the stacking order of the top two layers. ![Stacking domains are caused by growth-induced strain and graphene nucleation dynamics. (a) Sketch of bilayer graphene where the top layer is uniformly strained causing a Moiré pattern. (b) Sketch of the energetically favored arrangement of AB and AC stacked domains with all strain concentrated into dislocation lines. The trigonal shape of the domains is clearly visible. The color denotes how close a local stacking order is to AB (orange) or AC (blue) stacking. (c) A bright-field LEEM image of EG where growth was stopped shortly after bilayer starts to form. (d) Dark-field LEEM of the same area reveals that the resulting islands, which emerged from individual nucleation sites, exhibit constant stacking order, i.e. they are either AB (bright) or AC (dark) stacked. []{data-label="fig:nucleation"}](figure3){width="\columnwidth"} In addition to their stacking orders, bilayer graphene and thicker areas differ in the morphology of the stacking domains (cf. Fig. \[fig:domains\](d,e)), which indicates two distinct formation mechanisms. Most notably, bilayer domains are smaller, triangular and relatively regular. Similar morphologies, observed in free-standing bilayer graphene [@butz2014dislocations] and graphene grown on copper [@Brown2012; @alden2013strain], were linked to strain between the layers. While uniform strain causes a Moiré reconstruction (Fig. \[fig:nucleation\](a)), it is often energetically favorable to form domains of commensurate, optimal Bernal stacking. 
In this case, all strain is concentrated into the domain walls, thus forming dislocation lines [@butz2014dislocations; @alden2013strain], as sketched in Fig. \[fig:nucleation\](b). Upon close examination of Fig. \[fig:domains\](b), the network of these dislocations is visible as dark lines in our bright-field measurements. The size of the triangular domains shrinks for increasing uniform strain, while anisotropic strain causes domains elongated perpendicular to the strain axis. The observed average domain diameter of $\sim$100–200nm coincides well with relaxation of the 0.2% lattice mismatch between buffer layer and first graphene layer [@schumann2014effect] (see calculation in the Supplementary Information). We thus conclude that the triangular domains in bilayer graphene result from strain thermally induced during growth and from the lattice mismatch with the SiC substrate. The presence of elongated triangular domains indicates non-uniform strain due to pinning to defects and substrate steps. The larger, irregularly shaped domains that dominate trilayer and four-layer areas (Fig. \[fig:domains\](d,g)) can be explained by nucleation kinetics. To test this hypothesis, we study EG samples where the growth was stopped shortly after the nucleation of bilayer areas to prevent their coalescence (see Methods). The resulting small bilayer islands on monolayer terraces are shown in bright-field and dark-field conditions in Fig. \[fig:nucleation\](c) and (d), respectively. We observe that bilayer areas with a diameter below $\sim$300nm form single domains of constant stacking order (either bright or dark in Fig. \[fig:nucleation\](d)) and that AB and AC stacked bilayer islands occur in roughly equal number. This indicates that new layers nucleate below existing ones in one of the two Bernal stacking orders randomly [@emtsev2009-towards; @deHeer2011-confined-growth; @lin-waferscale; @tanaka2010anisotropic]. 
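The quoted length scale can be checked with a back-of-envelope estimate; this is our own assumption, not the paper's Supplementary calculation. A commensurate domain of diameter $L$ accumulates a misfit of roughly $L\varepsilon$, which one dislocation line can absorb once it reaches about one lattice constant, giving $L \sim a/\varepsilon$:

```python
# Order-of-magnitude estimate (an assumption, not the paper's Supplementary
# calculation): one dislocation line absorbs about one lattice constant of
# accumulated misfit, so the domain diameter is roughly L ~ a / epsilon.
a_graphene = 0.246   # graphene lattice constant in nm
mismatch = 0.002     # 0.2% lattice mismatch quoted in the text
L = a_graphene / mismatch
print(f"estimated domain diameter: {L:.0f} nm")  # prints "estimated domain diameter: 123 nm"
```

The result of about 123 nm falls inside the observed 100-200 nm range, consistent with the strain-relaxation picture.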
At the elevated growth temperature, dislocations in the existing layers can easily move to the edge of the new island where they annihilate. As islands of different stacking grow and coalesce, new dislocation lines are formed where they meet (cf. Fig. \[fig:spectra\](a)). This opens the interesting possibility to engineer the dislocation network by patterning the SiC substrate before graphene growth. Notably, we observe strain-induced domains also in monolayer EG (Fig. \[fig:nucleation\](d)) and between the bottom two layers in trilayer QFG (visible only for some energies, e.g. 33 eV in Supplementary Figure 2). The prevalence of these triangular domains in all EG and QFG samples between the two bottommost layers demonstrates that stacking domains are a direct consequence of the epitaxial graphene growth and consequently are a general feature of this material system. The resulting dislocation network explains the linear magnetoresistance observed in bilayer QFG [@kisslinger2015-linear] and might be an important culprit for the generally low mobility in EG and QFG [@speck-QFMLG]. ![The hydrogen deintercalation dynamics is dominated by the graphene dislocation network. (a) Bright-field LEEM snapshots ($E=2.2$eV) of hydrogen deintercalation at (the full time series is available as Supplementary Movie 3). Deintercalation starts in distinct points and deintercalated areas (dark in the bilayer region) grow in a strongly anisotropic fashion. Scale bars are 500nm. (b) Overlay of the deintercalation state at 15min with a LEEM image showing the dislocation network (dark lines) beforehand. It reveals that deintercalation proceeds faster along dislocation lines. Areas shaded in color are still intercalated, while hydrogen is already removed in the uncolored areas. (c, d) Bright-field images comparing the domain boundaries before and after deintercalation, respectively. While some dislocations move slightly, the overall features remain unchanged during the process. 
(a) to (d) show the same area as Fig. \[fig:domains\](b). (e) Slices along the time axis, perpendicular (left) and parallel (right) to the dislocation line marked yellow in (a), illustrate the velocity of the deintercalation front. (f) Same for the dislocation marked white in (a). The movement of all deintercalation fronts is roughly linear in time and much faster parallel to dislocation lines than perpendicular. (g) The fraction of deintercalated area $A_\text{EG}$ extracted from the bilayer area in (a) grows non-linearly in time, indicating that the process is limited by the desorption of hydrogen at the boundary between intercalated and deintercalated areas. []{data-label="fig:deintercalation"}](figure4){width="\columnwidth"} The presence of these strain-induced domains in EG as well as QFG raises the question of their role during (hydrogen) intercalation. Since the high hydrogen pressures necessary for intercalation are not compatible with *in situ* imaging, we investigate the inverse process. Figure \[fig:deintercalation\](a) shows a time series of bright-field LEEM images of the area shown in Fig. \[fig:domains\](b) recorded at (cf. Supplementary Movie 3). At this temperature, hydrogen slowly leaves the SiC–graphene interface [@riedl2009quasi; @speck-QFMLG] and $n$-layer QFG is transformed back to $n-1$ layer (+ buffer layer) EG. The change in the reflectivity spectrum accompanying this conversion (cf. Supplementary Figure 1) yields strong contrast (e.g. dark in the bilayer in Fig. \[fig:deintercalation\](a)) and enables capture of the full deintercalation dynamics. Deintercalation starts at distinct sites where hydrogen can escape and proceeds in a highly anisotropic fashion. An overlay of the half deintercalated state (15min) with an image of the dislocations in the initial surface (Fig. \[fig:deintercalation\](b)) shows that deintercalation happens preferentially along dislocation lines.
Although the dislocation lines are slightly mobile at higher temperatures (cf. Fig. \[fig:deintercalation\](c,d) before and after deintercalation, respectively), their overall direction and density are preserved during the process. The local deintercalation dynamics reveal details of the underlying microscopic mechanism. Figure \[fig:deintercalation\](e,f) shows that deintercalation fronts move roughly linearly in time both perpendicular and parallel to dislocation lines. The velocity of the deintercalation fronts, however, is much larger parallel to dislocation lines (up to $v_\parallel = 95$ nm/s) than perpendicular to them ($v_\perp \approx 0.1$ nm/s). This linear movement rules out that deintercalation is limited by hydrogen diffusion and indicates that hydrogen desorption at the deintercalation front is the limiting factor. The non-linear growth of the fraction of deintercalated area $A_\text{EG}$ (Fig. \[fig:deintercalation\](g)) demonstrates that deintercalation is also not capped by the venting of hydrogen from the defects where deintercalation starts (7min in Fig. \[fig:deintercalation\](a)). While $v_\perp$ is the same for all areas, $v_\parallel$ varies from to (marked yellow and white in Fig. \[fig:deintercalation\](a), respectively), suggesting that the deintercalation process is strongly affected by the precise atomic details of the dislocations. These findings indicate that not only the deintercalation, but also the intercalation of hydrogen and other species, none of which can penetrate graphene, is dominated by the presence of stacking domains. Consequently, their manipulation, e.g. by patterning the substrate, will open a route towards improved intercalation and tailored QFG on the wafer-scale. We conclude that graphene on SiC is a much richer material system than has been realized to this date.
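The diffusion-versus-desorption argument rests on the exponent of the front motion, $x \propto t^{\beta}$ with $\beta = 1$ for a desorption-limited front and $\beta = 1/2$ for a diffusion-limited one. A minimal sketch (synthetic data; the diffusion constant is an arbitrary assumption) shows how a log-log fit separates the two cases:

```python
import numpy as np

t = np.linspace(1.0, 3600.0, 200)   # time in seconds
x_desorption = 95.0 * t             # nm; v_parallel taken from the text
x_diffusion = np.sqrt(2 * 1e3 * t)  # nm; hypothetical diffusive front

for label, x in [("desorption-limited", x_desorption),
                 ("diffusion-limited", x_diffusion)]:
    # slope of log(x) vs log(t) recovers the growth exponent beta
    beta = np.polyfit(np.log(t), np.log(x), 1)[0]
    print(f"{label}: beta = {beta:.2f}")
```

Running this prints exponents of 1.00 and 0.50, respectively; the experimentally observed linear front positions in Fig. \[fig:deintercalation\](e,f) correspond to the first case.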
Specifically, we show that domains of AB and AC Bernal stacking orders are always present in this material even though its layers appear perfectly crystalline to most methods. We deduce that these domains are formed between the two bottommost carbon layers (either graphene and buffer layer for EG or bilayer QFG) by strain relaxation. In addition, the nucleation of grains of different stacking order during growth causes larger domains in thicker layers. We show that dislocation lines between domains dominate hydrogen deintercalation dynamics, highlighting their importance for intercalation as well. By engineering these dislocation networks, we foresee wide implications for customized QFG for electronic applications. Moreover, the dislocation networks observed here can yield a wafer-scale platform for topological [@ju2015-TI-transport] and strongly correlated electron phenomena [@hunt2013-butterfly; @herrero2018-magic-Mott; @herrero2018-superconductivity] when tailored into periodic structures. We thank Marcel Hesselberth and Douwe Scholma for their indispensable technical support. This work was supported by the Netherlands Organisation for Scientific Research (NWO/OCW) via the VENI grant (680-47-447, J.J.) and as part of the Frontiers of Nanoscience program. It was supported by the Spanish Ministry of Economy and Competitiveness MINECO, Grant No. FIS2016-76617-P, as well as by the DFG through SFB953. Methods ======= #### Sample fabrication {#sample-fabrication .unnumbered} Graphene growth is carried out on commercial 4H-SiC wafers (semi-insulating, nominally on axis, RCA cleaned) at $\sim$ and Ar pressure for $\sim$ as described in Ref.  . To convert EG to bilayer QFG via hydrogen intercalation, the sample is placed in a carbon container and heated to for at ambient hydrogen pressure as described in Ref. . Samples with small bilayer patches on large substrate terraces are achieved in a three-step process. 
First, SiC substrates are annealed at $\sim$ and Ar pressure for in a SiC container to enable step bunching. Second, unwanted graphitic layers formed during this process are removed by annealing the sample at in an oxygen flow for . Third, graphene growth is carried out as described above. #### Low-energy electron microscopy {#low-energy-electron-microscopy .unnumbered} The LEEM measurements are performed using the aberration correcting LEEM facility [@schramm2011low] which is based on a commercial SPECS P90 instrument and provides high-resolution imaging. Limitations on the angles of the incident and imaging beams make dark-field imaging in the canonical geometry, where the diffracted beam used for imaging leaves the sample along the optical axis, impossible. Instead, we use a tilted geometry where the incident angle is chosen such that the specular beam and the diffracted beam used for imaging leave the sample under equal, but opposite, angles (illustrated in Fig. \[fig:domains\]f,i). The tilted incidence yields an in-plane $k$-vector, which influences the reflectivity spectrum [@jobst2015-ARRES; @jobst-ARRES-GonBN]. This is taken into account in our calculations, but needs to be considered when comparing to other LEEM and LEED data. Microscopy is performed below $2\cdot10^{-9}$mbar and at , to prevent the formation of hydrocarbon-based contaminants under the electron beam. Images are corrected for detector-induced artifacts by subtracting a dark count image and dividing by a gain image before further analysis. Fig. 3 is corrected for uneven illumination by dividing by the beam profile. Additionally, the minimum intensity in images shown is set to black and maximum intensity is set to white to ensure visibility of all details. All dark-field images and images showing dislocation lines are integrated for 4s, all other images for 250ms.
#### Computations {#computations .unnumbered} All calculations were performed with a full-potential linear augmented plane waves method based on a self-consistent crystal potential obtained within the local density approximation, as explained in Ref. . The *ab initio* reflectivity spectra are obtained with the all-electron Bloch-wave-based scattering method described in Ref. . The extension of this method to stand-alone two-dimensional films of finite thickness was introduced in Ref. . Here, it is straightforwardly applied to the case of finite incidence angle to represent the experimental tilted geometry. An absorbing optical potential $V_\mathrm{i}=0.5$ eV was introduced to account for inelastic scattering: the imaginary potential $-iV_\mathrm{i}$ is taken to be spatially constant over a finite slab (where the electron density is non-negligible) and to be zero in the two semi-infinite vacuum half-spaces. In addition, a Gaussian broadening of is applied to account for experimental losses.
[S. Kumar$^{(a)}$, B. K. Kureel$^{(a)}$, R. P. Malik$^{(a,b)}$]{}\ $^{(a)}$ [*Physics Department, Institute of Science,*]{}\ [*Banaras Hindu University, Varanasi - 221 005, (U.P.), India*]{}\ $^{(b)}$ [*DST Centre for Interdisciplinary Mathematical Sciences,*]{}\ [*Institute of Science, Banaras Hindu University, Varanasi - 221 005, India*]{}\ [**Abstract:**]{} We discuss the Becchi-Rouet-Stora-Tyutin (BRST), anti-BRST and (anti-)co-BRST symmetry transformations and derive their corresponding conserved charges in the case of a (1+1)-dimensional (2D) [*self-interacting*]{} non-Abelian gauge theory (without any interaction with matter fields). We point out a set of [*novel*]{} features that emerge in the BRST and co-BRST analysis of the above 2D gauge theory. The algebraic structures of the symmetry operators (and corresponding conserved charges) and their relationship with the cohomological operators of differential geometry are established, too. To be more precise, we demonstrate the existence of a [*single*]{} Lagrangian density that respects the continuous symmetries which obey the proper algebraic structure of the cohomological operators of differential geometry. We lay emphasis on the existence and properties of the Curci-Ferrari (CF)-type restrictions in the context of the (anti-)BRST and (anti-)co-BRST symmetry transformations and pinpoint their differences and similarities. All the observations, connected with the (anti-)co-BRST symmetries, are [*completely*]{} novel. PACS: 11.15.q; 04.50.+h; 73.40.Hm\ [*[Keywords]{}*]{}: [2D non-Abelian 1-form gauge theory; (anti-)BRST and (anti-)co-BRST symmetries; conserved charges; nilpotency and absolute anticommutativity; Curci-Ferrari-type restrictions; cohomological operators; algebraic structures ]{} Introduction ============ The principles of [*local*]{} gauge theories are at the heart of a precise theoretical description of the electromagnetic, weak and strong interactions of nature. 
One of the most intuitive, geometrically rich and theoretically elegant methods to quantize such kinds of theories is the Becchi-Rouet-Stora-Tyutin (BRST) formalism \[1-4\]. In this formalism, the [*local*]{} gauge symmetries of the [*classical*]{} theories are traded for the (anti-)BRST symmetries at the [*quantum*]{} level, where unitarity is satisfied at any arbitrary order of perturbative computations. These (anti-)BRST symmetries are fermionic (i.e. supersymmetric-type) in nature and, therefore, they are nilpotent of order two. Furthermore, these symmetries absolutely anticommute with each other. This latter property encodes the linear independence of these symmetries. Hence, the BRST and anti-BRST symmetries have their own identities. The (anti-)BRST symmetry transformations are [*fermionic*]{} in nature because they transform a bosonic field into its fermionic counterpart and [*vice-versa*]{}. This is precisely what happens with the supersymmetric (SUSY) transformations, which are [*also*]{} fermionic in nature. However, there is a decisive difference between the two (in spite of the fact that [*both*]{} types of symmetries are nilpotent of order two). Whereas the (anti-)BRST symmetry transformations are absolutely anticommuting in nature, the anticommutator of two distinct SUSY transformations always produces the spacetime translation of the field on which it (i.e. the anticommutator) operates. Thus, the SUSY transformations are distinctly different from the (anti-)BRST symmetry transformations. The clinching point of [*difference*]{} is the property of absolute anticommutativity (which is respected by the (anti-)BRST symmetry transformations [*but*]{} violated by the SUSY transformations). In a set of research papers (see, e.g. 
\[5,6\] and references therein), we have established that any arbitrary [*Abelian*]{} [*p*]{}-form $(p = 1,2,3,...)$ gauge theory would respect, in addition to the (anti-)BRST symmetry transformations, the (anti-)co-BRST symmetry transformations, too, in $D = 2p$ dimensions of spacetime at the [*quantum*]{} level. This observation has been shown to be true \[7\] in the cases of the (1+1)-dimensional (2D) (non-)Abelian $1$-form gauge theories (without any interaction with matter fields). In fact, these 2D theories have been shown \[7\] to be field-theoretic examples of Hodge theory as well as a [*new*]{} model of topological field theory (TFT) which captures some salient features of Witten-type TFTs \[8\] as well as a few key aspects of Schwarz-type TFTs \[9\]. In a recent couple of papers \[10,11\], we have discussed the Lagrangian densities, their symmetries and the Curci-Ferrari (CF)-type restrictions for the 2D [*non-Abelian*]{} 1-form gauge theory within the framework of the BRST and superfield formalisms. Some novel features have been pointed out, too. In our earlier work \[10\], we have been able to show the [*equivalence*]{} of the coupled Lagrangian densities w.r.t. the (anti-)BRST as well as (anti-)co-BRST symmetries of the 2D non-Abelian $1$-form gauge theory (without any interaction with matter fields). However, we have [*not*]{} been able to compute the conserved currents (and corresponding charges) for the above continuous symmetries. One of the central themes of our present investigation is to compute [*all*]{} the conserved charges and derive their algebra to show the validity of the CF-type restrictions at the [*algebraic*]{} level. This exercise establishes the [*independent*]{} existence of a set of CF-type restrictions for the 2D non-Abelian $1$-form theory (which has been shown from symmetry considerations \[10\] as well as from the point of view of the superfield approach to the BRST formalism \[11\]). 
In our present endeavor, we accomplish this goal in a straightforward fashion and show that the CF-type restrictions, corresponding to the (anti-)co-BRST symmetries, have some [*novel*]{} features that are different from the [*usual*]{} CF-condition \[12\] corresponding to the (anti-)BRST symmetries of our present theory. One of the highlights of our present investigation is the derivation of the CF-type restrictions and some of the equations of motion (EOM) from the algebra of conserved charges, where the ideas of symmetry generators corresponding to the continuous symmetry transformations of our 2D non-Abelian theory are exploited. Thus, to summarize the key results of our previous works \[10,11\] and the [*present*]{} one, we would like to state that we have been able to show the existence of the CF-type restrictions from the point of view of the symmetries of the $2D$ non-Abelian 1-form gauge theory \[10\], the superfield approach to the BRST formalism applied to the above $2D$ theory \[11\], and the algebra of the conserved charges of the above theory. The latter (i.e. the algebra) is reminiscent of the algebra of the de Rham cohomological operators of differential geometry. Our present studies establish the [*independent*]{} nature of the CF-type restrictions in the context of the nilpotent (anti-)co-BRST symmetries (existing in the 2D non-Abelian 1-form gauge theory) which are [*different*]{} from the CF-condition \[12\] that appears in the context of the (anti-)BRST symmetries (existing in [*any*]{} arbitrary dimension of spacetime for the non-Abelian 1-form gauge theory). One of the key observations of our present [*endeavor*]{} is the fact that each of the coupled Lagrangian densities (cf. Eq. (26) below) represents a [*perfect*]{} model of Hodge theory because its symmetry operators obey an algebra that happens to be the [*exact*]{} algebra of the de Rham cohomological operators of differential geometry. 
The [*novel*]{} feature of this algebra is the observation that it is satisfied by the symmetry operators with [*no*]{} use of the CF-type restrictions [*anywhere*]{}, despite the fact that they correspond to the 2D [*non-Abelian*]{} $1$-form gauge theory (cf. Sec. 6 below). This happens because of the fact that [*individually*]{} each Lagrangian density of (26) respects [*five*]{} perfect[^1] symmetries (where there is [*no*]{} use of any CF-type restrictions or EOM of our theory). In the case of an individual Lagrangian density, the mapping between the symmetry operators and cohomological operators is [*one-to-one*]{}. Both the coupled Lagrangian densities [*also*]{} represent a model of Hodge theory [*together*]{} provided we use the CF-type restrictions as well as the equations of motion of our 2D non-Abelian 1-form gauge theory. In the case of the coupled Lagrangian densities, the mapping between the symmetry operators and cohomological operators is [*two-to-one*]{}. One of the key findings of our present endeavor is the observation that the (anti-)co-BRST charges are nilpotent and absolutely anticommuting. The proof of these properties does [*not*]{} require any kind of CF-type restriction (see Sec. 6 below). In our present endeavor, we have demonstrated that the [*normal*]{} coupled Lagrangian densities (1) (see below) for the non-Abelian 1-form gauge theory respect [*four*]{} perfect symmetries individually, whereas the [*generalized*]{} versions of these Lagrangian densities (26) (see below) respect [*five*]{} perfect symmetries individually. It has been shown that [*both*]{} the Lagrangian densities of Eq. (26) respect the (anti-)co-BRST symmetries that have been listed in (27) (see below), which is a completely novel observation (cf. Eq. (28)). 
The absolute anticommutativity of the (anti-)co-BRST charges (that have been computed from the Lagrangian densities (1)) requires the validity of the CF-type restrictions $({\cal B}\times C = 0, {\cal B}\times \bar C = 0)$. However, the absolute anticommutativity of the above charges (that are computed from the generalized Lagrangian densities (26)) turns out to be [*perfect*]{}. This happens because of the fact that the conditions ${\cal B}\times C = 0$ and ${\cal B}\times \bar C = 0$ become equations of motion for the Lagrangian densities (26). This is also a novel observation in our present endeavor (connected with the $2D$ non-Abelian theory). Our present endeavor is propelled by the following key considerations. First and foremost, we have derived the conserved charges corresponding to the continuous symmetries which have [*not*]{} been discussed in our earlier works \[10,11\]. Second, we have derived the CF-type restrictions in the context of the 2D non-Abelian theory which emerge from symmetry considerations \[10\] as well as from the application of the augmented version of the superfield approach to the BRST formalism \[11\]. We show, in our present endeavor, the existence of [*such*]{} restrictions in the language of the algebra, connected with the conserved charges, which obeys the algebra of the cohomological operators of differential geometry. Third, the (anti-)co-BRST symmetries [*absolutely*]{} anticommute [*without*]{} the use of any kind of CF-type restriction (which is [*not*]{} the case with the (anti-)BRST symmetries). However, in our present endeavor, we have shown that the CF-type restrictions $({\cal B}\times C = 0, {\cal B}\times\bar C = 0)$ appear when we consider the requirement of the absolute anticommutativity of the (anti-)co-BRST charges (derived from the Lagrangian densities (1)). 
Finally, we speculate that the understanding and insights, gained in the context of the 2D non-Abelian theory, might turn out to be useful for the 4D Abelian 2-form and 6D Abelian 3-form gauge theories which have [*also*]{} been shown to be models of Hodge theory (where the (anti-)BRST and (anti-)co-BRST invariant [*non-trivial*]{} CF-type restrictions [*do*]{} exist in a clear fashion \[5,6\]). The material of our present research work is organized as follows. In Sec. 2, we [*briefly*]{} recapitulate the bare essentials of the nilpotent (anti-)BRST and (anti-)co-BRST symmetries, a [*unique*]{} bosonic symmetry and a ghost-scale symmetry of the 2D non-Abelian gauge theory in the Lagrangian formulation. Our Sec. 3 contains the details of the derivation of the conserved Noether currents and conserved charges corresponding to the above continuous symmetries. Our Sec. 4 deals with the elaborate proof that the coupled Lagrangian densities are [*equivalent*]{} w.r.t. the nilpotent (anti-)BRST as well as (anti-)co-BRST symmetry transformations. In Sec. 5, we derive the algebraic structures of the symmetry operators and conserved charges and establish their connection with the cohomological operators of differential geometry (at the algebraic level). Our Sec. 6 deals with the discussion of some [*novel*]{} observations in the context of the algebraic structures. Finally, we make some concluding remarks and point out a few future directions for further investigations in Sec. 7. In our Appendices A and B, we collect some of the explicit computations that have been incorporated in the main body of the text of our present endeavor. In our Appendix C, we show the consequences of the (anti-)BRST symmetry transformations when they are applied to the generalized forms of the Lagrangian densities (cf. Eq. 
(26) below).\ [*Convention and Notations*]{}: Our whole discussion is based on the choice of the 2D flat metric $\eta_{\mu\nu}$ with signatures $(+1,-1)$ which corresponds to the background Minkowskian 2D spacetime manifold. We choose the 2D Levi-Civita tensor $\varepsilon_{\mu\nu}$ such that $\varepsilon_{01} = +1 = \varepsilon^{10}$ and $\varepsilon_{\mu\nu}\,\varepsilon^{\mu\nu} = -\,2!$, $\varepsilon_{\mu\nu}\,\varepsilon^{\nu\lambda} = \delta^{\lambda}_{\mu}$, etc. Throughout the whole body of our text, we adopt the notations $s_{(a)b}$ and $s_{(a)d}$ for the (anti-)BRST and (anti-)co-BRST transformations, respectively. In the 2D Minkowskian flat spacetime, the field strength tensor $F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu} + i\,(A_{\mu}\times A_{\nu})$ has only [*one*]{} existing component $E = F_{01} = -\varepsilon^{\mu\nu}[\partial_{\mu}A_{\nu} + \frac{i}{2}(A_{\mu}\times A_{\nu})]$ and our Greek indices $\mu, \nu, \lambda, \ldots = 0, 1$ correspond to the time and space directions. We have also adopted the dot and cross products in the $SU(N)$ Lie algebraic space where $P\cdot Q = P^a\,Q^a$ and $(P\times Q)^a = f^{abc}\,P^b\,Q^c$ for the non-null vectors $P^a$ $(P = P^a T^a \equiv P\cdot T)$ and $Q^a$ $(Q = Q^a T^a \equiv Q\cdot T)$, where the $SU(N)$ Lie algebra is $[T^a, T^b] = f^{abc}\, T^c$. In this specific algebraic relationship, the $T^a$ are the generators of the $SU(N)$ Lie algebra and the structure constants $f^{abc}$ are chosen to be totally antisymmetric in [*all*]{} their indices $a, b, c = 1, 2, \ldots, N^2-1$.\ [*Standard Definition*]{}: On a compact manifold without a boundary, the set of three mathematical operators $(d, \delta, \Delta)$ is called a set of the de Rham cohomological operators of differential geometry, where $(\delta)d$ are christened the (co-)exterior derivatives and $\Delta = (d + \delta)^2$ is called the Laplacian operator. 
Together, these operators satisfy the algebra $d^2 = \delta^2 = 0$, $\Delta = d\,\delta + \delta\, d$, $[\Delta, d] = 0$, $[\Delta, \delta] = 0$, which is called the Hodge algebra of differential geometry. The co-exterior derivative $\delta$ and the exterior derivative $d$ are connected by the relationship $\delta = \pm\, *\, d\, *$, where $*$ is the Hodge duality operation (defined on the given compact manifold without a boundary). It is obvious that the (co-)exterior derivatives are nilpotent of order two and the Laplacian operator is like the Casimir operator for the whole algebra. However, the latter (i.e. the Hodge algebra) is [*not*]{} a Lie algebra.\ **Preliminaries: Lagrangian Formulation** ========================================= We begin with the coupled (but equivalent) Lagrangian densities \[13,14,10,11\] of our 2D non-Abelian 1-form gauge theory in the Curci-Ferrari gauge (see, e.g. \[15,16\]) as $$\begin{aligned} &&{\cal L}_B = {\cal B} {\cdot E} - \frac {1}{2}\,{\cal B} \cdot {\cal B} +\, B\cdot (\partial_{\mu}A^{\mu}) + \frac{1}{2}(B\cdot B + \bar B \cdot \bar B) - i\,\partial_{\mu}\bar C \cdot D^{\mu}C, \nonumber\\ &&{\cal L}_{\bar B} = {\cal B} {\cdot E} - \frac {1}{2}\,{\cal B} \cdot {\cal B} - \bar B\cdot (\partial_{\mu}A^{\mu}) + \frac{1}{2}(B\cdot B + \bar B \cdot \bar B) - i\, D_{\mu}\bar C \cdot \partial^{\mu}C,\end{aligned}$$ where $B$, $\bar B$ and ${\cal B}$ are the auxiliary fields, $D_{\mu} C = \partial_{\mu} C + i\, (A_{\mu}\times C)$ and $D_{\mu}\bar C = \partial_{\mu}\bar C + i\, (A_{\mu}\times\bar C)$ are the covariant derivatives on the ghost and anti-ghost fields, respectively. These derivatives are in the [*adjoint*]{} representation of the $SU(N)$ Lie algebra and ${B + \bar B + (C\times \bar C)} = 0$ is the Curci-Ferrari (CF) condition \[12\]. The latter is responsible for the [*equivalence*]{} of the Lagrangian densities ${\cal L}_B$ and ${\cal L}_{\bar B}$. 
This observation is one of the [*inherent*]{} properties of the basic concept behind the existence of [*coupled*]{} Lagrangian densities for a given gauge theory \[13,14\]. The fermionic $[(C^a)^2 = 0,\, ({\bar C}^a)^2 = 0]$ (anti-)ghost fields $(\bar C^a)C^a$ are needed for the validity of unitarity in the theory and they satisfy: $C^a \bar C^b + \bar C^b C^a = 0$, $C^a C^b + C^b C^a = 0$, $\bar C^a\bar C^b + \bar C^b\bar C^a = 0$, $\bar C^a C^b + C^b\bar C^a = 0$, etc. We would like to remark here that the 2D kinetic term \[i.e. $-(1/4)\, F^{\mu\nu} \cdot F_{\mu\nu} = (1/2)\, E \cdot E \equiv {\cal B} \cdot E - (1/2)\,{\cal B} \cdot {\cal B}$\] has been linearized by invoking the auxiliary field ${\cal B}$. The Lagrangian densities in (1) respect the following off-shell nilpotent $(s_{(a)b}^2 = 0)$ (anti-)BRST symmetry transformations $s_{(a)b}$: $$\begin{aligned} &&s_{ab} A_\mu= D_\mu\bar C,\,\,\,\,\, s_{ab} \bar C= -\frac{i}{2}\,(\bar C\times\bar C), \,\,\,\, s_{ab}C = i{\bar B},\,\,\,\, s_{ab}\bar B = 0 ,\,\,\,\, s_{ab}({\cal B}\cdot{\cal B}) = 0,\nonumber\\ &&s_{ab} E = i \,(E\times\bar C),\,\;\; \,\,\; s_{ab}{\cal B} = i\,({\cal B}\times\bar C), \,\,\, \; \;s_{ab} B = i\,(B \times \bar C),\quad \;s_{ab}({\cal B}\cdot E) = 0, \nonumber\\ &&s_b A_\mu = D_\mu C, \;\,\,\,\,s_b C = - \frac{i}{2} (C\times C),\,\,\,\; s_b\bar C \;= i\,B ,\; \,\,\,\, \;s_b B = 0,\,\,\,\, \;s_b({\cal B}\cdot{\cal B}) = 0, \nonumber\\ && s_b\bar B = i\,(\bar B\times C),\;\;\,\,\,s_b E = i\,(E\times C),\qquad \,\,\,s_b {\cal B} = i\,({\cal B}\times C),\quad \,\,\, s_b({\cal B}\cdot E) = 0.\end{aligned}$$ This is due to the fact that we observe the following: $$\begin{aligned} &&s_b{\cal L}_B = \partial_\mu(B \cdot D^\mu C), \qquad\qquad\qquad\quad s_{ab}{\cal L}_{\bar B}= - \;\partial_\mu{(\bar B \cdot D^\mu \bar C)}.\end{aligned}$$ As a consequence, the (anti-)BRST transformations are the [*symmetry*]{} transformations for the action integrals ${S= \int d^2x \,{\cal L}_B}$ and ${S = \int d^2x \, 
{\cal L}_{\bar B}}$, respectively. The (anti-)BRST symmetry transformations absolutely anticommute with each other (i.e. $\{s_b, s_{ab}\} = 0$) [*only*]{} when the CF-condition is satisfied. One of the decisive features of the (anti-)BRST symmetry transformations is the observation that the kinetic term ($-\frac{1}{4}F_{\mu\nu}\cdot F^{\mu\nu} = \frac{1}{2}E\cdot E \equiv {\cal B}\cdot E - \frac{1}{2}\,{\cal B}\cdot{\cal B}$) remains invariant under them. This observation will be exploited, later on, in establishing a connection between the continuous symmetries of our 2D theory and the cohomological operators of differential geometry at the [*algebraic*]{} level. In addition to the (anti-)BRST symmetry transformations (2), we note the presence of the following nilpotent $(s_{(a)d}^2 = 0)$ and absolutely anticommuting $(s_d s_{ad} + s_{ad} s_d = 0)$ (anti-)co-BRST symmetry transformations in the theory (see, e.g. \[7\] for details): $$\begin{aligned} &&s_{ad} A_\mu = - \varepsilon_{\mu\nu}\partial^\nu C,\quad\, s_{ad} C = 0,\qquad\quad\,\,\,\,\,\, s_{ad} \bar C = i {\cal B},\qquad\qquad\,s_{ad} {\cal B} = 0,\nonumber\\ && s_{ad} E =D_\mu\partial^\mu C,\quad\quad\,\,\, s_{ad} B = 0,\qquad\qquad s_{ad}\bar B = 0,\qquad\, s_{ad}({\partial_\mu A^\mu})= 0, \nonumber\\ &&s_d A_\mu = - \varepsilon_{\mu\nu}\partial^\nu \bar C, \quad\quad s_d C = - i {\cal B},\quad\qquad s_d \bar C = 0,\,\qquad\qquad\quad s_d{\cal B} = 0,\nonumber\\ && s_d E = D_\mu\partial^\mu\bar C,\,\qquad\quad s_d B = 0, \qquad\qquad\,\, s_d\bar B = 0, \qquad\quad s_d({\partial_\mu A^\mu})= 0. 
\end{aligned}$$ The Lagrangian densities ${\cal L}_B$ and ${\cal L}_{\bar B}$ transform, under the above transformations, as follows $$\begin{aligned} &&s_{ad}\,{\cal L}_{\bar B} = \partial_\mu[{\cal B}\cdot \partial^\mu C], \qquad\qquad\qquad\quad s_d {\cal L}_B = \partial_\mu[{\cal B} \cdot\partial^{\mu} \bar C],\end{aligned}$$ which imply that the action integrals ${S = \int d^2x \,{\cal L}_{B}}$ and ${S = \int d^2x \, {\cal L}_{\bar B}}$ remain invariant under the (anti-)co-BRST transformations. One of the decisive features of the (anti-)co-BRST symmetries is the observation that the gauge-fixing term $(\partial_\mu A^\mu)$ remains invariant under them. This observation would play a key role in establishing a connection between these symmetries and the cohomological operators of differential geometry at the [*algebraic*]{} level. It is quite clear that we have [*four*]{} fermionic symmetries in our present 2D theory. There are [*two*]{} bosonic symmetries in our theory, too. The [*first*]{} one is the ghost-scale symmetry $(s_g)$ and the [*second*]{} one is a unique bosonic symmetry $s_w = \{s_d, s_b\} = -\{s_{ad}, s_{ab}\}$. We focus [*first*]{} on the ghost-scale symmetry. Under this symmetry, we have the following transformations for the fields of our present theory, namely; $$\begin{aligned} && C\longrightarrow e^\Omega \,C,\qquad {\bar C}\longrightarrow e^{-\Omega }\,{\bar C},\qquad \Phi\longrightarrow e^0\,{\Phi},\end{aligned}$$ where the generic field $\Phi = A_{\mu},\,B,\,{\cal B},\,\bar B,\,E$ and $\Omega$ is a [*global*]{} (spacetime-independent) scale transformation parameter. One of the decisive features of the ghost-scale symmetry transformations is the observation that [*only*]{} the (anti-)ghost fields transform and the remaining ordinary basic/auxiliary fields of the theory remain [*invariant*]{} under them. 
The infinitesimal version $(s_g)$ of the above ghost-scale symmetry transformations is: $$\begin{aligned} && s_g C = C,\qquad s_g{\bar C} = -{\bar C},\qquad s_g{\Phi }= 0.\end{aligned}$$ In the above, we have set $\Omega = 1$ for the sake of brevity. Under these infinitesimal transformations, it can be readily checked that: $$\begin{aligned} &&s_g {\cal L}_B = 0,\qquad\qquad\qquad s_g{\cal L}_{\bar B} = 0.\end{aligned}$$ Thus, the action integrals automatically remain invariant under the above ghost-scale symmetry transformations. Now, we focus on the bosonic symmetry ${s_w }$ of our theory. It is elementary to check that, for the Lagrangian density ${\cal L}_B $, we have the following $$\begin{aligned} &&s_w A_\mu = -[ D_\mu{\cal B} + \varepsilon_{\mu\nu} \,(\partial^\nu\bar C\times C) + \varepsilon_{\mu\nu}\,\partial^\nu B], \qquad s_w \bar B = (\bar B\times {\cal B}),\nonumber\\ &&s_w(\partial_{\mu} A^{\mu}) = -[\partial_{\mu} D^{\mu}{\cal B} + \varepsilon_{\mu\nu}(\partial^{\nu}\bar C\times \partial_{\mu} C)], \qquad s_w [C,\bar C,{\cal B},B] = 0, \nonumber\\ &&s_w E = -[D_{\mu}\partial^{\mu}{\cal B} + i\,(E\times{\cal B}) - D_{\mu } C\times\partial^{\mu}\bar C- D_{\mu}\partial^{\mu}\bar C\times C],\end{aligned}$$ where we have taken $ s_w = {\{s_b,s_d}\}$ (modulo a factor of [*i*]{}) and $E = -\varepsilon^{\mu\nu}(\partial_{\mu}A_{\nu} +\frac{i}{2}A_{\mu}\times A_{\nu})$. One of the key observations is that the (anti-)ghost fields of the theory [*do not*]{} transform under the bosonic symmetry transformation $s_w$. It can be checked that the Lagrangian density ${\cal L}_B$ transforms under this bosonic symmetry transformation[^2] as (see, e.g. \[10\]) $$\begin{aligned} &&s_w{\cal L} _B = \partial_{\mu}[{\cal B}\cdot\partial^{\mu}B-B\cdot D^{\mu}{\cal B}-\partial^{\mu}\bar C\cdot({\cal B}\times C) - \varepsilon^{\mu\nu} B\cdot(\partial_{\nu}\bar C\times C)],\end{aligned}$$ thereby rendering the action integral $ S =\int d^2 x\,{\cal L}_B$ invariant. 
Thus, the bosonic transformations (9) correspond to the [*symmetry*]{} of the theory. We remark that one can define another bosonic symmetry $s_{\bar w} = -\,\{s_{ad},\,s_{ab}\}$ for the Lagrangian density ${\cal L}_{\bar B}$, but it turns out to be equivalent (i.e. $s_w + s_{\bar w} = 0$) to $s_w = \{s_d,\,s_b\}$ if we use the equations of motion of the theory and the CF-type condition $(B + \bar B + (C \times \bar C) = 0)$. To sum up, we have a total of [*six*]{} continuous symmetries in the theory. Together, these symmetry operators satisfy an algebra that is [*exactly*]{} similar to the algebraic structure of the cohomological operators of differential geometry. Thus, there is a connection between the [*two*]{} (cf. Sec. 5 below). **Conserved Charges: Noether Theorem** ====================================== The Noether theorem states that the invariance of the action integral, under continuous symmetry transformations, leads to the existence of conserved currents. As pointed out earlier, the Lagrangian densities ${\cal L}_B$ and ${\cal L}_{\bar B}$ transform, under $s_b$ and $s_{ab}$, to total spacetime derivatives as given in (3), thereby rendering the action integrals $S =\int d^2x\,{\cal L}_B$ and $S =\int d^2x\,{\cal L}_{\bar B}$ invariant. The corresponding Noether currents (w.r.t. the BRST and anti-BRST symmetry transformations) are: $$\begin{aligned} && J^{\mu}_b = -\varepsilon^{\mu\nu}{\cal B}\cdot D_{\nu} C + B\cdot D^{\mu} C+\frac{1}{2}\, \partial^{\mu}\bar C\cdot(C\times C),\nonumber\\ &&J^{\mu}_{ab} = -\varepsilon^{\mu\nu}{\cal B}\cdot D_{\nu}\bar C -\bar B\cdot D^{\mu}\bar C -\frac{1}{2}\,\partial^{\mu} C\cdot(\bar C\times\bar C).\end{aligned}$$ The above currents are conserved (i.e. 
$\partial_{\mu}J^{\mu}_b = 0$ and $\partial_{\mu} J^{\mu}_{ab} = 0$) due to the following Euler-Lagrange (EL) equations of motion (EOM) that emerge from ${\cal L}_B$ and ${\cal L}_{\bar B}$, namely; $$\begin{aligned} && {\cal B} = E,\quad D_\mu\partial^{\mu}\bar C = 0,\qquad \partial_{\mu} D^{\mu} C = 0,\qquad \varepsilon^{\mu\nu}D_{\nu}{\cal B} + \partial ^{\mu}B+(\partial^{\mu}\bar C\times C) = 0,\nonumber\\ && {\cal B} = E,\quad \partial_\mu D^{\mu}\bar C = 0,\qquad D_{\mu}\partial^{\mu}C = 0,\qquad \varepsilon^{\mu\nu}D_{\nu}{\cal B} - \partial ^{\mu}\bar B-({\bar C}\times \partial^{\mu} C) = 0.\end{aligned}$$ The above observations are sacrosanct as far as Noether’s theorem is concerned. It is to be noted that we have used $E = -\varepsilon^{\mu\nu}(\partial_{\mu}A_{\nu}+\frac{i}{2} A_{\mu}\times A_{\nu})$ in the derivation of the EOM. The conserved charges (that emerge from the conserved Noether currents) are: $$\begin{aligned} &&Q_b =\int dx \;J^0_b\equiv \int dx \;[{\cal B}\cdot D_1 C + B\cdot D_0 C +\frac{1}{2}\dot{\bar C}\cdot(C\times C)],\nonumber\\ &&Q_{ab} =\int dx\; J^0_{ab}\equiv \int dx \;[{\cal B}\cdot D_1\bar C - \bar B\cdot D_0\bar C -\frac{1}{2}(\bar C\times\bar C)\cdot\dot C].\end{aligned}$$ Using the EL-EOM (12), the above charges can be expressed in more useful (but equivalent) forms as $$\begin{aligned} &&Q_b =\int dx \;[B\cdot D_0 C -\dot B\cdot C -\frac{1}{2}\,\dot{\bar C}\cdot(C\times C)],\nonumber\\ &&Q_{ab} =\int dx\;[\dot{\bar B}\cdot\bar C -\bar B\cdot D_0\bar C +\frac{1}{2}(\bar C\times \bar C)\cdot\dot C],\end{aligned}$$ which are the [*generators*]{} for the (anti-)BRST transformations (2). 
This statement can be verified by observing that the (anti-)BRST symmetry transformations, listed in equation (2), can be derived from the following general expression $$\begin{aligned} s_r \, {\Phi} = \mp \, i\,\,[\Phi, Q_r]_{\mp}\qquad\qquad\qquad r = b, ab,\end{aligned}$$ where the subscripts ($\mp$), on the square bracket, correspond to the bracket being a commutator or an anticommutator for the generic field $\Phi$ being bosonic or fermionic, respectively. The signs $\mp$ in front of the square bracket can be chosen appropriately (see, e.g. \[17\]). Under the (anti-)co-BRST symmetry transformations $s_{(a)d}$, the Lagrangian densities ${\cal L}_B$ and ${\cal L}_{\bar B}$ transform as given in (5). According to the Noether theorem, these infinitesimal continuous symmetry transformations lead to the derivation of conserved Noether currents. The explicit expressions for these conserved currents are: $$\begin{aligned} &&J^{\mu}_d = {\cal B}\cdot\partial^{\mu}\bar C -\varepsilon^{\mu\nu} B\cdot\partial_{\nu}\bar C,\qquad\qquad J^{\mu}_{ad} = {\cal B}\cdot\partial^{\mu} C +\varepsilon^{\mu\nu}\bar B\cdot\partial_{\nu} C.\end{aligned}$$ The conservation laws $\partial_{\mu}J^{\mu}_d = 0$ and $\partial_{\mu}J^{\mu}_{ad} = 0$ can be proven by using the EL-EOM (12). 
The conserved charges can be expressed [*equivalently*]{} in various forms as: $$\begin{aligned} &&Q_d =\int\, dx \;J^0_d =\int dx\;[{\cal B}\cdot\dot{\bar C}+B\cdot\partial_1\bar C] \equiv \int dx\;[{\cal B}\cdot\dot{\bar C}-\partial_1 B\cdot\bar C]\nonumber\\ &&\qquad\qquad\qquad\;\;\equiv \int dx\;[{\cal B}\cdot\dot{\bar C}-D_0{\cal B}\cdot\bar C +(\partial_1\bar C\times C)\cdot\bar C],\nonumber\\ &&Q_{ad} =\int dx \;J^0_{ad} =\int dx\;[{\cal B}\cdot\dot C- \bar B \cdot\partial_1 C] \equiv \int dx\;[{\cal B}\cdot\dot C +\partial_1 \bar B\cdot C]\nonumber\\ &&\qquad\qquad\qquad\quad\, \equiv \int dx\;[{\cal B}\cdot\dot C-D_0{\cal B}\cdot C -(\bar C\times \partial_1 C)\cdot C].\end{aligned}$$ The above charges are the generators of the (anti-)co-BRST symmetry transformations in equation (4). This statement can be corroborated by using the formula (15) where we have to replace $r = b, ab \longrightarrow r = d, ad$. We remark that the fermionic symmetries $s_{(a)b}$ and $s_{(a)d}$ are off-shell nilpotent of order two (i.e. $s_{(a)b}^2 = 0$, $s_{(a)d}^2 = 0$). This can be explicitly checked from the transformations listed in equations (2) and (4). This property (i.e. nilpotency) is also reflected at the level of the conserved charges. To corroborate this assertion, we note that $$\begin{aligned} &&s_b Q_b = -i\,\{Q_b,Q_b\} = 0\qquad\qquad \Longrightarrow \quad Q_b^2 = 0,\nonumber\\ &&s_{ab} \,Q_{ab} = -i\,\{Q_{ab},Q_{ab}\} = 0 \quad\quad\,\Longrightarrow \quad Q_{ab}^2 = 0,\nonumber\\ &&s_d Q_d = -i\,\{Q_d,Q_d\} = 0 \qquad\,\,\,\,\quad\Longrightarrow \quad Q_d^2 = 0,\nonumber\\ &&s_{ad}Q_{ad} = -i\,\{Q_{ad},Q_{ad}\} = 0\quad\quad \,\Longrightarrow \quad Q_{ad}^2 = 0,\end{aligned}$$ where we have used the definition of the symmetry generator (15). This observation is straightforward because the l.h.s. of the above equations can be computed explicitly by using the expressions for $Q_{(a)b}$, $Q_{(a)d}$ (cf. 
Eqs. (14) and (17)) and the transformations (2) and (4) corresponding to the (anti-)BRST and (anti-)co-BRST transformations[^3]. The conserved Noether current and corresponding charge for the infinitesimal and continuous ghost-scale transformations (7) are: $$\begin{aligned} &&J^{\mu}_g = - i\,[\partial^{\mu}\bar C\cdot C -\bar C\cdot D^{\mu } C],\nonumber\\ &&Q_g =\int dx \;J^0_g\equiv - i\int dx \;[\dot{\bar C}\cdot C -{\bar C}\cdot D_0 C].\end{aligned}$$ Using the equations of motion (12), it can be readily checked that $\partial_{\mu}J^{\mu}_g = 0$. Hence, the charge $Q_g$ is also conserved. Finally, we briefly discuss the [*unique*]{} bosonic symmetry transformations $s_w = \{s_d, s_b\} = -\{s_{ad}, s_{ab}\}$ of this theory \[7\]. As pointed out earlier, the Lagrangian density ${\cal L}_B$ transforms to a total spacetime derivative under $s_w$ as given in (10). The conservation of the Noether current (i.e. $\partial_{\mu}J^{\mu}_w = 0$) can be proven by using Eq. (12). The conserved current $(J^{\mu}_w)$ and corresponding charge $(Q_w)$ are \[7\]: $$\begin{aligned} &&J^{\mu}_w =-\varepsilon^{\mu\nu}[{\cal B}\cdot D_{\nu}{\cal B} -B\cdot\partial_{\nu}B],\nonumber\\ &&Q_w =\int dx \;J^0_w =\int dx\; [{\cal B}\cdot D_1{\cal B} -B\cdot\partial_1 B].\end{aligned}$$ In our Appendix B, we have shown alternative derivations of $Q_w$ from the continuous symmetry transformations and the concept behind the symmetry generator. It is evident that we have [*six*]{} conserved charges which correspond to the [*six*]{} infinitesimal and continuous symmetries that exist in our theory. We shall establish their connections with the de Rham cohomological operators of differential geometry in our Sec. 5 where the emphasis would be laid on the algebraic structure(s) [*only*]{}. 
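Before mapping the charges onto the de Rham operators, it is instructive to note that the Hodge algebra quoted in the Standard Definition above holds for *any* nilpotent operator. The following toy matrix model (our own illustration, not the field-theoretic operators themselves) checks this numerically: any matrix $d$ with $d^2 = 0$, together with $\delta = d^{T}$ and $\Delta = d\delta + \delta d$, satisfies $[\Delta, d] = 0 = [\Delta, \delta]$.

```python
import numpy as np

# Toy model of the de Rham algebra: a block strictly-upper-triangular
# matrix d automatically satisfies d @ d = 0 (a two-step "complex").
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Z = np.zeros((3, 3))
d = np.block([[Z, A], [Z, Z]])
delta = d.T                      # co-exterior derivative as the adjoint
Delta = d @ delta + delta @ d    # Laplacian, Delta = (d + delta)^2

assert np.allclose(d @ d, 0)                      # d^2 = 0
assert np.allclose(delta @ delta, 0)              # delta^2 = 0
assert np.allclose(Delta @ d, d @ Delta)          # [Delta, d] = 0
assert np.allclose(Delta @ delta, delta @ Delta)  # [Delta, delta] = 0
```

The commutators vanish identically because $\Delta d = d\delta d = d\Delta$ once $d^2 = 0$, which is exactly the algebraic mechanism exploited for the charges in Sec. 5.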
**Equivalence of the Coupled Lagrangian Densities: Symmetry Considerations** ============================================================================ We observe, first of all, that ${\cal L}_B$ and ${\cal L}_{\bar B}$ are equivalent [*only*]{} when the CF-condition ${B + \bar B + (C\times \bar C)} = 0 $ is satisfied. This can be shown by the requirement of the equivalence of the Lagrangian densities (i.e. ${\cal L}_B - {\cal L}_{\bar B} \equiv 0$, modulo a total spacetime derivative term) which primarily leads to the following equality, namely: $$\begin{aligned} &&B\cdot(\partial_{\mu} A^{\mu}) - i\,\partial_{\mu}\bar C\cdot D^{\mu} C = -\bar B\cdot(\partial_{\mu} A^{\mu}) - i\, D_{\mu}\bar C\cdot \partial^{\mu} C.\end{aligned}$$ Thus, it is evident that [*both*]{} the Lagrangian densities are [*equivalent*]{} only on a hypersurface which is described by the CF-condition (i.e. ${B + \bar B + (C\times \bar C)} = 0 )$ in the 2D Minkowskian flat spacetime manifold. Furthermore, we note that [*both*]{} the Lagrangian densities respect the (anti-)BRST symmetry transformations because, besides (3), we have the following explicit transformations: $$\begin{aligned} &&s_{ab}{\cal L}_B = -\partial_\mu\,[{\{\bar B + ( C\times\bar C)\} \cdot \partial^\mu \bar C}\,] +\{B+\bar B + ( C \times {\bar C})\} \cdot D_\mu \partial^\mu \bar C, \nonumber\\ &&s_b{\cal L }_{\bar B}\; = \partial_\mu\,[ {\{ B + ( C \times \bar C )\}}\cdot \partial^\mu C \,]-{\{B + \bar B + ( C\times\bar C )\}}\cdot D_\mu\partial^\mu C.\end{aligned}$$ Thus, if we exploit the strength of the CF-condition ${B + \bar B + (C\times \bar C)} = 0, $ we obtain $$\begin{aligned} &&s_{ab}\,{\cal L}_{B} = \partial_\mu[B \cdot\partial^{\mu}\bar C ], \qquad\qquad s_b{\cal L}_{\bar B} = -\partial_\mu[{\bar B}\cdot\partial^{\mu}C],\end{aligned}$$ thereby rendering the action integrals invariant.
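For completeness, we sketch how the equality (21) produces the CF-condition (the intermediate signs depend on the conventions adopted for the Grassmann-odd cross products, which we do not spell out here). Using $D_\mu \bar C = \partial_\mu\bar C + (A_\mu\times\bar C)$ and the cyclicity of the triple product, the difference of the two ghost terms in (21) collapses, modulo a total spacetime derivative, to a term proportional to $(C\times\bar C)\cdot(\partial_\mu A^\mu)$, so that (21) is equivalent to $$\big[B + \bar B + (C\times\bar C)\big]\cdot(\partial_\mu A^\mu) = 0,$$ which is guaranteed on the hypersurface defined by the CF-condition.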
We draw the conclusion that, due to the key equations (3) and (23), [*both*]{} the Lagrangian densities ${\cal L}_B$ and ${\cal L}_{\bar B}$ respect [*both*]{} the BRST and anti-BRST symmetries provided we confine ourselves to the hypersurface defined by the CF-condition (where the absolute anticommutativity property (i.e. ${\{s_b, s_{ab}}\} = 0 $) is [*also*]{} satisfied for $s_{(a)b}$). As a consequence, we infer that [*both*]{} the Lagrangian densities are [*equivalent*]{} w.r.t. the (anti-)BRST symmetries on the hypersurface defined by the CF-condition \[12\]. Now we focus on the issue of [*equivalence*]{} of the Lagrangian densities ${\cal L}_B$ and ${\cal L}_{\bar B}$ from the point of view of the (anti-)co-BRST symmetry transformations. Besides the symmetry transformation in equation (5), we observe the following: $$\begin{aligned} && s_d{\cal L}_{\bar B} = \partial_\mu[{\cal B}\cdot D^\mu\bar C- \varepsilon^{\mu\nu} (\partial_\nu\bar C\times \bar C)\cdot C] + i\; (\partial_\mu A^\mu)\cdot({\cal B}\times\bar C),\nonumber\\ && s_{ad}{\cal L}_B = \partial_\mu[{\cal B}\cdot D^\mu C+ \varepsilon^{\mu\nu}\bar C\cdot(\partial_\nu C\times C)] + i\; (\partial_\mu A^\mu)\cdot({\cal B}\times C).\end{aligned}$$ We draw the conclusion, from the above, that [*both*]{} the Lagrangian densities ${\cal L}_B$ and ${\cal L}_{\bar B}$ are [*equivalent*]{} w.r.t. the (anti-)co-BRST symmetry transformations if and only if the conditions $({\cal B}\times C) =0 $ and $({\cal B}\times\bar C)=0 $ are satisfied. By analogy with equations (22) and (23), it is straightforward to conclude that ${\cal B}\times C = 0$ and ${\cal B}\times\bar C = 0$ are the CF-type restrictions w.r.t. the (anti-)co-BRST symmetries for the [*self-interacting*]{} 2D non-Abelian gauge theory.
We would like to mention here that there are differences between the CF-condition ${B + \bar B + (C\times \bar C)} = 0 $ (existing for the non-Abelian 1-form gauge theory in the context of (anti-)BRST symmetry transformations for [*any*]{} arbitrary dimension of spacetime) and the CF-type restrictions that appear in the context of (anti-)co-BRST symmetry transformations for the 2D non-Abelian 1-form gauge theory. Whereas the latter conditions ${\cal B}\times C = 0$ and ${\cal B}\times\bar C = 0$ are [*perfectly*]{} (anti-)co-BRST invariant \[i.e. $s_{(a)d}({\cal B}\times C) = 0$, $s_{(a)d}({\cal B}\times\bar C) = 0$\] quantities, the same is [*not*]{} true in the case of the CF-condition ${B + \bar B + (C\times \bar C)} = 0. $ It can be checked that: $$\begin{aligned} &&s_b[{B + \bar B + (C\times \bar C)}] = i\,[{B + \bar B + (C\times \bar C)}]\times C, \nonumber\\ &&s_{ab}[{B + \bar B + (C\times \bar C)}] = i\,[{B + \bar B + (C\times \bar C)}]\times\bar C. \end{aligned}$$ The above transformations show that the CF-condition $B + \bar B + (C\times \bar C)= 0$ is (anti-)BRST invariant [*only*]{} on the hypersurface defined by the restriction $B + \bar B + (C\times \bar C)= 0$ itself. Furthermore, the (anti-)BRST symmetry transformations are absolutely anticommuting (i.e. ${\{s_b,s_{ab}}\} = 0$) [*only*]{} on the hypersurface described by the CF-condition ${B + \bar B + (C\times \bar C)} = 0 $. However, the absolute anticommutativity of the nilpotent (anti-)co-BRST symmetry transformations (i.e. ${\{s_d, s_{ad}\} = 0}$) is satisfied [*without*]{} any use of ${\cal B}\times C = 0$ and ${\cal B}\times\bar C = 0$. In other words, the absolute anticommutativity of the (anti-)co-BRST symmetry transformations does not need any kind of restriction from outside. We shall see, later on, that the above CF-type restrictions (i.e.
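The [*perfect*]{} (anti-)co-BRST invariance of the CF-type restrictions can be verified in one line from the transformation rules $s_d C = -i\,{\cal B}$, $s_d \bar C = 0$, $s_{ad}\bar C = i\,{\cal B}$, $s_{ad} C = 0$ and $s_{(a)d}{\cal B} = 0$ (cf. equation (4)): $$s_d({\cal B}\times C) = {\cal B}\times(s_d\, C) = -\,i\,({\cal B}\times{\cal B}) = 0, \qquad\quad s_{ad}({\cal B}\times\bar C) = {\cal B}\times(s_{ad}\,\bar C) = i\,({\cal B}\times{\cal B}) = 0,$$ while $s_d({\cal B}\times\bar C)$ and $s_{ad}({\cal B}\times C)$ vanish trivially because $s_d\,\bar C = 0$ and $s_{ad}\, C = 0$.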
${\cal B}\times C = 0$ and ${\cal B}\times\bar C = 0$) appear at the level of the algebra obeyed by the conserved charges (derived from the Lagrangian densities (1)) when we demand the absolute anticommutativity of the co-BRST and anti-co-BRST charges. As pointed out earlier, $s_{(a)d}({\cal B}\times C )= 0$ and $s_{(a)d}({\cal B}\times \bar C) = 0$. Thus, these CF-type constraints are (anti-)co-BRST invariant and, therefore, they are physical and theoretically very useful. As a consequence of the above observation, the Lagrangian densities ${\cal L}_B $ and ${\cal L}_{\bar B}$ can be [*modified*]{} in such a manner that ${\cal L}_B $ and ${\cal L}_{\bar B}$ can have the [*perfect*]{} (anti-)co-BRST symmetry invariance(s). For instance, we note that the following modified versions of the Lagrangian densities, with fermionic ($\lambda ^2 = \bar\lambda^2=0,\,\bar\lambda\lambda+\lambda\bar\lambda = 0)$ Lagrange multiplier fields ${\lambda }$ and $ {\bar\lambda }$, namely: $$\begin{aligned} {\cal L}^{(\lambda)}_{\bar B} &=& {\cal B} {\cdot E}-\frac {1}{2}\,{\cal B} \cdot {\cal B} - \bar B\cdot (\partial_{\mu}A^{\mu}) + \frac{1}{2}(B\cdot B + \bar B \cdot \bar B)\nonumber\\ &-& i\, D_{\mu}\bar C \cdot \partial^{\mu}C + \lambda\cdot({\cal B}\times\bar C), \nonumber \\ {\cal L}^{(\bar\lambda)}_B &=& {\cal B}{\cdot E}-\frac {1}{2}\,{\cal B} \cdot {\cal B} + B\cdot (\partial_{\mu}A^{\mu}) + \frac{1}{2}(B\cdot B + \bar B \cdot\bar B)\nonumber\\ &-& i\,\partial_{\mu}\bar C \cdot D^{\mu}C + \bar\lambda\cdot({\cal B}\times C),\end{aligned}$$ respect the following [*perfect*]{} (anti-)co-BRST symmetry transformations: $$\begin{aligned} &&s_{ad} A_{\mu} = - \varepsilon_{\mu\nu}\partial^\nu C,\quad\quad s_{ad} C = 0,\quad \quad s_{ad}\, \bar C = i\; {\cal B},\qquad\quad s_{ad} {\cal B} = 0,\nonumber\\ &&s_{ad} E =D_\mu\partial^\mu C,\quad\quad s_{ad}({\partial_\mu A^\mu})= 0, \quad s_{ad}\,{\lambda}= -i\;({\partial_\mu A^\mu}),\quad s_{ad}\,{\bar \lambda}= 0,
\nonumber\\ &&s_d A_\mu = - \varepsilon_{\mu\nu}\partial^\nu \bar C, \quad\quad s_d \bar C = 0,\quad\quad\quad s_d C = - i\; {\cal B},\qquad\qquad s_d{\cal B} = 0,\nonumber\\ && s_d E = D_\mu\partial^\mu\bar C,\qquad s_d({\partial_\mu A^\mu})= 0,\quad\quad s_d \,{\bar \lambda} = -i\;({\partial_\mu A^\mu}),\quad\quad s_d\,{\lambda } = 0.\end{aligned}$$ We remark here that the above (anti-)co-BRST symmetry transformations are off-shell nilpotent as well as absolutely anticommuting (without any use of CF-type restrictions). Hence, these symmetries are proper and perfect. In the above, the superscripts $(\lambda)$ and ${(\bar\lambda)}$ on the Lagrangian densities are used for obvious reasons (i.e. they characterize the modified versions of ${\cal L}_{\bar B}$ and ${\cal L}_B$, respectively). It should be noted that the Lagrange multipliers ${\lambda}$ and $\bar{\lambda}$ carry the ghost numbers $(+1)$ and $(-1)$, respectively. Ultimately, we observe that the following transformations of the Lagrangian densities are true, namely: $$\begin{aligned} &&s_d {\cal L}_B^{(\bar\lambda)} = \partial_{\mu}[{\cal B}\cdot\partial^{\mu}\bar C], \qquad\qquad s_{ad} {\cal L}^{(\lambda)}_{\bar B} = \partial_{\mu}[{\cal B}\cdot\partial^{\mu} C],\nonumber\\ &&s_d {\cal L}^{(\lambda)}_{\bar B} = \partial_{\mu}[{\cal B}\cdot D^{\mu}\bar C -\varepsilon^{\mu\nu}(\partial_\nu\bar C\times\bar C)\cdot C ],\nonumber\\ && s_{ad}{\cal L}_B^{(\bar\lambda)} =\partial_{\mu}[{\cal B}\cdot D^{\mu} C + \varepsilon^{\mu\nu} \bar C\cdot(\partial_{\nu} C\times C)],\end{aligned}$$ which show that the action integrals $S = \int d^2 x \, {\cal L}^{(\bar\lambda)}_B$ and $S =\int d^2 x \, {\cal L}^{(\lambda)}_{\bar B}$ remain invariant under the (anti-)co-BRST symmetry transformations $s_{(a)d}$. Thus, we lay emphasis on the observation that [*both*]{} the Lagrangian densities ${\cal L}_B^{(\bar\lambda)}$ and ${\cal L}_{\bar B}^{(\lambda)}$ (cf. Eq. (26)) are [*equivalent*]{} as far as the [*symmetry*]{} considerations w.r.t.
the (anti-)co-BRST symmetry transformations (27) are concerned. Henceforth, we shall [*only*]{} focus on the Lagrangian densities ${\cal L}_B^{(\bar\lambda)}$ and ${\cal L}_{\bar B}^{(\lambda)}$ for our further discussions and we shall discuss their symmetry properties under the off-shell nilpotent (anti-)BRST transformations, too (cf. Appendix C below). **Algebraic Structures: Symmetries and Charges** ================================================ The Lagrangian densities in Eq. (26) are good enough to provide the physical realizations of the cohomological operators of differential geometry in the language of their symmetry properties. First of all, let us focus on ${\cal L}_B^{(\bar\lambda)}$. This Lagrangian density (and the corresponding action integral) respects the (anti-)co-BRST symmetry transformations (27) and the BRST symmetry transformations in a [*perfect*]{} manner because the nilpotent BRST symmetry transformations $(s_b)$, listed in (2) (along with $s_b \,{\bar\lambda} = 0$), are a [*symmetry*]{} of the action integral $S =\int d^2x \,{\cal L}_B^{(\bar\lambda)}$. This is because of the fact that we have $s_b({\cal B}\times C) = 0$ due to the nilpotency condition $s_b^2{\cal B} = i\,s_b({\cal B}\times C) = 0$ and $s_b{\cal L}_B = \partial_{\mu}[B\cdot D^{\mu} C]$ (cf. Eq. (3)). To be more precise, the Lagrangian density ${\cal L}_B^{(\bar\lambda)}$ respects $s_b, s_d,s_{ad},s_g$ and $s_w = {\{s_b,s_d}\}$ as discussed in Sec. 2 (with the additional transformations $s_b{\bar\lambda} = 0$, $s_b({\cal B}\times C) = 0$ and the transformations (27) which lead to (28)). This observation should be contrasted with the Lagrangian density ${\cal L}_B$ (cf. Eq. (11)) which respects only [*four*]{} [*perfect*]{} symmetries, namely $s_b, s_d, s_g$ and $s_w$. It does not respect $s_{ad}$ [*perfectly*]{}.
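The claim $s_b({\cal B}\times C) = 0$ rests on a standard Lie-algebraic fact: for an anticommuting adjoint vector $C$, the Jacobi identity for the structure constants, combined with the antisymmetry of the ghost components, implies $$({\cal B}\times C)\times C = \tfrac{1}{2}\,{\cal B}\times(C\times C),$$ so that the two contributions to $s_b({\cal B}\times C) = (s_b{\cal B})\times C + {\cal B}\times(s_b\, C)$ cancel against each other once the BRST rules of (2) for ${\cal B}$ and $C$ (not reproduced in this section) are inserted.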
One can explicitly check that, in their operator form, the above set of five [*perfect*]{} symmetries[^4] obeys the following algebra: $$\begin{aligned} && s_{(a)d}^2 = s_b^2 = 0,\qquad\qquad {\{s_b,s_d}\} = s_w,\qquad {\{s_d,s_{ad}}\} = 0, \nonumber\\ &&[s_w,s_r ] = 0\qquad\quad\,\; r = b,d,ad,g,\quad{\{s_b,s_{ad}}\} = 0,\nonumber\\ &&[s_g, s_b] = + s_b,\qquad [s_g, s_d] = -s_d,\qquad\,\, [s_g,s_{ad}] = + s_{ad}.\end{aligned}$$ In the above, we note that $s_w\bar\lambda = 0$ ($\Longrightarrow $ $ s_b\bar\lambda= 0$, $s_d\bar\lambda = 0)$ and $s_g\bar\lambda = -\bar\lambda.$ The algebra in (29) is reminiscent of the algebra obeyed by the de Rham cohomological operators of differential geometry (see, e.g. \[18,19\]), namely: $$\begin{aligned} && d^2 = 0,\qquad \delta^ 2 = 0, \qquad{\{d,\delta }\} =\triangle ,\qquad[\triangle ,d] = 0=[\triangle,\delta],\end{aligned}$$ where $(d,\delta,\triangle)$ are the exterior derivative, co-exterior derivative and Laplacian operators, respectively. These operators constitute the set of de Rham cohomological operators. It is clear that we have $ d\longleftrightarrow s_b$, $ \delta \longleftrightarrow s_d$ and $ \triangle\longleftrightarrow s_w$. Such an identification is justified due to the algebra of the conserved charges, too, where the transformation $s_g$ and corresponding charge $Q_g$ play an important role. We shall discuss it later. We note here that there is a [*one-to-one*]{} mapping between the symmetry operators and the cohomological operators. It is worth pointing out that the algebra in (29) is obeyed for the Lagrangian density ${\cal L}_B^{(\bar\lambda)}$ (which respects [*five*]{} perfect continuous symmetries). However, the algebra (29) is satisfied [*only*]{} on-shell, where we use the EOM (derived from the Lagrangian density ${\cal L}_B^{(\bar\lambda)}$) and the set of CF-type restrictions that have been discussed in earlier works \[10,11\].
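As an illustration of the relation ${\{s_d, s_{ad}}\} = 0$ in (29), one can act on the gauge field with the rules listed in (27): $$s_d\, s_{ad}\, A_\mu = s_d\big(-\varepsilon_{\mu\nu}\partial^\nu C\big) = i\,\varepsilon_{\mu\nu}\partial^\nu{\cal B}, \qquad\quad s_{ad}\, s_d\, A_\mu = s_{ad}\big(-\varepsilon_{\mu\nu}\partial^\nu \bar C\big) = -\,i\,\varepsilon_{\mu\nu}\partial^\nu{\cal B},$$ so that ${\{s_d, s_{ad}}\}\,A_\mu = 0$ without any use of the CF-type restrictions.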
We list here a few of these algebraic relationships which are juxtaposed along with the EL-EOM and the constraints (i.e. CF-type restrictions) that are invoked in their proof. To be more explicit and precise, we have the following algebraic relations as well as the restrictions/EOM (which are exploited in the proof of the algebraic relations), namely: $$\begin{aligned} &&{\{s_b,s_{ad}}\}\,\bar C = 0\qquad\quad\;\Longleftrightarrow \qquad\quad{\cal B}\times C = 0,\nonumber\\ &&{\{s_b,s_{ad}}\}\bar\lambda = 0\qquad\quad\;\; \Longleftrightarrow\quad\qquad \partial_{\mu}D^{\mu}C = 0,\nonumber\\ &&[s_w,s_{ad}]\,A_{\mu} = 0\qquad\quad\Longleftrightarrow\qquad\quad {\cal B}\times C = 0,\nonumber\\ &&[s_w, s_{ad}]\,\bar\lambda = 0 \qquad\quad\;\; \Longleftrightarrow\quad\qquad \partial_{\mu} D^{\mu} {\cal B} + \varepsilon^{\mu\nu}(\partial_{\nu}\bar C\times \partial_{\mu} C) = 0.\end{aligned}$$ Thus, we observe that the algebra (29) is very nicely respected provided we utilize the strength of the EOM from ${\cal L}_B^{(\bar\lambda)}$ and use the CF-type restrictions appropriately. Now we focus on ${\cal L}_{\bar B}^{(\lambda)}$ and briefly discuss the algebra of its symmetry operators. This Lagrangian density also respects [*five*]{} perfect symmetries. These are $s_d,s_{ad}$, $s_w = -\;{\{s_{ad},s_{ab}}\},$ $s_{ab}$ and $s_g$ (cf. Eqs. (2), (6), (27)). In particular, we note that the anti-BRST symmetry transformations $(s_{ab})$ are the same as in (2) together with $s_{ab}\lambda = 0$ because we find that $s_{ab}({\cal B}\times \bar C) = 0$ due to the nilpotency condition $s_{ab}^2{\cal B} = 0$.
The algebra satisfied by the above symmetry operators is: $$\begin{aligned} &&s_{(a)d}^2 =s_{ab}^2 = 0,\qquad\qquad{\{s_{ad},s_{ab}}\} = - s_w,\qquad {\{s_d,s_{ad}}\} = 0,\nonumber\\ &&[s_w,s_r] = 0,\qquad\quad r = d,ad,ab,g,\qquad\quad {\{s_d,s_{ab}}\} = 0,\nonumber\\ &&[s_g,s_d] = -s_d,\qquad[s_g,s_{ab}] = - s_{ab},\qquad\;\; [s_g,s_{ad}] = s_{ad} \,.\end{aligned}$$ We note that $s_{ad}\lambda=s_{ab}\lambda = 0$ implies that $s_w\lambda = 0$ because $s_w = -{\{s_{ab},s_{ad}}\}$. We also have $s_g\lambda = +\lambda$ (i.e. the ghost number of $\lambda$ is $+1$). From the above algebra, it is clear that we have found the physical realizations of the cohomological operators $(d,\delta,\triangle)$ in the language of the symmetry transformations of the Lagrangian density ${\cal L}_{\bar B}^{(\lambda)}$. However, the algebra (32) is satisfied only when the EOM and the constraints (i.e. CF-type restrictions) of the theory are exploited together in a judicious manner. We have been brief here in our statements but it can be easily checked that our claims are true. To be more explicit, we note that we have obtained a [*one-to-one*]{} mapping: $d\longleftrightarrow s_{ad}$, $\delta\longleftrightarrow s_{ab}$ and $\triangle\longleftrightarrow s_w = -\;{\{s_{ab},s_{ad}}\}$. We conclude, from the above discussions, that the Lagrangian densities ${\cal L}_B^{(\bar\lambda)}$ and ${\cal L}_{\bar B}^{(\lambda)}$ respect [*five*]{} perfect symmetries out of which [*two*]{} are fermionic symmetries and there is a unique bosonic symmetry $(s_w)$ in the theory. With these, we are able to provide the physical realizations of the cohomological operators $(d,\delta,\triangle)$. In other words, we have obtained [*two*]{} independent Lagrangian densities where the continuous symmetries provide the physical realizations of the cohomological operators of differential geometry (at the algebraic level), which demonstrates that we have found a 2D field theoretic model for the Hodge theory (see, e.g.
\[5,7\] for more details). The identifications that have been made after equations (29) and (32) are correct in the language of the continuous symmetries of the theory. In this context, we have to recall our statements after Eq. (2) and Eq. (4) where we stated that the kinetic term and the gauge-fixing term remain invariant under the [*fermionic*]{} symmetries $s_{(a)b}$ and $s_{(a)d}$, respectively. It is worth pointing out that the kinetic term owes its origin to the exterior derivative ($d = dx^{\mu}\,\partial_{\mu}$, $d^2 = 0$). On the other hand, the mathematical origin of the gauge-fixing term lies with the co-exterior derivative[^5] ($\delta = -\,{*}\,d\,{*}$, $\delta^2 = 0$). It is the ghost number considerations, at the level of the charges, which lead to the identifications $ d\longleftrightarrow s_b,\quad\delta \longleftrightarrow s_d,\quad\triangle \longleftrightarrow s_w$ after equation (29) as well as the mappings $ d\longleftrightarrow s_{ad},\quad\delta \longleftrightarrow s_{ab} ,\quad\triangle \longleftrightarrow s_w$ after equation (32). Thus, the abstract mathematical cohomological operators find their realizations in the language of the physically well-defined continuous symmetry operators of our present 2D non-Abelian 1-form gauge theory. Now we concentrate on the algebraic structures associated with the [*six*]{} conserved charges (i.e. $Q_{(a)b},{Q_{(a)d}}, Q_w, Q_g$) that correspond to the [*six*]{} continuous symmetries of our theory. We note that the nilpotency property of the fermionic charges $Q_{(a)b}$ and $Q_{(a)d}$ has already been quoted in Eq. (18). Using the expressions for the conserved and nilpotent charges $Q_d$ and $Q_{ad}$ (cf. Eq.
(17)) and the (anti-)co-BRST symmetry transformations (4), it can be readily checked that the following is true as far as the Lagrangian densities (1) are concerned, namely: $$\begin{aligned} &&s_{ad}\;Q_d = - i\;{\{Q_d,Q_{ad}}\} = 0 \qquad \mbox{iff} \qquad {\cal B}\times C = 0,\nonumber\\ &&s_d \;Q_{ad} = - i\;{\{Q_{ad},Q_d}\} = 0 \qquad \mbox{iff} \qquad {\cal B}\times \bar C = 0.\end{aligned}$$ Thus, we note that even though the absolute anticommutativity property $({\{s_d,s_{ad}}\}=0)$ associated with $s_{(a)d}$ is satisfied at the level of the symmetry operators [*without*]{} any use of the CF-type restrictions $({\cal B}\times C = 0$, ${\cal B}\times\bar C = 0)$, we find that, at the level of the conserved charges, we have to exploit the strength of these restrictions (i.e. ${\cal B}\times C = 0$, ${\cal B}\times\bar C = 0$) for the proof of absolute anticommutativity[^6]. This is a [*novel*]{} observation which does [*not*]{} appear in the case of the (anti-)BRST symmetries where ${\{s_b,s_{ab}}\} = 0$ and ${\{Q_b,Q_{ab}}\} = 0$ are satisfied [*only*]{} when the CF-condition $B+\bar B+( C\times \bar C) = 0$ is invoked. Another point to be noted is that the CF-type restrictions ${\cal B}\times C = 0$ and ${\cal B}\times\bar C = 0$ are required for the proof of $s_d\; Q_{ad} = -i\, {\{Q_{ad},Q_d}\} = 0$ and $s_{ad}\; Q_d = - i\;{\{Q_{ad},Q_d}\} = 0$ as well as for the invariance of the Lagrangian densities (i.e. $s_d \;{\cal L}_{\bar B}$ and $s_{ad}\,{\cal L}_ B$), which is evident from Eq. (24). The other algebraic relations amongst $Q_{(a)b}, Q_{(a)d}, Q_w$ and $Q_g$ are satisfied in a straightforward manner (except the absolute anticommutativity properties where the CF-type restrictions are required).
It can be checked that $$\begin{aligned} &&s_g Q_b = - i\;[Q_b,Q_g] = + Q_b,\qquad\quad\quad s_g Q_{ad} = - i\,[Q_{ad},Q_g ] = +\; Q_{ad},\nonumber\\ &&s_g Q_{ab}= - i\;[Q_{ab},Q_g] = - Q_{ab},\quad\quad\quad s_g Q_d = - i\;[Q_d,Q_g] = - \;\;Q_d,\nonumber\\ &&s_g Q_w = -i\;[Q_w,Q_g] = 0,\end{aligned}$$ which shows that the ghost number of $(Q_b,Q_{ad})$ is equal to $( +1 )$ but the ghost number for $(Q_{ab},Q_d)$ is equal to $(-1)$ . It is also evident that $Q_w$ commutes with [*all*]{} the charges of the theory. As far as the proof of this statement is concerned, we note that $$\begin{aligned} &&s_w\;Q_r = - i\;[Q_r,Q_w] = 0,\qquad\qquad r = b,ab,d,ad,g,w,\end{aligned}$$ which shows that $Q_w$ is the Casimir operator for the whole algebra because it commutes with [*all*]{} the charges. One of the simplest ways to prove this result is to compute the l.h.s. of equation (35) from the transformations (9) and the expressions for the charges $Q_r \;(r = b, ab, d , ad, g)$ that have been derived in Sec. 2. We briefly comment here on the algebraic structure that is satisfied by the conserved charges of our theory. In this context, we have seen various forms of the algebras (cf. Eqs. (18), (34), (35)) that are satisfied by the [*six*]{} conserved charges of our theory. It can be verified that [*collectively*]{} these charges satisfy the following extended BRST algebra: $$\begin{aligned} && Q_{(a)b}^2 = 0, \qquad Q_{(a)d}^2 = 0, \qquad \{ Q_b, Q_{ab} \} = \{ Q_d, Q_{ad} \} = 0, \nonumber\\ && [Q_w, Q_r ] = 0, \qquad \qquad r = b, ab, d, ad, g, w, \quad \{ Q_d, Q_{ab} \} = 0, \nonumber\\ && i\;[Q_g,Q_b] = +\; Q_b,\qquad\quad\quad i\,[Q_{g},Q_{ad} ] = \; Q_{ad}, \quad \{ Q_b, Q_{ad} \} = 0, \nonumber\\ &&i\;[Q_{g},Q_{ab}] = -\; Q_{ab},\quad\quad\quad i\;[Q_g ,Q_d] = - \;\;Q_d. 
\end{aligned}$$ The above algebra is obeyed [*only*]{} on a hypersurface in the 2D Minkowskian spacetime manifold where [*all*]{} types of CF-type restrictions as well as the EOM, emerging from the Lagrangian densities (1), are satisfied. The above algebra is reminiscent of the Hodge algebra satisfied by the de Rham cohomological operators of differential geometry \[18,19\] where the mapping between the set of conserved charges and the cohomological operators is: $$\begin{aligned} (Q_b, Q_{ad}) \Leftrightarrow d, \qquad (Q_d, Q_{ab}) \Leftrightarrow \delta, \qquad Q_w = \{Q_b, Q_d\} = -\;\; \{Q_{ab}, Q_{ad}\} \Leftrightarrow \Delta.\end{aligned}$$ This [*two-to-one*]{} mapping is true only for the coupled (but equivalent) Lagrangian densities (1) where the EOM and the CF-type restrictions are exploited together in the proof. In the above identifications, the ghost number of a state (in the quantum Hilbert space) plays a very important role. We have shown in our earlier works \[7, 20-22\] that the algebra (36) indeed implies that if the ghost number of a state $|\psi>_n$ is $n$ (i.e. $ i\; Q_g |\psi>_n = n \, |\psi>_n$), then the states $Q_b |\psi>_n$, $Q_d |\psi>_n$ and $Q_w |\psi>_n$ have the ghost numbers $(n + 1)$, $(n-1)$ and $n$, respectively. In an exactly similar fashion, we have already been able to prove that the states $ Q_{ad} |\psi>_n$, $ Q_{ab}|\psi>_n$ and $Q_w|\psi>_n$ (with $Q_w = - {\{Q_{ab},Q_{ad}}\}$) carry the ghost numbers $(n + 1)$, $(n-1)$ and $n$, respectively[^7]. We have discussed the Hodge decomposition theorem in the quantum Hilbert space of states in our earlier works \[7, 20-22\], which can be repeated for our 2D theory, too.
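The ghost-number assignments quoted above follow directly from the algebra (36). For instance, using $i\,[Q_g, Q_b] = +\,Q_b$ together with $i\, Q_g |\psi>_n = n\,|\psi>_n$, we have $$i\, Q_g\,\big(Q_b\, |\psi>_n\big) = \big(i\,[Q_g, Q_b] + Q_b\; i\, Q_g\big)\,|\psi>_n = (n + 1)\, Q_b\, |\psi>_n,$$ which shows that $Q_b |\psi>_n$ carries the ghost number $(n+1)$; the computations for the remaining charges are completely analogous.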
This would fully establish the fact that our present theory is a field theoretic model for the Hodge theory which provides the physical realizations of the cohomological operators in the language of symmetry transformations (treated as operators) and the corresponding conserved charges.\ **Novel Observations: Algebraic and Symmetry Considerations in Our 2D Theory** ============================================================================== As far as the symmetry properties are concerned, we observe that there are CF-type restrictions (${\cal B} \times C = 0, {\cal B} \times \bar C = 0$) corresponding to the [*(anti-)co-BRST*]{} symmetries, too, as is the case with the (anti-)BRST symmetries of our 2D non-Abelian 1-form gauge theory where the CF-condition ($ B + \bar B + C \times \bar C = 0$) exists \[12\]. However, there are specific novelties that are connected with the CF-type restrictions ${\cal B} \times C = 0, \;{\cal B} \times \bar C = 0$. First, these restrictions are (anti-)co-BRST invariant \[i.e. $ s_{(a)d} ({\cal B} \times C) = 0$ and $ s_{(a)d} ({\cal B} \times \bar C) = 0$\] whereas the CF-condition ($ B + \bar B + C \times \bar C = 0$) is not [*perfectly*]{} invariant under the (anti-)BRST transformations (cf. Eq. (25)). Second, the restrictions (${\cal B} \times C = 0,\; {\cal B} \times \bar C = 0$) can be incorporated into the Lagrangian densities (cf. Eq. (26)) in such a manner that one can have [*perfect*]{} (anti-)co-BRST symmetry invariance for the [*individual*]{} Lagrangian densities in (26). No such incorporation is possible for the CF-condition $ B + \bar B +( C \times \bar C) = 0$. We observe that the (anti-)co-BRST symmetries (where the gauge-fixing term remains invariant) exist at the [*quantum*]{} level when the gauge-fixing term is added to the Lagrangian densities. In other words, there is no [*classical*]{} analogue of the (anti-)co-BRST symmetries.
However, the (anti-)BRST symmetry transformations (where the kinetic term remains invariant) are the generalization of the [*classical*]{} local $SU(N)$ gauge symmetries to the [*quantum*]{} level. Furthermore, we note that the (anti-)BRST symmetries would exist for any $p$-form gauge theory in [*any*]{} arbitrary dimension of spacetime. However, the (anti-)co-BRST symmetries have been shown to exist for the $p$-form gauge theory [*only*]{} in $D = 2p$ dimensions of spacetime \[5,6\]. They have [*not*]{} been shown to exist, so far, in any [*arbitrary*]{} dimension of spacetime. In addition, the absolute anticommutativity property of the BRST and anti-BRST symmetry transformations requires the validity of the CF-condition. On the contrary, the nilpotent (anti-)co-BRST symmetries [*do*]{} absolutely anticommute with each other [*without*]{} any use of the CF-type restrictions that exist in the 2D non-Abelian gauge theory. We note that ${\{s_d,s_{ad}}\}= 0$ without any use of the CF-type restrictions (${\cal B} \times C = 0$ and ${\cal B} \times \bar C = 0$) as far as the Lagrangian densities ${\cal L}_B$ and ${\cal L}_{\bar B}$ (cf. Eq. (1)) are concerned. However, the restrictions ${\cal B} \times C = 0$ and ${\cal B} \times \bar C = 0$ are required for the proof of ${\{Q_d,Q_{ad}}\} = 0$ when we compute this bracket from $s_d Q_{ad} = - i\;{\{Q_d,Q_{ad}}\}$ and/or $s_{ad} Q_d = -i\, {\{Q_{ad},Q_d}\}$ \[as far as the Lagrangian densities ${\cal L}_B$ and ${\cal L}_{\bar B}$ (cf. Eq. (1)) are concerned\]. It is interesting to point out that the property of nilpotency and absolute anticommutativity is satisfied [*without*]{} any use of the CF-type restrictions for the Lagrangian densities (26) (where the Lagrange multipliers ${\lambda}$ and ${\bar\lambda}$ are incorporated to accommodate the CF-type restrictions). This statement is true for the (anti-)co-BRST symmetry operators as well as for the corresponding conserved charges.
The CF-type restrictions $({\cal B}\times C = 0,\; {\cal B}\times \bar C = 0)$ appear in the proof of ${\{Q_d, Q_{ad}}\} = 0$ (cf. Eq. (33)) as well as in the mathematical expressions for $s_{ad}{\cal L}_B$ and $s_d {\cal L}_{\bar B}$ (cf. Eq. (24)) but they do [*not*]{} appear in ${\{s_d, s_{ad}}\} = 0$. On the contrary, the CF-condition $(B + \bar B + C\times \bar C = 0)$ appears in the proofs of ${\{s_b, s_{ab}}\} = 0$ and ${\{Q_b, Q_{ab}}\} = 0$ and in the mathematical expressions for $s_{ab} {\cal L}_B$ as well as $s_b {\cal L}_{\bar B}$ (cf. Eq. (22)) when the Lagrangian densities (1) are considered. To corroborate the above statements, we take a couple of examples to demonstrate that we do [*not*]{} require the strength of the CF-type restrictions $({\cal B}\times C = 0,\; {\cal B}\times \bar C = 0)$ from outside in the proof of nilpotency and absolute anticommutativity of the (anti-)co-BRST charges (derived from the Lagrangian densities (26)). In this context, we note that the expressions for the nilpotent (anti-)co-BRST charges (17) remain the [*same*]{} for the Lagrangian densities (26) [*but*]{} the EOM (derived from (26)) are different from (12). We note that the latter are: $$\begin{aligned} &&\varepsilon^{\mu\nu} D_{\nu}{\cal B} +\partial^{\mu}B+(\partial^{\mu}\bar C\times C) = 0, \quad \partial_{\mu}D^{\mu}C = 0,\quad {\cal B}\times C = 0,\nonumber\\ && E = {\cal B} + (\bar{\lambda}\times C),\quad D_{\mu}\partial^{\mu}\bar C - i\,(\bar{\lambda}\times {\cal B}) = 0,\nonumber\\ &&\varepsilon^{\mu\nu}D_{\nu}{\cal B} -\partial^{\mu}\bar B - (\bar C\times\partial^{\mu} C) = 0,\quad \partial_{\mu}D^{\mu}\bar C = 0,\quad {\cal B}\times \bar C = 0,\nonumber\\ && E = {\cal B} + ({\lambda}\times\bar C),\qquad D_{\mu}\partial^{\mu}C + i\; ({\lambda}\times {\cal B}) = 0.\end{aligned}$$ The above equations are to be used in the proof of the conservation of the Noether currents from which the charges are computed.
In this context, we observe that the expressions for the (anti-)co-BRST conserved Noether currents for the Lagrangian density ${\cal L}_B^{(\bar\lambda)}$ are as follows: $$\begin{aligned} && J^{\mu(\bar\lambda)}_d = {\cal B}\cdot\partial^{\mu}\bar C -\varepsilon^{\mu\nu} B\cdot\partial_{\nu}\bar C \equiv J^{\mu}_d \qquad \mbox{ (cf.\; Eq. (16))},\nonumber\\ &&J_{ad}^{\mu(\bar\lambda)} = {\cal B}\cdot\partial^{\mu}C - \varepsilon^{\mu\nu}B\cdot\partial_{\nu}C -\varepsilon^{\mu\nu}\bar C\cdot(\partial_{\nu}C\times C),\end{aligned}$$ where the superscript ${(\bar\lambda)}$ denotes that the above currents have been derived from ${\cal L}_B^{(\bar\lambda)}$ (cf. Eq. (26)). The expressions (39) demonstrate that, for the Lagrangian density ${\cal L}_B^{(\bar\lambda)}$, the co-BRST Noether conserved current remains the same as given in (16) (for ${\cal L}_B$) [*but*]{} the expression for the anti-co-BRST Noether conserved current is [*different*]{} from the [*same*]{} current derived from ${\cal L}_B$ (cf. Eq. (16)). The conservation of the above currents can be proven by using the EL-EOM (38). The expression for the conserved co-BRST charge remains the [*same*]{} as given in (17) but the expression for the anti-co-BRST charge is: $$\begin{aligned} Q_{ad}^{(\bar\lambda)} &=&\int dx\; J^{0(\bar\lambda)}_{ad} \equiv \int dx \,\big[{\cal B}\cdot\dot C-\partial_1 B\cdot C + \bar C\cdot(\partial_1 C\times C)\big]\nonumber\\ &\equiv &\int dx\;\big[{\cal B}\cdot\dot C - D_0{\cal B}\cdot C +(\partial_1\bar C\times C)\cdot C +\bar C\cdot(\partial_1 C\times C)\big].\end{aligned}$$ The nilpotency of the co-BRST charge $Q^{(\bar\lambda)}_d = Q_d$ has already been proven in Eq. (18).
Similarly, it can be checked that $$\begin{aligned} s_{ad} \;Q_{ad}^{(\bar\lambda)} &=& s_{ad} \int\; dx \;[{\cal B}\cdot\dot C - D_0{\cal B}\cdot C +(\partial_1\bar C\times C)\cdot C +\bar C\cdot(\partial_1 C\times C)]\nonumber\\ &\equiv & \int\; dx\; \partial _1\; [i \;({\cal B}\times C)\cdot C]\longrightarrow 0\; \Longleftrightarrow \; - i\; {\{Q_{ad}^{(\bar\lambda)},Q_{ad}^{(\bar\lambda)}}\}=0,\end{aligned}$$ which demonstrates the validity of the nilpotency of $Q_{ad}^{(\bar\lambda)}$ because it can be explicitly checked that $s_{ad} \;Q^{(\bar\lambda)}_{ad} = - i\; {\{Q^{(\bar\lambda)}_{ad}, Q^{(\bar\lambda)}_{ad}}\} = 0$, which implies that $(Q^{(\bar\lambda)}_{ad})^2 = 0$. We emphasize that the r.h.s. of (41) is zero due to the EOM (i.e. ${\cal B}\times C = 0$), too. We now concentrate on the Lagrangian density ${\cal L}_{\bar B}^{(\lambda)}$ and compute the expressions for the Noether currents corresponding to the (anti-)co-BRST symmetry transformations. It is evident, from the transformations (27), that under the anti-co-BRST symmetry transformations, the Lagrangian density ${\cal L}_{\bar B}^{(\lambda)}$ transforms in exactly the same manner as given in (5). Thus, the conserved current would be the same as in (16). However, in view of the transformation of ${\cal L}_{\bar B}^{(\lambda)}$ (in (28)) under $s_d$, we have the following expression for the Noether current: $$\begin{aligned} && J^{\mu(\lambda)}_d = {\cal B}\cdot\partial^{\mu}\bar C + \varepsilon^{\mu\nu}\bar B\cdot \partial_{\nu}\bar C +\varepsilon^{\mu\nu}(\partial_{\nu}\bar C\times \bar C)\cdot C,\end{aligned}$$ which is different from (16). The conservation law (i.e. $\partial_{\mu}J^{\mu(\lambda)}_d = 0$) can be proven by exploiting the EL-EOM given in (38).
The conserved charge $Q^{(\lambda)}_d$ has the following forms: $$\begin{aligned} Q^{(\lambda)}_d &=&\int dx J^{0{(\lambda)}}_d \equiv \int\; dx \;\Big[{\cal B}\cdot \dot{ \bar C} + \partial_1 \bar B\cdot\bar C - (\partial_1\bar C\times \bar C)\cdot C\Big]\nonumber\\ &\equiv & \int\; dx \; \Big[{\cal B}\cdot\dot{\bar C} - D_0{\cal B}\cdot\bar C -(\bar C\times\partial_1 C)\cdot\bar C -(\partial_1 \bar C\times\bar C)\cdot C\Big],\nonumber\\\end{aligned}$$ where the EL-EOM have been used to obtain the above equivalent forms of the conserved charge. The nilpotency of the above charge can be proven by using the symmetry principle (with $s_d\;Q^{(\lambda)}_d =- i {\{Q^{(\lambda)}_d,Q^{(\lambda)}_d}\} = 0$) as: $$\begin{aligned} s_d\; Q^{(\lambda)}_d & = & s_d \;\int dx\;\big[{\cal B}\cdot\dot{\bar C} - D_0{\cal B}\cdot\bar C -(\bar C\times\partial_1 C)\cdot\bar C -(\partial_1 \bar C\times\bar C)\cdot C\big]\nonumber\\ &\equiv &\int dx\; \partial _1\; [i \;({\cal B}\times \bar C)\cdot\bar C]\longrightarrow 0.\end{aligned}$$ Thus, we note that $s_d\; Q^{(\lambda)}_d = - i {\{Q^{(\lambda)}_d,Q^{(\lambda)}_d}\} = 0$ implies that $(Q^{(\lambda)}_d)^2 = 0$. This proves the nilpotency of the co-BRST charge, derived from ${\cal L}_{\bar B}^{(\lambda)}$, for physically well-defined fields which vanish off at $x =\pm \infty$. Furthermore, the r.h.s. of (44) is zero due to the EOM (i.e. ${\cal B}\times \bar C = 0$) which emerges from ${\cal L}^{(\lambda)}_{\bar B}$ (cf. Eq. (38)). We now have to prove the absolute anticommutativity of the (anti-)co-BRST charges that have been derived from the Lagrangian densities (26). As pointed out earlier, the expression for the co-BRST charge for ${\cal L}^{(\bar\lambda)}_B$ remains the [*same*]{} as given in (17) (where there are primarily two equivalent expressions for it). 
We take, first of all, the following (with $Q^{(\bar\lambda)}_d = Q_d$) and apply the anti-co-BRST transformation $s_{ad}$: $$\begin{aligned} s_{ad}\; Q^{(\bar\lambda)}_d &=& - i\; {\{Q^{(\bar\lambda)}_d,Q^{(\bar\lambda)}_{ad}}\} \equiv s_{ad}\,\int\; dx\;\big [ {\cal B}\cdot \dot {\bar C} - \partial_1 B\cdot \bar C\big]\nonumber\\ &\equiv & \int\; dx \big[{\cal B}\cdot (\dot{\cal B} -\partial_1 B)\big]. \end{aligned}$$ Using the equation of motion (38), the above expression yields $$\begin{aligned} && s_{ad}\;Q^{(\bar\lambda)}_d = i\,\int dx \big[({\cal B}\times C)\cdot\partial_1\bar C\big] = 0,\end{aligned}$$ due to the validity of the EOM (i.e. ${\cal B}\times C = 0$) w.r.t. $\bar\lambda$ from ${\cal L}^{(\bar\lambda)}_B$. Thus, we note that ${\{Q^{(\bar\lambda)}_d,Q^{(\bar\lambda)}_{ad}}\} = 0$ [*on-shell*]{} for ${\cal L}^{(\bar\lambda)}_B$. In other words, the absolute anticommutativity is satisfied. Now let us focus on the alternative expression for $Q^{(\bar\lambda)}_d$ and apply $s_{ad}$ on it: $$\begin{aligned} s_{ad}\; Q^{(\bar\lambda)}_d &=& s_{ad}\,\int\; dx\; \big[ {\cal B}\cdot\dot {\bar C} - D_0{\cal B}\cdot\bar C + (\partial_1\bar C\times C)\cdot\bar C\big]\nonumber\\ &\equiv & \int\; dx \;\partial_1 \big[({\cal B}\times C)\cdot{\bar C}\big] = 0.\end{aligned}$$ Thus, we note that $s_{ad} \,Q^{(\bar\lambda)}_d = - i\;{\{Q^{(\bar\lambda)}_d,Q^{(\bar\lambda)}_{ad}}\} = 0$ for the physically well-defined fields that vanish off at $x =\pm \infty$. This absolute anticommutativity is also satisfied on-shell where ${\cal B}\times C = 0$ (due to the EOM from ${\cal L}^{(\bar\lambda)}_B$ w.r.t. $\bar\lambda$). Finally, we conclude that the property of absolute anticommutativity of the (anti-)co-BRST charges is satisfied [*without*]{} invoking any CF-type constraint condition from [*outside*]{}. We now concentrate on the derivation of the absolute anticommutativity for $Q^{(\bar\lambda)}_{ad}$ which is derived from ${\cal L}^{(\bar\lambda)}_B$. 
There are [*two*]{} equivalent expressions for it in Eq. (40). We observe that the following are true, namely; $$\begin{aligned} s_d\; Q^{(\bar\lambda)}_{ad} &=& \int \; dx \;s_d \big [ {\cal B}\cdot \dot C - \partial_1 B\cdot C +\bar C\cdot (\partial_1 C\times C)\big]\nonumber\\ &\equiv & \int\; dx \,\partial_1\;\big [i \;\bar C\cdot ({\cal B}\times C)\big] = 0, \end{aligned}$$ where we have used the EOM from ${\cal L}^{(\bar\lambda)}_B$ w.r.t. $\bar\lambda$ that leads to ${\cal B}\times C = 0$. Furthermore, for [*all*]{} the physically well-defined fields, we obtain $s_d\; Q_{ad}^{(\bar\lambda)} = - i\; {\{Q_{ad}^{(\bar\lambda)} ,Q_d^{(\bar\lambda)}}\} = 0$ because [*all*]{} such fields vanish off at $x=\pm \infty$. Thus, the r.h.s. of (48) is zero due to Gauss’s divergence theorem. Taking the alternative expression for ${Q_{ad}^{(\bar\lambda)}}$ in (40), we note that $$\begin{aligned} s_d \;Q^{(\bar\lambda)}_{ad} &=& \int\; dx\; s_d \;\big [ {\cal B}\cdot \dot C - D_0{\cal B}\cdot C + (\partial_1\bar C\times C)\cdot C + \bar C\cdot (\partial_1 C\times C)\big]\nonumber\\ &\equiv & \int \;dx \;\partial_1\; \big [- i\; ({\cal B}\times C)\cdot \bar C\big] = 0,\end{aligned}$$ because of the fact that ${\cal B}\times C = 0$ (due to the EOM from ${\cal L}^{(\bar\lambda)}_B$ w.r.t. the ${\bar\lambda}$ field). Moreover, all the fields vanish off at $x =\pm \infty $. Thus, Gauss’s divergence theorem shows that $s_d\;Q^{(\bar\lambda)}_{ad} =- i\; {\{Q_{ad}^{(\bar\lambda)} ,Q_d^{(\bar\lambda)}}\} = 0$ which proves the absolute anticommutativity of the (anti-)co-BRST charges. This observation is a [*novel*]{} result in our present endeavor. At this juncture, we take the Lagrangian density ${\cal L}^{(\lambda)}_{\bar B}$ into consideration. The anti-co-BRST charge for this Lagrangian density is the same as given in (17) (i.e. $Q^{(\lambda)}_{ad} = Q_{ad}$). 
We observe the following after the application of the co-BRST symmetry $s_d$ on $Q_{ad}^{(\lambda)}$, namely; $$\begin{aligned} s_d\; Q^{(\lambda)}_{ad} &=&\int\; dx\; s_d\, \big[{\cal B}\cdot\dot C+ \partial_1\bar B\cdot C\big]\nonumber\\ &\equiv &\int dx \big [ i\; ({\cal B}\times\bar C)\cdot \partial_1 C \big ] = 0.\end{aligned}$$ Thus, we see that $s_d\; Q^{(\lambda)}_{ad} \equiv - i\; {\{Q^{(\lambda)}_{ad},{Q^{(\lambda)}_d}\}} = 0$ due to ${\cal B}\times\bar C = 0$ which emerges as the EOM from ${\cal L}^{(\lambda)}_{\bar B}$ w.r.t. the field $\lambda$. In other words, the absolute anticommutativity ${\{Q^{(\lambda)}_{ad},{Q^{(\lambda)}_d}\}} = 0$ is satisfied [*on-shell*]{}. A similar exercise, with another equivalent expression for $Q^{(\lambda)}_{ad}$, namely; $$\begin{aligned} s_d\; Q^{(\lambda)}_{ad} &=& \int\; dx\; s_d\; \big [ {\cal B}\cdot\dot C - D_0{\cal B}\cdot C - (\bar C\times \partial_1 C)\cdot C\big]\nonumber\\ &\equiv & \int\; dx \;\partial_1 \big [ i \,({\cal B}\times\bar C)\cdot C\big] = 0,\end{aligned}$$ establishes the absolute anticommutativity (i.e. ${\{Q^{(\lambda)}_{ad},{Q^{(\lambda)}_d}\}} = 0 $) via Gauss’s divergence theorem, because [*all*]{} the physical fields vanish off at $x =\pm \infty$. The absolute anticommutativity can [*also*]{} be proven by using the expression for the co-BRST charge $Q_d^{(\lambda)}$ (cf. Eq. (43)). It can be readily checked that the following is true: $$\begin{aligned} s_{ad}\;Q_d^{(\lambda)} &=& \int\; dx\; s_{ad}\; \big [ {\cal B}\cdot \dot {\bar C} + \partial_1 \bar B\cdot\bar C - (\partial_1\bar C\times\bar C)\cdot C\big]\nonumber\\ &\equiv & \int\; dx\; \partial_1 \big [ - i \,({\cal B}\times\bar C)\cdot C\big]\;\longrightarrow\;0. \end{aligned}$$ Thus, we note that $s_{ad}\;Q_d^{(\lambda)} = - i\;{\{Q_d^{(\lambda)},Q_{ad}^{(\lambda)}}\} = 0$ for the physically well-defined fields that vanish off at $x =\pm \infty$. 
Moreover, the absolute anticommutativity is also satisfied due to the EOM (i.e. ${\cal B}\times \bar C = 0$) that is derived from ${\cal L}^{(\lambda)}_{\bar B}$ w.r.t. the Lagrange multiplier field $\lambda$. Hence, the absolute anticommutativity of the (anti-)co-BRST charges is satisfied [*on-shell*]{}. We now take up the alternative expression for $Q_d^{(\lambda)}$ from (43) and show the validity of absolute anticommutativity. With this goal in mind, we observe the following $$\begin{aligned} s_{ad}\; Q_d^{(\lambda)} &=&\int\; dx\; s_{ad}\; \big [ \;{\cal B}\cdot\dot {\bar C} - D_0{\cal B}\cdot\bar C -(\bar C\times\partial_1 C)\cdot\bar C- (\partial_1\bar C\times\bar C)\cdot C\; \big]\nonumber\\ &=&\int\; dx \;\partial_1\,\big [-i \;({\cal B}\times\bar C)\cdot C\big]\longrightarrow 0.\end{aligned}$$ This shows that $s_{ad}\;Q_d^{(\lambda)} = - i\;{\{Q_d^{(\lambda)},Q_{ad}^{(\lambda)}}\} = 0$ via Gauss’s divergence theorem, because [*all*]{} the physical fields must vanish off at $x =\pm \infty $. There is another interpretation, too. The absolute anticommutativity (i.e. ${\{Q_d^{(\lambda)},\;Q_{ad}^{(\lambda)}}\}= 0)$ is satisfied [*on-shell*]{} (where ${\cal B}\times\bar C = 0$ due to the EOM from ${\cal L}_{\bar B}^{(\lambda)}$ w.r.t. $\lambda$). **Conclusions** =============== In our present endeavor, we have computed [*all*]{} the conserved charges of our theory and obtained the algebra followed by them. We have shown that, for the validity of the [*proper*]{} algebra (consistent with the algebra obeyed by the cohomological operators), we have to use the EOM as well as the CF-type restrictions of our theory described by the Lagrangian densities (1). In particular, we have demonstrated that the requirement of the absolute anticommutativity property amongst the fermionic symmetry operators (cf. Eq. (31)) leads to the emergence of our EOM and/or CF-type restrictions. 
In other words, it is the requirement of consistency of the operator algebra with the Hodge algebra (i.e. the algebra obeyed by the de Rham cohomological operators of differential geometry) that leads to the derivation of the EOM as well as the CF-type restrictions of our theory. This way of deriving the CF-type restrictions is completely different from our earlier derivations \[10,11\] where the existence of the continuous symmetries (and their operator algebra) and the application of the superfield approach to BRST formalism have played key roles. One of the highlights of our present investigation is the observation that the individual Lagrangian density (of the coupled Lagrangian densities (26)) provides a model for the Hodge theory because the continuous symmetry operators of the [*specific*]{} Lagrangian density (and the corresponding charges) obey an algebra that is reminiscent of the algebra obeyed by the de Rham cohomological operators of differential geometry. In other words, the continuous symmetry operators (and corresponding charges) provide the physical realizations of the cohomological operators of differential geometry. This happens because of the fact that the individual Lagrangian density respects [*five*]{} perfect symmetries where there is no use of any kind of CF-type restrictions. This is precisely the reason that [*four*]{} of the above mentioned [*five*]{} symmetries of the theory obey an [*exact*]{} algebra that is reminiscent of the algebra obeyed by the de Rham cohomological operators of differential geometry. We have claimed in earlier works \[23,24\] that the existence of the CF-restrictions is the hallmark of a [*quantum*]{} gauge theory (described within the framework of BRST formalism). This claim is as fundamental as the definition of a [*classical*]{} gauge theory in the language of first-class constraints by Dirac \[25,26\]. 
Thus, it was a challenge for us to derive [*all*]{} types of CF-type restrictions of our theory which respect the (anti-)BRST as well as the (anti-)co-BRST symmetries [*together*]{}. It is gratifying to state that we have discussed the existence of CF-type restrictions from various points of view in our works \[10,11\]. In fact, we have been able to show the existence of CF-type restrictions: (i) from symmetry considerations \[10\], (ii) from the superfield approach to BRST formalism \[11\], and (iii) from algebraic considerations (in our present work). These works focus on the importance of CF-type restrictions in the discussion of the 2D non-Abelian 1-form theory. As has been pointed out earlier, one of the key features of the (anti-)co-BRST symmetry transformations is the observation that these transformations absolutely anticommute [*without*]{} any use of the CF-type restrictions ${\cal B}\times C = 0$ and ${\cal B}\times\bar C = 0$. However, the latter appear very elegantly when we discuss the absolute anticommutativity of the co-BRST and anti-co-BRST charges in the language of symmetry transformations and their generators (e.g. $s_d\,Q_{ad} = - i\,{\{Q_{ad},Q_d}\} = 0$ and $s_{ad}\,Q_d = - i\;{\{Q_d,Q_{ad}}\} = 0$). This is a completely [*novel*]{} observation in our theory as it does [*not*]{} happen in the case of the (anti-)BRST symmetry transformations and in their absolute anticommutativity requirement. In fact, in the latter case of symmetries (i.e. the (anti-)BRST symmetries), the CF-condition is required for the proof of the absolute anticommutativity of the (anti-)BRST charges $\{ Q_b, Q_{ab} \} = 0 $ as well as the (anti-)BRST symmetries $\{ s_b, s_{ab} \} = 0 $ (cf. Appendix A). 
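To see explicitly why the CF-condition is needed in the (anti-)BRST case, recall the standard computation of the anticommutator of the (anti-)BRST transformations on the gauge field. With the usual non-Abelian transformations $s_b A_\mu = D_\mu C$, $s_{ab} A_\mu = D_\mu \bar C$, $s_b \bar C = i\,B$ and $s_{ab} C = i\,\bar B$ (the sign conventions used here are illustrative and may differ from those of Eq. (2)), one finds $$\{s_b,\, s_{ab}\}\, A_\mu = s_b\,(D_\mu \bar C) + s_{ab}\,(D_\mu C) = i\, D_\mu \big[B + \bar B + (C\times \bar C)\big],$$ which vanishes [*only*]{} on the hypersurface defined by the CF-condition $B + \bar B + (C\times\bar C) = 0$.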
As far as the property of absolute anticommutativity and the existence of the CF-type conditions are concerned, we would like to point out that the CF-type restrictions ${\cal B}\times C = 0$ and ${\cal B}\times\bar C = 0$ are invoked from [*outside*]{} in the requirement of the absolute anticommutativity condition for the (anti-)co-BRST charges that are derived from the Lagrangian densities (1). However, these restrictions are [*not*]{} required in the case of the absolute anticommutativity requirement of the (anti-)co-BRST charges that are derived from the Lagrangian densities (26). This happens because of the observation that the CF-type restrictions ${\cal B}\times C = 0$ and ${\cal B}\times\bar C = 0$ become the EOM for the Lagrangian densities (26). The tower of restrictions that has been derived in \[10,11\] does not affect the d.o.f. counting for the gauge field because the $2D$ non-Abelian gauge theory has been shown to be a new model of topological field theory where there are [*no*]{} propagating d.o.f. \[7\]. Furthermore, the CF-type restrictions are amongst the auxiliary fields and (anti-)ghost fields which do [*not*]{} directly affect the d.o.f. counting of the gauge field. We have been able to show the existence of the (anti-)BRST and (anti-)co-BRST symmetry transformations in the case of a 1D model of a rigid rotator \[20\]. However, the CF-type restriction, in the case of this 1D model, is [*trivial*]{} (as is the case with the Abelian 1-form gauge theory without any interaction with matter fields \[7\]). The non-trivial CF-type restrictions appear in the cases of the 6D Abelian $3$-form and 4D Abelian $2$-form gauge theories which have been shown to be models for the Hodge theory within the framework of BRST formalism (where the (anti-)BRST and (anti-)co-BRST symmetries co-exist [*together*]{}) \[5,6\]. 
It would be a nice future endeavor for us to apply our present ideas of the 2D non-Abelian 1-form theory to the above mentioned systems of physical interest. We are currently busy with these issues and our results will be reported in future publications \[27\].\ [**Acknowledgment**]{}\ One of us (S. Kumar) is grateful to the BHU-fellowship under which the present investigation has been carried out. The authors are thankful to N. Srinivas and T. Bhanja for fruitful discussions on the central theme of the present research work.\ [**Appendix A: On the proof of $\{Q_b, \, Q_{ab}\} = 0 $**]{}\ In this Appendix, we discuss a few essential theoretical steps to provide a proof for the absolute anticommutativity of the conserved (anti-)BRST charges $Q_{(a)b}$. With this goal in mind, we observe (with the input $s_b Q_{ab} = - i\;{\{Q_b,Q_{ab}}\}$) the following: $$s_b \, Q_{ab} =\int \,dx \,s_b\,\Big[\dot{\bar B}\cdot\bar C -\bar B\cdot D_0\bar C +\frac{1}{2}(\bar C\times \bar C)\cdot {\dot C}\Big].\eqno (A.1)$$ Using the BRST transformations from Eq. (2), we obtain the following explicit mathematical expressions from the [*first*]{} term (on the r.h.s. 
of the above equation): $$s_b(\dot{\bar B}\cdot \bar C) = i\,(\dot{\bar B}\times C)\cdot \bar C + i ({\bar B} \times \dot C)\cdot \bar C +i\,\dot{\bar B}\cdot B.\eqno (A.2)$$ The [*second*]{} term, on the r.h.s. of (A.1), leads to $$s_b\,(-\,\bar B\cdot D_0\bar C) = -\, i\,(\bar B\times C)\cdot {\dot{\bar C}} + (\bar B\times C)\cdot (A_0\times\bar C) - i\,{\bar B} \cdot {\dot B}~~~~~~~~~~~~~$$ $$~~~~~~~~~~~~\equiv - i\,{\bar B} \cdot (\dot C\times\bar C) +\bar B\cdot[(A_0\times C)\times \bar C] + \bar B\cdot(A_0\times B),\eqno (A.3)$$ and the [*third*]{} term produces: $$s_b\,\Big[\frac{1}{2}\,(\bar C\times \bar C)\cdot {\dot C}\Big] = i\,(B\times\bar C)\cdot\dot C -\frac{i}{2}\,(\bar C\times\bar C)\cdot(\dot C\times C).\eqno(A.4)$$ Now we are in a position to apply the Jacobi identity to expand $\bar B\cdot[(A_0\times C)\times \bar C]$ and $\frac{i}{2}[(\bar C\times\bar C)\cdot(\dot C\times C)]$. The outcome of these exercises yields: $$\bar B\cdot \Big[(A_0\times C)\times \bar C\Big] = -(A_0\times \bar B)\cdot (\bar C\times C) - (A_0\times \bar C)\cdot (\bar B\times C),$$ $$-\frac{i}{2}\,\Big[(\bar C\times \bar C)\cdot (\partial_0 C \times C)\Big]= i\,(\partial_0 C\times \bar C)\cdot (\bar C\times \bar C).\eqno(A.5)$$ The addition of [*all*]{} the terms with proper combinations, ultimately, leads to the following: $$i\,(\dot C\times\bar C)\cdot[B+\bar B +(C\times \bar C)] - i\,\bar B\cdot D_0[B+(\bar C\times C)] + i\,\dot{\bar B}\cdot B +i\,\dot{\bar B}\cdot(C\times\bar C).\eqno(A.6)$$ We note that the application of the CF-condition (i.e. $B+\bar B + (C\times \bar C) = 0$) produces: $$i\,\bar B\cdot\dot{\bar B}+ i\,\dot{\bar B}\cdot B-i\,\bar B\cdot\dot{\bar B}- i\,\dot{\bar B}\cdot B =0\equiv s_b \,Q_{ab},\eqno(A.7)$$ where we have used $-\,i\,\bar B\cdot D_0( B+C\times \bar C)= +\,i\,\bar B\cdot D_0\bar B\equiv i\,\bar B\cdot\dot{\bar B}$. 
In other words, we obtain the relationship $s_b\,Q_{ab} = -i\;{\{Q_{ab},Q_b}\} = 0$ (which is [*true*]{} only on the hypersurface, embedded in the 2D spacetime manifold, where the CF-condition $B + {\bar B} + C \times {\bar C} = 0$ is satisfied). This is a reflection of the fact that the absolute anticommutativity of the (anti-)BRST transformations $\{s_b, s_{ab}\}\,A_{\mu} = 0$ is true only when the CF-condition $(B + {\bar B} + C \times {\bar C} = 0)$ is imposed from [*outside*]{}. We conclude that the requirement of the absolute anticommutativity condition for the (anti-)BRST symmetry transformations is also reflected at the level of the requirement of the absolute anticommutativity property of the off-shell nilpotent (anti-)BRST charges. The CF-condition also appears in (22). [**Appendix B: On the derivation of**]{} $ Q_w$\ In the main body of our text, we have derived the explicit expression for $Q_w$ from the Noether conserved current $(J_w)$. There is a simple way to obtain the same expression for $Q_w$ where the ideas behind the symmetry principle (and the concept of a symmetry generator) play an important role. In this context, we note the following: $$s_d Q_b = -i\,{\{Q_b,Q_d}\} = -i\,Q_w,\quad\,s_b Q_d = -i\,{\{Q_d,Q_b}\} = - i\,Q_w.\eqno(B.1)$$ Thus, an explicit calculation of the l.h.s. (due to the transformations (2) and (4) as well as the expressions (13) and (17)) yields the correct expression for $Q_w$. 
Let us, first of all, focus on the following: $$s_b Q_d =\int dx\, s_b\,[{\cal B}\cdot\dot{\bar C} + B\cdot\partial_1\bar C].\eqno(B.2)$$ The [*first*]{} term produces the following explicit computation: $$s_b({\cal B}\cdot\dot{\bar C}) = i\,({\cal B}\times C)\cdot\dot{\bar C} + i\,{\cal B}\cdot\partial_0B$$ $$= i\,({\cal B}\times C)\cdot\dot{\bar C} - i\,{\cal B}\cdot D_1 {\cal B}- i\,{\cal B}\cdot(\dot{\bar C} \times C) \equiv -\,i\,{\cal B}\cdot D_1 {\cal B},\eqno(B.3)$$ where we have used the EOM $$\partial_0B = -D_1{\cal B} - (\dot{\bar C}\times C).\eqno(B.4)$$ The [*second*]{} term leads to $$s_b(B\cdot\partial_1\bar C) = i\, B\cdot\partial_1 B.\eqno(B.5)$$ The addition of both terms yields $$s_b\,Q_d = - i\,{\{Q_d,Q_b}\} = - i\,\int dx \Big[{\cal B}\cdot D_1{\cal B} - B\cdot\partial_1 B\Big],\eqno(B.6)$$ which, ultimately, leads to the derivation of $Q_w$ (cf. Eq. (20)). Now we dwell a bit on the anticommutator $s_d Q_b = - i\,{\{Q_b,Q_d}\} = -i\,Q_w$. In this connection, we have to use the symmetry transformations $(4)$ and the expression $(13)$. In other words, we compute the following: $$s_d\, Q_b =\int dx\, s_d\,\Big[{\cal B}\cdot D_1 C + B\cdot D_0 C +\frac{1}{2}\,\dot{\bar C}\cdot(C\times C)\Big].\eqno(B.7)$$ The [*first*]{} term, using partial integration and dropping the total space derivative term, can be written in a different-looking form (i.e. ${\cal B}\cdot D_1 C = -D_1{\cal B}\cdot C$). 
Now, the application of $s_d$ on the latter form leads to the following explicit computation: $$s_d \,(-D_1{\cal B}\cdot C) = i\,{\cal B}\cdot D_1{\cal B} + i\,(\dot{\bar C}\times {\cal B})\cdot C.\eqno(B.8)$$ From the [*second*]{} and [*third*]{} terms of (B.7), we obtain $$s_d (B\cdot D_0 C) = -\,{\cal B}\cdot \partial_0{\cal B} + i\,B\cdot(\partial_1\bar C\times C) - B\cdot(A_0\times {\cal B}),$$ $$s_d\,\Big[\frac{1}{2}\,\dot{\bar C} \cdot(C\times C)\Big] = i\,\dot{\bar C}\cdot({\cal B}\times C).\eqno(B.9)$$ Now, by using the equation of motion $$D_0 {\cal B} = \partial_1 B + (\partial_1\bar C\times C ),\eqno(B.10)$$ we observe that the sum of (B.8) and (B.9) reduces to $-\,i\,B\cdot \partial_1B$. Thus, ultimately, we obtain the following $$s_d Q_b = -i\,{\{Q_b,Q_d}\} = -i\,Q_w,$$ where $Q_w$ (cf. Eq. (20)) is $ Q_w = i\,\int dx \Big[{\cal B}\cdot D_1 {\cal B} - B\cdot\partial_1 B\Big]$. Thus, we have derived the precise form of $Q_w$ by using the ideas of continuous symmetries and their corresponding generators. Hence, there are two distinct ways to derive $Q_w$. [**Appendix C: On the (anti-)BRST symmetries of ${\cal L}^{(\bar\lambda)}_B$ and ${\cal L}^{(\lambda)}_{\bar B}$**]{} We have observed earlier that the coupled Lagrangian densities (26) respect [*five*]{} perfect symmetries [*individually*]{}. As far as the (anti-)BRST symmetries are concerned, we have noted that ${\cal L}^{(\bar\lambda)}_B$ respects [*perfect*]{} BRST symmetries, listed in (2), along with $s_b \;\bar\lambda = 0$[^8]. We discuss here the anti-BRST symmetry of this Lagrangian density (i.e. ${\cal L}^{(\bar\lambda)}_B$). 
It can be seen that, under the anti-BRST transformations (2) along with $s_{ab}\;\bar\lambda = - i \;(\bar\lambda\times\bar C)$, we have the following transformation for the Lagrangian density ${\cal L}^{(\bar\lambda)}_B$: $$s_{ab}\;{\cal L}^{(\bar\lambda)}_B = \partial_{\mu} \big [ -(\bar B + C\times\bar C)\cdot\partial^{\mu}\bar C\big] +(B+\bar B+C\times\bar C)\cdot D_{\mu}\partial^{\mu}\bar C - i\;\bar\lambda \cdot({\cal B}\times {\{\bar B+(C\times\bar C)}\}).\eqno(C.1)$$ If we implement the CF-condition $B+\bar B+(C\times\bar C)=0$, we obtain the following (from the above transformation of ${\cal L}^{(\bar\lambda)}_B$), namely; $$s_{ab}\;{\cal L}^{(\bar\lambda)}_B =\partial_{\mu} \big [B\cdot\partial^{\mu}\bar C\big] + i\;\bar\lambda\cdot({\cal B}\times B).\eqno (C.2)$$ For the anti-BRST invariance, we impose a new CF-type restriction (i.e. $\bar\lambda\cdot({\cal B}\times B) = 0$) which involves [*three*]{} auxiliary fields. As a consequence, this restriction is equivalent to the following [*three*]{} individual constraints in terms of [*only*]{} [*two*]{} auxiliary fields, namely; $$\bar\lambda\cdot({\cal B}\times B) = 0\quad\Longrightarrow\quad {\cal B}\times B = 0,\qquad \bar\lambda\times B = 0,\qquad \bar\lambda\times {\cal B}= 0.\eqno (C.3)$$ The above restrictions have been derived from the symmetry point of view \[10\] as well as by using the augmented version of the superfield formalism \[11\]. It is gratifying to note that the (anti-)BRST symmetry transformations (that include the transformations on $\lambda$ and $\bar\lambda$) absolutely anticommute if we take the above restrictions into account. Now we focus on the Lagrangian density ${\cal L}^{(\lambda)}_{\bar B}$. It has a perfect anti-BRST invariance with $s_{ab} \lambda = 0$ (and $s_{ab} ({\cal B}\times \bar C) = 0$). We discuss here the application of the BRST transformations (2), along with $s_b\;\lambda = - i\;(\lambda\times C)$, on ${\cal L}^{(\lambda)}_{\bar B}$. 
This exercise leads to the following: $$s_b \,{\cal L}^{(\lambda)}_{\bar B}= \partial_{\mu}\big[(B+C\times \bar C)\cdot\partial^{\mu}C\big] -(B+\bar B+C\times\bar C)\cdot D_{\mu}\partial^{\mu}C -i\lambda\cdot\big[{\cal B}\times{\{B+(C\times\bar C)}\}\big].\eqno (C.4)$$ If we impose the usual CF-condition $B+\bar B+(C\times\bar C) = 0$ from outside on (C.4), we obtain the following (from the above transformation of ${\cal L}^{(\lambda)}_{\bar B}$), namely; $$s_b \;{\cal L}^{(\lambda)}_{\bar B} = \partial_{\mu}\big[-\bar B\cdot\partial^{\mu} C\big] +i\lambda\cdot({\cal B}\times\bar B).\eqno (C.5)$$ Thus, for the BRST invariance of the action integral $S =\int d^2x \;{\cal L}^{(\lambda)}_{\bar B}$, we invoke another CF-type restriction $$\lambda\cdot({\cal B}\times\bar B) = 0 \qquad \Longrightarrow \qquad {\cal B}\times \bar B = 0,\quad \lambda\times {\cal B} = 0,\quad \lambda\times\bar B = 0.\eqno (C.6)$$ In the above, we have noted that there are two constraint restrictions (i.e. $B+\bar B+C\times \bar C = 0$ and $\lambda\cdot({\cal B}\times\bar B) = 0$) that ought to be invoked for the BRST invariance of the action integral. It is clear that the latter restriction involves [*three*]{} auxiliary fields. However, this restriction [*actually*]{} corresponds to the [*three*]{} CF-type restrictions that have been written in (C.6). The latter [*three*]{} CF-type restrictions are correct as they have been derived from the symmetry considerations in \[10\]. It is gratifying to state, at this juncture, that the restrictions, listed in (C.3) and (C.6), are required for the absolute anticommutativity of the (anti-)BRST symmetries (2) along with $s_b\;\lambda = - i\;(\lambda\times C)$ and $s_{ab}\;\bar\lambda = -i\,(\bar\lambda\times\bar C)$. [99]{} C. Becchi, A. Rouet, R. Stora, Phys. Lett. B 52, 344 (1974) C. Becchi, A. Rouet, R. Stora, Commun. Math. Phys. 42, 127 (1975) C. Becchi, A. Rouet, R. Stora, Ann. Phys. (N. Y.) 98, 287 (1976) I. V. 
Tyutin, Lebedev Institute Report, Preprint FIAN-39, 1975 (Unpublished) R. P. Malik, Int. J. Mod. Phys. A 22, 3521 (2007) R. Kumar, S. Krishna, A. Shukla, R. P. Malik,\ Int. J. Mod. Phys. A 29, 1450135 (2014) R. P. Malik, J. Phys. A: Math. Gen. 34, 4167 (2001) See, e.g., E. Witten, Nucl. Phys. B 202, 253 (1982) See, e.g., A. S. Schwarz, Lett. Math. Phys. 2, 217 (1978) N. Srinivas, S. Kumar, B. K. Kureel, R. P. Malik, arXiv:1606.05870 \[hep-th\]\ (To appear in Int. J. Mod. Phys. A (2017)) N. Srinivas, R. P. Malik, arXiv:1701.00136 \[hep-th\] G. Curci, R. Ferrari, Phys. Lett. B 63, 91 (1976) N. Nakanishi, I. Ojima, Covariant Operator Formalism of Gauge Theories and Quantum Gravity (World Scientific, Singapore, 1990) K. Nishijima, Czech. J. Phys. 46, 1 (1996) D. Dudal, V. E. R. Lemes, M. S. Sarandy, S. P. Sorella, M. Picariello,\ JHEP 0212, 008 (2002) D. Dudal, H. Verschelde, V. E. R. Lemes, M. S. Sarandy, S. P. Sorella, M. Picariello,\ A. Vicini, J. A. Gracey, JHEP 0306, 003 (2003) R. Kumar, S. Gupta, R. P. Malik, Int. J. Theor. Phys. 55, 2857 (2016) See, e.g., T. Eguchi, P. B. Gilkey, A. Hanson, Phys. Rep. 66, 213 (1980) See, e.g., S. Mukhi, N. Mukunda, Introduction to Topology, Differential Geometry and Group Theory for Physicists (Wiley Eastern Private Limited, New Delhi, 1990) S. Gupta, R. P. Malik, Eur. Phys. J. C 58, 517 (2008) R. P. Malik, Int. J. Mod. Phys. A 15, 1685 (1998) S. Gupta, R. P. Malik, Eur. Phys. J. C 68, 325 (2010) L. Bonora, R. P. Malik, Phys. Lett. B 655, 75 (2007) L. Bonora, R. P. Malik, J. Phys. A: Math. Theor. 43, 375 (2010) P. A. M. Dirac, Lectures on Quantum Mechanics, Belfer Graduate School of Science (Yeshiva University Press, New York, 1964) K. Sundermeyer, Constrained Dynamics: Lecture Notes in Physics, Vol. 169\ (Springer, Berlin, 1982) R. P. 
Malik, et al., in preparation [^1]: The symmetry operators are [*perfect*]{} because they leave the Lagrangian densities (or corresponding actions) invariant [*without*]{} any use of EOM and/or CF-type restrictions. However, their algebra does require CF-type restrictions (cf. Eq. (29) below). It is essential to point out that the conserved charges [*do*]{} require the validity of EOM as well as CF-type restrictions for their algebra (cf. Sec. 6) provided we demand that these charges should satisfy the Hodge algebra of the de Rham cohomological operators. [^2]: There is a simple way to derive Eq. (10). Using the basic definition $s_w = {\{s_b,s_d}\} $ and applying it to ${\cal L}_B$, we obtain Eq. (10) (with the inputs from (3) and (5)). [^3]: These claims are true for any arbitrary expressions for the charges listed in (13), (14) and (17) provided we take into account the symmetry transformations (2) and (4). [^4]: By [*perfect*]{} symmetries, we mean the transformations for which the Lagrangian densities [*either*]{} remain invariant [*or*]{} transform to a total spacetime derivative [*without*]{} any use of CF-type restrictions. [^5]: The curvature 2-form $F^{(2)} = dA^{(1)}+ i A^{(1)}\wedge A^{(1)}$ (with $d = dx^{\mu}\partial_{\mu}$ and $A^{(1)} = dx^{\mu}A_{\mu})$ leads to the derivation of the field strength tensor $F_{\mu\nu} = \partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu} + i\; ( A_{\mu}\times A_{\nu})$. Hence, the kinetic term owes its origin to $d = dx^{\mu}\partial_{\mu}$. It can be explicitly checked that $\delta A^{(1)} = -\star \; d \;\star \;A^{(1)} = \partial_{\mu}A^{\mu}.$ Hence, the gauge-fixing term (i.e. a 0-form) has its origin in the co-exterior derivative $\delta = -\star \; d \;\star.$ [^6]: The claims, made in Eq. (33), are [*strong*]{} statements. There are weaker versions of them which become transparent when the operators $s_{(a)d}$ are applied on the [*third*]{} expressions for $Q_d$ and $Q_{ad}$ in (17). 
For instance, we note that $s_{ad}Q_d = i \int dx\; \partial_1 [({\cal B}\times C)\cdot\bar C] \longrightarrow 0$ for physically well-defined fields that vanish off at $x = \pm \infty $. Similarly, we observe that $s_dQ_{ad} = i\int dx \; \partial_1 [({\cal B}\times \bar C)\cdot C]\longrightarrow 0.$ [^7]: The above observations are the analogue of the operations of the cohomological operators $(d ,\delta ,\triangle )$ on the $n$-form $(f_n)$ where the degrees of the forms $df_n$, $\delta f_n$ and $\triangle f_n$ are $(n+1)$, $(n-1)$ and $n$, respectively. [^8]: Because it transforms to a total spacetime derivative (i.e. $s_b {\cal L}^{(\bar\lambda)}_B = \partial_{\mu}[B\cdot D^{\mu}C]$).
--- author: - '\' bibliography: - 'bib.bib' title: 'Combinatorial Optimization by Decomposition on Hybrid CPU–non-CPU Solver Architectures' --- Introduction ============ Discrete optimization problems lie at the heart of many studies in operations research and computer science ([@blazewicz2013scheduling; @kouvelis2013robust]), as well as a diverse range of problems in various industries. Crew scheduling problem [@kasirzadeh2017airline], vehicle routing [@UPS], anomaly detection [@NASA], optimal trading trajectory [@usOTT], job shop scheduling [@usJSP], prime number factorization [@usPrimeFac], molecular similarity [@usGS], and the kidney exchange problem [@kidney] are all examples of discrete optimization problems encountered in real-world applications. Finding an optimum or near-optimum solution for these problems leads not only to more efficient outcomes, but also to saving lives, building greener industries, and developing procedures that can lead to increased work satisfaction. In spite of the diverse applications and profound impact the solutions to these problems can have, a large class of these problems remain intractable for conventional computers. This intractability stems from the large space of possible solutions, and the high computational cost for reducing this space [@NoC]. These characteristics have led to extensive research on the design and development of both exact and heuristic algorithms that exploit the structure of the specific problem at hand to either solve these problems to optimality, or find high-quality solutions in a reasonable amount of time (e.g., see [@san2016new], [@reviewjava], and [@lewis2015guide]). 
Alongside research in algorithm design and optimized software, building quantum computers that work based on a new paradigm of computation, such as D-Wave Systems’ quantum annealer [@Dwave], or specialized classical hardware for optimization problems, such as Fujitsu’s digital annealer [@fujitsu], has been a highly active field of research in recent years. All of the problems described above can potentially be solved with these devices after being transformed into a quadratic unconstrained binary optimization (QUBO) problem (see Ref. [@ising]), and these quantum and digital annealers serve as good examples of what we refer to as “non-CPU” hardware in this paper. The arrival of new, specialized hardware calls for new approaches to solving optimization problems, many of which simultaneously harness the power of conventional CPUs and emerging new technologies. In one such approach, CPUs are used for pre- and post-processing steps, while solving the problem is left entirely to the non-CPU device. The CPUs then handle tasks such as converting the problems into an acceptable format, or analyzing the results received from the non-CPU device, without taking an active part in solving the problem. In this paper, we focus on a different approach that is based on problem decomposition. In this approach, the original problem is decomposed into smaller-sized problems, extending the scope of the hardware to larger-sized problems. However, the practical use of problem decomposition depends on a multitude of factors. We lay out the foundations of using problem decomposition in a hybrid CPU/non-CPU architecture in Sec. \[Sec:ProbDecom\], and explain some critical characteristics that are essential for a practical problem decomposition method within such an architecture. We then focus on a specific NP-hard problem, namely the maximum clique problem, provide and explain the formal definition of the problem in Sec.
\[Sec:MaxCliqueDef\], and propose a new problem decomposition method for this problem in Sec. \[Sec:ProbDecomMaxClique\]. Sec. \[Sec:ResDis\] showcases the potential of our approach in extending the applicability of new devices to large and challenging problems, and Sec. \[Sec:Dis\] summarizes our results and presents directions for future study. Using a Hybrid Architecture for Hard Optimization Problems {#Sec:ProbDecom} ========================================================== ![image](diagrams.eps){width="\textwidth"} As new hardware is designed and built for solving optimization problems, one key question is how to optimally distribute the tasks between a conventional CPU and this new hardware (see, e.g., [@NASAdecomp], [@dWaveDecomp]). These new hardware devices are designed and tuned to address a specific problem efficiently. However, the process of solving an optimization problem involves some pre- and post-processing that might not be possible to perform on the application-specific non-CPU hardware. The pre-processing steps include the process of reading the input problem, which is quite likely available in a format that is most easily read by classical CPUs, as well as embedding that problem into the hardware architecture of the non-CPU device. Therefore, the use of a hybrid architecture that combines CPU and non-CPU resources is inevitable. The simplest hybrid methods also use a low-complexity, classical, local search algorithm to further optimize the results of the non-CPU device as a post-processing step. This simple picture was used in the early days of non-CPU solver development. However, not all problems are well-suited for a non-CPU device. Furthermore, when large optimization problems are decomposed into smaller subproblems, each of the subproblems might exhibit different complexity characteristics. This means that in any given problem, there might be subproblems that are better handled by CPU-based algorithms. 
This argument, together with the fact that usually a single call to a non-CPU device will cost more than using a CPU, emphasizes the importance of identifying the best use of each device for each problem. Thus, the CPU should also be responsible for identifying which pieces of the problem are best suited for which solver. Fig. \[fig:flowcharts\] illustrates three different hybrid approaches to using a CPU-based and a non-CPU-based solver to solve an optimization problem. Flowchart (a) represents the simplest hybrid method, which has the lowest level of sophistication in distributing tasks between the two hardware devices. It solves the problem at hand using the non-CPU solver only if the size of the problem is less than or equal to the size of the solver. In this approach, all problems are meant to be solved using the non-CPU device unless they do not fit on the hardware for some technical reason. The CPU’s function is to carry out the pre- and post-processing tasks as well as to solve the problems that do not fit on the non-CPU hardware. Flowchart (b) adds a level of sophistication in that it involves decomposing every subproblem until it either fits the non-CPU hardware, or proves to be difficult to decompose further, in which case it uses a CPU to solve the problem. Finally, the method we propose is depicted in (c). It is a hybrid system that uses the idea of problem decomposition in (b), but augments it with a decision maker and a method that assigns optimization bounds to each subproblem. These additional steps are necessary for the practical use of decomposition techniques in hybrid architectures. However, as we will demonstrate, not every method of decomposition will be beneficial in a hybrid CPU/non-CPU architecture.
For this method to work best in such a scenario, we propose the following requirements: - the number of generated subproblems should remain a polynomial function of the input; - the CPU time for finding subproblems should scale polynomially with the input size. Given these two conditions, the total time spent on solving a problem will remain tractable if the new hardware is capable of efficiently solving problems of a specific type. More precisely, the total computation time in a hybrid architecture can be broken into three components: $$T_{\text {total}} = t_{\text {CPU}} + t_{\text {comm}} + t_{\text {non-CPU}}.$$ Here, $t_{\text {CPU}}$ is the total time spent using the CPU. It consists of decomposing the original problem, solving a fraction of the subproblems that are not well-suited for the non-CPU hardware, and converting the remaining subproblems into an acceptable input format for the new hardware (e.g., a QUBO formulation for a device like the D-Wave 2000Q or Fujitsu’s digital annealer). The amount of time devoted to the communication between a CPU and the new hardware is denoted by $t_{\text {comm}}$. This time is proportional to the number of calls made from the CPU to the hardware (which, in itself, is less than the total number of subproblems, as we will explain shortly). Furthermore, $t_{\text {non-CPU}}$ is the total time that it takes for the non-CPU hardware to solve all of the subproblems that it receives. Given the two requirements for problem decomposition, $ t_{\text {CPU}} + t_{\text {comm}}$ remains polynomial, and using the hybrid architecture will be justified if the non-CPU hardware is capable of solving the assigned problems significantly more efficiently than a CPU. Problem Decomposition in a Hybrid Architecture ---------------------------------------------- Algorithm \[alg:decompose\] comprises our proposed procedure for using problem decomposition in a hybrid architecture. 
This algorithm takes a $problem$ of size $N$, the size of the non-CPU hardware $nonCPU\_size$, and the maximum number of times to apply the decomposition method [decomposition\_level]{} as input arguments. In this pseudocode, [solve\_CPU(.)]{} and [solve\_nonCPU(.)]{} denote subroutines that solve problems on classical and non-CPU hardware, respectively. At the beginning, the algorithm checks whether a given $problem$ is “well-suited” for the non-CPU hardware. We define a “well-suited” problem for a non-CPU hardware device as a problem that is expected to be solved faster on a non-CPU device compared to a CPU. This step is performed by the “decision maker” (explained in Sec. \[Sec:DecMaker\]). The algorithm then proceeds to decompose the problem only if the entire $problem$ is not well-suited for the non-CPU hardware. When a problem is sent to the [do\_decompose(.)]{} method, it is broken into smaller-sized subproblems, and each subproblem is tagged with an upper bound, in the case of maximization, or a lower bound, in that of minimization. These bounds will be used later to reduce the number of calls to the non-CPU hardware. This decomposition step can be performed a single time, or iteratively up to $decomposition\_level$ times. After the original problem is decomposed, every new problem in the $subProblem$ list is checked by the decision maker. The well-suited problems are stored in $nonCPU\_subproblems$, and the rest are placed in the $CPU\_subproblems$ list. After the full decomposition has been achieved, the problems in $nonCPU\_subproblems$ are sent to the [PruneAndPack(.)]{} subroutine. This subroutine ignores the problems with an upper bound (lower bound) less than (greater than) the best found solution by the [solve\_CPU(.)]{} and [solve\_nonCPU(.)]{} methods, and continues to pack in the rest of the subproblems until the size of the non-CPU hardware has been maxed out.
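The control flow described above can be condensed into a short sketch. Here `is_well_suited`, `do_decompose`, `solve_cpu`, and `solve_noncpu` are placeholder callbacks of our own standing in for the decision maker, the decomposition method, and the two solvers; a maximization problem with a non-negative objective is assumed:

```python
def hybrid_solve(problem, noncpu_size, decomposition_level,
                 is_well_suited, do_decompose, solve_cpu, solve_noncpu):
    """Sketch of the hybrid decomposition driver. `problem` and each
    subproblem are opaque objects with `size` and `upper_bound`
    attributes; the four callbacks are illustrative stand-ins."""
    if is_well_suited(problem) and problem.size <= noncpu_size:
        return solve_noncpu([problem])

    # Decompose up to `decomposition_level` times; each subproblem
    # carries an upper bound used later for pruning.
    subproblems = do_decompose(problem, decomposition_level)

    cpu_subproblems = [p for p in subproblems if not is_well_suited(p)]
    noncpu_subproblems = [p for p in subproblems if is_well_suited(p)]

    # Objective values are assumed non-negative, hence default=0.
    best = max((solve_cpu(p) for p in cpu_subproblems), default=0)

    # Prune-and-pack: skip subproblems whose upper bound cannot beat
    # the incumbent, and batch the rest to fill the non-CPU hardware.
    batch, packed = [], 0
    for p in sorted(noncpu_subproblems, key=lambda p: -p.upper_bound):
        if p.upper_bound <= best:
            continue  # pruned: cannot improve on the best solution
        if packed + p.size > noncpu_size:
            best = max(best, solve_noncpu(batch))
            batch, packed = [], 0
        batch.append(p)
        packed += p.size
    if batch:
        best = max(best, solve_noncpu(batch))
    return best
```

In this sketch, pruning and packing happen in a single pass, mirroring the [PruneAndPack(.)]{} subroutine: subproblems are batched until the non-CPU capacity is filled, so the number of hardware calls stays below the number of subproblems.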
These are necessary steps for minimizing the number of calls to the non-CPU hardware, and thus minimizing the communication time. At the final step, the results of all of the solved subproblems are combined and analyzed using the [Aggregate(.)]{} subroutine. \[line:pack\] Decision Maker {#Sec:DecMaker} -------------- There is always an overhead cost in converting each subproblem into an acceptable format for the non-CPU hardware, sending the correctly formatted subproblems to this hardware, and finally receiving the answers. It is hence logical to send the subproblems to a “decision maker” before preparing them for the new hardware. In an ideal scenario, this decision maker will have access to a portfolio of classical algorithms, along with the specifications of the non-CPU hardware. Based on this information, the decision maker will be able to decide whether a given problem is well-suited for the non-CPU hardware. These decisions may be made via either some simple characteristics of the problem, or through intelligent machine-learning models with good predictive power, depending on the case at hand. Specific Case Study: Maximum Clique {#Sec:MaxCliqueDef} =================================== Now that we have laid out the specifics of our proposal for problem decomposition in a hybrid architecture, we apply this method to the maximum clique problem. We begin by explaining the graph theory notation and necessary definitions, along with a few real-world applications for the maximum clique problem. A graph $G$ consists of a finite set $V$ of vertices and a set $E\subseteq V\times V$ of edges. Two distinct vertices $v_i$ and $v_j$ are adjacent if $\{v_i, v_j\}\in E$. The *neighbourhood* of a vertex $v$ is denoted by $\mathcal N(v)$, and is the subset of vertices of $G$ which are adjacent to $v$. The degree of a vertex $v$ is the cardinality of $\mathcal N(v)$, and is denoted by $d(v)$.
The maximum degree and minimum degree of a graph are denoted by $\Delta(G)$ and $\delta(G)$, respectively. The *subgraph* of $G$ *induced* by a subset of vertices $U\subseteq V$ is denoted by $G[U]$ and consists of the vertex set $U$, and the edge set defined by $$E(G[U]) = \{\{u_i, u_j\}~|~ u_i,~u_j \in U,~\{u_i, u_j\}\in E(G)\}.$$ A *complete subgraph*, or a *clique*, of $G$ is a subgraph of $G$ in which every pair of vertices is adjacent. The size of a maximum clique in a graph $G$ is called the *clique number of* $G$ and is denoted by $\omega(G)$. An independent set of $G$, on the other hand, is a set of pairwise nonadjacent vertices. As every clique of a graph is an independent set of the complement graph, one can find a maximum independent set of a graph by simply solving the maximum clique problem in its complement. A node-weighted graph $G$ is a graph that is augmented with a set of positive weights $W = \{w_1, w_2, \ldots, w_n\}$, one assigned to each node. The maximum weighted-clique problem is the task of finding a clique with the largest sum of weights on its nodes. Many real-world applications have been proposed in the literature for the maximum clique and maximum independent set problems. One commonly suggested application is community detection for social network analysis [@CommDet]. Even though cliques are known to be too restrictive for finding communities in a network, they prove to be useful in finding overlapping communities. Another example is finding the largest set of correlated/uncorrelated instruments in financial markets. This problem can be readily modelled as a maximum clique problem, and it plays an important role in risk management and the design of diversified portfolios (see [@finance] and [@marketGraph]). Recent studies have shown some merit in using a weighted maximum-clique finder for drug discovery purposes (see [@bio] and [@graphSimilarity]).
In these studies, the structures of molecules are stored as graphs, and the properties of unknown molecules are predicted by solving the maximum common subgraph problem using the graph representations of the molecules. Aside from the proposed industrial applications, the clique problem is one of the better-studied NP-hard problems, and there exist powerful heuristic and exact algorithms for solving the maximum clique problem in the literature (see, e.g., [@reviewjava], [@heuristic], and [@para]). It is, therefore, beneficial to map a part of, or an entire, optimization problem into a clique problem and benefit from the runtime of these algorithms (see, e.g., [@mapApp]). Problem Decomposition for the Maximum Clique Problem {#Sec:ProbDecomMaxClique} ==================================================== In this section, we explain the details of two problem decomposition methods for the maximum clique problem. The first approach is based on the branch-and-bound framework and is similar to what is dubbed “vertex splitting” in Ref. [@newAnnealerClique]. This method is briefly explained in Sec. \[Sec:BnB\], followed by a discussion on why it fails to meet the problem decomposition requirements of Sec. \[Sec:ProbDecom\]. We then present our own method in Sec. \[Sec:k-core\] and prove that it is an effective problem decomposition method, that is, it generates a polynomial number of subproblems and requires polynomial computational complexity to generate each subproblem. Branch and Bound {#Sec:BnB} ---------------- The branch-and-bound technique (BnB) is a commonly used method in exact algorithms for solving the maximum clique problem (see Ref. [@reviewjava] for a comprehensive review on the subject). At a very high level, BnB consists of three main procedures that are repeatedly applied to a subgraph of the entire graph until the size of the maximum clique is found. 
The main procedures of a BnB approach consist of: (a) ordering the vertices in a given subproblem and adding the highest-priority vertex to the solution list; (b) finding the space of feasible solutions based on the vertices in the solution list; and (c) assigning upper bounds to each subproblem. ![image](bnb_MB.eps) Fig. \[fig:bnb\] shows a schematic representation of the steps involved in traversing the BnB search tree. In the first step, all of the vertices of the graph are listed at the root of the tree, representing the space of feasible solutions, along with an empty set that will contain possible solutions as the algorithm traverses the search tree (we will call this set “*growing-clique*”). The vertices inside the feasible space are ordered based on some criteria (e.g., increasing/decreasing degree, or the sum of the degree of the neighbours of a vertex [@TomitaMCR]), and the highest-priority vertex ($v_{\rm 1}$) is chosen as the “branching node”. The branching node is added to *growing-clique* and the neighbourhood of this node ($\mathcal N(v_{\rm 1})$) is chosen as the new space of feasible solutions. This procedure continues until the domain of feasible solutions becomes an empty list, indicating that *growing-clique* now contains a maximal clique. If the size of this clique is larger than the best existing solution, the best solution is updated. The number of nodes in the BnB tree is greatly reduced by applying some upper bounds based on graph colouring [@TomitaSeki] or Max-SAT reasoning [@maxSAT]. These upper bounds prune the tree if the upper bound on the size of the clique inside a feasible space is smaller than the best found solution (minus the size of *growing-clique*). In a fully classical approach, the entire BnB search tree is explored via a classical computer. On the other hand, some of the work can be offloaded to the non-conventional hardware in the hybrid scenario.
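As a concrete, purely classical illustration of steps (a)–(c), the following minimal BnB clique solver uses a simple greedy-colouring upper bound. This is our own simplified sketch, not the MCR ordering [@TomitaMCR] or Max-SAT bounds [@maxSAT] of the cited works:

```python
def max_clique_bnb(adj):
    """Branch-and-bound maximum clique on an adjacency dict
    {vertex: set(neighbours)}; returns one maximum clique."""
    best = []

    def colour_bound(candidates):
        # Greedy colouring: vertices in a colour class are pairwise
        # non-adjacent, so the number of classes bounds the clique size.
        classes = []
        for v in candidates:
            for cl in classes:
                if not any(u in adj[v] for u in cl):
                    cl.append(v)
                    break
            else:
                classes.append([v])
        return len(classes)

    def expand(growing, candidates):
        nonlocal best
        if not candidates:
            if len(growing) > len(best):
                best = list(growing)  # growing-clique is now maximal
            return
        # Prune: this branch cannot beat the incumbent solution.
        if len(growing) + colour_bound(candidates) <= len(best):
            return
        for v in list(candidates):
            expand(growing + [v], candidates & adj[v])
            candidates = candidates - {v}

    expand([], set(adj))
    return best
```

Each recursive call adds a branching node to *growing-clique* and intersects the feasible space with that node's neighbourhood, exactly the loop depicted in Fig. \[fig:bnb\].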
More precisely, one can stop traversing a particular branch of the search tree when the size of the subproblem under consideration becomes smaller than the capacity of the non-conventional hardware (see, e.g., [@newAnnealerClique]). Although this idea can combine the two hardware devices in an elegant and coherent way, it suffers from two main drawbacks. It creates an exponential number of subproblems (see Fig. $6$ in Ref. [@newAnnealerClique]), and, in the worst case, it takes an exponential amount of time to traverse the search tree until the size of the subproblem becomes smaller than the capacity of the non-conventional hardware. A Proposed Method for Problem Decomposition {#Sec:k-core} -------------------------------------------

$sorted\_nodes$ $\leftarrow$ an empty list
$V'\leftarrow$ order vertex set $V$ by non-decreasing vertex degree
select a vertex $v$ with minimum $k$-core, and find its neighbourhood $\mathcal N(v)$
**return** $sorted\_nodes$, $k$-$core$

In this section, we explain our proposed problem decomposition method, which is much more effective than BnB (explained in the previous section). We show, in particular, that our proposed method generates a much smaller number of subproblems compared to BnB, and that these subproblems can be obtained via an efficient $\mathcal O(E)$ algorithm. Our method begins by sorting the vertices of the graph based on their $k$-core number. Ref. [@kCore] details the formal $k$-core definition, and proposes an efficient $\mathcal O(E)$ algorithm for calculating the $k$-core number of the vertices of a graph. Intuitively, the $k$-core number of a vertex $v$ is equal to $k$ if it has at least $k$ neighbours of a degree higher than or equal to $k$, *and* not more than $k$ neighbours of a degree higher than or equal to $k+1$. We denote the core number of a vertex $v$ by $K(v)$. The core number of a graph $G$, denoted by $K(G)$, is the highest-order core of its vertices.
$K(G)$ is always upper bounded by the maximum degree of the vertices of the graph $\Delta(G)$, and the minimum core number of the vertices is always equal to the minimum degree $\delta(G)$. A *degeneracy*, or *k-core ordering*, of the vertices of a graph $G$ is a non-decreasing ordering of the vertices of $G$ based on their core numbers. Algorithm \[alg:kCorePatent\] is a method for finding the $k$-core ordering of the vertices along with their $k$-core numbers. The following proposition shows that, given a degeneracy ordering for the vertices of the graph, one can decompose the maximum clique problem into a linear number of subproblems. Our proposed method is based on this proposition. \[prop:oracleCounts\] For a graph $G$ of size $n$, one can decompose the maximum clique problem in $G$ into at most $n - K(G) + 1$ subproblems, each of which is upper-bounded in size by $K(G)$. Let $d(v)$ be the degree of vertex $v$ in $G$. From Algorithm \[alg:kCorePatent\] (lines $8$–$11$), the number of vertices that are adjacent to $v$ and precede vertex $v$ in $sorted\_nodes$ is greater than or equal to $d(v) - K(v)$. Therefore, the number of vertices that appear after $v$ in this ordering is upper-bounded by $K(v)$. Using this fact, the algorithm starts from the last $K(G)$ vertices of $sorted\_nodes$, and solves the maximum clique on that induced subgraph. It then moves towards the beginning of $sorted\_nodes$ vertex by vertex. Each time it takes a root vertex $w$ and forms a new subproblem by finding the adjacent vertices that are listed after $w$ in $sorted\_nodes$. The size of these subproblems is upper-bounded by $K(w)$, which itself is upper-bounded by $K(G)$, and the number of the subproblems created in this way is exactly $n - K(G) + 1$. Since we have $$\begin{aligned} \omega(G) \leq K(G) + 1 \leq \Delta(G) + 1,\end{aligned}$$ one can stop the procedure as soon as the size of the clique becomes larger than or equal to the $k$-core number of a root vertex.
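The construction in the proof translates directly into code. The sketch below uses a simple min-degree peeling with a running maximum to obtain core numbers and a degeneracy ordering (a quadratic-time vertex selection for readability, rather than the bucketed $\mathcal O(E)$ implementation of Ref. [@kCore]), and then emits one subproblem per root vertex:

```python
def degeneracy_order(adj):
    """Repeatedly peel a vertex of minimum remaining degree.
    Returns the ordering and the per-vertex core numbers K(v)."""
    deg = {v: len(adj[v]) for v in adj}
    remaining = set(adj)
    order, core, k = [], {}, 0
    while remaining:
        v = min(remaining, key=lambda u: deg[u])
        k = max(k, deg[v])       # running max gives the core number
        core[v] = k
        order.append(v)
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return order, core

def decompose_max_clique(adj):
    """Decomposition of Proposition [prop:oracleCounts]: a seed made of
    the last K(G) vertices, then one subproblem per root vertex induced
    by its neighbours appearing later in the ordering."""
    order, core = degeneracy_order(adj)
    K = max(core.values())
    pos = {v: i for i, v in enumerate(order)}
    subproblems = [set(order[-K:])]  # seed: last K(G) vertices
    for w in reversed(order[:-K]):
        subproblems.append({u for u in adj[w] if pos[u] > pos[w]})
    return subproblems, K
```

On a 6-cycle with a chord, as in the example discussed next, this generates $n - K(G) + 1 = 5$ subproblems, each of size at most $K(G) = 2$.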
To illustrate, consider the 6-cycle with a chord shown in Fig. \[fig:C6\]. In the first step, the vertices are ordered based on their core numbers, according to Algorithm \[alg:kCorePatent\]: $sorted\_nodes$ = \[c, b, e, f, d, a\]. ![A 6-cycle graph with a chord[]{data-label="fig:C6"}](C6.eps) Following our proposed algorithm, we first consider the subgraph induced by the last $K(G) = 2$ vertices from the list, that is, $[a,d]$. Solving the maximum clique problem in this subgraph results in a lower bound on the size of the maximum clique, that is, $\omega(G) \geq 2$. After this step, we proceed by considering the vertices one by one from the end of $sorted\_nodes$ to form the subproblems that follow: $$\begin{aligned} root\_node \quad & subproblem \\ f \qquad \quad \, & \quad \quad \{\} \\ e \qquad \quad \, & \quad \,\, \{f,d\} \\ b \qquad \quad \, & \quad \,\,\,\,\, \{a\} \\ c \qquad \quad \, & \quad \,\, \{b,d\} \,.\end{aligned}$$ Among these subproblems, only the two of size two need to be examined by the clique solver, since the size of the other subproblems is less than or equal to the size of the largest clique found. This example shows how a problem of size six can be broken down to three problems of size two. As a final note, the $k$-core decomposition takes $\mathcal O(E)$ time, and constructing the resulting subproblems takes $\mathcal O(N^2)$. The entire process thus takes $\mathcal O(N^2)$ time at the first level, and $\mathcal O(N^{\ell +1})$ time at level $\ell$. The maximum number of subproblems can grow up to $N^\ell$ at level $\ell$. Results {#Sec:ResDis} ======= In this section, we discuss our numerical results for different scenarios in terms of density and the size of the underlying graph. In particular, we study the effect of the graph core number, $K(G)$, and the density of the graph on the number of generated subproblems.
In the fully classical approach, we also compare the running time of our proposed algorithm with state-of-the-art methods for solving the maximum clique problem in large, sparse graphs. It is worth noting that $k$-core decomposition is widely used in exact maximum clique solvers as a means to find computationally inexpensive and relatively tight upper bounds in large, sparse graphs. However, to the best of our knowledge, no one has used $k$-core decomposition as a method of problem decomposition as proposed in this paper (e.g., Ref. [@newAnnealerClique] uses it for pruning purposes and BnB for decomposition). Large and Sparse Graphs ----------------------- The importance of Proposition \[prop:oracleCounts\] is more pronounced when we consider the standard large, sparse graphs listed in Table \[table:results\]. For each of these graphs, we first perform one round of $k$-core decomposition, and then solve the generated subproblems with our own exact maximum clique solver. It is worth mentioning that, after decomposing the original problem into sufficiently smaller subproblems, our approach for finding the maximum clique of the smaller subproblems is similar to what has been proposed in Ref. [@pmc].

[ l r r r r | r r r | r]{}
& & & & & & Runtime (s) & &
Graph Name & Num. of Vertices & Num. of Edges & $K(G)$ & MaxClique & 1QBit Solver & PMC & BBMCSP & Num. of Subprobs.
**Stanford Large Network Dataset:** & & & & & & & &
ego-Facebook & $4,039$ & $88,234$ & $115$ & $69$ & $\textbf{0.009}$ & $0.04$ & $0.03$ & $367$
ca-CondMat & $23,133$ & $93,468$ & $25$ & $26$ & $\textbf{0.004}$ & $0.03$ & $0.02$ & $3$
email-Enron & $36,692$ & $25,985$ & $43$ & $20$ & $\textbf{0.01}$ & $0.3$ & $0.06$ & $2235$
com-Amazon & $334,863$ & $925,872$ & $6$ & $7$ & $\textbf{0.06}$ & $0.06$ & $0.2$ & $3$
roadNet-PA & $1,088,092$ & $1,541,898$ & $3$ & $4$ & $\textbf{0.1}$ & $2.1$ & $0.4$ & $580$
com-Youtube & $1,134,890$ & $2,987,624$ & $51$ & $17$ & $\textbf{0.5}$ & $2.0$ & $2.5$ & $24466$
as-skitter & $1,696,415$ & $11,095,298$ & $111$ & $67$ & $\textbf{0.7}$ & $1.4$ & $5.6$ & $4087$
roadNet-CA & $1,965,206$ & $2,766,607$ & $3$ & $4$ & $\textbf{0.2}$ & $3.9$ & $0.4$ & $2286$
com-Orkut & $3,072,441$ & $117,185,083$ & $253$ & $51$ & $\textbf{82}$ & $179$ & $220$ & $741349$
com-LiveJournal & $3,997,962$ & $34,681,189$ & $360$ & $327$ & $\textbf{2.1}$ & $2.5$ & $20$ & $25$
**Network Repository Graphs:** & & & & & & & &
soc-buzznet & $101,163$ & $2,763,066$ & $153$ & $31$ & $\textbf{1.9}$ & $14.6$ & $4.0$ & $29484$
soc-catster & $149,700$ & $5,448,197$ & $419$ & $81$ & $\textbf{1.1}$ & $>1 \mathrm{\, h}$ & $5.5$ & $12095$
delaunay-n20 & $1,048,576$ & $3,145,686$ & $4$ & $4$ & $\textbf{1.4}$ & $2.5$ & $2.8$ & $1036595$
web-wikipedia-growth & $1,870,709$ & $36,532,531$ & $206$ & $31$ & $\textbf{18}$ & $397$ & file not supported & $358272$
delaunay-n21 & $2,097,152$ & $6,291,408$ & $4$ & $4$ & $\textbf{2.8}$ & $5.1$ & $\textbf{2.8}$ & $2073021$
tech-ip & $2,250,498$ & $21,643,497$ & $253$ & $4$ & $\textbf{28}$ & $1031$ & $220$ & $222338$
soc-orkut-dir & $3,072,441$ & $117,185,083$ & $253$ & $51$ & $\textbf{74}$ & $188$ & $170$ & $741349$
socfb-A-anon & $3,097,165$ & $23,667,394$ & $74$ & $25$ & $\textbf{10}$ & $18$ & $30$ & $357836$
soc-livejournal-user-groups & $7,489,073$ & $112,305,407$ & $116$ & $9$ & $\textbf{103}$ & $>1 \mathrm{\, h}$ & $1600$ & $2404573$
aff-orkut-user2groups & $8,730,857$ & $327,036,486$ & $471$ & $6$ & $\textbf{852}$ & $>1 \mathrm{\, h}$ & $2400$ & $4173108$
soc-sinaweibo & $58,655,849$ & $261,321,033$ & $193$ & $44$ & $\textbf{93}$ & $>1 \mathrm{\, h}$ & $1070$ & $713652$

In the large and sparse regime, the core numbers of the graphs are typically orders of magnitude smaller than the number of vertices in the graph. This implies that non-CPU hardware of a size substantially smaller than the size of the original problem can be used to find the maximum clique of these massive graphs. For example, the *com-Amazon* graph, with 334,863 vertices and 925,872 edges, has $K(G) = 6$. These facts, combined with Proposition \[prop:oracleCounts\], imply that this graph can be decomposed into $\sim$ 334,863 problems of a size $\leq 6$. However, as the table shows, the actual number of subproblems that should be solved is only three, since the $k$-core number of the next subproblem drops to a number less than or equal to the size of the largest clique that was found. Hence, in this specific case, a single call to a non-CPU hardware device of size greater than $21$ can solve the entire problem, that is, this problem can be solved by submitting an effective problem of size 21 to the D-Wave 2000Q chip. Numerical results also indicate that the fully classical runtime of our proposed method is considerably faster than two of the best-known algorithms in the literature for large, sparse graphs, namely PMC [@pmc] and BBMCSP [@bbmcsp]. Table \[table:results\] shows that in some instances, our method is orders of magnitude faster than these algorithms. Hierarchy of minimum degree, maximum degree, max $k$-core, and clique number ---------------------------------------------------------------------------- The $k$-core decomposition proves extremely powerful in the large and sparse regime because it dramatically reduces the size of the problem, and also prunes a good number of the subproblems.
This situation changes as we move towards denser and denser graphs. As graphs increase in density, $K(G)$ approaches the graph size, and the size reduction becomes less effective in a single iteration of decomposition. It is hence necessary to apply the decomposition method for at least a few iterations in the dense regime. ![$\omega(G)$, $K(G)$, $\Delta(G)$, and $\delta(G)$ comparisons in different graph density regimes[]{data-label="fig:regimes"}](regimes.eps) Moreover, the number of subproblems that should be solved also grows as the graphs increase in density. This is partially due to there being more rounds of decomposition, and partially to the hierarchy of the clique number $\omega(G)$ and the maximum and minimum core numbers (shown in Fig. \[fig:regimes\]). In the sparse regime, the clique number $\omega(G)$ lies between the minimum core number $\delta(G)$ and the core number of the graph $K(G)$. This means that all of the subproblems that stem from a root node with a core number less than the clique number can be pruned. This phenomenon leads to the effective pruning that is reflected in the number of subproblems listed in Table \[table:results\]. On the other hand, as shown in Fig. \[fig:regimes\], the minimum core number of the graph, i.e., $\delta(G)$, is larger than the clique number in the dense regime. Therefore, core numbers are no longer suitable for upper-bounding purposes, and some other upper-bounding methods should be used, as we discuss in the next section. Dense Graphs ------------ As explained in the previous section, the $k$-core number becomes an ineffective upper bound in the case of dense graphs; therefore, the number of subproblems that should be solved grows as $N^\ell$ for $\ell$ levels of decomposition. Because of this issue, and since colouring is an effective upper bound in the dense regime, we used the heuristic DSATUR algorithm explained in Ref. [@lewis2015guide] to find the colour numbers of each generated subproblem.
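A minimal DSATUR-style sketch of this colouring bound (our own illustration of the greedy algorithm described in Ref. [@lewis2015guide], not the exact solver used in our experiments): since the vertices of a clique must all receive distinct colours, the number of colours used upper-bounds the clique number of the subgraph.

```python
def dsatur_colours(adj):
    """Greedy DSATUR colouring of an adjacency dict {vertex: set};
    returns the number of colours used, an upper bound on omega."""
    colour = {}
    saturation = {v: set() for v in adj}  # neighbour colours seen so far
    while len(colour) < len(adj):
        # Pick the uncoloured vertex with the most distinct neighbour
        # colours (its saturation), breaking ties by degree.
        v = max((u for u in adj if u not in colour),
                key=lambda u: (len(saturation[u]), len(adj[u])))
        c = 0
        while c in saturation[v]:
            c += 1  # smallest colour not used by a neighbour
        colour[v] = c
        for u in adj[v]:
            saturation[u].add(c)
    return max(colour.values()) + 1
```

Here `adj` would be the induced subgraph of a single subproblem; a subproblem whose colour count falls below the incumbent clique size can be discarded without a solver call.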
We then ignored the subproblems with a colour number less than the size of the largest clique that has been found from the previous set of subproblems. This technique reduces the number of subproblems by a large factor, as can be seen in Table \[table:results2\]. In this table, we present the results for random Erdős–Rényi graphs of three different sizes and varying densities. For each size and density, we generated $10$ samples, and decomposed the problems iteratively for three levels ($decomposition\_level = 3$). The reported results are the average of the $10$ samples for each category. “max”, “min”, and “avg” refer to the maximum, minimum, and average size of the generated subproblems at every level. Notice the significant difference in the $\frac{K(G)}{ \rm{graph \, size}}$ ratio between the sparse graphs presented in Table \[table:results\] and the relatively dense graphs presented here. Unlike in the sparse regime, the gain in size reduction after one level of decomposition is only a factor of a few. This means that a graph of size $N$ is decomposed into $\sim N$ graphs of smaller but relatively similar size after one level of decomposition. This fact hints towards using more levels of decomposition, as with more decomposition, the maximum size of the subproblems decreases. However, there is usually a tradeoff between the number of generated subproblems and the maximum size of these subproblems, as can be seen in Table \[table:results2\]. It is, therefore, not economical to use problem decomposition for these types of graphs in a fully classical approach for solving the maximum clique problem in dense graphs. However, if a specialized non-CPU hardware device becomes significantly faster than CPUs on the original problem, this problem decomposition approach will become useful. In fact, the merit of this approach compared to the BnB-based decomposition, shown in Fig. 6 in Ref.
[@newAnnealerClique], is that it generates considerably fewer subproblems, and that the time for constructing the subproblems is polynomial. Fig. \[fig:densityScaling\] shows the scaling of the number of subproblems with the density of the graph. For this plot, we assume two devices of size $\{45,65\}$, representing an instance of the D-Wave 2X chip, and the theoretical upper bound on the maximum size of a complete graph embeddable into the new D-Wave 2000Q chip. The graph size is fixed to 500 in every case, and the points are the average of 10 samples, with error bars showing standard deviation. For each point, we first run a heuristic on the whole graph, and then prune the subproblems based on their colour numbers obtained using the DSATUR algorithm of Ref. [@lewis2015guide]. Densities below 0.2 are shaded with a grey band, since the number of subproblems for $\{0.05,0.1,0.15\}$ densities is zero. This happens because all of the subproblems are pruned after the second round of decomposition. Aside from scaling with respect to density, this plot also shows that a small increase in the size of the non-CPU hardware (e.g., from 45 nodes to 65) will not have a significant effect on the total number of generated subproblems. In these scenarios, the non-CPU hardware becomes competitive with classical CPUs only if, in comparison to CPUs, it can either solve a single problem with very high quality and speed, or it can be mass produced and parallelized at lower costs. ![Scaling of the number of subproblems with density. Graph size is fixed to 500 in every case, and every point is the average of 10 samples, with error bars representing sample standard deviation.[]{data-label="fig:densityScaling"}](subProblem.eps) Discussion {#Sec:Dis} ========== We focused on the specific case of the maximum clique problem and proposed a method of decomposition for this problem. Our proposed decomposition technique is based on the $k$-core decomposition of the input graph.
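The $k$-core structure that drives this decomposition is cheap to compute: a peeling pass that repeatedly removes a vertex of minimum remaining degree yields every core number, and a root vertex can seed a clique strictly larger than an incumbent of size $s$ only if its core number is at least $s$. A minimal sketch (the graph encoding and helper names are ours, not from the paper):

```python
def core_numbers(adj):
    """Core number of every vertex via peeling: repeatedly remove a
    vertex of minimum remaining degree; the core number is the largest
    minimum degree seen up to the moment the vertex is removed."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    alive = set(adj)
    core, k = {}, 0
    while alive:
        v = min(alive, key=deg.__getitem__)
        k = max(k, deg[v])
        core[v] = k
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
    return core

def prune_roots(adj, incumbent):
    """Keep only root vertices whose core number allows a clique
    strictly larger than the incumbent (a (s+1)-clique sits in an s-core)."""
    core = core_numbers(adj)
    return [v for v in adj if core[v] >= incumbent]

# triangle (0, 1, 2) with a pendant vertex 3
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

With an incumbent clique of size 2, only the triangle vertices survive as roots; with an incumbent of size 3, every root is pruned.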
The approach is motivated mostly by the emergence of non-CPU hardware for solving hard problems. This approach is meant to extend the capabilities of this new hardware for finding the maximum clique of large graphs. While the size of generated subproblems is greatly reduced in the case of sparse graphs after a single level of decomposition, an effective size reduction happens only after multiple levels of decomposition in the dense regime. Compared to the branch-and-bound method, this method generates considerably fewer subproblems, *and* creates these subproblems in polynomial time. We believe that further research on finding tighter upper bounds on the size of the maximum clique in each subproblem would be extremely useful. Tighter upper bounds make it possible to attain more levels of decomposition, and hence reduce the problem size, without generating too many subproblems. In the fully classical approach, there is a chance that combining integer programming solvers for the maximum clique problem with our proposed method can lead to better runtimes for dense graphs, or for large, sparse graphs with highly dense $k$-cores. This suggestion is based on the fact that integer programming solvers such as CPLEX become highly competitive for graphs of moderate size, that is, between 200 and 2000, and high density, that is, higher than 90% (e.g., see Table 1 in Ref. [@wMaxCliqueExact]). Since decomposition tends to generate relatively high-density and small-sized subgraphs, we consider the combination of the two to be a promising avenue for future study. Acknowledgement {#acknowledgement .unnumbered} =============== The authors would like to thank Marko Bucyk for editing the manuscript, and Michael Friedlander, Maliheh Aramon, Sourav Mukherjee, Natalie Mullin, Jaspreet Oberoi, and Brad Woods for useful comments and discussion. This work was supported by 1QBit. =0mu plus 2mu
--- abstract: 'We present the first scalable *bound analysis* that achieves *amortized complexity analysis*. In contrast to earlier work, our bound analysis is not based on general purpose reasoners such as abstract interpreters, software model checkers or computer algebra tools. Rather, we derive bounds directly from abstract program models, which we obtain from programs by comparatively simple invariant generation and symbolic execution techniques. As a result, we obtain an analysis that is more predictable and more scalable than earlier approaches. We demonstrate by a thorough experimental evaluation that our analysis is fast and at the same time able to compute bounds for challenging loops in a large real-world benchmark. Technically, our approach is based on lossy vector addition systems (VASS). Our bound analysis first computes a lexicographic ranking function that proves the termination of a VASS, and then derives a bound from this ranking function. Our methodology achieves amortized analysis based on a new insight into how lexicographic ranking functions can be used for bound analysis.' author: - Moritz Sinn - Florian Zuleger - 'Helmut Veith [^1]' bibliography: - 'main.bib' title: A Simple and Scalable Static Analysis for Bound Analysis and Amortized Complexity Analysis --- #### Acknowledgements. We thank Fabian Souczek and Thomas Pani for help with the experiments.
[^1]: Supported by the Austrian National Research Network S11403-N23 (RiSE) of the Austrian Science Fund (FWF) and by the Vienna Science and Technology Fund (WWTF) through grants PROSEED and ICT12-059.
--- abstract: | Bacteria (e.g. [*E. Coli*]{}) are very sensitive to certain chemoattractants (e.g. aspartate) which they themselves produce. This leads to chemical instabilities in a uniform population. We discuss here the different case of a single bacterium, following the general scheme of Brenner, Levitov and Budrene. We show that in one and two dimensions (in a capillary or in a thin film) the bacterium can become self-trapped in its cloud of attractant. This should occur if a certain coupling constant $g$ is larger than unity. We then estimate the reduced diffusion $D_{\rm eff}$ of the bacterium in the strong coupling limit, and find $D_{\rm eff}\sim g^{-1}$. author: - Yoav Tsori - 'Pierre-Gilles de Gennes' title: Self Trapping of a Single Bacterium in its Own Chemoattractant --- Introduction ============ Budrene and Berg [@BB] studied an initially homogeneous population of [*Escherichia Coli*]{} ([*E. Coli*]{}) bacteria on an agar plate, in conditions where food (succinate) is available. Depending on the food content they discovered various patterns such as moving rings or aggregates. These patterns were lucidly interpreted by Brenner, Levitov and Budrene [@brenner]. They observed that in the (usual) conditions of rapid diffusion, the bacteria produce a concentration field of the chemoattractant, $c(r)$, which has the form of a gravitational field ($c(r)\sim 1/r$ in three dimensions). The bacteria attract each other and cluster by a “gravitational” instability. However, when a cluster (“star”) is formed, food is depleted and the star then becomes dark in the center, a ring is created, etc. Brenner [*et al.*]{} also discussed the spontaneous aggregation of a small group of bacteria, and found that they shall indeed aggregate if their number $N$ is larger than a certain limit $N^*$. Because of the (rough) similarity with astrophysics, they called $N^*$ the Chandrasekhar limit.
Our aim here is to discuss some properties of these small clusters and in particular the limit of a [*single*]{} bacterium. We point out that it may be trapped in its own cloud of aspartate. This question has some weak similarity with a classical problem of solid-state physics and field theory, the [*polaron*]{} problem, defined first by Fröhlich [@frolich] and analyzed by many theorists [@LP; @pekar; @feynman]. A polaron is an electron coupled to a phonon field in a solid. In the strong coupling limit analyzed by Pekar [@pekar], the electron builds up a distorted region, and essentially occupies the lowest bound state in the resulting (self-consistent) potential. However, the electron moves slowly: it has a large effective mass. Our problem here is somewhat similar: if a certain coupling constant $g$ is larger than unity, the bacterium sees a strong attractant cloud. We shall see that in three dimensions, it can always escape, but in one and two dimensions it cannot. The question of interest is then the [*effective*]{} diffusivity of the bacterium. In section II we start from the basic coupled equations for the bacteria and attractant [@brenner; @KS] (except for an alteration of the food kinetics). From this we investigate the possibility of a self-trapped state, define a coupling constant $g$ and find that it can indeed be of order unity in some favorable cases. $g$ is (except for coefficients) equal to $1/N^*$, where $N^*$ is the Chandrasekhar limit of ref. [@brenner]. We are interested in high $g$ values ($N^*<1$). If there are a number $N$ of bacteria in one small droplet, $g$ is multiplied by $N$ and the large $g$ limit becomes easier to reach. In section III we discuss this high $g$ limit and the renormalized diffusion constant. Self-trapping ============= [*E.
Coli*]{} colonies enjoying a large supply of food are governed by two coupled reaction-diffusion equations $$\begin{aligned} \frac{\partial\rho}{\partial t}&=&-\nabla\cdot{\bf J}\hspace{2cm} {\bf J}=-D_b\nabla\rho+\kappa\rho\nabla c\\ \frac{\partial c}{\partial t}&=&D_c\nabla^2c+\beta\rho\label{c_t}\end{aligned}$$ where $\rho$ and $c$ are the number densities of the bacteria and chemoattractant fields, respectively, $D_b$ and $D_c$ are the diffusion constants of the bacteria and attractant, $\kappa$ determines the strength of positive feedback and $\beta$ is the production rate of the attractant by the bacteria. Below we are interested in the case of fast attractant diffusion. In this limit, Eq. (\[c\_t\]) shows us that $\rho$ and $c$ relate to each other like charge density and potential in electrostatics. In this analogy $\nabla c$ is the force acting on a bacterium. For one bacterium at the origin and in two dimensions we find that $$\begin{aligned} c=c_0\ln\left(\frac{r_0}{r}\right)+const.\end{aligned}$$ Here $r_0$ is the cutoff length. It is instructive to consider the steady-state obtained when ${\bf J}=0$. We find that $\rho$ obeys the “Boltzmann distribution” $$\begin{aligned} \rho=\tilde{\rho}_0\exp\left(\frac{\kappa c}{D_b}\right)&=&\rho_0\left(\frac{r_0}{r}\right)^g\\ g&=&\frac{\kappa c_0}{D_b}\end{aligned}$$ Therefore, a self-trapped state exists if the coupling constant is $g\geq 2$.
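The threshold $g\geq 2$ is essentially a normalizability statement: with $\rho\sim(r_0/r)^g$, the population $\int \rho\, 2\pi r\,{\rm d}r$ carried by the tail stays finite as the outer radius grows only when $g$ exceeds 2. A quick closed-form check (the cutoff $r_0=1$ and prefactor $\rho_0=1$ are arbitrary choices of ours):

```python
import math

def tail_population(g, R, r0=1.0, rho0=1.0):
    """Closed form of the 2-D integral of rho0*(r0/r)**g * 2*pi*r
    over the annulus r0 < r < R."""
    if g == 2.0:
        return 2 * math.pi * rho0 * r0**2 * math.log(R / r0)
    return 2 * math.pi * rho0 * r0**g * (R**(2 - g) - r0**(2 - g)) / (2 - g)

# g = 3: the tail saturates at 2*pi*r0**2 as R grows -> trapped profile
# g = 1.5: the tail grows without bound with R -> no self-trapping
```

For $g=3$ the integral approaches $2\pi r_0^2$ however large $R$ becomes, while for $g=1.5$ it diverges like $\sqrt{R}$.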
In order to know the value of $g$, we denote by $e$ the thickness of the growth medium on the Petri dish, and equate the attractant flow out of a circular domain of radius $r$ with the attractant production, $$\begin{aligned} eD_c\cdot 2\pi r\nabla c\simeq 2\pi eD_c c_0=\beta\end{aligned}$$ Thus, the coupling constant $g$ can be written as $$\begin{aligned} g=\frac{\kappa\beta}{2\pi e D_b D_c}\end{aligned}$$ Inserting reasonable values for the parameters [@brenner] $D_b\simeq 6.6\cdot 10^{-6}$ cm$^2$/s, $D_c=D_b$, $\beta=10^3$ molecules/bacterium/s, $\kappa\simeq 10^{-14}$ cm$^5$/s and $e=0.05$ cm, we find that $g\approx 0.73$. This estimate shows that the coupling between the bacterium and its own chemoattractant field can be rather strong in many experimental situations. A calculation along similar lines for the one-dimensional infinitely long “wire” with diameter $d$ gives $c\sim |x|$ and a confined bacterium, $\rho=\rho_0\exp(-2\beta\kappa|x|/\pi d^2 D_bD_c)$. In three dimensions, however, $c\sim 1/r$ and a self-trapped state does not exist because $\rho\sim \exp(\kappa c/D_b)$ does not tend to zero at large distances. As we will see in the next section, the coupling of a moving bacterium with its chemoattractant leads to the appearance of a “drag force” acting on the bacterium, which is manifested by an effective, smaller, diffusion constant. Effective mobility of a self trapped bacterium in a film ======================================================== We have seen above that in favorable situations the coupling constant can be large, $g\gg 1$. This case occurs, for example, with a small drop (with diameter comparable to the thickness of the culture medium) containing a significant number of bacteria. In the following we consider the bacterium moving at a constant small velocity $v$, and look for the effective diffusion coefficient.
The attractant profile is given by $$\begin{aligned} \frac{\partial c}{\partial t}=D_c\nabla ^2c+\frac{\beta}{e}\delta({\bf r}-{\bf v}t)\end{aligned}$$ We write $c(r)$ as $c=\int c_{\bf k}\exp(i{\bf k}\cdot({\bf r}-{\bf v}t))~{\rm d}{\bf k}$ and obtain for the Fourier component $c_{\bf k}$ $$\begin{aligned} c_{\bf k}=\frac{\beta}{e(D_ck^2-i{\bf k}\cdot{\bf v})}\simeq \frac{\beta}{eD_ck^2}\left(1+\frac{i{\bf k}\cdot{\bf v}}{D_ck^2}\right)\end{aligned}$$ The “force” $f$ acting on the bacterium is $$\begin{aligned} f=\nabla c=\int i{\bf k}c_{\bf k}~{\rm d}{\bf k}=\frac{\beta }{eD_c^2}\int \frac{{\bf k}({\bf k}\cdot{\bf v})}{k^4}~{\rm d}{\bf k}=\frac{\pi\beta v}{2eD_c^2}\ln(k_{\rm max}/k_{\rm min})\end{aligned}$$ Approximating the logarithmic term by unity, we identify the effective mobility $\kappa_{\rm eff}$ as $$\begin{aligned} \kappa_{\rm eff}=\frac{v}{f}=\frac{2eD_c^2}{\pi\beta}\end{aligned}$$ We may return for a moment to a problem of many bacteria with concentration $\rho({\bf r})$ moving in an external concentration field $c_{\rm ext}({\bf r})$. The steady-state bacteria density $\rho=\rho_0 \exp(\kappa c_{\rm ext}/D_b)$ takes the same form when written in terms of the “effective” quantities $\kappa_{\rm eff}$ and $D_{\rm eff}$ instead of $\kappa$ and $D_b$. This means that $$\begin{aligned} \frac{\kappa_{\rm eff}}{D_{\rm eff}}=\frac{\kappa}{D_b}\end{aligned}$$ This relation tells us that the effective diffusion constant $D_{\rm eff}$ is given by $$\begin{aligned} D_{\rm eff}=\frac{D_c}{\pi^2 g}\end{aligned}$$ Hence, bacterial diffusion is greatly diminished because of the chemoattractant cloud which is left behind. Discussion ========== Isolated bacteria moving in thin films (e.g. in the dental plaque) may be slowed down by their own chemoattractant at scales larger than the film thickness. This should be observable in experiments using fluorescent bacteria. In addition, clusters of a few bacteria are nearly stopped; this could be relevant for their ultimate fixation in the plaque.
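As a numerical sanity check on the estimates of Sections II and III, the closed form $D_{\rm eff}=D_c/(\pi^2 g)$ must agree with $D_{\rm eff}=\kappa_{\rm eff}D_b/\kappa$ obtained from the “Boltzmann” relation. The snippet below (variable names ours; parameter values quoted in the text) takes the strong-coupling formula at face value even though the quoted $g\approx 0.73$ is only borderline large:

```python
import math

D_b = 6.6e-6          # cm^2/s, bacterial diffusion constant
D_c = D_b             # cm^2/s, attractant diffusion constant
beta = 1e3            # attractant production rate per bacterium (1/s)
kappa = 1e-14         # cm^5/s, chemotactic coupling
e = 0.05              # cm, thickness of the growth medium

g = kappa * beta / (2 * math.pi * e * D_b * D_c)     # ~0.73
kappa_eff = 2 * e * D_c**2 / (math.pi * beta)        # effective mobility

# "Boltzmann" relation: kappa_eff / D_eff = kappa / D_b
D_eff_relation = kappa_eff * D_b / kappa
D_eff_closed = D_c / (math.pi**2 * g)
assert abs(D_eff_relation / D_eff_closed - 1) < 1e-12
```

With these numbers $D_{\rm eff}\approx 9\times 10^{-7}$ cm$^2$/s, i.e. diffusion is slowed by the factor $\pi^2 g\approx 7$.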
We discussed some effects of the chemoattractant cloud. One may wonder whether there is an analogous effect related to the food problem: the bacterium eats some food, and this creates a depleted food region around it. If this “food hole” is lagging behind the bacterium, there will be more food available ahead, and the bacterium can go faster. (The corresponding transport is reminiscent of a hot-wire anemometer.) However, the resulting food effect goes like $v^2$ (not $v$) and is thus irrelevant for our problem of mobility at low $v$. We would like to thank P. Silberzan for introducing us to the chemoattractant problems and also E. Raphaël for useful comments and discussions.
--- abstract: 'We consider an attacker-operator game for monitoring a large-scale network that is comprised of components that differ in their criticality levels. In this zero-sum game, the operator seeks to position a limited number of sensors to monitor the network against an attacker who strategically targets a network component. The operator (resp. attacker) seeks to minimize (resp. maximize) the network loss. To study the properties of mixed-strategy Nash Equilibria of this game, we first analyze two simple instances: (i) When component sets monitored by individual sensor locations are mutually disjoint; (ii) When only a single sensor is positioned, but with possibly overlapping monitoring component sets. Our analysis reveals new insights on how criticality levels impact the players’ equilibrium strategies. Next, we extend a previously known approach to obtain an approximate Nash equilibrium for the general case of the game. This approach uses solutions to minimum set cover and maximum set packing problems to construct an approximate Nash equilibrium. Finally, we implement a column generation procedure to improve this solution and numerically evaluate the performance of our approach.' author: - 'Jezdimir Milošević$^1$, Mathieu Dahan$^2$, Saurabh Amin$^3$, Henrik Sandberg$^1$[^1]' bibliography: - 'CDC\_2019\_BIB.bib' title: | **A Network Monitoring Game with Heterogeneous\ Component Criticality Levels** --- Introduction ============ Critical infrastructure networks such as water distribution or power networks are attractive targets for malicious attackers [@sandberg2015cyberphysical; @weerakkody2019challenges]. In fact, successful attacks against these networks have already been documented [@slay2007lessons; @case2016analysis], amplifying the need for development of effective defense strategies.
An important part of a defense strategy is attack detection [@nist], which can be achieved by deployment of sensors to monitor the network [@dan2010stealth; @2017arXiv170500349D; @krause2011randomized]. However, if a network is large, it is expected that the number of sensors would be insufficient to enable monitoring of the entire network. Hence, the problem that naturally arises is how to strategically allocate a limited number of sensors in that case. We adopt a game-theoretic approach to tackle this problem. So far, game theory has been used for studying various security-related problems [@zhu2015game; @MIAO201855; @7498672; @shreyasinvestment; @pita2008deployed; @washburn1995two; @bertsimas2016power], including the ones on sensor allocation. The existing works considered developing both static [@stack_metju; @pirani2018game; @ren2018secure] and randomized (mixed) monitoring strategies [@2017arXiv170500349D; @krause2011randomized]. Our focus is on randomized strategies, which are recognized to be more effective than static ones when the number of sensors to deploy is limited [@krause2011randomized; @2017arXiv170500349D]. Our game model is related to the one in [@2017arXiv170500349D]. The network consists of the components to be monitored, and sensor locations can be selected from the predefined set of nodes. From each node, attacks against a subset of components can be detected. However, while [@2017arXiv170500349D] studies the game where the players (the operator and the attacker) make decisions based on a so-called detection rate, in our game the decisions are made based on the component criticality. This game model is motivated by the risk management process, where one first conducts a risk assessment to identify the critical components in the system, and then allocates resources based on the output of the assessment [@nist].
Particularly, the operator seeks to place a limited number of sensors to minimize the loss that is defined through the component criticality, while the attacker seeks to attack a component to maximize it. The monitoring strategy we aim to find is one that lies in a Nash Equilibrium (NE) of the game. Since our game is a zero-sum game, a NE can be calculated by solving a pair of linear programs [@basar1999dynamic]. However, these programs are challenging to solve in our case, since the number of actions of the operator grows rapidly with the number of sensors she has at her disposal. Moreover, a NE calculated using this numerical procedure usually does not provide us with much intuition behind the players’ equilibrium strategies. Our objective in this work is to: (i) Study how the components’ criticality influences the equilibrium strategies of the players; (ii) Investigate if some of the tools from [@2017arXiv170500349D] can be used to calculate or approximate an equilibrium monitoring strategy for our game in a tractable manner. Our contributions are threefold. Firstly, for a game instance where component sets monitored by individual sensor locations are mutually disjoint, we characterize a NE analytically (Theorem \[theorem:analytical\_solution\_special\_case\]). This result provides us with valuable intuition behind the equilibrium strategies, and reveals some fundamental differences compared to the game from [@2017arXiv170500349D]. Particularly, the result illustrates how the components’ criticality influences strategies of the players, that the resource-limited operator can leave some of the noncritical components unmonitored, and that the attacker does not necessarily have to attack these components. We also consider a game instance where a single sensor is positioned but the monitoring sets are allowed to overlap, and extend some of the conclusions to this case (Proposition \[theorem:solution\_special\_case\_3\]).
Secondly, we show that the mixed strategies proposed in [@2017arXiv170500349D] can be used to obtain an approximate NE. In this approximate NE, the monitoring (resp. attack) strategy is formed based on a solution to the minimum set cover (resp. maximum set packing) problem. A similar approach for characterizing equilibria was also used in [@pita2008deployed; @washburn1995two; @bertsimas2016power], yet for specific models and player resources. Our analysis reveals that these strategies may represent an exact or a relatively good approximation of a NE if the component criticality levels are homogeneous, while the approximation quality decreases if the gap between the maximum and the minimum criticality level is large (Theorem \[thm:mix\_strategies\_diff\_indexes\]). Finally, we discuss how to improve the set cover monitoring strategy from the above-mentioned approximate equilibrium. The first approach exploits the intuition from Theorem \[theorem:analytical\_solution\_special\_case\]. Particularly, if a group of the components has a criticality level sufficiently larger than the others, we show that the strategy can be improved by a simple modification (Proposition \[thm:binary\_weights\]). The second approach is by using a column generation procedure (CGP) [@desrosiers2005primer]. This procedure was suggested in [@2017arXiv170500349D] as a possible way to improve the set cover strategy, but it was not tested since the strategy already performed well. We show that CGP can be applied in our game as well, and test it on benchmarks of large-scale water networks. The results show that: (i) The running time of CGP rapidly grows with the number of deployed sensors, but the procedure can still be used for finding an equilibrium monitoring strategy for water networks of several hundred nodes; (ii) Running a limited number of iterations of CGP can considerably improve the set cover monitoring strategy. The paper is organized as follows.
In Section \[section:security\_game\], we introduce the game. In Section \[section:exact\], we discuss two special game instances. In Section \[section:approximate\_strategies\], we show that the strategies from [@2017arXiv170500349D] can be used to obtain an approximate NE, and discuss how the monitoring strategy from this approximate equilibrium can be further improved. In Section \[section:simulations\], we test CGP. In Section \[section:conclusion\], we conclude. Game Description {#section:security_game} ================ Our network model considers a set of components $\mathcal{E}$$=$$\{e_1,\ldots,e_m\}$ that can be potential targets of an attacker, and a set of nodes $\mathcal{V}$$=$$\{v_1,\ldots,v_n\}$ that can serve as sensor positions for the purpose of monitoring. By placing a sensor at node $v$, one can monitor a subset of components $E_v$$ \subseteq$$ \mathcal{E}$, which we refer to as the monitoring set of $v$. Without loss of generality, we assume $E_v$$ \neq$$ \emptyset$, and that every component can be monitored from at least one node. If sensors are positioned at a subset of nodes $V$$\subseteq $$\mathcal{V}$, then the set of monitored components can be written as $E_V$$\coloneqq$$\cup_{v\in V}E_v$. We refer the reader to Fig. \[figure:example\_0\] for an illustration of monitoring sets. ![The set of nodes (resp. components) is $\mathcal{V}$$=$$\{v_1,\ldots,v_4\}$ (resp. $\mathcal{E}$$=$$\{e_1,\ldots,e_7\}$). The monitoring sets are $E_{v_1}$$=$$\{e_1,e_2\}$, $E_{v_2}$$=$$\{e_2,e_3\}$, $E_{v_3}$$=$$\{e_3,\ldots,e_7\}$, and $E_{v_4}$$=$$\{e_5\}$. []{data-label="figure:example_0"}](example_0.pdf){width="75mm"} To study the problem of strategic sensor allocation in the network, we adopt a game-theoretic approach. Specifically, we consider a zero-sum game $\Gamma$$=$$\langle\{1,2\},(\mathcal{A}_1,\mathcal{A}_2),l\rangle$, in which Player 1 (P1) is the operator and Player 2 (P2) is the attacker.
P1 can select up to $b_1$ nodes from $\mathcal{V}$ to place sensors and monitor some of the network components from $\mathcal{E}$. We assume that these sensors are protected, in that they are not subject to the actions of P2. P2 seeks to select a component from $\mathcal{E}$ to attack. We assume that if P1 successfully detects the attack, she can start a response mechanism to mitigate the damage. Thus, in our model, the attack is successful only if it remains undetected by P1. Based on the discussion, the action set of P1 (resp. P2) is $\mathcal{A}_1$$=$$\{V$$ \in $$2^\mathcal{V}$$|$$\hspace{1mm}|V|$$\leq$$ b_1 \}$ (resp. $\mathcal{A}_2$$=$$\mathcal{E}$). The loss function $l$$:$$ \mathcal{A}_1$$ \times$$ \mathcal{A}_2 $$\longrightarrow$$ \mathbb{R}$ is defined by $$\label{eqn:index_and_set_x} l(V,e):=\begin{cases} w_{e}, \hspace{2.5mm} e \notin E_V, \\ \hspace{2mm}0,\hspace{3mm}e \in E_V, \end{cases}$$ where $w_{e}$$\in$$ (0,1]$ is a known constant whose value indicates the level of criticality of the component $e$; the assumption $w_e$$>$$0$ is without loss of generality. For practical purposes, for each $e$$\in$$\mathcal{E}$, $w_e$ can be evaluated as the normalized monetary loss to P1, negative impact on the overall system functionality when the component $e$ is compromised by P2, or a combination of several factors. We assume that P1 (resp. P2) seeks to minimize (resp. maximize) $l$. The players are allowed to use mixed strategies. A mixed strategy of a player is a probability distribution over the set of her pure actions. Particularly, mixed strategies are defined as $$\begin{aligned} & \sigma_{1} \in \Delta_1, \hspace{0.2mm}\Delta_1=\bigg\{\sigma_{1} \in [0,1]^{|\mathcal{A}_1|}\bigg| \sum_{V\in \mathcal{A}_1} \sigma_1(V) =1 \bigg\},\\ & \sigma_{2} \in \Delta_2, \hspace{0.2mm}\Delta_2=\bigg\{\sigma_{2} \in [0,1]^{|\mathcal{A}_2|}\bigg| \sum_{e \in \mathcal{A}_2} \sigma_2(e) =1\bigg\},\end{aligned}$$ where $\sigma_{1}$ (resp. 
$\sigma_{2}$) is a mixed strategy of P1 (resp. P2), and $\sigma_1(V)$ (resp. $\sigma_2(e)$) is the probability the action $V$ (resp. $e$) is taken. One interpretation of mixed strategy $\sigma_1$ for P1 is that it provides a randomized sensing plan; similarly for P2. For example, in a day-to-day play in which both players play myopically, P1 (resp. P2) selects sensor placement (resp. attack) plan according to sampling from probability distribution $\sigma_1$ (resp. $\sigma_2$). In the analysis that follows, it is convenient to characterize $\sigma_1$ through the marginal probabilities. The marginal probability $\rho_{\sigma_{1}}(v)$ is given by $$\begin{aligned} \label{eqn:marginal} \rho_{\sigma_{1}}(v) \coloneqq \sum_{ V \in \mathcal{A}_1, v \in V}\sigma_1(V),\end{aligned}$$ and it represents the probability that a sensor is placed at $v$ if P1 plays $\sigma_{1}$. Next, given $(\sigma_1$$,$$\sigma_2)$$\in$$ \Delta_1 $$\times $$\Delta_2$, the expected loss is defined by $$\begin{aligned} L(\sigma_1,\sigma_2) \coloneqq \sum_{V \in \mathcal{A}_1} \sum_{e \in \mathcal{A}_2} \sigma_1(V) \sigma_2(e) l(V,e).\end{aligned}$$ We use $L(V,\sigma_2)$ (resp. $L(\sigma_1,e)$) to denote the case where $\sigma_1(V)$$=$$1$ (resp. $\sigma_2(e)$$=$$1$) for some $V $$\in$$ \mathcal{A}_1$ (resp. $e$$ \in$$ \mathcal{A}_2$). We are concerned with strategy profile(s) that represent a NE of $\Gamma$. A strategy profile $(\sigma^*_1$$,$$\sigma^*_2)$ is a NE if $$\begin{aligned} L(\sigma_1^{*},\sigma_2) \leq L(\sigma_1^{*},\sigma_2^{*}) \leq L (\sigma_1,\sigma_2^{*}),\end{aligned}$$ holds for all $\sigma_{1}$$ \in$$ \Delta_1$ and $\sigma_{2}$$ \in$$ \Delta_2$. We refer to $L (\sigma_1^{*},\sigma_2^{*})$ as the value of the game. Thus, given that P2 plays according to $\sigma_2^{*}$, P1 cannot perform better than by playing according to randomized monitoring strategy $\sigma_1^{*}$. Additionally, in a zero sum game, the value of the game is equal for every NE. 
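These definitions are concrete enough to evaluate directly on the network of Fig. \[figure:example\_0\]. The snippet below (strategies and criticality values are hypothetical choices of ours, all $w_e=1$) computes the marginals and the expected loss; it illustrates the definitions and is not an equilibrium:

```python
# monitoring sets of Fig. [figure:example_0]
E_v = {"v1": {"e1", "e2"}, "v2": {"e2", "e3"},
       "v3": {"e3", "e4", "e5", "e6", "e7"}, "v4": {"e5"}}
w = {f"e{i}": 1.0 for i in range(1, 8)}     # hypothetical criticalities

sigma1 = {("v1",): 0.5, ("v3",): 0.5}   # P1 mixes two single-sensor placements
sigma2 = {"e2": 0.5, "e5": 0.5}         # P2 mixes two targets

def loss(V, e):
    """l(V, e): criticality w_e if e evades every sensor in V, else 0."""
    return 0.0 if any(e in E_v[v] for v in V) else w[e]

# marginal probability that a sensor sits at node v under sigma1
marginal = {v: sum(p for V, p in sigma1.items() if v in V) for v in E_v}

# expected loss L(sigma1, sigma2)
L = sum(p1 * p2 * loss(V, e)
        for V, p1 in sigma1.items() for e, p2 in sigma2.items())
# e2 evades v3 and e5 evades v1, each with probability 1/4, so L = 0.5
```

Only the placement/target pairs $(v_1,e_5)$ and $(v_3,e_2)$ contribute, each with probability $1/4$.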
Hence, it suffices for P1 to find a single randomized monitoring strategy that lies in equilibrium. A similar argument holds for P2’s randomized attack strategy $\sigma_2^{*}$. We say that a strategy profile $(\sigma^\epsilon_1,\sigma^\epsilon_2)$ is an $\epsilon$–NE of $\Gamma$ if $$\begin{aligned} L(\sigma^\epsilon_1,\sigma_2)-\epsilon \leq L (\sigma^\epsilon_1,\sigma^\epsilon_2) \leq L(\sigma_1,\sigma^\epsilon_2)+\epsilon, \hspace{1mm}\epsilon\geq 0,\end{aligned}$$ for all $\sigma_{1} $$\in $$\Delta_1$ and $\sigma_{2}$$ \in $$\Delta_2$. In this case, if P2 plays according to $\sigma^\epsilon_2$, P1 may be able to decrease her loss by deviating from $\sigma^\epsilon_1$, but by no more than $\epsilon$. Thus, if $\epsilon$ is small enough, $\sigma^\epsilon_{1}$ represents a good suboptimal strategy; similarly for P2. Since $\Gamma$ is a zero-sum game with a finite number of player actions, equilibrium strategies and the value of the game in a NE exist, and can be obtained by solving the following pair of linear programs [@basar1999dynamic] $$\label{eqn:original_LPs} \begin{aligned} (\text{LP}_1)\hspace{2mm}&\underset{z_1,\sigma_1 \in \Delta_1 }{\text{minimize }} z_1 \hspace{2mm}\text{subject to}\hspace{2mm}L(\sigma_1,e)\leq z_1, \forall e\in\mathcal{A}_2, \\ (\text{LP}_2)\hspace{2mm}&\underset{z_2,\sigma_2\in \Delta_2}{\text{maximize }} z_2\hspace{2mm} \text{subject to}\hspace{2mm}L(V,\sigma_2)\geq z_2, \forall V\in \mathcal{A}_1. \end{aligned}$$ Yet, these LPs can be computationally challenging to solve using standard optimization solvers for realistic instances of $\Gamma$. Namely, since the cardinality of $\mathcal{A}_1$ rapidly grows with respect to $b_1$, so does the number of variables (resp. constraints) of $\text{LP}_1$ (resp. $\text{LP}_2$). In the following section, we provide structural properties of equilibria for two simple but instructive cases.
Subsequently, we discuss an approach to compute an $\epsilon$–NE, and then discuss how to further improve the monitoring strategy from this $\epsilon$–NE. Exact Equilibrium Strategies {#section:exact} ============================ In this section, we first study the game instance in which the monitoring sets are mutually disjoint. We then analyze the game in which the monitoring sets can overlap with each other, but P1 can only use a single sensor ($b_1$$=$$1$). Mutually Disjoint Monitoring Sets {#section:first_special_case} --------------------------------- We first derive a NE for an instance of $\Gamma$ where the monitoring sets are mutually disjoint, that is, $E_{v_i} $$\cap$$ E_{v_j}$$=$$\emptyset$ holds for any two nodes $v_i $$\neq $$v_j$. Let $e^*_i$ denote the component from $E_{v_i}$ with the largest criticality $w_{e^*_i}$. One can identify such a component for each of the monitoring sets, and assume without loss of generality $w_{e^*_1}$$\geq $$\ldots $$\geq$$ w_{e^*_n}$. For given $b_1$, we define $Z(b_1)$ as follows: $$\label{eqn:Zset} Z(b_1)=\bigg\{j\in \{1,\ldots,n\} \bigg|\frac{j-b_1}{\sum_{i=1}^{j}1/w_{e^*_i}}\leq w_{e^*_j} \bigg\}.$$ We argue that this set determines the nodes on which P1 places sensors in a NE. Particularly, let $p$ be the largest element of $Z(b_1)$, $E_p$$=$$\{e^*_1,\ldots,e^*_p\}$, $S_p$$=$$\sum_{i=1}^{p}1/w_{e^*_i}$, and $(\sigma^*_1,\sigma^*_2)$ be a strategy profile that satisfies the following conditions: $$\begin{aligned} \label{eqn:def_strategy_analytical} \rho_{\sigma^*_1}(v_j )&= \begin{cases} 1-\frac{p-b_1}{w_{e^*_j}S_p},\hspace{2mm}j\leq p, \\ \hspace{13mm} 0,\hspace{2.2mm}j>p, \end{cases}\\ \label{eqn:att_strategy_analytical} \sigma^*_2(e)&= \begin{cases}\frac{1}{w^*_{e} S_p},\hspace{2mm}e \in E_p, \\ \hspace{5.5mm} 0,\hspace{2mm}\text{otherwise}. \end{cases}\end{aligned}$$ Lemma \[lemma:existance\_eq\_1\] establishes existence of $(\sigma^*_1,\sigma^*_2)$.
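When the monitoring sets are disjoint, the quantities above are explicit enough to compute directly. The sketch below (criticality values $w_{e^*_i}$ are hypothetical) evaluates $Z(b_1)$, the index $p$, the sensor marginals, and the attack distribution, and checks that the marginals sum to $b_1$ as required by the lemma:

```python
def disjoint_equilibrium(w, b1):
    """w: sorted maximum criticalities w_{e_1^*} >= ... >= w_{e_n^*}.
    Returns (p, sensor marginals rho, attack probabilities sigma2,
    game value (p - b1)/S_p)."""
    n = len(w)
    S = [0.0]
    for wi in w:
        S.append(S[-1] + 1.0 / wi)            # S[j] = sum_{i<=j} 1/w_{e_i^*}
    Z = [j for j in range(1, n + 1) if (j - b1) / S[j] <= w[j - 1]]
    p = max(Z)                                # largest element of Z(b1)
    rho = [1 - (p - b1) / (w[j] * S[p]) if j < p else 0.0 for j in range(n)]
    sigma2 = [1.0 / (w[j] * S[p]) if j < p else 0.0 for j in range(n)]
    return p, rho, sigma2, (p - b1) / S[p]

p, rho, sigma2, value = disjoint_equilibrium([1.0, 0.8, 0.5, 0.2], b1=2)
# p = 3: the least critical group stays unmonitored and is never attacked
```

For these hypothetical values the marginals sum to $b_1=2$, the attack distribution sums to 1, and the game value is $1/S_3 = 1/4.25$.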
In Theorem \[theorem:analytical\_solution\_special\_case\], we show that this strategy profile is a NE.  \[lemma:existance\_eq\_1\] There exists at least one strategy profile $(\sigma^*_1,\sigma^*_2)$ that satisfies –. To prove the existence of $\sigma^*_1$, we need to prove: (i) $\rho_{\sigma^*_1}(v )$$\in$$[0,1]$ for any $v$$\in$$\mathcal{V}$; (ii) $\sum_{v\in \mathcal{V}}\rho_{\sigma^*_1}(v)$$=$$b_1$. If (i) and (ii) are satisfied, then $\sigma^*_1 $$\in$$ \Delta_1$ from Farkas' lemma (see Lemma EC.6. [@2017arXiv170500349D]). We begin by proving (i). Note that $b_1$$ \in$$ Z(b_1)$, so $p$$\geq $$b_1$. Then $\frac{p-b_1}{w_{e^*_j}S_p}$$\geq$$ 0$, which implies $\rho_{\sigma^*_1}(v )$$\leq $$1$ for any $v$$\in$$\mathcal{V}$. From $w_{e^*_1}$$\geq $$\ldots $$\geq$$ w_{e^*_p}$ and , we have $\frac{p-b_1}{w_{e^*_1}S_p}$$\leq$$\ldots$$ \leq$$ \frac{p-b_1}{w_{e^*_p}S_p}$$\leq$$ 1.$ Hence, $0 $$\leq $$\rho_{\sigma^*_1}(v )$ must hold for any $v$$\in$$\mathcal{V}$. Thus, (i) is satisfied. In addition, we have $$\sum_{v\in \mathcal{V}}\rho_{\sigma^*_1}(v)\stackrel{\eqref{eqn:def_strategy_analytical}}{=}p-\frac{p-b_1}{S_p}\sum_{i=1}^p \frac{1}{w^*_{e_i}} = b_1,$$ so (ii) holds as well. Thus, $\sigma^*_1 $$\in$$ \Delta_1$. Next, we show $\sigma^*_{2} $$\in$$ \Delta_2$. Firstly, we have from  that $0$$ \leq$$ \sigma^*_{2}(e) $$\leq$$ 1$ for any $e$$\in$$\mathcal{E}$. Moreover, $$\sum_{e\in\mathcal{E}} \sigma^*_{2}(e)\stackrel{\eqref{eqn:att_strategy_analytical}}{=}\frac{1}{S_p} \sum_{e\in E_p}\frac{1}{w^*_{e}}=1,$$ so we conclude $\sigma^*_{2} $$\in$$ \Delta_2$. \[theorem:analytical\_solution\_special\_case\] If $E_{v_i} $$\cap $$E_{v_j}$$=$$\emptyset$ holds for any two nodes $v_i$$\neq $$ v_j$ from $\mathcal{V}$, then any strategy profile $(\sigma^*_1,\sigma^*_2)$ that satisfies – is a NE of $\Gamma$. Let $(\sigma^*_{1},\sigma^*_{2})$ be a strategy profile that satisfies –. We know from Lemma \[lemma:existance\_eq\_1\] that at least one such profile exists. 
We first derive an upper bound on the expected loss if P1 plays ${\sigma}^*_{1}$. Assume P2 targets component $e$ that belongs to $E_{v_j}$, $j\leq p$. Then $$\label{eqn:loss_monitored} \begin{aligned} L({\sigma}^*_{1},e) &= \hspace{-2mm}\sum_{V \in \mathcal{A}_1}\hspace{-2mm}{\sigma}^*_{1}(V) l(V,e) = \hspace{-4mm}\sum_{V \in \mathcal{A}_{1},e \notin E_V}\hspace{-4mm} {\sigma}^*_{1}(V)w_e \\ &=w_e \hspace{-6mm} \sum_{V \in \mathcal{A}_{1},v_j \notin V} \hspace{-4mm} {\sigma}^*_{1}(V) \stackrel{\eqref{eqn:marginal}}{=}w_{e}(1-\rho_{{\sigma}^*_{1}}(v_j))\\ &\stackrel{\eqref{eqn:def_strategy_analytical}}{=}\frac{w_e}{w_{e^*_j}}\frac{p-b_1}{S_p} \stackrel{(*)}{\leq} \frac{p-b_1}{S_p}, \end{aligned}$$ where (\*) follows from the fact that $w_{e^*_j}$ is the largest criticality among the components from $E_{v_j}$. If $p$$=$$n$, this establishes that $\frac{p-b_1}{S_p}$ is an upper bound on P1’s loss. If $p$$<$$n$, there exist nodes that are never selected for sensor positioning, so the components from $E_{v_{p+1}}$$,\ldots,$$E_{v_{n}}$ are never monitored. From , by targeting an unmonitored component $e_l$, P2 can achieve the payoff $w_{e_l}$. Note that $w_{e_l}$ cannot be larger than $w_{e^*_{p+1}}$, because $w_{e^*_{p+1}}$ is the largest criticality for the monitoring set $E_{v_{p+1}}$, and $w_{e^*_{p+1}}\geq \ldots \geq w_{e^*_{n}}$ holds for the remaining sets $E_{v_{p+2}},\ldots, E_{v_{n}}$. Since $p+1$ does not belong to $Z(b_1)$, it follows from  that $$\begin{aligned} w_{e^*_{p+1}} < \frac{p+1-b_1}{S_p+1/w_{e^*_{p+1}}} \Longleftrightarrow w_{e^*_{p+1}}S_p< p-b_1.\end{aligned}$$ Thus, the loss associated with any unmonitored component $e_l$ is upper bounded by $L({\sigma}^*_1,e_l) $$\leq $$w_{e^*_{p+1}}$$<$$ \frac{p-b_1}{S_p}.$ From the latter observation and , we conclude that the loss of P1 cannot be larger than $\frac{p-b_1}{S_p}$. Consider now $\sigma_2^*$. 
For any $V$ such that $|V|\leq b_1$, we have $$\begin{aligned} L(V,\sigma^*_2) &= \sum_{e \in \mathcal{A}_2} \sigma^*_2(e) l(V,e)=\hspace{-5mm} \sum_{i=1,e^*_i \notin E_V}^p \hspace{-5mm} \sigma^*_2 (e^*_i) w_{e^*_{i}} \\ &\stackrel{\eqref{eqn:att_strategy_analytical}}{=} \sum_{i=1,e^*_i \notin E_V}^p \hspace{-2mm}\frac{1/w_{e^*_{i}} }{S_p} w_{e^*_{i}} =\sum_{i=1,e^*_i \notin E_V}^p\hspace{-2mm} \frac{1}{S_p} \stackrel{(**)}{\geq} \ \frac{p-b_1}{S_p},\end{aligned}$$ where (\*\*) follows from the fact that every component $e^*_i$ belongs to a different monitoring set, so at most $b_1$ of them can be monitored by placing sensors at nodes $V$. Thus, we can conclude that $\frac{p-b_1}{S_p}$ is the value of the game, and $({\sigma}^*_{1},{\sigma}^*_{2})$ is a NE of $\Gamma$. We now discuss P1’s equilibrium strategy. From , we see that the probability of P1 placing a sensor at node $v_j$ depends on the corresponding maximum criticality $w_{e^*_j}$: the higher $w_{e^*_j}$ is, the higher the probability of placing a sensor at $v_j$ is. This is intuitive because P1 monitors more critical components with higher probability. Additionally, note that P1 places sensors only on the first $p$ nodes. If $p$$<$$n$, nodes $v_{p+1},\ldots,v_n$ are never allocated any sensor, and hence, the components from $E_{v_{p+1}},\ldots,E_{v_n}$ are never monitored. This is in contrast with the result from [@2017arXiv170500349D], where it was shown that P1 monitors every component with non-zero probability in any NE. Indeed, in our proof, we show that the unmonitored components have criticality lower than the value of the game. Another interesting observation is that the set of nodes on which sensors are allocated also depends on the number of sensors P1 has at her disposal. Particularly, the more sensors P1 has, the more nodes she allocates sensors to, as shown in the following proposition. \[lemma:set\_Z\] Let $b_1$$ \in$$ \mathbb{N}$ (resp. $b_1'$$ \in$$ \mathbb{N}$) be given, and $p$ (resp. 
$p'$) be the largest element of $Z(b_1)$ (resp. $Z(b_1')$). If $b_1$$<$$ b'_1$$\leq n$, then $p$$\leq$$ p'$. Note that $p$ (resp. $p'$) exists, since $b_1$$ \in$$ Z(b_1)$ (resp. $b_1'$$ \in$$ Z(b_1')$). We then have $$w_{e^*_p}\stackrel{\eqref{eqn:Zset}}{\geq} \frac{p-b_1}{S_p}\stackrel{(*)}{>}\frac{p-b'_1}{S_p},$$ where (\*) holds because $b_1$$<$$ b'_1$. Hence, $p$$ \in $$Z(b_1')$. Since $p'$ is the largest element of $Z(b_1')$, $p'\geq p$ must hold. We now discuss P2’s equilibrium strategy. Firstly, it follows from  that P2 targets only the components from $E_p$. Thus, the unmonitored components are not necessarily targeted in equilibrium, again in contrast to [@2017arXiv170500349D]. Indeed, P2 on average gains more by attacking components from $E_p$, even though they may be monitored by P1 with a non-zero probability. Next, observe that the components from $E_p$ with higher criticality are targeted with lower probability. The reason is that P1 monitors high criticality components with higher probability, which results in P2 targeting these components with a lower probability to remain undetected. Finally, note that the number of components P2 attacks is non-decreasing with the number of sensors P1 decides to deploy; this follows from Proposition \[lemma:set\_Z\]. Overlapping Monitoring Sets and Single Sensor {#section:second_special_case} --------------------------------------------- To better understand if some of the conclusions from Section \[section:first\_special\_case\] can be extended to the case of overlapping monitoring sets, we discuss the case of single sensor ($b_1$$=$$1$). 
We introduce the following primal and dual linear programs that characterize the equilibrium for this game instance: $$\begin{aligned} (\mathcal{P})\hspace{1mm}&\underset{x\geq 0 }{\text{maximize}}\hspace{1mm}\sum_{v \in \mathcal{V}} x_{v}\hspace{2mm} \text{subject to } \sum_{\substack{v \in \mathcal{V}\\ e \notin E_{v}}} x_{v} \leq \frac{1}{w_e}, \forall e\in\mathcal{E}, \\ (\mathcal{D})\hspace{1mm}&\underset{y\geq 0 }{\text{minimize}}\hspace{1mm}\sum_{e \in \mathcal{E}} \frac{y_{e}}{w_{e}}\hspace{2mm} \text{subject to } \sum_{ \substack{e \in \mathcal{E} \\ e \notin E_{v}}} y_{e}\hspace{-0.5mm} \geq1, \forall v \in\mathcal{V}. \end{aligned}$$ These problems are reformulations of $\text{LP}_1$ and $\text{LP}_2$ [@basar1999dynamic Section 2]. Under the reasonable assumption that P1 cannot monitor all the components using a single sensor, $(\mathcal{P})$ and $(\mathcal{D})$ are bounded. Moreover, thanks to strong duality, their optimal values coincide. Let $x^*$ be a solution of $(\mathcal{P})$, $y^*$ be a solution of $(\mathcal{D})$, and $J^*$ be the optimal value of these programs. Then the following strategy profile $$\begin{aligned} \label{eqn:lin_prog} \bar{\sigma}^*_1(v)= \frac{x^*_{v}}{ J^*}, \hspace{5mm}\bar{\sigma}^*_2(e)= \frac{y^*_{e}}{J^*w_{e}}, \end{aligned}$$ is a NE of $\Gamma$. \[theorem:solution\_special\_case\_3\] Let $b_1$$=$$1$, and assume that $E_v$$ \neq $$\mathcal{E}$ for any $v \in \mathcal{V}$. The strategy profile  is a NE of $\Gamma$, and the value of the game is $L(\bar{\sigma}^*_1,\bar{\sigma}^*_2)=\frac{1}{J^*}$. If $E_v$$ \neq $$\mathcal{E}$ for any $v $$\in $$\mathcal{V}$, then $(\mathcal{D})$ is feasible. For example, $y_{e_1}$$=$$\ldots$$=$$y_{e_m}$$=$$1$ represents a feasible solution of $(\mathcal{D})$. Thus, $J^*$ is bounded and the strategy profile  is well-defined. 
Now, for any $e $$\in$$ \mathcal{E}$, we have $$\begin{aligned} L (\bar{\sigma}^*_1,e) = \sum_{v \in \mathcal{V}} \bar{\sigma}^*_1(v) l(v,e) \stackrel{\eqref{eqn:index_and_set_x},\eqref{eqn:lin_prog}}{=} \frac{ w_e}{J^*} \sum_{v \in \mathcal{V}, e \notin E_{v}} x^*_{v}. \end{aligned}$$ Note that $ w_e\sum_{v \in \mathcal{V}, e \notin E_{v}} x^*_{v}$$\leq $$1$, since $x^*$ is a solution of $(\mathcal{P})$. Thus, $L(\bar{\sigma}^*_1,e)$$\leq$$ \frac{1}{J^*}$ for any $e $$\in $$\mathcal{E}$. Similarly, for any $v$$ \in$$ \mathcal{V}$ $$\begin{aligned} L (v,\bar{\sigma}^*_2) = \sum_{e \in \mathcal{E}} \bar{\sigma}^*_2(e) l(v,e) \stackrel{\eqref{eqn:index_and_set_x},\eqref{eqn:lin_prog}}{=} \sum_{ e\in \mathcal{E}, e \notin E_{v} } \frac{y^*_{e} }{ J^*} , \end{aligned}$$ where $\sum_{e\in \mathcal{E}, e \notin E_{v}} $$y^*_{e} $$\geq $$1$ since $y^*$ is a solution of $(\mathcal{D})$. Thus, $L (v,\bar{\sigma}^*_2)\geq \frac{1}{J^*}$ for any $v \in \mathcal{V}$. Hence, $\frac{1}{J^*}$ is the value of the game, and $(\bar{\sigma}^*_1,\bar{\sigma}^*_2)$ is a NE. To understand P1’s equilibrium strategy, note that $x^*_{v}$ can be viewed as a scaled probability of inspecting $v$. By inserting $x^*$ into the constraints of $(\mathcal{P})$, and dividing them by $J^*$, we obtain $ \sum_{v \in \mathcal{V},e \notin E_{v}} \frac{x^*_{v}}{J^*} $$\leq$$ \frac{1}{ w_e}L(\bar{\sigma}^*_1,\bar{\sigma}^*_2),$$\forall $$e\in\mathcal{E}.$ One can now verify that the left side of this inequality is the probability of *not* monitoring $e$. Thus, if $w_e $$\leq$$L(\bar{\sigma}^*_1,\bar{\sigma}^*_2)$, then P1 can leave $e$ unmonitored. Otherwise, P1 monitors $e$ with non–zero probability. Additionally, the higher $w_e$ is, the lower the probability that $e$ is left unmonitored. 
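For small instances, $(\mathcal{P})$ can be solved directly and the equilibrium of Theorem \[theorem:solution\_special\_case\_3\] recovered from its solution. The sketch below uses SciPy's `linprog`; the function name and toy data are our own choices.

```python
import numpy as np
from scipy.optimize import linprog

def single_sensor_equilibrium(mon_sets, w):
    """Solve (P) for b_1 = 1: maximize sum_v x_v subject to, for every
    component e, the total x_v over nodes NOT monitoring e being <= 1/w_e.
    Returns the game value 1/J* and P1's strategy sigma_1(v) = x*_v / J*."""
    n, m = len(mon_sets), len(w)
    A_ub = np.array([[0.0 if e in mon_sets[v] else 1.0 for v in range(n)]
                     for e in range(m)])
    b_ub = np.array([1.0 / w[e] for e in range(m)])
    res = linprog(-np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
    J = -res.fun                     # optimal value J* of (P)
    return 1.0 / J, res.x / J

# Two nodes, E_{v_1} = {e_1}, E_{v_2} = {e_2}, unit criticalities:
# J* = 2, so the value is 1/2 and P1 mixes uniformly.
value, sigma1 = single_sensor_equilibrium([{0}, {1}], [1.0, 1.0])
```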
Note that all these observations are similar to the ones we made for the case discussed in Section \[section:first\_special\_case\]. In P2’s equilibrium strategy, $y^*_{e}$ can be interpreted as the scaled gain that P2 achieves by targeting $e$. Namely, by inserting $y^*$ into the constraints of $(\mathcal{D})$, and dividing them by $J^*$, we obtain $ \sum_{e\in \mathcal{E},e \notin E_{v}}$$ \frac{y^*_{e}}{J^*} $$\geq $$L(\bar{\sigma}^*_1,\bar{\sigma}^*_2),$$\forall $$v$$\in$$\mathcal{V}. $ The left-hand side of the inequality represents P2’s payoff once P1 monitors $v$. Thus, the constraints of $(\mathcal{D})$ guarantee that P2’s payoff is at least $\frac{1}{J^*}$. Next, P2’s objective is to minimize $\sum_{e \in \mathcal{E}}$$ \frac{y_{e}}{w_{e}}$, so she has more incentive to increase $y_{e}$ for components whose criticality $w_{e}$ is higher. This is consistent with the attack strategy , where P2 targeted the components from $E_p$. Additionally, assume that the components $e_1$ and $e_2$ are associated with the same value of the scaled gain, that is, $y_{e_1}$$=$$y_{e_2}$. It then follows from  that the component with higher criticality has a lower probability of being targeted by P2, which is another similarity with . Although the discussion above provides us with some game-theoretic intuition, we are unable to say more about a NE  since $x^*$ and $y^*$ are unknown. In the next section, we introduce an $\epsilon$-NE that can give us more insight into equilibrium strategies in the general case of the game. Approximate Equilibrium Strategies {#section:approximate_strategies} ================================== In this section, we show that the mixed strategies developed in [@2017arXiv170500349D] can be used to obtain an $\epsilon$-NE for $\Gamma$, and discuss possible ways to improve the monitoring strategy from this $\epsilon$-NE. We begin by introducing necessary preliminaries. 
Preliminaries ------------- We first define set packings and set covers, which are two essential notions that we use subsequently. We say that $E \in 2^\mathcal{E}$ is: (1) A set packing, if for all $v \in \mathcal{V}$, $|E_{v} \cap E|\leq 1$; (2) A *maximum* set packing, if $|E'|\leq |E|$ holds for every other set packing $E'$. We say that $V \in 2^\mathcal{V}$ is: (1) A set cover, if $E_V=\mathcal{E}$; (2) A *minimum* set cover if $|V|\leq |V'|$ holds for every other set cover $V'$. Set packings are of interest to P2. Namely, each of the components from a set packing needs to be monitored by a separate sensor. Thus, by randomizing the attack over a set packing, P2 can make it more challenging for P1 to detect the attack. Similarly, set covers are of interest to P1. In fact, if P1 is able to form a set cover using $b_1$ sensors, she can monitor all the components. In that case, $\Gamma$ is easy to solve in pure strategies, as shown in the following proposition. \[thm:purestrategies\] A pure strategy profile ($V^*,e^*$) is a NE of $\Gamma$ if and only if $V^*$ is a set cover and $|V^*|\leq b_1$. ($\Rightarrow$) The proof is by contradiction. Let $(V^*,e^*)$ be a NE in which $V^*$ is not a set cover. Assume first that $l(V^*,e^*)$$=$$0$. Since $V^*$ is not a set cover, P2 can attack $e$$ \notin$$ E_{V^*}$. Then $l(V^*,e)$$=$$w_e$$>$$l(V^*,e^*)$$=$$0$, so $(V^*,e^*)$ cannot be a NE. The remaining option is $l(V^*,e^*)$$>0$. In this case, P1 can select to play some $V$ with $e^* $$\in $$E_{V}$, and decrease the loss to 0. Thus, $(V^*,e^*)$ cannot be a NE in this case either. ($\Leftarrow$) If $|V^*|$$\leq$$ b_1$, then $V^*$$ \in$$ \mathcal{A}_1$. Furthermore, if $V^*$ is a set cover, then $l(V^*,e)$$=$$0$ for all $e$$ \in$$ \mathcal{A}_2$. Thus, P1 cannot decrease the loss any further, and P2 cannot increase it, which implies $(V^*,e^*) $ is a NE. 
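The definitions above translate directly into code. A small sketch follows; the function names are ours, and the brute-force search is meant only for small instances.

```python
from itertools import combinations

def is_set_cover(V, mon_sets, components):
    """V is a set cover if the nodes in V jointly monitor every component."""
    return set().union(*(mon_sets[v] for v in V)) == set(components)

def is_set_packing(E, mon_sets):
    """E is a set packing if no monitoring set contains two of its components."""
    return all(len(Ev & set(E)) <= 1 for Ev in mon_sets.values())

def pure_ne_exists(mon_sets, components, b1):
    """By the proposition above, a pure NE exists iff some set cover fits the
    budget b_1.  Checked by brute force over node subsets."""
    return any(is_set_cover(V, mon_sets, components)
               for r in range(1, b1 + 1)
               for V in combinations(mon_sets, r))

mon_sets = {0: {0, 1}, 1: {1, 2}}   # E_{v_1} = {e_1, e_2}, E_{v_2} = {e_2, e_3}
print(pure_ne_exists(mon_sets, {0, 1, 2}, b1=2))   # both nodes form a set cover
```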
A more interesting and practically relevant situation is one in which P1 is not able to monitor all the components simultaneously due to a limited sensing budget. Therefore, we henceforth assume that P1 cannot form a set cover using $b_1$ sensors; i.e. $b_1$$ < $$|V|$ holds for any set cover $V$$ \in$$ 2^\mathcal{V}$. Set Cover/Set Packing Based Strategies -------------------------------------- We now introduce the mixed strategies constructed using the notions of a minimum set cover and a maximum set packing. Particularly, let $V^*$ (resp. $E^*$) be a minimum set cover (resp. a maximum set packing), and $n^*$$\coloneqq$$ |V^*|$ (resp. $m^*$$\coloneqq $$|E^*|$). Following [@2017arXiv170500349D], we consider the mixed strategies $\sigma^\epsilon_{1}$ and $\sigma^\epsilon_{2}$ characterized by $$\begin{aligned} \label{eqn:def_strategy_covers} \rho_{\sigma^\epsilon_{1}}(v)&= \begin{cases} \frac{b_1}{n^*},\hspace{2mm}v\in V^*, \\ \hspace{2.2mm}0,\hspace{2mm}v\notin V^*, \end{cases}\\ \label{eqn:att_strategy_packings} \sigma^\epsilon_{2}(e)&= \begin{cases}\frac{1}{m^*},\hspace{2mm}e\in E^*, \\ \hspace{2.5mm} 0,\hspace{2.2mm}e\notin E^*. \end{cases}\end{aligned}$$ In other words, P1 places sensors only on nodes from $V^*$ with probability $\frac{b_1}{n^*}$. Since $V^*$ is a set cover, it follows that every component is monitored with probability at least $\frac{b_1}{n^*}$. The strategy of P2 is to attack the components from $E^*$ with probability $\frac{1}{m^*}$. The proof of existence of a strategy profile $(\sigma^\epsilon_{1},\sigma^\epsilon_{2})$ satisfying – is by construction, and can be found in [@2017arXiv170500349D Lemma 1]. Let $w_{\min}$$\coloneqq$$\min_{e \in \mathcal{E}}$$w_e$, $w_{\max}$$ \coloneqq$$\max_{e \in \mathcal{E}}$$ w_e$, and $\Delta_w $$\coloneqq$$ w_{\max}$$-$$w_{\min}.$ The following theorem establishes that $(\sigma^\epsilon_{1},\sigma^\epsilon_{2})$ is an $\epsilon$–NE, and gives the worst case values for $\epsilon$ and P1’s loss. 
\[thm:mix\_strategies\_diff\_indexes\] Any strategy profile that satisfies – is an $\epsilon$-NE of $\Gamma$, where $$\epsilon=\underbrace{b_1 w_{\min} \frac{n^*-\max\{b_1,m^*\}}{n^*\max\{b_1,m^*\}}}_{=\epsilon_1}+\underbrace{\Delta_w\frac{n^*-b_1}{n^*}}_{=\epsilon_2}.$$ Furthermore, for any $\sigma_2 $$\in $$\Delta_2$, we have $$\label{eqn:worst_case_loss_set_cover} L(\sigma^{\epsilon}_1,\sigma_2) \leq w_{\max} \frac{n^*-b_1}{n^*}.$$ We first derive an upper bound on P1’s expected loss if she plays ${\sigma}^{\epsilon}_{1}$. Let $e$ be an arbitrary component, and $ \mathcal{A}_{1}' $$=$$ \{V $$\in$$ \mathcal{A}_1$$|$$ l(V,e)$$=$$w_e$$ \}$ be the set of sensor placements in which $e$ is not monitored. The expected loss $L({\sigma}^{\epsilon}_{1},e)$ is then $$\begin{aligned} L({\sigma}^{\epsilon}_{1},e)&= \sum_{V \in \mathcal{A}_1}\sigma^{\epsilon}_1(V) l(V,e)=w_e \sum_{V \in \mathcal{A}_{1}'} \sigma^{\epsilon}_1(V).\end{aligned}$$ Note that $\sum_{V \in \mathcal{A}_{1}'}\sigma^{\epsilon}_1(V)$ represents the probability that $e$ is not monitored. This probability is at most $1-\frac{b_1}{n^*}$, since P1 inspects every element of a set cover with probability $\frac{b_1}{n^*}$. Moreover, $w_{e}\leq w_{\max}$. It then follows that $$\label{eqn:lb_equal_prob} \begin{aligned} L(\sigma^{\epsilon}_1,e) \leq w_{\max} \frac{n^*-b_1}{n^*}=\bar{L}, \end{aligned}$$ which confirms . We now derive a lower bound on the expected payoff of P2 if she plays $\sigma^{\epsilon}_2$. Let $V$ be an arbitrary element of $\mathcal{A}_{1}$, and $ E' $$= $$ \{e $$\in $$ E^*| l(V,e) $$= $$w_e\}$ be the set of components from $E^*$ that are not monitored from $V$. 
Then $$\begin{aligned} L(V,\sigma^{\epsilon}_2) &= \sum_{e \in \mathcal{E}}\sigma^{\epsilon}_2(e) l(V,e) \stackrel{\eqref{eqn:att_strategy_packings}}{=} \frac{1}{m^*} \sum_{e \in E' } w_e \\ & \geq \frac{1}{m^*}\sum_{e \in E' } w_{\min} =\frac{ |E' |}{m^*} w_{\min}.\end{aligned}$$ Since $E^*$ is a maximum set packing and $|V|\leq b_1$, at most $b_1$ components can be monitored by positioning $V$. Therefore, $|E'|\geq \max\{0,m^* -b_1\}$, and we conclude $$\label{eqn:ub_equal_prob} L(V,\sigma^{\epsilon}_2) \geq w_{\min}\frac{ \max{ \{0,m^* -b_1\} }}{m^*}=\underline{L}.$$ From  and , it follows that $\underline{L} \leq L(\sigma^{\epsilon}_1,\sigma^{\epsilon}_2)\leq \bar{L}$. Thus, $(\sigma^{\epsilon}_1,\sigma^{\epsilon}_2)$ is an $\epsilon$-NE, where $$\begin{aligned} \epsilon&= \bar{L}-\underline{L}=w_{\max} \frac{n^*-b_1}{n^*} - w_{\min}\frac{ \max{ \{0,m^* -b_1\} }}{m^*} \\ &= (w_{\min}+\Delta_w) \frac{n^*-b_1}{n^*} - w_{\min}\frac{ \max{ \{0,m^* -b_1\} }}{m^*}\\ &= b_1 w_{\min} \frac{n^*-\max\{b_1,m^*\}}{n^*\max\{b_1,m^*\}}+\Delta_w\frac{n^*-b_1}{n^*}. \text{\hspace{13mm}}\end{aligned}$$ This concludes the proof. From Theorem \[thm:mix\_strategies\_diff\_indexes\], we can draw the following conclusions. If all the components have the same criticality level, then $\Delta_w$$=$$0$ and $\epsilon_2$$=$$0$. In that case, $\epsilon_1$$=$$0$ if $n^*$$=$$m^*$, and $(\sigma^\epsilon_{1},\sigma^\epsilon_{2})$ is an exact NE. Although $n^*$$=$$m^*$ may look like a restrictive condition, it turns out that $n^*$ and $m^*$ are often equal or close to each other in practice [@2017arXiv170500349D]. Also note that the strategy profile constructed using – differs from the equilibrium profile developed in Section \[section:first\_special\_case\] in two aspects: (i) Since $V^*$ is a set cover, every component is monitored with non-zero probability; (ii) The set of nodes where sensors are placed (resp. 
the set of attacked components) does not change with $b_1$, that is, it is always $V^*$ (resp. $E^*$). However, if $\Delta_w$ is large, $\epsilon$ can be large even if $n^*$$=$$m^*$. The strategies $\sigma^\epsilon_{1}$ and $\sigma^\epsilon_{2}$ may fail in this case because they assume every component to be equally critical. For instance, consider the case from Fig. \[figure:example\_2\]. We have $V^*$$=$$\{v_1,v_2\}$, $E^*$$=$$\{e_1,e_3\}$, the criticality of blue (resp. red) components is $w_{\min}$ (resp. $w_{\max}$), and $b_1$$=$$1$. From Fig. \[figure:example\_2\] a), we see that P1 monitors $e_1$ and $e_3$ with equal probability, although they have different criticality levels. Thus, the best response of P2 is to target $e_3$, which results in the worst case loss of P1. Similarly, as seen in Fig. \[figure:example\_2\] b), P2 targets the components $e_1$ and $e_3$ with equal probability. The best response of P1 is then to monitor $e_3$, leaving P2 with the lowest payoff. ![The figure illustrates why the strategies $\sigma^\epsilon_{1}$ and $\sigma^\epsilon_{2}$ may fail. The criticality of red (resp. blue) components is $w_{\max}$ (resp. $w_{\min}$).[]{data-label="figure:example_2"}](example_2.pdf){width="75mm"} Nevertheless, the set cover strategy $\sigma_1^\epsilon$ has several favorable properties. Firstly, we note that by playing $\sigma_1^\epsilon$, P1 cannot lose more than . Thus, if $b_1$ is close to $n^*$, the worst case loss  and $\epsilon$ approach 0, and $\sigma_1^\epsilon$ represents a good approximation of an equilibrium monitoring strategy. If $b_1$$=$$n^*$, both the worst case loss  and $\epsilon$ are 0, and $\sigma^\epsilon_1$ becomes a pure equilibrium strategy by Proposition \[thm:purestrategies\]. Secondly, this strategy is easy to construct. Namely, once $V^*$ is known, one can straightforwardly find $\sigma_1^\epsilon$ that satisfies  (see [@2017arXiv170500349D Lemma 1]). 
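Since $n^*$, $m^*$, $b_1$, $w_{\min}$, and $w_{\max}$ are all readily available, the $\epsilon$ of Theorem \[thm:mix\_strategies\_diff\_indexes\] and the worst case loss bound can be evaluated before committing to $\sigma_1^\epsilon$. A sketch follows (the function name is ours):

```python
def eps_bound(n_star, m_star, b1, w_min, w_max):
    """epsilon = eps_1 + eps_2 from the theorem, together with the worst case
    loss bound w_max (n* - b1)/n* for the set cover strategy sigma_1^eps."""
    M = max(b1, m_star)
    eps1 = b1 * w_min * (n_star - M) / (n_star * M)
    eps2 = (w_max - w_min) * (n_star - b1) / n_star   # Delta_w (n* - b1)/n*
    return eps1 + eps2, w_max * (n_star - b1) / n_star

# Equal criticalities and n* = m*: the profile is an exact NE (eps = 0),
# while a large Delta_w inflates eps even when n* = m*.
eps, worst = eps_bound(n_star=5, m_star=5, b1=2, w_min=1.0, w_max=1.0)
```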
Although calculating $V^*$ is an NP–hard problem, modern integer linear program solvers can obtain a solution of this problem for relatively large values of $n$, and greedy heuristics can be used for finding an approximation of $V^*$ with performance guarantees [@chvatal1979greedy]. Finally, $\sigma_1^\epsilon$ can be further improved in several ways, as discussed next. Improving the Set Cover Monitoring Strategy ------------------------------------------- ### Increasing $b_1$ As we already mentioned, both the worst case loss  and $\epsilon$ approach 0 when $b_1$ approaches $n^*$. Thus, an obvious way to improve $\sigma_1^\epsilon$ is by increasing $b_1$. ### Focusing on highest criticality components Assume a situation where a group of components $\bar{\mathcal{E}}$ has criticality $w_{\max}$ that is much larger than the criticality of the remaining components. In Section \[section:exact\], we showed that depending on $b_1$ and the components’ criticality, P1 (resp. P2) may focus on monitoring (resp. attacking) the components with the highest criticality, while neglecting the others. Let $\bar{w}_{\max}$ be the largest criticality among the components $\mathcal{E}$$\setminus$$ \bar{\mathcal{E}}$. We show that if $\bar{\Delta}_w$$:=$$w_{\max}$$-$$\bar{w}_{\max}$$ \geq$$ w_{\max} \frac{ b_1}{\bar{n}^*}$, a small modification of the strategies – can give us a potentially improved $\epsilon$-NE. Particularly, let $\bar{V}^*$ (resp. $\bar{E}^*$) be a minimum set cover for $\bar{\mathcal{E}}$ (resp. 
maximum set packing of $\bar{\mathcal{E}}$), $\bar{n}^*$$ \coloneqq$$|\bar{V}^*|$, $\bar{m}^* $$\coloneqq$$ |\bar{E}^*|$, and $(\bar{\sigma}^\epsilon_{1},\bar{\sigma}^\epsilon_{2})$ be a strategy profile that satisfies $$\begin{aligned} \label{eqn:def_strategy_covers_bin} \rho_{\bar{\sigma}^\epsilon_{1}}(v)&= \begin{cases} \frac{b_1}{\bar{n}^*},\hspace{2mm}v\in \bar{V}^*, \\ \hspace{2mm}0,\hspace{2mm}v\notin \bar{V}^*, \end{cases}\\ \label{eqn:att_strategy_packings_bin} \bar{\sigma}^\epsilon_{2}(e)&= \begin{cases}\frac{1}{\bar{m}^*},\hspace{2mm}e\in \bar{E}^*, \\ \hspace{3mm} 0,\hspace{2.2mm}e\notin \bar{E}^*. \end{cases}\end{aligned}$$ In other words, P1 (resp. P2) focuses on monitoring (resp. targeting) the components $\bar{\mathcal{E}}$ using the strategy $\bar{\sigma}^\epsilon_{1}$ (resp. $\bar{\sigma}^\epsilon_{2}$). The proof that $(\bar{\sigma}^\epsilon_{1},\bar{\sigma}^\epsilon_{2})$ exists is the same as for $(\sigma^\epsilon_{1},\sigma^\epsilon_{2})$. The following then holds. \[thm:binary\_weights\] If $\bar{\Delta}_w$$\geq $$ w_{\max}\frac{ b_1}{\bar{n}^*}$, then any strategy profile that satisfies – is an $\bar{\epsilon}$-NE of $\Gamma$, where $$\bar{\epsilon}=b_1 w_{\max} \frac{\bar{n}^*-\max\{b_1,\bar{m}^*\}}{\bar{n}^*\max\{b_1,\bar{m}^*\}}.$$ Furthermore, for any $\sigma_2 $$\in $$\Delta_2$, we have $$\label{eqn:worst_case_loss_set_cover_2} L(\bar{\sigma}^\epsilon_{1},\sigma_2 ) \leq w_{\max} \frac{\bar{n}^*-b_1}{\bar{n}^*}.$$ Assume P1 plays according to . If P2 attacks $e $$\in $$\bar{\mathcal{E}}$, we can show using the same reasoning as in the proof of Theorem \[thm:mix\_strategies\_diff\_indexes\] that $L(\bar{\sigma}^\epsilon_{1},e) $$ \leq $$ w_{\max} \frac{\bar{n}^*-b_1}{\bar{n}^*} $$= $$\bar{L}. 
$ If P2 attacks $e$$\in $$ \mathcal{E} $$\setminus $$ \bar{\mathcal{E}}$, we have $$L(\bar{\sigma}^\epsilon_{1},e) \stackrel{(*)}{\leq} \bar{w}_{\max} \stackrel{(**)}{\leq} w_{\max} \frac{\bar{n}^*-b_1}{\bar{n}^*}=\bar{L},$$ where (\*) follows from the fact that the largest loss occurs when $e$ is unmonitored and has criticality $\bar{w}_{\max}$, and (\*\*) from $\bar{\Delta}_w$$\geq $$w_{\max}$$ \frac{ b_1}{\bar{n}^*}$. Thus, P1 loses at most $\bar{L}$ by playing according to $\bar{\sigma}^\epsilon_{1}$. If P2 plays according to , we obtain $$L(V,\bar{\sigma}^\epsilon_{2})\geq w_{\max}\frac{ \max{ \{0,\bar{m}^* -b_1\} }}{\bar{m}^*}=\underline{L},$$ by following the same steps as in the proof of Theorem \[thm:mix\_strategies\_diff\_indexes\]. Thus, $(\bar{\sigma}^\epsilon_{1},\bar{\sigma}^\epsilon_{2})$ is an $\bar{\epsilon}$–NE with $\bar{\epsilon}=\bar{L}-\underline{L}$. Proposition \[thm:binary\_weights\] has two consequences. Firstly, since $\bar{n}^*$$\leq $$n^*$, the worst case loss  achieved with strategy $\bar{\sigma}_1^\epsilon$ cannot be larger than the one given by . Secondly, if $\bar{n}^*$$=$$\bar{m}^*$, we have that any strategy profile that satisfies  – is a NE, so $\bar{\sigma}_1^\epsilon$ is an equilibrium monitoring strategy. ### Numerical approach We now briefly explain how the column generation procedure (CGP) [@desrosiers2005primer] can be used for improving the set cover monitoring strategy $\sigma_1^\epsilon$. We refer the interested reader to the Appendix for more details. We begin by rewriting $\text{LP}_1$ in the form $$\label{eqn:cg_problem} \underset{\sigma_1 \geq0,z_1 \geq0}{\text{minimize}} \hspace{2mm} z_1 \hspace{3mm} \text{subject to }A \sigma_1+\textbf{1} z_1 \geq 0, \hspace{1mm}\textbf{1}^T\sigma_1=1,$$ where $A$ is a matrix representation of $\Gamma$. Note that every element of $\sigma_1$ corresponds to a possible pure strategy from $\mathcal{A}_1$. 
Since the number of pure strategies grows quickly with $b_1$, we cannot directly solve  due to the size of the decision vector. However, the number of inequality constraints is always $m$, which allows us to use CGP to solve . The first step of CGP is to solve the master problem, which is obtained from  by considering only a subset $\tilde{\mathcal{A}}_1$ of pure strategies. Hence, to form the master problem, we only generate columns of $A$ that correspond to variables $\tilde{\mathcal{A}}_1$, which explains the name of the procedure. In our case, we initialize $\tilde{\mathcal{A}}_1$ with those pure strategies that are played with non-zero probability once P1 employs the set cover monitoring strategy $\sigma^\epsilon_1$ (see [@2017arXiv170500349D Lemma 1] for construction of these strategies). Once a solution $(\tilde{z}_1^*,\tilde{\sigma}_1^*)$ of the master problem is calculated, one solves the sub-problem $$\label{eqn:cg_SP_problem} \text{maximize}_{V \in \mathcal{A}_1} \hspace{2mm} (\rho^*)^T a_V + \pi^*,$$ where $(\rho^*,\pi^*)$ is a dual solution of the master problem and $a_V$ is the column of $A$ that corresponds to a pure strategy $V$. If the sub-problem detects a column with negative reduced cost, $\tilde{z}_1^*$ can be decreased. We then add a solution of  to $\tilde{\mathcal{A}}_1$, and proceed to the next iteration. Otherwise, $\tilde{z}_1^*$ (resp. $\tilde{\sigma}_1^*$) is the value of the game (resp. an equilibrium monitoring strategy), and we stop the procedure. The key point of CGP is to be able to solve  efficiently, which is not necessarily the case for every linear program. However, in the case of $\text{LP}_1$, $A$ is determined based on the loss function $l$ and has a structure that allows us to obtain a solution and the optimal value of  by solving a binary linear program. 
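For small instances the whole procedure can be prototyped by brute-forcing the sub-problem instead of solving the binary program. The sketch below is our own rendering: the function names, the use of SciPy, and the attacker-side master formulation are illustrative choices; the master is solved from P2's side so that the optimal restricted attack distribution plays the role of the dual solution $(\rho^*,\pi^*)$.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def loss_row(V, mon_sets, w):
    """l(V, e) for every component e: w_e if e is unmonitored from V, else 0."""
    covered = set().union(*(mon_sets[v] for v in V))
    return np.array([0.0 if e in covered else w[e] for e in range(len(w))])

def column_generation(mon_sets, w, b1, init):
    """CGP sketch: `init` seeds the restricted strategy set, e.g. the support
    of the set cover monitoring strategy.  Each round solves the restricted
    game and brute-forces the sub-problem (a binary program in the text)."""
    cols = [tuple(V) for V in init]
    m = len(w)
    while True:
        k = len(cols)
        L = np.array([loss_row(V, mon_sets, w) for V in cols])    # k x m
        # master (attacker side): max z_2 s.t. L(V, sigma_2) >= z_2, V in cols
        res = linprog(np.r_[np.zeros(m), -1.0],
                      A_ub=np.hstack([-L, np.ones((k, 1))]), b_ub=np.zeros(k),
                      A_eq=np.r_[np.ones(m), 0.0][None, :], b_eq=[1.0],
                      bounds=[(0, None)] * m + [(None, None)])
        sigma2, z = res.x[:m], res.x[-1]
        # sub-problem: pure placement with minimal expected loss against sigma2
        best = min((tuple(V) for r in range(1, b1 + 1)
                    for V in combinations(mon_sets, r)),
                   key=lambda V: float(loss_row(V, mon_sets, w) @ sigma2))
        if best in cols or loss_row(best, mon_sets, w) @ sigma2 >= z - 1e-6:
            return z, cols          # no column with negative reduced cost left
        cols.append(best)

# Disjoint singleton monitoring sets with criticalities 4 > 2 > 1, b_1 = 1,
# seeded with the single placement {v_1}: the value converges to 4/3.
value, cols = column_generation({0: {0}, 1: {1}, 2: {2}}, [4.0, 2.0, 1.0],
                                b1=1, init=[(0,)])
```

The recovered value matches the closed form $(p-b_1)/S_p$ from Section \[section:first\_special\_case\].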
Additionally, this program has $n$$+$$m$ binary decision variables and $m$$+$$1$ constraints for any $b_1$, so it can be solved efficiently for relatively large values of $n$ and $m$ using modern solvers. This allows us to use CGP to find or approximate an equilibrium monitoring strategy for networks of relatively large size, as shown in the next section. Numerical Study {#section:simulations} =============== We now test CGP on the large-scale water network benchmarks ky4 and ky8 [@jolly2013research]. These networks can be modeled with a directed graph. The vertices of the graph model pumps, junctions, and water tanks. The edges model pipes, and the edge direction is adopted to be in the direction of the water flow. We consider attacks where P2 injects contaminants into a water network, while P1 allocates sensors to detect contaminants. In this case, $\mathcal{E}$ are the locations where contaminants can be injected, and $\mathcal{V}$ are the locations where sensors can be placed. We adopt $\mathcal{E}$ and $\mathcal{V}$ to be the vertices of the water network graph. The monitoring sets were formed as follows: if a water flow from contamination source $e$ passes through $v$, then $e$ belongs to $E_v$ [@de2019optimal]. Criticality $w_{e}$ in this case can characterize the normalized population affected by contaminants injected in $e$ [@berry2005sensor]. For simplicity, we generated $w_e$ randomly. We remark that $n$$=$$m$$=$$964$ (resp. $n$$=$$m$$=$$1332$) for the ky4 (resp. ky8) network. We first measured how much time it takes to construct the set cover monitoring strategy $\sigma_1^\epsilon$, and to further improve it to an equilibrium monitoring strategy using CGP. We considered the ky4 and ky8 networks, and varied $b_1$. The results are shown in Fig. \[figure:sim\_1\]. 
Notice that the longest running time was 1180 seconds, which demonstrates that CGP may allow us to improve $\sigma_1^\epsilon$ to an equilibrium monitoring strategy for networks of relatively large size. However, we also see that the running time rapidly grows with $b_1$ and the network size. This indicates that this way of calculating an equilibrium monitoring strategy may become inefficient if the network size exceeds several thousand nodes. ![ Time needed to calculate an equilibrium monitoring strategy using CGP for different values of $b_1$. []{data-label="figure:sim_1"}](figure_1.pdf){width="80mm"} ![ Improving the set cover monitoring strategy $\sigma_1^\epsilon$ by running a limited number of CGP iterations. []{data-label="figure:sim_2"}](figure_2.pdf){width="80mm"} Therefore, we also explored how much we can improve $\sigma_1^\epsilon$ by running only a limited number of iterations of CGP. We considered the ky8 network, and adopted $b_1$$=$$150$. As the performance metric, we used the ratio $d(i)$$:=$$\bar{L}(i)/L(\sigma^*_1,\sigma^*_2),$ where $\bar{L}(i)$ is the optimal value of the master program after $i$ iterations. The value $\bar{L}(i)$ upper bounds the value of the game, and represents the worst case loss of P1 if she uses a monitoring strategy obtained by running $i$ iterations of CGP. Hence, if $d(i)$$=$$1$, then $\bar{L}(i)$$=$$L(\sigma^*_1,\sigma^*_2)$, and CGP recovers an equilibrium monitoring strategy after $i$ iterations. The plot of $d$ and the execution time with respect to the number of iterations is shown in Fig. \[figure:sim\_2\]. As in the previous experiment, the execution time includes the time to construct the set cover monitoring strategy $\sigma_1^\epsilon$. Although initially $d(0)$$\approx$$2$, $d$ reaches the value 1.11 after 700 iterations. 
We also note that the running time to achieve this improvement was 391 seconds, approximately 3 times shorter than the time required to obtain an equilibrium monitoring strategy for $b_1$$=$$150$. This indicates that even when CGP cannot be run to completion to improve $\sigma_1^\epsilon$ to an equilibrium monitoring strategy, we can still significantly improve this strategy by running a limited number of CGP iterations. Conclusion {#section:conclusion} ========== This paper investigated a network monitoring game, with the purpose of developing monitoring strategies. The operator’s (resp. attacker’s) goal was to deploy sensors (resp. attack a component) to minimize (resp. maximize) the loss function defined through the component criticality. Our analysis revealed how criticality levels impact a NE, and outlined some fundamental differences compared to the related game [@2017arXiv170500349D]. In particular, the operator can leave some of the noncritical components unmonitored based on their criticality and available budget, while the attacker does not necessarily need to attack these components. Next, we proved that previously known strategies [@2017arXiv170500349D] can be used to obtain an $\epsilon$–NE, and showed how $\epsilon$ depends on component criticality. Finally, we discussed how to improve the monitoring strategy from this $\epsilon$-NE. It was shown that if a group of the components has a criticality level sufficiently larger than the others, the strategy can be improved by a simple modification. We also demonstrated that the strategy can be improved numerically using the column generation procedure. Future work will go in two directions. First, we plan to characterize and analyze the properties of a NE in the general case of the game. Second, we intend to generalize the game model by relaxing some of the modeling assumptions. 
For instance, we plan to allow the attacker to target several components simultaneously, and to remove the assumption that deployed sensors are perfectly secured. Appendix: Column Generation Procedure {#appedix:CG .unnumbered} ===================================== CGP can be used to solve linear programs with a large number of decision variables and a relatively small number of constraints [@desrosiers2005primer], such as $\text{LP}_1$. The first step of CGP is to solve the master problem of $\text{LP}_1$, which can be formulated as $$\label{eqn:MP_LP1} \begin{aligned} &\underset{\tilde{z}_1\geq0,\tilde{\sigma}_1 \geq0}{\text{minimize}} \hspace{-2mm}&&\tilde{z}_1\\ &\text{subject to}&& \sum_{V \in \tilde{\mathcal{A}}_1}a_V \tilde{\sigma}_1(V) + \textbf{1} \tilde{z}_1 \geq 0, \\ & && \sum_{V \in \tilde{\mathcal{A}}_1}\tilde{\sigma}_1(V)=1, \end{aligned}$$ where $ a_V$$ \in$$ \mathbb{R}^m$ is given by $$\label{eqn:aV} a_V (i)= \begin{cases}-w_{e_i},\hspace{2mm}e_i \notin E_V, \\ \hspace{5.8mm} 0,\hspace{2.4mm}e_i \in E_V. \end{cases}$$ The only difference between  and $\text{LP}_1$ is that we consider only a subset of pure actions $\tilde{\mathcal{A}}_1$ instead of the whole set $\mathcal{A}_1$. As mentioned before, we initialize $\tilde{\mathcal{A}}_1$ with the pure strategies that are played with non-zero probability when P1 employs the set cover monitoring strategy $\sigma^\epsilon_1$ (see [@2017arXiv170500349D Lemma 1] for the construction of these strategies). Let ($\tilde{z}_1^*$,$\tilde{\sigma}^*_1$) be a solution of . The next step is to check if $\tilde{z}_1^*$ can be further decreased, which can be done by solving the following subproblem $$\label{eqn:reduced_cost} \tilde{c}:= \text{minimize}_{V \in \mathcal{A}_1} \hspace{2mm} -\sum_{i=1}^m \rho^*_{i} a_V(i) -\pi^*,$$ where $\rho^*$$ \in$$ \mathbb{R}^m$ (resp. $\pi^* $$\in$$ \mathbb{R}$) is an optimal dual solution of  that corresponds to the inequality constraints (resp. equality constraint). 
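To make the restricted master program concrete, it can be handed to any off-the-shelf LP solver. The following minimal sketch solves it with scipy.optimize.linprog on an invented toy instance (two components, two columns; all data here are illustrative only, not from the benchmarks or the paper's implementation):

```python
import numpy as np
from scipy.optimize import linprog

# Invented toy instance: m = 2 components with criticalities w, and two
# columns a_V, where a_V[i] = -w[i] if e_i is NOT monitored under V, else 0.
w = np.array([1.0, 2.0])
a_V1 = np.array([0.0, -2.0])  # V1 monitors e_1 only
a_V2 = np.array([-1.0, 0.0])  # V2 monitors e_2 only

# Decision vector x = (z, sigma_1, sigma_2); objective: minimize z.
c = np.array([1.0, 0.0, 0.0])
# Constraints sum_V a_V(i) sigma(V) + z >= 0, rewritten as A_ub @ x <= 0.
A_ub = np.array([[-1.0, -a_V1[0], -a_V2[0]],
                 [-1.0, -a_V1[1], -a_V2[1]]])
b_ub = np.zeros(2)
A_eq = np.array([[0.0, 1.0, 1.0]])  # sum_V sigma(V) = 1
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
z_star, sigma_star = res.x[0], res.x[1:]
# The HiGHS backend also reports the dual values (res.ineqlin.marginals and
# res.eqlin.marginals) needed to price new columns in the subproblem.
```

For this instance the optimum is $\tilde{z}_1^*=2/3$, attained by mixing the two columns with probabilities 1/3 and 2/3.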
If $\tilde{c}$$<$$0$, $\tilde{z}_1^*$ can be further decreased. We then add a solution of  to $\tilde{\mathcal{A}}_1$, and repeat the procedure with the new set $\tilde{\mathcal{A}}_1$. If, on the other hand, $\tilde{c}$$\geq $$0$, then $\tilde{z}_1^*$ is the optimal value of $\text{LP}_1$, and $\tilde{\sigma}^*_1$ is an equilibrium monitoring strategy. The crucial point of CGP is to find an efficient way to solve . Namely, due to the large cardinality of $\mathcal{A}_1$, it is not tractable to simply go through all the columns $a_V$ and pick the optimal one. In our case, we can avoid this by solving the following binary linear program to obtain a solution and the optimal value of  $$\label{eqn:subproblem} \begin{aligned} &\underset{x \in \{0,1\}^{n}, y \in \{0,1\}^{m}}{\text{minimize}} \hspace{-2mm}&& \sum_{e_i \in \mathcal{E}} \rho^*_{i} w_{e_i} y_{e_i}-\pi^*\\ &\hspace{5mm}\text{subject to}&& \sum_{ \substack{v \in \mathcal{V}\\ e\in E_{v}}} \hspace{-1mm}x_{ v} \hspace{-1mm}\geq \hspace{-1mm}1 - y_{e} , \forall e \in\mathcal{E},\\ & && \sum_{ v \in \mathcal{V}} x_{v} \leq b_1. \end{aligned}$$ Note that this program has $n$$+$$m$ binary variables and $m$$+$$1$ constraints regardless of $b_1$. Therefore, modern-day integer linear programming solvers can obtain a solution and the optimal value of  for relatively large values of $n$ and $m$. We conclude by showing how to obtain a solution and the optimal value of  by solving . Let $\tilde{c}$ (resp. $\tilde{x},\tilde{y}$) be the optimal value (resp. a solution) of . Let $\tilde{V}$ be formed as follows: if $\tilde{x}_{v}$$=$$0$ (resp. $\tilde{x}_{v}$$=$$1$), then $v $$\notin $$\tilde{V}$ (resp. $v $$\in $$\tilde{V}$). Then $\tilde{c}$ (resp. $\tilde{V}$) is the optimal value (resp. a solution) of . Firstly, note that $|\tilde{V}|\leq b_1$ since $\tilde{x}$ has to satisfy the second constraint of . Thus, $\tilde{V}$ is a feasible point of . We now show that $\tilde{V}$ is a solution of , and that the optimal values of  and  coincide. 
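Before the formal argument, the equivalence can be checked directly on very small instances. The sketch below (toy data invented for illustration) evaluates the reduced cost  of every monitoring set $V$ with $|V|$$\leq$$b_1$ by brute-force enumeration, which is precisely the enumeration that the binary program  makes unnecessary at scale:

```python
from itertools import combinations

# Toy data, invented for illustration: 3 sensor locations, 3 components.
# E_v[v] = set of components monitored by a sensor placed at v.
E_v = {0: {0}, 1: {1}, 2: {1, 2}}
w = [1.0, 2.0, 3.0]    # component criticalities w_e
rho = [0.2, 0.3, 0.5]  # duals of the master inequality constraints
pi = 0.1               # dual of the master equality constraint
b1 = 1                 # sensing budget

def reduced_cost(V):
    """Reduced cost of column a_V: sum of rho_i * w_i over uncovered e_i, minus pi."""
    covered = set().union(*(E_v[v] for v in V)) if V else set()
    return sum(rho[i] * w[i] for i in range(len(w)) if i not in covered) - pi

# Enumerate every monitoring set with |V| <= b1 (tractable only when n is tiny).
best_V, best_c = min(
    ((V, reduced_cost(V)) for k in range(b1 + 1) for V in combinations(E_v, k)),
    key=lambda t: t[1],
)
```

Here the minimizer is $V$$=$$\{2\}$ with reduced cost 0.1; since this is nonnegative, no column would be added and the restricted master would already be optimal.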
Note that $\rho^*$$\geq$$ 0$ as a dual solution of , $w_{e}$$>$$0$, and the objective of  reduces to minimizing $\sum_{e_i \in \mathcal{E}} \rho^*_{i} w_{e_i} y_{e_i}$. Thus, for fixed $\tilde{x}$, the best way to minimize the objective is to set as many elements of $y$ to 0 as possible. However, an element $y_{e_i}$ can be set to zero only if $\sum_{ v \in \mathcal{V},e_i\in E_{v}}$$ x_{v}$$\geq$$1$, which happens precisely when $e_i $$\in$$ E_{\tilde{V}}$. Otherwise, $y_{e_i}$$=$$1$ has to hold for the corresponding constraint to be satisfied. Hence, for a fixed $\tilde{x}$, the lowest objective value that can be achieved over all feasible $y$ is $$\label{eqn:appdendex_statement1} \tilde{c}= \sum_{e_i \in \mathcal{E},e_i \notin E_{\tilde{V}}} \rho^*_{i} w_{e_i} -\pi^*.$$ On the other hand, the value of the objective function from  for $\tilde{V}$ is given by $$\label{eqn:appdendex_statement2} - \sum_{e_i \in \mathcal{E}} \rho^*_{i} a_{\tilde{V}}(i)-\pi^*\stackrel{\eqref{eqn:aV}}{=}\sum_{e_i \in \mathcal{E},e_i \notin E_{\tilde{V}}} \rho^*_{i} w_{e_i}-\pi^*\stackrel{\eqref{eqn:appdendex_statement1} }{=}\tilde{c}.$$ From , it follows that the optimal value of  is at least $\tilde{c}$. We now finalize the proof by showing, by contradiction, that the optimal value of  cannot be lower than $\tilde{c}$. Let $V'$ be a solution of  with objective value $c'$, and assume $c'$$ <$$ \tilde{c}$. Let $x'$ be constructed as follows: $x'_{v }$$=$$0$ (resp. $x'_{v }$$=$$1$) if $v $$\notin $$V'$ (resp. $v $$\in $$V'$). Since $|V'|$$\leq$$ b_1$, $x'$ satisfies the constraints of . For this $x'$, let $y'_{e_i }$$=$$0$ (resp. $y'_{e_i }$$=$$1$) if $e_i $$\in $$E_{V'}$ (resp. $e_i$$\notin$$E_{V'}$). 
This $y'$ also satisfies the constraints of , and we have $$\begin{aligned} \sum_{e_i\in \mathcal{E} } \rho^*_{i} w_{e_i}y'_{e_i}-\pi^*&=\sum_{e_i\in \mathcal{E},e_i \notin E_{V'}} \rho^*_{i} w_{e_i}-\pi^* \\ &\stackrel{\eqref{eqn:aV}}{=}\sum_{e_i\in \mathcal{E}} \rho^*_{i} a_{V'}(i)-\pi^*=c'.\end{aligned}$$ Since $c' $$<$$ \tilde{c}$, this contradicts the fact that $\tilde{c}$ is the optimal value of . Thus, $V'$ cannot exist, and $\tilde{c}$ (resp. $\tilde{V}$) is the optimal value (resp. a solution) of . [^1]: $^1$The Division of Decision and Control Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden. Emails: {jezdimir, hsan}@kth.se; $^2$Center for Computational Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, Email: mdahan@mit.edu $^3$Department of Civil and Environmental Engineering, and Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge. Email: amins@mit.edu.
--- abstract: 'Honeycomb structures formed by the growth of perovskite 5d transition metal oxide heterostructures along the (111) direction in $t_{2g}^5$ configuration can give rise to topological ground states characterized by a topological index $\nu$=1, as found in Nature Commun. 2, 596 (2011). Using a combination of a tight binding model and ab initio calculations we study the multilayers $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ as a function of parity asymmetry, on-site interaction and uniaxial strain and determine the nature and evolution of the gap. According to our DFT calculations, $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ is found to be a topological semimetal, whereas $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ is found to present a topological insulating phase that can be understood as the high-U limit of the former and that can be driven to a trivial insulating phase by a perpendicular external electric field.' author: - 'J. L. Lado' - 'V. Pardo' - 'D. Baldomir' title: 'Ab initio study of $Z_2$ topological phases in perovskite (111) $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ multilayers ' --- Introduction ============ Topological insulators (TI)[@topins; @reviewti] are a class of materials which show a gapped bulk spectrum but gapless surface states. The topological nature of the surface states protects them against perturbations and backscattering. [@robust2; @robust3; @robust5; @robust6; @robust7; @robust8; @robust9] In addition, the surface of a d-dimensional TI is such that the effective Hamiltonian defined on it cannot be represented by the Hamiltonian of a (d-1)-dimensional material with the same symmetries, so the physics of a (d-1)-dimensional surface of a topological insulator may show completely different behavior from that of a conventional (d-1)-dimensional material. 
Surface states[@bulk-edge] can be understood in terms of solitonic states which interpolate between two topologically different vacua: the topological vacuum of the TI and the trivial vacuum of a conventional insulator or empty space. The ground state of a system can be classified by a certain topological number[@top-num; @kane-qshe] depending on its dimensionality and the symmetries present, which define its topological classification. [@clas-sup; @reviewti] Two-dimensional single-particle Hamiltonians with time reversal (TR) invariance are classified in a $Z_2$ ($\nu=0,1$) topological class.[@kane-qshe] Two-dimensional TR systems with nontrivial topological index ($\nu=1$) show the so-called quantum spin Hall effect (QSHE), which is characterized by a non-vanishing spin Chern number [@index-qshe] and a helical edge current. [@current-qshe] This state has been theoretically predicted and experimentally confirmed in HgTe quantum wells,[@hg-te; @hg-te2; @hg-te3] as well as predicted in several materials such as two-dimensional Si and Ge[@si-qshe] and transition metal oxide (TMO) heterostructures.[@xiao] All these systems have in common that they present a honeycomb lattice structure with two atoms (A,B) as the atomic basis. In that situation, it is tempting to think that the effective Hamiltonian at certain k-points could have the form of a Dirac Hamiltonian. The components of the spinor would be some combination of localized orbitals on the A or B atoms, whereas the coupling would take place via non-diagonal elements due to the bipartite geometry of the lattice. The simplest and best known example is graphene, where the Hamiltonian is a Dirac equation at two nonequivalent points K and K’. At half filling, graphene with TR and inversion symmetry (IS) has $\nu=1$,[@prb-kane] so a term which does not break those symmetries and opens a gap in the whole Brillouin zone would give rise to the QSHE state. 
An IS-preserving gap term can arise in the graphene Hamiltonian through spin-orbit coupling (SOC); however, it is known that the gap opened this way is too small.[@soc-gra] In contrast, a sublattice asymmetry, which breaks IS, will open a trivial gap, as in BN.[@bn-dft; @bn-gra] How a honeycomb structure can be constructed from a perovskite unit cell can be seen in Figs. 1a and 1b: a perovskite bilayer of an open-shell oxide grown along the (111) direction is sandwiched between layers of an isostructural band-insulating oxide. The metal atoms of the bilayer form a buckled honeycomb lattice. It has been shown that perovskite (and also pyrochlore) (111) multilayers can develop topological phases, [@xiao; @111-orb; @xiao2; @ir-dice; @la-al; @pirocloro; @pirocloro2] as well as spin-liquid phases and non-trivial superconducting states.[@spin-liq-111] Topological insulating phases have been predicted for various fillings of the d shell;[@xiao] here we will focus on the large SOC limit (5d electrons) and formal $d^5$ filling. We will study two different multilayers, $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$, and we will focus on the realization of a nontrivial $\nu=1$ ground state. 
$\text{SrIrO}_3$ ($a_{\text{SrIrO}_3}=3.94$ Å)[@sriro3] is a correlated metal[@sriro3-elec] whose lattice match with $\text{SrTiO}_3$ (STO) ($a_{\text{SrTiO}_3}=3.905$ Å)[@srtio3] would be close enough for them to grow epitaxially with standard growth techniques.[@grow111] $\text{KPtO}_3$ has not been synthesized (to the best of our knowledge), but our calculations ($a_{\text{KPtO}_3}=4.02$ Å) show that a reasonable lattice match with $\text{KTaO}_3$ ($a_{\text{KTaO}_3}=3.98$ Å)[@ktao3] would be possible. The first multilayer is an iridate very similar to the well known $\text{Na}_2\text{IrO}_3$.[@Na2IrO3; @arpNa; @1evir] That system presents a layered honeycomb lattice of Ir atoms at $t_{2g}^5$ filling (whereas the present bilayers show a buckled honeycomb lattice) and is predicted to develop the QSHE; however, electron correlation would lead to an antiferromagnetic order at the edges. We will study the dependence of the topological ground state on the applied uniaxial strain and the electron-electron interaction, and we will determine a transition between two topological phases in both materials. The work is organized as follows. In Section II we introduce a simple tight binding (TB) model as in Ref. , focusing on the $t_{2g}^5$ case. In Section III we use density functional theory (DFT) calculations to study the evolution of both multilayers with uniaxial strain and on-site Coulomb repulsion, and we determine the ground state of each material. In Section IV we study the stability of the topological phase against TR and IS breaking using both TB and DFT calculations. Finally, in Section V we summarize the results obtained. ![(Color online) (a) Scheme of the cubic perovskite structure $XYO_3$. (b) Construction of the bilayer: the TM atoms are arranged in triangular A and B lattices in the (111) direction in such a way that together they form a honeycomb lattice. 
The Z atom corresponds to the insulating layer (in our case $\text{SrTiO}_3$ or $\text{KTaO}_3$) and does not participate in the honeycomb. (c) Scheme of the multilayer considered in the DFT calculations.[]{data-label="bilayer"}](fig1.png){width="\columnwidth"} Tight binding model =================== The qualitative behavior of this system can be understood using a simple TB model for the 5d electrons in the TM atoms, as shown in Ref. . In Section IIA we will give the qualitative behavior of the effective Hamiltonian. In Section IIB we will show numerical calculations of the full model. Full Hamiltonian ---------------- The octahedral environment of oxygen atoms surrounding the transition metal atoms splits the d levels into a $t_{2g}$ sextuplet and an $e_g$ quadruplet. Given that the crystal field gap is larger than the other parameters considered, we will retain only the $t_{2g}$ orbitals. The Hamiltonian considered for the $t_{2g}$ levels takes the form $$H=H_{SO}+H_{t}+H_{tri}+H_m$$ $H_{SO}$ is the SOC term, which gives rise to an effective angular momentum $J_{eff}=S-L$ that decouples the $t_{2g}$ levels into a filled j=3/2 quadruplet and a half-filled $j=1/2$ doublet. $H_t$ is the hopping between neighboring atoms via oxygen that couples the local orbitals. $H_{tri}$ is a local trigonal term[@xiao] which is responsible for opening a gap (as we will see below) without breaking TR and IS. $H_{m}$ is a term which breaks IS, making the two sublattices nonequivalent and tending to open a trivial gap by decoupling them. In the following discussion we will suppose that this last term is zero, but we will analyze its role in Section IV. We are interested in two different regimes as a function of SOC strength: strong and intermediate. We call strong SOC the regime where the j=3/2 and j=1/2 levels are completely decoupled, so that there is a trivial gap between them. We refer to the intermediate regime when the two subsets are coupled by the hopping. 
The key point to understand the topological character of the calculations is that a $t_{2g}^5$ configuration can be adiabatically connected from the strong to the intermediate regime without closing the gap. The argument is the following: beginning in the strong SOC regime, it is expected that a four-band effective model will be a good approximation. In this regime the mathematical structure of the effective Hamiltonian turns out to be equivalent to that of graphene. The trigonal term is responsible for opening a gap $\Delta$ via a third order process in perturbation theory $$\Delta\sim\lambda_{tri}\frac{t^2}{\alpha^2}$$ where $\lambda_{tri}$ is the trigonal coupling, $t$ is the hopping parameter and $\alpha$ the SOC strength. It can be checked by symmetry considerations that the restriction of the matrix representations leads to this term as the first non-vanishing contribution in perturbation theory. Eq. (2) has been checked by a logarithmic fitting of numerical calculations of the full model. This term will open a gap at the K point conserving TR and IS, and thus realizing a $\nu=1$ ground state.[@prb-kane] As SOC decreases, the gap becomes larger while the system evolves from the strong to the intermediate regime, so the intermediate regime is expected to be a topological configuration [@xiao] with a non-vanishing gap. Note that even though perturbation theory will only hold in the strong SOC regime, the increase of the gap as the system goes to the intermediate regime suggests that the $t_{2g}^5$ configuration will always remain gapped. This argument is checked by the numerical calculations shown below. ![(Color online) (a) Band structure of the TB model with intermediate $\alpha=t$ (left) and strong $\alpha=2t$ (right) SOC strength and $\lambda_{tri}=-0.5t$. The difference between the two cases lies in the $\nu_b$ invariant of the last filled band. The red lines are the band structure with $\lambda_{tri}=0$. 
(b) Band structure zoomed in on the j=1/2 bands near the K point for negative ($\lambda_{tri}=-0.5t$), zero and positive ($\lambda_{tri}=0.5t$) trigonal coupling. In all three cases the topological invariant gives a topological ground state. (c) Evolution of the gap at the K point with $\lambda_{tri}$ for the intermediate SOC regime. The two topological phases found will be identified as HUTI and LUTI in the DFT calculations.[]{data-label="tight"}](fig2.png){width="\columnwidth"} Results from tight binding calculations --------------------------------------- In Fig. 2 we show the results of a calculation using the TB Hamiltonian proposed above. Figure 2a shows the bulk band structure for strong and intermediate SOC strength $\alpha$. If a non-vanishing trigonal term is included, it opens a gap at the Dirac points of the band structure, generating topologically non-trivial configurations. We can see this clearly in Fig. 2b, where the band structure close to the Fermi level in the vicinity of the K point is shown. The topological character of each configuration is defined by the $\nu$ topological invariant, which for a band in an IS Hamiltonian can be calculated as [@prb-kane] $$(-1)^{\nu_b}=\prod_{\text{TRIM}}\langle\Psi_b|P|\Psi_b\rangle \label{parities}$$ where the product runs over the four time reversal invariant momenta (TRIM). The full invariant of a configuration is obtained by taking the product of the last equation over all the occupied bands, i.e. the sum of the band invariants mod 2, $$\nu=\sum_{\text{occ.bands}} \nu_b \text{ (mod 2)} \label{par-full}$$ For a $t_{2g}^5$ filling the first unfilled band always has $\nu_b=1$, but the difference between strong and intermediate SOC is the $\nu_b$ invariant of the last filled band. For strong SOC the j=1/2 and j=3/2 levels are completely decoupled, so a $t_{2g}^4$ configuration would be topologically trivial, the invariant of the fifth band being $\nu_b=1$. 
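Eqs. \[parities\] and \[par-full\] amount to simple sign bookkeeping over parity eigenvalues. The following minimal sketch illustrates the counting (the parity eigenvalues used here are invented for illustration, not taken from our calculations):

```python
# Z2 invariant from parity eigenvalues at the four TRIM points
# (Fu-Kane criterion for inversion-symmetric systems).

def nu_band(parities):
    """nu_b for one band: (-1)^{nu_b} is the product of its 4 TRIM parities."""
    prod = 1
    for p in parities:
        prod *= p
    return 0 if prod == 1 else 1

def nu_total(bands):
    """Full invariant: sum of nu_b over the occupied bands, mod 2."""
    return sum(nu_band(p) for p in bands) % 2

# Invented example: five occupied bands, only the last one has nu_b = 1,
# giving nu = 1 as for the t_{2g}^5 filling discussed in the text.
occupied = [[+1, +1, +1, +1]] * 4 + [[-1, +1, +1, +1]]
```

For this example nu_total(occupied) returns 1, i.e. a topological configuration.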
However, when SOC is not as sizable, the bandwidths are large enough to couple the j=1/2 and j=3/2 levels, so that the $t_{2g}^4$ filling is a topological configuration. In both cases the $t_{2g}^5$ filling is topologically non-trivial. [@xiao] We will see below using DFT calculations that the systems under study (TMOs with 5d electrons in a perovskite bilayer structure) are in this intermediate SOC regime. Figure 2b shows the bulk band structure, focusing now on the j=1/2 bands, for negative, zero and positive trigonal terms. The left numbers are the $\nu_b$ invariant of the band while the right numbers are the sum of the invariants of that band and the bands below it. No matter what the sign of $\lambda_{tri}$ is, the configuration becomes non-trivial, the role of the trigonal term being to open a gap at the K point around the Fermi level. In the DFT calculations below, it will be seen that a change of sign of the trigonal term can be understood as a topological transition between a low-U topological insulating phase (LUTI) and a high-U topological insulator (HUTI), across a boundary where the system behaves as a topological semimetal (TSM). DFT calculations ================ Computational procedures ------------------------ Ab initio electronic structure calculations have been performed using the all-electron full potential code [wien2k]{}.[@wien] The unit cell chosen is shown in Fig. 1c. It consists of 9 perovskite layers grown along the (111) direction: 2 layers of $\text{SrIrO}_3$ ($\text{KPtO}_3$) which form the honeycomb, and 7 layers of $\text{SrTiO}_3$ ($\text{KTaO}_3$) which isolate one honeycomb from the other. For the different off-plane lattice parameters along the (111) direction considered, the structure was relaxed using the full symmetry of the original cell. 
The exchange-correlation term is parametrized, depending on the case, using the generalized gradient approximation (GGA) in the Perdew-Burke-Ernzerhof[@gga] scheme, the local density approximation+U (LDA+U) in the so-called “fully localized limit”,[@sic1] and the Tran-Blaha modified Becke-Johnson (TB-mBJ) potential.[@tbmbjlda] The calculations were performed with a k-mesh of 7 $\times$ 7 $\times$ 1 and a value of R$_{mt}$K$_{max}$= 7.0. SOC was introduced in a second variational manner using the scalar relativistic approximation.[@singh] The $R_{mt}$ values used were (in a.u.) 1.89 for Ti, 1.91 for Ir, 2.5 for Sr and 1.67 for O in the $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ multilayer, and 1.93 for Ta, 1.92 for Pt, 2.5 for K and 1.7 for O in the $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ multilayer. Band structure of the non-magnetic ground state ----------------------------------------------- We have already discussed that the systems chosen to study a $d^5$ filling in a honeycomb lattice with substantial SOC are the multilayers $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ formed by perovskites grown along the (111) direction. First, the structure is optimized for different $c$ lattice parameters respecting IS, using GGA and without SOC. This means that mainly the inter-planar distances in the multilayers are relaxed. At the energy minimum, the band structure is calculated with SOC turned on. The band structure obtained using the three exchange-correlation schemes (GGA, LDA+U and TB-mBJ) shows the same structure. The $\nu_b$ topological invariant of each band is calculated as in the TB model, [@prb-kane] the topological invariant being the sum of the $\nu_b$ invariants over all the occupied bands. Figure 3 shows the band structure calculated with TB-mBJ as well as the $\nu_b$ invariants, also obtained ab initio. 
The difference in the curvature of the bands with respect to the result obtained with the TB model is due to the existence of bands near the bottom of the j=3/2 $t_{2g}$ quadruplet which are not considered in the TB Hamiltonian. Each band has double degeneracy due to the combination of TR and IS. At the optimized c, the gap between the last filled and the first unfilled band is located at the corner of the Brillouin zone (K point). At low c (energetically unstable but attainable via uniaxial compression), GGA predicts that the system can become a metal by closing an indirect gap between the K and M points; however, TB-mBJ calculations predict a direct gap localized at the K point. For all the calculations the ground state has $\nu=1$ and thus develops a topological phase. The last filled band has $\nu_b=0$, so by comparison with the TB results the system corresponds to the intermediate SOC limit in which the $J_{eff}=1/2$ and $J_{eff}=3/2$ levels are not completely decoupled.[@1evir] If a 5d electron system like this is in the intermediate SOC limit, it is hard to imagine how one could build a TMO heterostructure closer to the strong SOC limit (the only simple solution would be to somehow weaken the hopping between the TM atoms to increase the $\alpha/t$ ratio). The first unfilled band has $\nu_b=1$, as expected, since a $t_{2g}^6$ configuration will be a trivial insulator with a gap opened by the octahedral crystal field. ![(Color online) Band structure obtained in the DFT calculations for the optimized lattice parameter c. The calculations were performed with TB-mBJ for both $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ (a) and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ (b). The right panels show the band structure zoomed in near the Fermi level with the topological invariants displayed. $\nu_b$ is the invariant of the band considered, calculated by Eq. \[parities\], whereas $\nu$ is the sum over all the bands up to the one considered, as in Eq. \[par-full\]. 
[]{data-label="bandsdft"}](fig3.png){width="\columnwidth"} A trigonal field enters the DFT calculations mainly in two ways. On one hand, strain along the z direction varies the distance to the first neighbors in that direction, so that the electronic repulsion varies as well. We define this deformation as $\epsilon_{zz}=\frac{c-c_0}{c_0}$, where $c_0$ is the off-plane lattice constant with lowest energy. On the other hand, an on-site Coulomb repulsion defined on the TM by using the LDA+U method has precisely the symmetry of the bilayer, i.e. trigonal symmetry, so varying the on-site potential in some way (always preserving parity symmetry) will have the effect of a trigonal term in the Hamiltonian (see A.3 for further details). According to this, it is expected that in a certain regime variations in $\epsilon_{zz}$ can be compensated by tuning U. In this regime, similarly to what we discussed above, the system will develop a transition between two topological phases: a LUTI and a HUTI. At even higher U the system will show magnetic order. We will address this point later; for now we will focus on the non-magnetic (NM) phase. In order to study the phase diagram defined by the parameters $\epsilon_{zz}$ and U, we will perform calculations keeping one of them constant and determine how the gap closes as the other parameter varies, keeping track of the parities on both sides of the transition. ![image](fig4.png){width="2.0\columnwidth"} \[evolu\] Evolution of the gap with uniaxial strain ----------------------------------------- Here we will discuss the behavior of the gap at the K point with $\epsilon_{zz}$ for various U and J values. First we analyze the behavior of both materials in parameter space, showing their similarities, and then characterize the actual position of the ground state of each system in the general phase diagram. We focus first on the $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ case. 
As shown in Figure 4a, the gap closes as a function of c, so uniaxial strain can drive the system between two insulating phases, just as $\lambda_{tri}$ does in the TB model. However, for high U (see below the discussion on the plausible U values) the transition point disappears (two such cases are plotted in Fig. 4a). For low U (see Figs. 4c,d), $\epsilon_{zz}$ can drive the system from a positive trigonal term to a negative one. This means that uniaxial strain can change the sign of the effective trigonal term of the Hamiltonian. However, for high U (Fig. 4a), strain is not capable of changing the sign of the trigonal field, so the system remains in the same topological phase for every $\epsilon_{zz}$. For the calculations using the TB-mBJ scheme (we will see below to what effective U this situation corresponds), the transition point takes place almost at $\epsilon_{zz}=0$, so based on this scheme the Ir-based multilayer would be classified as a TSM rather than as a TI. Now we focus on the $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ system (see Figs. 4b,e,f). For the GGA calculations the transition with $\epsilon_{zz}$ disappears, the system always remaining in the HUTI phase. If the system is calculated using LDA+U with negative U (circles in Fig. 4b), the behavior is similar to that of the previous system and the transition point across a TSM reappears. For more realistic values of U and J (stars in Fig. 4b) the transition point disappears again. Thus, the present system (Pt-based), though isoelectronic and isostructural, can be understood as the strong-U limit of the previous system (Ir-based). The band gap is larger, which is a sought-after feature of these TIs, but not large enough to make it suitable for room temperature applications. We observe that changing J does not vary the overall picture; it just slightly displaces the phase diagram in U-space. 
Evolution of the gap with U at constant $\epsilon_{zz}$ ------------------------------------------------------- The Hamiltonian felt by the electrons also depends on the on-site Coulomb interaction between them. If the variation in the term of the Hamiltonian that controls it takes place only in the TM atoms, the varying term will have the same local symmetry as the TM sites, i.e. trigonal symmetry. Thus, it is expected that a variation in U will have a similar effect to that of $\lambda_{tri}$ in the TB model, so the gap can also be tuned by the on-site interaction. Figure \[evolu\]c,d,e,f shows the behavior of the gap for both systems in an LDA+U scheme with J=0.27 (realistic) and 0.95 (large) eV as the parameter U is varied. The slow increase of the gap with U suggests that the gap opened is not that of a usual Mott insulator and is rather reminiscent of the slow increase obtained in the TB model. In fact, the calculation of the $\nu$ invariant shows again a topological phase on both sides of the transition. Both systems develop a transition from the LUTI to the HUTI with increasing U. However, while the transition point of $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ is at positive U’s, for $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ the transition appears at negative U’s, so for all reasonable U values the latter system will be in the HUTI phase. 
TB-mBJ calculations have proven to give accurate band gaps in various systems,[@test_mbj; @test_mbj2; @test-mbj-blaha; @prb-antia] including s-p semiconductors, correlated insulators and d systems, although the scheme may give an inaccurate position of semicore d orbitals[@test_mbj2] and overestimate the magnetic moments of ferromagnetic metals.[@test-mbj-blaha] For $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$, the transition between the LUTI and the HUTI in the TB-mBJ scheme can be used to estimate which value of U should be used in an LDA+U calculation for these 5d systems to reproduce the TB-mBJ result. The actual value of U needed for a correct prediction of the properties under study is often a matter of contention when dealing with insulating oxides containing 5d TM’s.[@Na2IrO3; @piro-ir] From Figs. 4c,d it can be checked that the gap closes at U=1.4 eV for J=0.95 eV and at U=1.0 eV for the more realistic J=0.27 eV, which suggests that the values that mimic the TB-mBJ result (a zero gap at $\epsilon_{zz}=0$) in an LDA+U calculation are on the order of $U=1.0$ eV, in agreement with Ref. . Other works suggest a value of U on the order of 2 eV,[@martins; @arita] but due to the well-known property-dependence of the value of U,[@u-ceo] it is still unclear which value is correct for studying these topological phases. Therefore, our result can serve as a reference for other ab initio based phase diagrams of iridates where topological phases have been predicted as a function of U.[@piro-ir] Moreover, we study this system over a broad range of U values and using different exchange-correlation schemes to provide a broad picture of the system, rather than using a fixed U value that would yield a more restricted view of the problem.
For the Pt-based multilayer we can also consider the hypothetical effective value $U_{eff}=U-J$ at which the gap would close at negative U: we obtain $U_{eff}=-1.42$ eV for $J=0.27$ eV and $U_{eff}=-2.1$ eV for $J=0.95$ eV. It is thus clearly seen that the gap of these systems does not depend only on the parameter $U-J$ but also has a strong dependence on $J$, both in the realistic picture of the Ir-based multilayer and in the negative-U regime of the Pt-based multilayer. Recently, topological phases dominated by interactions, called topological Mott insulating phases, have been found theoretically,[@top-mot-ins; @ir-top-mot-ins] this term being employed for physically different phenomena. In the HUTI phase, the topological gap of the systems is enhanced by increasing the U parameter, so the system appears robust against electron-electron interactions. In the same fashion as a usual band insulator can be connected to a Mott insulating phase,[@band-mot; @band-mot2] this robustness suggests that the HUTI phase might be adiabatically connected to a $Z_2$ non-trivial interacting topological phase.[@z2-inter1; @z2-inter2] Whether this is an artifact of the DFT method or an acceptable mean-field approach to a many-body problem is something that can only be clarified by experiments. Stability of the topological phase ================================== So far we have considered a system with both TR and IS. However, given that the $Z_2$ classification is valid only for TR-invariant systems, it is necessary to determine whether the ground state possesses this symmetry. IS breaking could destroy the topological phase by opening a trivial gap that decouples the two sublattices, as would happen if the honeycomb were sandwiched between two different materials.[@xiao] TR symmetry breaking would be realized by a magnetic ground state, whereas IS breaking would be realized by a structural instability.
In this Section we study both possibilities and conclude that both systems are structurally stable and remain NM according to TB-mBJ in their ground states. Stability of time reversal symmetry ----------------------------------- ![(Color online) Difference of the sublattice magnetization for $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ (a) and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ (b) as a function of U for two different J. (c) Phase diagram in $\epsilon_{zz}$-U space; the approximate position of the ground state of both systems according to TB-mBJ is indicated. At high U the systems develop an AF order, but at realistic U’s both systems remain non-magnetic.[]{data-label="nm-af"}](fig5.png){width="\columnwidth"} ![image](fig6.png){width="2.0\columnwidth"} Increasing electronic interactions will drive the NM ground state to a magnetic trivial Mott insulating phase at very high U. For a magnetic $d^5$ S=1/2 localized-electron system, Goodenough’s rules[@goody] predict an antiferromagnetic (AF) exchange between the two sublattices, which would create an AF ground state breaking both TR and IS. To check this, we have performed DFT calculations within an LDA+U scheme taking $J_I=0.95$ eV and $J_{II}=0.27$ eV and varying U. For both systems the calculations have been carried out at different U’s for the NM and AF configurations at the two J’s. In both cases the ground state is AF at high U, and the sublattice magnetization increases with U. Figures 5a and 5b show the evolution of the sublattice magnetic moment for both compounds. A ferromagnetic (FM) phase has also been analyzed and is the least preferred one: in the Ir compound an FM phase can be stabilized but always has higher energy than the NM or AF phases, while in the Pt compound an FM phase could not be stabilized for any of the U values considered.
The true ground state would be a TI phase in the Pt-based multilayer (or a TSM in the Ir-based multilayer according to TB-mBJ), depending on the value of U employed. If the correct value to be used is larger than the critical value (of about 3 eV, which is large according to our previous discussion), the topological $Z_2$ phase will break down and the Kramers protection of the gapless edge states will disappear; whether the edge states would then become gapped requires further study. Thus, an experimental measurement of the magnetic moment of the ground state of these bilayers would shed light on the correct value of U to be used in these and similar compounds. In the Pt-based multilayer the sublattice magnetization depends almost only on $U_{eff}$, as can be checked from the shifting of the curves (Fig. 5b); in the Ir-based multilayer, however, there is a stronger dependence on J (Fig. 5a). Again, reducing the evolution to the effective $U_{eff} =U -J$ is discouraged for these systems according to our results. The non-trivial effect of J[@review-ldau] has also been observed in several compounds such as multi-band materials.[@luca; @luca2] To summarize, we have obtained the magnetization of both compounds as a function of U, showing that both compounds have a NM ground state up to a certain U (larger than the value of U that would be equivalent to the TB-mBJ calculations), beyond which the system becomes AF (see Fig. 5c). Stability of inversion symmetry ------------------------------- The topological properties of this system rely on both TR and IS. Tight-binding calculations predicted that parity-breaking terms with an associated energy of the order of magnitude of the topological gap could drive the topological phase to a trivial one. First we discuss the $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ system. The simplest IS breaking could be driven by a structural instability.
To study the structural stability, we have displaced one of the TM atoms from its symmetric position and then relaxed the structure; the structure returned to the symmetric configuration. However, since the system sits at the transition point between the two topological phases, any external perturbation (such as a perpendicular electric field) could break inversion symmetry. This system should therefore be classified as a topological semimetal rather than a topological insulator, owing to the (almost) vanishing gap of the relaxed structure. We now proceed to $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$. The first difference from the previous system is that in the present case the structure is well inside the HUTI phase. A large IS breaking is expected to drive the system to a trivial phase in which the sublattices A and B are decoupled. However, to change its topological class the system has to cross a critical point where the gap vanishes at some point of the Brillouin zone. Accordingly, the expected behavior is that the gap closes and reopens as the sublattice asymmetry grows, going from the original topological phase to a trivial insulating phase. To check this, we compare the results obtained from the TB model and from DFT calculations. In the TB case, we introduce a new parameter: a diagonal on-site energy whose value is $+m$ for A atoms and $-m$ for B atoms. This parameter breaks IS, and its magnitude quantifies the amount of breaking. When IS is broken the index $\nu$ cannot be calculated from the parities at the TRIM’s; instead, we can probe the topological character by searching for gapless edge states. To that end we calculate the band structure of a zig-zag ribbon 40 dimers in width with $\alpha=t$ and $\lambda_{tri}=-t$. The calculations were also carried out for an armchair ribbon and the same behavior is found (not shown), although the mixing of valleys makes the band structure harder to interpret.
The color of the bands indicates the expectation value of the position of the corresponding eigenfunction along the width of the ribbon; the edge states are located at the two edges (red and blue), whereas the rest of the states are bulk states (green). According to the TB results shown in Figs. 6a and 6b, for small $m$ the system remains a topological insulator, although the introduction of $m$ weakens the gap. As $m$ increases the system reaches a critical point where the gap vanishes, and if $m$ increases further the gap reopens but the edge states become gapped, so the system is in a trivial insulating phase. To model the symmetry breaking in the DFT calculations we move one of the Pt atoms in the z direction; as the distance from the original position increases, IS is increasingly broken. We show the four eigenvalues closest to the Fermi level at the K point for the DFT calculations (Fig. 6c) and the TB model (Fig. 6d). For the symmetric structure ($\Delta z=0$), the combination of TR and IS guarantees that each eigenvalue is two-fold degenerate. Once the atom is moved the degeneracy is broken and the eigenvalues evolve with the IS-breaking parameter. The analogy between the two calculations suggests that in the DFT calculation, once the gap reopens, the new state is also a trivial insulator. The dashed lines in Fig. 6c correspond to the eigenvalues at the K point for the fully relaxed structure when IS breaking is allowed. Comparing with the curves obtained for the evolution of the eigenvalues, it is observed that the relaxed structure is in a slightly asymmetric configuration but remains in the HUTI phase. Given the dependence of the topological state on IS, tuning this behavior would make it possible to build a device formed by a perovskite heterostructure that can be driven from a topological phase to a trivial phase by applying a perpendicular electric field.
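The gap-closing-and-reopening scenario with sublattice asymmetry can be illustrated with a generic Kane-Mele-type Dirac sketch. This is not the paper's four-band model; the SOC-induced mass `lam_so` and the grid of sublattice masses `m` are hypothetical illustration parameters. At each valley $\eta=\pm1$ and spin $s=\pm1$ the gap at the Dirac point is $2|m+\eta s\lambda_{SO}|$, so the minimal gap over valleys and spins closes at $m=\lambda_{SO}$ and reopens into a trivial phase beyond it:

```python
import numpy as np

def dirac_gap(m, lam_so, eta, s):
    # Gap at one valley (eta) and spin (s) of a Kane-Mele-type Dirac
    # Hamiltonian H = (m + eta*s*lam_so)*sigma_z + v*(eta*kx*sigma_x + ky*sigma_y):
    # at k = 0 the gap is 2|m + eta*s*lam_so|.
    return 2.0 * abs(m + eta * s * lam_so)

lam_so = 1.0                         # hypothetical SOC-induced mass
masses = np.linspace(0.0, 2.5, 251)  # sublattice asymmetry m
gaps = [min(dirac_gap(m, lam_so, eta, s) for eta in (1, -1) for s in (1, -1))
        for m in masses]
i_min = int(np.argmin(gaps))
print(masses[i_min])  # critical asymmetry, m = lam_so
```

In this toy picture the critical sublattice asymmetry is of the order of the SOC-induced gap itself, in line with the TB observation that the asymmetry needed for the transition is comparable to the topological gap.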
The device would consist of the TM honeycomb lattice sandwiched between layers of the same insulating (111) perovskite from above and below. In this configuration the system is in the HUTI phase. A perpendicular electric field, however, further breaks the sublattice symmetry, inducing a mass term in the Hamiltonian proportional to the applied field. By modifying the value of the electric field it would be possible to drive the system from the topological phase ($\vec E=0$) to the trivial phase (high $\vec E$). This can be exploited in nanoelectronic and spintronic devices.[@device1; @device2; @device3; @device4] The sublattice asymmetry needed to drive the transition is of the same order of magnitude as the topological gap, as can be checked in Fig. 6a. Summary ======= We have studied the gap evolution in the $t_{2g}^5$ perovskite multilayers $(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ and $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ as a function of the on-site Coulomb interactions and of uniaxial perpendicular strain, conserving time reversal and inversion symmetry. The behavior of the system has been understood with a simple TB model in which SOC gives rise to an effective j=1/2 four-band Hamiltonian. Uniaxial strain and on-site interactions have been identified with a trigonal term in the TB model whose strength controls the magnitude of the topological gap. The topological invariant $\nu$ has been calculated using the parities at the TRIM’s in both TB and DFT calculations. Comparison of the band invariants shows that both of these 5d electron systems stay in the intermediate-SOC regime. The small value of the gap at the K point arises because it is a third-order contribution in perturbation theory. In contrast, the sublattice asymmetry contributes at first order, so the topological phase can easily be destroyed by an external perturbation that introduces an IS-breaking term in the Hamiltonian.
$(\text{SrTiO}_3)_7/(\text{SrIrO}_3)_2$ has been found to be a topological semimetal at its equilibrium $\epsilon_{zz}$. Comparing TB-mBJ and LDA+U calculations, reasonable results are obtained for U in the range 1-2 eV. In $(\text{KTaO}_3)_7/(\text{KPtO}_3)_2$ a HUTI phase at equilibrium has been found. This latter system can be driven from a topological insulating state to a trivial one by switching on a perpendicular electric field, which breaks inversion symmetry. We have also verified that the properties of these systems depend on both $U$ and $J$ rather than on $U_{eff}=U-J$ alone. Although the smallness of the gap (less than 10 meV according to TB-mBJ) makes the $t_{2g}^5$ configuration not particularly attractive for technological applications, the simple understanding of the system makes it physically very interesting. The present system can be thought of as an adiabatic deformation of a mathematical realization of four-band graphene with SOC, with an experimentally accessible gap, the roles of $\vec S$ and $H_{SO}$ now being played by $\vec J_{eff}$ and $H_{tri}$, but with a different physical nature of the topological state. Acknowledgments {#acknowledgments .unnumbered} =============== The authors acknowledge financial support from the Spanish Government via the project MAT-200908165, and V. P. through the Ramón y Cajal Program. We also thank W. E. Pickett for fruitful discussions. Tight binding model =================== In this Section we explain the form of the different terms of the TB Hamiltonian $$H=H_{SO}+H_t+H_{tri}+H_m$$ Spin-orbit term --------------- We want to obtain the form of the SOC restricted to the $t_{2g}$ subspace.
Taking the basis $$|yz\rangle= \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \quad |xz\rangle= \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \quad |xy\rangle= \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$ it can easily be seen that the representation $L_i=\hbar l_i$ takes the form $$l_x= \begin{pmatrix} 0&0&0 \\ 0&0&i \\ 0&-i&0 \end{pmatrix} \quad l_y= \begin{pmatrix} 0&0&-i \\ 0&0&0 \\ i&0&0 \end{pmatrix}$$ $$l_z= \begin{pmatrix} 0&i&0 \\ -i&0&0 \\ 0&0&0 \end{pmatrix}$$ The SOC term has the usual form $$H_{SO} =\frac{2\alpha}{\hbar^2}\vec L \cdot \vec S =\alpha \vec l \cdot\vec\sigma$$ where the $\sigma_i$ are the usual Pauli matrices acting on spin space. The representation obeys the commutation relation $[l_i,l_j]=-i\epsilon_{ijk}l_k$, so defining $J_i=S_i-L_i$ the usual commutation relations hold. This change of sign introduces an overall minus sign in the eigenvalues, so the spectrum consists of a $j=3/2$ quadruplet with $E_{j=3/2}=-\alpha$ and a $j=1/2$ doublet with $E_{j=1/2}=2\alpha$. Hopping term ------------ Each site has three bonds directed along the edges of the perovskite structure. The bonds link the A and B sublattices, so there are three different overlaps $t_i=\langle A |H_t|B \rangle$ depending on the direction: $$(t_x,t_y,t_z)=t(\tau_x,\tau_y,\tau_z)$$ The hopping takes place through the overlap of the $t_{2g}$ orbitals of the TM and the oxygen p orbitals. It can easily be checked that the main contribution gives a matrix form that can be cast as $$(\tau_x,\tau_y,\tau_z)=(l_x^2,l_y^2,l_z^2)$$ Trigonal term ------------- Due to the geometry of the system, a possible term in the Hamiltonian that does not break the spatial symmetries of the system is a trigonal term. This term differentiates the perpendicular direction from the in-plane directions.
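As a sanity check on the appendix algebra, the following minimal NumPy sketch (with $\alpha=1$; not part of the original text) verifies the anomalous commutation relation, the SOC spectrum with the $j=3/2$ quadruplet at $-\alpha$ and the $j=1/2$ doublet at $+2\alpha$, and the eigenvalues of the trigonal on-site matrix introduced in the next subsection. Note the sign convention of `ly`, chosen so that $[l_x,l_y]=-il_z$ holds:

```python
import numpy as np

I = 1j
# effective l = 1 matrices in the t2g basis {|yz>, |xz>, |xy>};
# the sign of l_y is fixed by requiring [l_x, l_y] = -i l_z
lx = np.array([[0, 0, 0], [0, 0, I], [0, -I, 0]])
ly = np.array([[0, 0, -I], [0, 0, 0], [I, 0, 0]])
lz = np.array([[0, I, 0], [-I, 0, 0], [0, 0, 0]])
assert np.allclose(lx @ ly - ly @ lx, -I * lz)  # anomalous sign

# Pauli matrices acting on spin space
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -I], [I, 0]])
sz = np.array([[1, 0], [0, -1]])

alpha = 1.0
H_so = alpha * (np.kron(lx, sx) + np.kron(ly, sy) + np.kron(lz, sz))
print(np.sort(np.linalg.eigvalsh(H_so)))  # quadruplet at -alpha, doublet at 2*alpha

# trigonal on-site matrix (all off-diagonal entries equal, in units of lam_tri):
# one eigenvalue 2 along the (111) direction and a degenerate pair at -1
J3 = np.ones((3, 3)) - np.eye(3)
assert np.allclose(np.sort(np.linalg.eigvalsh(J3)), [-1, -1, 2])
```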
This term behaves as an on-site term which mixes the $t_{2g}$ states without breaking the symmetry between them, so its general form in the $t_{2g}$ basis is $$H_{tri}=\lambda_{tri} \begin{pmatrix} 0&1&1 \\ 1&0&1\\ 1&1&0 \end{pmatrix}$$ This matrix is diagonal along the (111) direction, with the perpendicular eigenvalues degenerate, thus preserving the trigonal symmetry. The strain $\epsilon_{zz}$ enters this term on one hand through the anisotropy of the charge density due to the lack of local spatial inversion, and on the other hand through the distortion of the cubic perovskite edges (and thus of the octahedral environment) by expansion/contraction along the (111) direction. The absence of local octahedral rotational symmetry is also responsible for the dependence of $\lambda_{tri}$ on U. Since varying the on-site interaction modifies the local electron density, and provided the local trigonal symmetry is conserved but not the local inversion (which is broken explicitly by the multilayer), this modification of the electronic density influences the electrons through the Hartree and exchange-correlation terms, via a term with those symmetries. In summary, local symmetry forces any spin-independent perturbation, including $\epsilon_{zz}$ and spin-independent U-terms, to be recast in the previous form. Whether spin-mixing trigonal terms are relevant for the effective model of this system should be clarified in the future. Either way, the agreement between the predictions of the TB model and the DFT calculations, together with the explicitly checked topological phase at high U, suggests that this model is a well-behaved effective model for studying the NM phase of this type of system. Mass term --------- A term which makes the two sublattices nonequivalent will break inversion symmetry.
The minimal term which fulfills this is $$H_{m}=ms_z=m \begin{pmatrix} I_A&0 \\ 0&-I_B\\ \end{pmatrix}$$ where $I_A$ and $I_B$ are the identity matrices over the A and B sublattices. [50]{} Advanced Materials 22, 4002 (2010). K. C. Nowack, E. M. Spanton, M. Baenninger, M. König, J. R. Kirtley, B. Kalisky, C. Ames, P. Leubner, C. Brüne, H. Buhmann, L. W. Molenkamp, D. Goldhaber-Gordon, and K. A. Moler, Nature Materials 12, 787 (2013). X. Hu, A. Rüegg, and G. A. Fiete, Phys. Rev. B 86, 235141 (2012). G. Chen and M. Hermele, Phys. Rev. B 86, 235129 (2012). I. Hallsteinsen, J. E. Boschker, M. Nord, S. Lee, M. Rzchowski, P. E. Vullum, J. K. Grepstad, R. Holmestad, C. B. Eom, and T. Tybell, J. Appl. Phys. 113, 183512 (2013). C. H. Sohn, H.-S. Kim, T. F. Qi, D. W. Jeong, H. J. Park, H. K. Yoo, H. H. Kim, J.-Y. Kim, T. D. Kang, D.-Y. Cho, G. Cao, J. Yu, S. J. Moon, and T. W. Noh, Phys. Rev. B 88, 085125 (2013). D. Koller, F. Tran, and P. Blaha, Phys. Rev. B 83, 195134 (2011).
C. Martins, M. Aichhorn, L. Vaugier, and S. Biermann, Phys. Rev. Lett. 107, 266404 (2011). R. Arita, J. Kuneš, A. V. Kozhevnikov, A. G. Eguiluz, and M. Imada, Phys. Rev. Lett. 108, 086403 (2012). C. Loschen, J. Carrasco, K. M. Neyman, and F. Illas, Phys. Rev. B 75, 035115 (2007). A. Fuhrmann, D. Heilmann, and H. Monien, Phys. Rev. B 73, 245118 (2006). S. S. Kancharla and S. Okamoto, Phys. Rev. B 75, 193103 (2007). T. C. Lang, A. M. Essin, V. Gurarie, and S. Wessel, Phys. Rev. B 87, 205101 (2013). H.-H. Hung, L. Wang, Z.-C. Gu, and G. A. Fiete, Phys. Rev. B 87, 121113(R) (2013). B. Himmetoglu, A. Floris, S. de Gironcoli, and M. Cococcioni, arXiv:1309.3355. L. de Medici, Phys. Rev. B 83, 205112 (2011). L. de Medici, J. Mravlje, and A. Georges, Phys. Rev. Lett. 107, 256401 (2011). P. Sengupta, T. Kubis, Y. Tan, M. Povolotskyi, and G. Klimeck, J. Appl. Phys. 114, 043702 (2013).
--- abstract: 'Biological vision infers multi-modal 3D representations that support reasoning about scene properties such as materials, appearance, affordance, and semantics in 3D. These rich representations enable us humans, for example, to acquire new skills—such as the learning of a new semantic class—with extremely limited supervision. Motivated by this ability of biological vision, we demonstrate that 3D-structure-aware representation learning leads to multi-modal representations that enable 3D semantic segmentation with extremely limited, 2D-only supervision. Building on emerging neural scene representations, which have been developed for modeling the shape and appearance of 3D scenes supervised exclusively by posed 2D images, we are the first to demonstrate a representation that jointly encodes shape, appearance, and semantics in a 3D-structure-aware manner. Surprisingly, we find that only a few tens of labeled 2D segmentation masks are required to achieve dense 3D semantic segmentation using a semi-supervised learning strategy. We explore two novel applications for our semantically aware neural scene representation: 3D novel view and semantic label synthesis given only a single input RGB image or 2D label mask, as well as 3D interpolation of appearance and semantics.' author: - 'Amit Kohli\*' - 'Vincent Sitzmann\*' - Gordon Wetzstein title: | Inferring Semantic Information with\ 3D Neural Scene Representations --- Introduction {#sec:introduction} ============ ![image](figures/pipeline.pdf){width="\textwidth"} Related Work {#sec:related_work} ============ Method {#sec:method} ====== Discussion {#sec:discussion} ==========
--- abstract: | I point out that the energy sources of the Sun may actually involve runaway nuclear reactions as well, developed by the fundamental thermonuclear instability present in stellar energy-producing regions. In this paper I consider the conjectures of the derived model for the solar neutrino fluxes in the case of a solar core allowed to vary in relation to the surface activity cycle. The observed neutrino flux data suggest a solar core possibly varying in time. In the dynamic solar model the solar core involves a “quasi-static” energy source produced by the quiet core with a lower-than-standard temperature which may vary in time. Moreover, the solar core involves another, dynamic energy source, which also changes in time. The sum of the two different energy sources may produce a quasi-constant flux in the SuperKamiokande because it is sensitive to neutral currents, axions and anti-neutrinos, and therefore observes the sum of the neutrino fluxes of the two sources which together produce the solar luminosity. A dynamic solar core model is developed to calculate the contributions of the runaway source to the individual neutrino detectors. The results of the dynamic solar core model suggest that since HOMESTAKE detects mostly the high-energy electron neutrinos, the HOMESTAKE data may anticorrelate with the activity cycle. Activity-correlated changes are expected to be present only marginally in the GALLEX and GNO data. The gallium detectors are sensitive mostly to the pp neutrinos, and the changes of the pp neutrinos arising from the SSM-like core are mostly compensated by the high-energy electron neutrinos produced by the hot bubbles of the dynamic energy source. The dynamic solar model suggests that the GALLEX data may show an anti-correlation, while the SuperKamiokande data may show a correlation with the activity cycle.
Predictions of the dynamic solar model are presented for the SNO and Borexino experiments, which can distinguish between the effects of the MSW mechanism and the consequences of the dynamic solar model. The results of the dynamic solar model are consistent with present helioseismic measurements and can be checked with future helioseismic measurements as well. Keywords: solar neutrino problems - solar activity - thermonuclear runaways author: - Attila Grandpierre title: 'A Dynamic Solar Core Model: on the activity-related changes of the neutrino fluxes' --- Introduction ============ The neutrino problems extend not only to the field of astrophysics (Bahcall, Krastev, Smirnov, 1998), but also to the anomalies of the atmospheric neutrinos. Apparently, the possible solutions to these unexpected anomalies are not compatible if only three neutrino flavours are allowed (Kayser, 1998). Nevertheless, the apparent paradox may be resolved by taking into account the effects of the thermonuclear runaways present in stellar cores (Grandpierre, 1996). The neutrinos produced by the runaways may contribute to the events detected in the different neutrino detectors. Estimating the terms arising from the runaways, the results obtained here are at present compatible even with standard neutrinos. The objections raised against possible astrophysical solutions to the solar neutrino problems (see e.g. Hata, Bludman, Langacker, 1994, Heeger, Robertson, 1996, Berezinsky et al., 1996, Hata, Langacker, 1997) are valid only if the assumption that the solar luminosity is supplied exclusively by the $pp$ and $CNO$ chains is fulfilled. Therefore, when this assumption is not fulfilled, i.e. when a runaway energy source supplies part of the total solar energy production, a more general case has to be considered.
In this manuscript I attempt to show that the presence of the runaway energy source, indicated already by first physical principles (Grandpierre, 1977, 1984, 1990, 1996; Zeldovich, Blinnikov, Sakura, 1981), can be described in a mathematically and physically consistent way. The contributions of the runaway source to the neutrino detector data may be determined, also allowing for solar-cycle changes in the neutrino production. The results of Fourier analysis (Haubold, 1997) and wavelet analysis (Haubold, 1998) of the new solar neutrino capture rate data for the Homestake experiment revealed periodicities close to 10 and 4.76 years. Basic equations and the SSM-like approach ========================================== The basic equations of the neutrino fluxes in the standard solar models are the following (see e.g. Heeger, Robertson, 1996): $$\begin{aligned} S_K = a_{K8} \Phi_8\end{aligned}$$ $$\begin{aligned} S_C = a_{C1} \Phi_1 + a_{C7} \Phi_7 + a_{C8} \Phi_8\end{aligned}$$ $$\begin{aligned} S_G = a_{G1} \Phi_1 + a_{G7} \Phi_7 + a_{G8} \Phi_8 ,\end{aligned}$$ with a notation similar to that of Heeger and Robertson (1996): the subscripts i = 1, 7 and 8 refer to $pp + pep$, $Be + CNO$ and $B$ reactions. The $S_j$-s are the observed neutrino fluxes at the different neutrino detectors, in dimensionless units, with j = K, C, G for the SuperKamiokande, chlorine, and gallium detectors. The observed averaged values are $S_K = 2.44$ (SuperKamiokande Collaboration, 1998), $S_C = 2.56$ (Cleveland et al., 1998) and $S_G = 76$ (Kirsten, 1998). $\Phi_i$ are measured in $10^{10} \nu cm^{-2}s^{-1}$. Similar equations are presented by Castellani et al. (1994), Calabresu et al. (1996), and Dar and Shaviv (1998) with slightly different parameter values.
Using these three detector equations to determine the individual neutrino fluxes $\Phi_i$, I derived $$\begin{aligned} \Phi_8 = S_K/a_{K8}\end{aligned}$$ $$\begin{aligned} \Phi_1 = (a_{G7}S_C -a_{C7}S_G + S_K/a_{K8}(a_{C7}a_{G8} - a_{G7}a_{C8}))/ (a_{G7}a_{C1} - a_{C7}a_{G1})\end{aligned}$$ and $$\begin{aligned} \Phi_7 = (a_{G1}S_C - a_{C1}S_G + S_K/a_{K8}(a_{C1}a_{G8}-a_{G1}a_{C8}))/ (a_{G1}a_{C7} - a_{C1}a_{G7}).\end{aligned}$$ Now let us see how these relations may serve to solve the solar neutrino problems. There are three solar neutrino problems distinguished by Bahcall (1994, 1996, 1997): the first is related to the lower-than-expected neutrino fluxes, the second to the problem of the missing beryllium neutrinos relative to the boron neutrinos, and the third to the gallium detector data, which do not allow a positive flux for the beryllium neutrinos in the frame of the standard solar model. It is possible to find a solution to all three neutrino problems if we are able to find positive values for all of the neutrino fluxes in the above relations. I point out that the condition for this requirement can be formulated as the following inequality: $$\begin{aligned} S_K < (a_{G1}S_C-a_{C1}S_G)/(a_{C1}a_{G8}/a_{K8}-a_{G1}a_{C8}/a_{K8})\end{aligned}$$ Numerically, $$\begin{aligned} \Phi_7 = 0.4647 S_C -0.0014S_G - 0.5125S_K.\end{aligned}$$ If we require a physical $\Phi_7>0$, with the numerical values of the detector sensitivity coefficients, this constraint takes the following form: $$\begin{aligned} S_K < 0.9024S_C - 0.0027S_G \simeq 2.115.\end{aligned}$$ In the obtained solutions the total neutrino flux is compatible with the observed solar luminosity $L_{Sun}$, but the reactions involved in the SSM (the $pp$ and $CNO$ chains) do not produce the total solar luminosity.
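The closed forms (4)-(6) follow from solving the three detector equations as a linear system in $\Phi_1,\Phi_7,\Phi_8$. A quick numerical cross-check (the sensitivity coefficients below are hypothetical placeholders; only the $S_j$ values are the observed rates quoted above):

```python
import numpy as np

# hypothetical detector sensitivities (illustration only)
aK8, aC1, aC7, aC8, aG1, aG7, aG8 = 1.0, 0.2, 0.8, 1.1, 1.2, 0.7, 2.4
SK, SC, SG = 2.44, 2.56, 76.0   # observed averaged rates quoted in the text

# the three detector equations as a linear system A . (Phi_1, Phi_7, Phi_8) = S
A = np.array([[0.0, 0.0, aK8],
              [aC1, aC7, aC8],
              [aG1, aG7, aG8]])
P1, P7, P8 = np.linalg.solve(A, np.array([SK, SC, SG]))

# closed forms (4)-(6); the trailing products form the denominators
P8_cf = SK / aK8
P1_cf = (aG7*SC - aC7*SG + SK/aK8*(aC7*aG8 - aG7*aC8)) / (aG7*aC1 - aC7*aG1)
P7_cf = (aG1*SC - aC1*SG + SK/aK8*(aC1*aG8 - aG1*aC8)) / (aG1*aC7 - aC1*aG7)
assert np.allclose([P1, P7, P8], [P1_cf, P7_cf, P8_cf])
```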
The detector rate inequalities (7) or (9) can be fulfilled only if we separate from $S_K$ a term $S_K(x)$ which represents the contribution of non-pp,CNO neutrinos to the SuperKamiokande measurements (and, possibly, also allow the existence of $S_C(x)$ and $S_G(x)$). The presence of a non-electron-neutrino term in the SuperKamiokande has until now been interpreted as an indication of neutrino oscillations. Nevertheless, thermal runaways indicated to be present in the solar core may produce high-energy electron neutrinos, as well as muon and tau neutrinos, since $T>10^{11} K$ is predicted for the hot bubbles (Grandpierre, 1996). Moreover, the explosive reactions have to produce high-energy axions, to which also only the SuperKamiokande is sensitive (Raffelt, 1996, Engel et al., 1990). The SuperKamiokande may also detect electron anti-neutrinos arising from the hot bubbles. This suggests a possibility of interpreting the neutrino data with standard neutrinos as well. To determine the $S_i(x)$ terms I introduced the “a priori” knowledge of the pp,CNO chains, namely their temperature dependence. This is a necessary step to extract more detailed information from the neutrino detector data. In this way one can derive the temperature in the solar core as seen by the different types of neutrino detectors. I note that finding the temperature of the solar core as deduced from the observed neutrino fluxes does not introduce any solar model dependency, since the neutrino fluxes of the SSM pp,CNO reactions depend on temperature only through nuclear physics. Instead, it points out the solar model dependencies still remaining in the previous SSM calculations; by allowing other types of chains it removes a hypothetical limitation, and by accepting the presence of explosive chains as well, it probably provides a better approximation to the actual Sun.
The calculations of the previous section suggested completing the SuperKamiokande neutrino-equation with a new term $$\begin{aligned} S_K = T^{24.5} \Phi_8(SSM) a_{K8} + S_K(x),\end{aligned}$$ where $T$ is the dimensionless temperature $T = T(actual)/T(SSM)$. The one-parameter allowance describes a quiet solar core with a temperature distribution similar to the SSM, therefore it leads to an SSM-like solution of the standard neutrino flux equations (see Grandpierre, 1998). An essential point in my calculations is that I have to use the temperature dependence appropriate to the case in which the luminosity is not constrained by the SSM luminosity constraint, because another type of energy source is also present. The SSM luminosity constraint and the resulting composition and density readjustments, together with the radial extension of the different sources of neutrinos, modify this temperature dependence. The largest effect arises in the temperature dependence of the $pp$ flux: $\Phi_1 \propto T^{-1/2}$ under the SSM luminosity constraint (see the results of the Monte Carlo simulations of Bahcall, Ulrich, 1988), but $\Phi_1 \propto T^4$ without the SSM luminosity constraint. Inserting the temperature dependence of the individual neutrino fluxes for the case when the solar luminosity is not constrained by the usual assumption behind the SSM (Turck-Chieze and Lopes, 1993) into the chlorine-equation, we obtain the temperature-dependent chlorine neutrino-equation $$\begin{aligned} S_C(T) = a_{C1}T_{C,0}^4 \Phi_1(SSM) + a_{C7}T_{C,0}^{11.5} \Phi_7(SSM) + a_{C8}T_{C,0}^{24.5} \Phi_8(SSM) +S_C(x).\end{aligned}$$ Similarly, the temperature-dependent gallium-equation takes the form $$\begin{aligned} S_G(T) = a_{G1}T_{G,0}^4 \Phi_1(SSM)+ a_{G7}T_{G,0}^{11.5} \Phi_7(SSM) + a_{G8}T_{G,0}^{24.5} \Phi_8(SSM) +S_G(x).\end{aligned}$$ Now let us first determine the solutions of these equations in the case $S_i(x)=0$. The obtained solutions $T_i$ will be relevant to the SSM-like solar core. 
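With $S_i(x)=0$, each detector equation becomes a monotonically increasing function of the single unknown $T$, so the neutrino temperature can be found by simple root bracketing. A minimal sketch; the sensitivity coefficients are illustrative placeholders, while the SSM fluxes (in units of $10^{10}\ cm^{-2}s^{-1}$) are those quoted in the text:

```python
def neutrino_temperature(S_obs, a1, a7, a8, Phi1, Phi7, Phi8):
    """Solve a1*T^4*Phi1 + a7*T^11.5*Phi7 + a8*T^24.5*Phi8 = S_obs
    for the dimensionless temperature T = T(actual)/T(SSM) by bisection.
    The left-hand side is monotonically increasing in T for T > 0."""
    def S_of_T(T):
        return a1 * T**4 * Phi1 + a7 * T**11.5 * Phi7 + a8 * T**24.5 * Phi8
    lo, hi = 0.5, 1.5  # bracket around the SSM value T = 1
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if S_of_T(mid) < S_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With the observed detector rates on the left-hand side, this is the kind of computation that yields detector-specific temperatures $T_i$; the actual numerical values depend on the real sensitivity coefficients, which are not reproduced here.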
Now we know that the Sun can have only one central temperature $T$. Therefore, the smallest of the $T_i$ will be the closest to the actual $T$ of the SSM-like solar core, and the larger $T_i$ will indicate the terms arising from the runaways. In this way, it is possible to determine the desired quantities $S_i(x)$. From the observed $S_i$ values, it is easy to obtain $\Phi_1(SSM) = 5.95 \times 10^{10} cm^{-2}s^{-1}$, $\Phi_7(SSM) = 0.594 \times 10^{10} cm^{-2}s^{-1}$ and $\Phi_8(SSM) = 0.000515 \times 10^{10}cm^{-2}s^{-1}$. With these values, the chlorine neutrino-temperature from (11) is $T_{Cl} \simeq 0.949 T(SSM)$, the gallium neutrino-temperature from (12) is $T_{Ga} \simeq 0.922 T(SSM)$ and the SuperKamiokande neutrino-temperature from (10) is $T_{SK} \simeq 0.970 T(SSM)$. The neutrino flux equations are highly sensitive to the value of the temperature. Assuming that the actual Sun follows a standard solar model but with a different central temperature, the above result shows that the different neutrino detectors see different temperatures. This result suggests that the different neutrino detectors show sensitivities different from the ones expected from the standard solar model, i.e. some reactions produce neutrinos that are not taken into account in the standard solar model, and/or the detectors are sensitive to different types of non-pp,CNO runaway reactions. Let us explore the consequences of this conjecture. Dynamic models of the solar core ================================ 1.) Static core. The gallium neutrino-temperature $T_{Ga} \simeq 0.922 T(SSM)$ leads to a pp luminosity of the Sun around $L_{pp} \simeq 72 \% L(SSM)$. The remaining part of the solar luminosity should be produced by the hot bubbles, $L_b \simeq 28 \% L(SSM)$. 
The runaway nuclear reactions proceeding in the bubbles (and possibly in the microinstabilities) should also produce neutrinos, and this additional neutrino production, $\Phi_b$, should generate the surplus terms in the chlorine and water Cherenkov detectors as well. At present, I was not able to determine directly which reactions would proceed in the bubbles, and so it is not possible to determine directly the accompanying neutrino production either. Nevertheless, it is plausible that at high temperatures $10^{10-11} K$ (Grandpierre, 1996) such nuclear reactions occur as in nova explosions or other types of stellar explosions. Plausibly, these could be rapid hydrogen-burning reactions, the explosive CNO cycle, and also nuclear reactions producing heat but not neutrinos, like e.g. the explosive triple-alpha reaction (Audouze, Truran, Zimmermann, 1973, Dearborn, Tinsley, Schramm, 1978). At present, I note that the calculated bubble luminosity ($\simeq 28 \%$) may easily be consistent with the calculated non-pp,CNO neutrino fluxes $\Delta S_{Cl} = S_C(T_{Cl}=0.949) - S_C(T_{Ga}=0.922) \simeq 1.04$, $\Delta S_{Cl}/S_{Cl} \simeq 41 \%$, and $\Delta S_{SK} = S_{SK}(T_{SK}=0.970) - S_{SK}(T_{Ga}=0.922) \simeq 1.74$, $\Delta S_{SK}/S_{SK} \simeq 71 \%$. 
The above results are in complete agreement with the conclusion of Hata, Bludman and Langacker (1994), namely: “We conclude that at least one of our original assumptions are wrong, either (1) Some mechanism other than the $pp$ and the $CNO$ chains generates the solar luminosity, or the Sun is not in quasi-static equilibrium, (2) The neutrino energy spectrum is distorted by some mechanism such as the MSW effect; (3) Either the Kamiokande or Homestake result is grossly wrong.” These conclusions are concretised here into the following statements: (1) a runaway energy source is present in the solar core, and the Sun is not in thermodynamic equilibrium; (2) this runaway source distorts the standard neutrino energy spectrum, and perhaps the MSW effect also contributes to the spectrum distortion; (3) the Homestake, Gallex and SuperKamiokande results contain a term arising from the non-pp,CNO source, which has the largest contribution in the SuperKamiokande, less in the Homestake, and the smallest in the Gallex. The helioseismic measurements are regarded as being in very good agreement with the SSM. However, the interpretation of these measurements depends on the inversion process, which uses the SSM as its basis. Moreover, the different helioseismic measurements at present contradict each other below $0.2 R_{Sun}$ (Corbard et al., 1998). Note also that the energy produced in the solar core does not necessarily go into thermal energy, as other, non-thermal forms of energy may also be produced, e.g. the energy of magnetic fields. The production of magnetic fields can significantly compensate for the change in the sound speed related to the lower temperature, as the presence of magnetic fields may accelerate the propagation of sound waves through the inclusion of magnetosonic and Alfven magnetohydrodynamical waves. 
The continuously present microinstabilities should produce a temperature distribution with a double character, as some of the ions may possess higher energies. Their densities may be much lower than those of the respective ions closer to standard thermodynamic equilibrium, and so they may affect and compensate the sound speed in a subtle way. Recent calculations of the non-maxwellian character of the energy distribution of particles in the solar core (Degl’Innocenti et al., 1998) indicate that the non-maxwellian character leads to a lowering of the SSM neutrino fluxes and, at the same time, produces higher central temperatures. This effect may also compensate for the lowering of the sound speed caused by the lower central temperature. At the same time, an approach specially developed using helioseismic data input instead of the luminosity constraint, the seismic solar model, indicates a most likely solar luminosity around $0.8 L_{Sun}$ (Shibahashi, Takata, 1996, Figs. 7-10), which leads to a seismological temperature lower than its SSM counterpart by $\Delta T \simeq 6 \%$. On the other hand, as Bludman et al. (1993) pointed out, the production of high energy $^8B$ neutrinos and intermediate energy $^7Be$ neutrinos depends very sensitively on the solar temperature in the innermost $5 \%$ of the Sun’s radius. Accepting the average value of $R_K = S_{Kam}(obs.)/S_{Kam}(SSM) = 0.474$, this value gives $T_K \simeq 0.97$. With $S_G = 73.4 \ SNU$, the derived gallium-temperature is $T_G \simeq 0.93$. With $S_C = 2.56$ (Cleveland, 1998), $T_C \simeq 0.95$. The result that $S_G(x)<S_C(x)<S_K(x)$ can arise from the circumstance that the gallium detectors are less sensitive to intermediate and high-energy neutrinos than the chlorine one, which in turn detects fewer runaway neutrinos than the SuperKamiokande. Therefore, if thermonuclear runaways produce an intermediate- and/or high-energy neutrino flux in the Sun, it results in a relatively smaller contribution in the gallium detectors than in the chlorine one. 
Moreover, the SuperKamiokande can also detect runaway muon and tau neutrinos besides the high-energy electron neutrinos, so these can contribute an extra term which would explain why the Kamiokande observes a larger neutrino flux than the Homestake. Therefore, the three deduced temperatures actually indicate that the solar core is cooler than the standard one by an amount around $7 \%$. Consequently, the beryllium neutrino flux in the dynamical solar model is estimated as $43 \%$ of its SSM expected value. The luminosity of the SSM-like core is around $75 \%$ of $L_{Sun}$, therefore the bubble luminosity has to be $25 \% L_{Sun}$. The boron neutrino flux of the SSM-like core will be $16.9 \%$ of the SSM value. Therefore, the bubbles have to produce the remaining $\Phi_b = 30.5 \% \Phi_K(SSM)$ of the high energy neutrinos observed by the SuperKamiokande. This requirement may easily be satisfied, and it may also be consistent with the result that the bubble luminosity is $25 \%$ of the solar luminosity. The dynamic solar model predicts a beryllium neutrino flux $ \le 43 \%$ of the SSM value, corresponding to a temperature of $T(DSM) \le 93 \%$. This estimation offers a prediction for the Borexino neutrino detector, $\Phi(Borexino) = \Phi_{Be}(SSM) \times T(DSM)^{11.5} + \Phi(bubbles) \le 43 \% + \Phi(bubbles)$. Regarding the SNO detector, I can assume that the neutral currents are produced by the electron neutrinos of the SSM-like core plus all kinds of neutrinos produced by the hot bubbles. Therefore, the prediction of the DSM is $\Phi(SNO) = \Phi(SSM) \times T(DSM)^{24.5} + \Phi(bubbles) \simeq 17 \% + \Phi(bubbles)$. These predictions differ significantly from the MSW SSM values. Therefore, future observations may definitively decide which model describes the actual Sun better, the SSM-based MSW effect or the dynamic solar model. 
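The flux fractions quoted above follow directly from the assumed temperature exponents; a one-line check (only the exponents 11.5 and 24.5 and the value $T(DSM)=0.93$ are taken from the text):

```python
# Beryllium and boron flux fractions in the dynamic solar model,
# relative to the SSM, from the temperature scalings used in the text.
T_dsm = 0.93                  # T(DSM)/T(SSM)
be_fraction = T_dsm ** 11.5   # beryllium: Phi_Be ~ T^11.5
b_fraction = T_dsm ** 24.5    # boron:     Phi_B  ~ T^24.5
print(f"Be: {be_fraction:.1%}, B: {b_fraction:.1%}")  # -> Be: 43.4%, B: 16.9%
```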
In the interpretation of future measurements it will also be important to take into account the possible dependence of the neutrino fluxes on the solar cycle. 2.) Around activity maximum. Similarly, we can apply the equations given above to derive the temperatures as seen by the different neutrino detectors in relation to the phases of solar activity. Around solar activity maximum the Kamiokande reported no significant deviations from the averaged neutrino flux, therefore I can take $R_K(max) = 0.474$, which leads to $T_K(max) = 0.97$ again. With the data of Cleveland et al. (1998), neutrino fluxes were measured in two solar activity maximum periods: in 1980 the result was $17.2 \%$ and in 1989 around $42.5 \%$, which compares to the reported average value of $47.8 \%$. Since the average absolute flux is $2.56 \ SNU$, this refers to an expected flux of $5.36 \ SNU$. These values lead to $S_C(max) \simeq 1.60 \ SNU$. Also, the Gallex collaboration did not report activity-related changes in their observed neutrino data, therefore $S_G(max) = 76 \ SNU$ can be used. Solving the neutrino flux equations for an assumed SSM-like solar core, the resulting temperatures are $T_C(max) \simeq 93 \%$ and $T_G(max) \simeq 0.922$. The obtained results, $T_K(max) \simeq 0.97$, $T_C(max) \simeq 0.93$, $T_G(max) \simeq 0.92$, are consistent with the assumption that at solar maximum the Gallex and the Homestake detect only the neutrinos from the SSM-like solar core, which has a temperature around $8 \%$ lower than in the SSM, or that at solar maximum the neutrinos produced by the hot bubbles contribute mostly to the SuperKamiokande data. These results fit well with the main point of the paper, namely that the hot bubbles produce muon and tau neutrinos, axions and anti-neutrinos, to which only the SuperKamiokande is sensitive. 3.) 
Around solar activity minimum. Using a value $R_K(min) = 0.474$ for the minimum of the solar activity, the Kamiokande temperature is $T_K(min) \simeq 0.97$. With the data presented in Cleveland et al. (1998), in the periods around solar minimum the Homestake measured $0.823 \ counts \ day^{-1}$ around 1977, $0.636 \ counts \ day^{-1}$ around 1987, and $0.634 \ counts \ day^{-1}$ around 1997. These values average to $0.698 \ counts \ day^{-1}$, suggesting an $S_C(min) \simeq 3.737 \ SNU$. With this $S_C(min)$ the neutrino flux equations lead to $T_C(min) \simeq 0.97$. Now the Gallex results marginally indicate larger than average counts around 1995-1997, as reported by the Gallex-IV measurements of $117 \pm 20 \ SNU$. This value leads to a $T_G(min) \simeq 0.99$, i.e. an anti-correlation with the solar cycle. For a temperature of $T_G(min) \simeq 0.97$ the $S_G(min)$ would be $\simeq 96 \ SNU$. The results obtained above suggest that around solar minimum all the neutrino detector data are consistent with a uniform temperature $T_K(min) = T_C(min) = T_G(min) = 0.97$. In this case the results would suggest that all the neutrino detectors observe only the SSM-like solar core and not the neutrinos arising from the hot bubbles of the thermonuclear runaways. In the dynamic solar model there is a quick and direct contact between the solar surface and the solar core: the transit time scale of the hot blobs from the solar core to the surface is estimated to be around one day (Grandpierre, 1996), therefore the absence of surface sunspots may indicate the simultaneous absence (or negligible role) of runaways in the core. Therefore, the result that no bubble neutrino flux is observed in any of the neutrino detectors at solar minimum is consistent with the fact that at solar minimum there are no (or very few) sunspots observed at the solar surface. 
Discussion and Conclusions ========================== The calculated solutions of the neutrino flux equations are consistent with the data of the neutrino detectors. I have shown that, by introducing the runaway energy source, it is possible to resolve the apparent contradiction between the different neutrino detectors even assuming standard neutrinos. Moreover, the results presented here suggest that the physical problems of the atmospheric neutrinos may be consistent with the solution of the solar neutrino problems even without introducing sterile neutrinos. Considering the hypothetical activity-related changes of the solar neutrino fluxes, I found that the twofold energy source of the Sun produces different contributions in the different neutrino detectors. Apparently, it is the SuperKamiokande that is the most sensitive to the runaway processes. The contributions of the runaway neutrinos and of the neutrinos of the standard-like quiet solar core run in anti-correlation with each other. Therefore, their effects may largely compensate each other in the SuperKamiokande data. Nevertheless, it is indicated that intermediate and high-energy neutrinos may produce a slight correlation with the solar activity in the SuperKamiokande data, since they correlate more closely with the runaway neutrino fluxes than with the neutrinos of the SSM-like solar core. They give $64 \%$ of the total counts observed in the SuperKamiokande, therefore the total flux may slightly correlate with the solar cycle. On the contrary, since the Homestake does not see the runaways, except the intermediate and high-energy electron neutrinos produced by the hot bubbles, its data may anti-correlate with the solar activity. 
Moreover, the dynamic solar model suggests that the GALLEX data may anti-correlate with the solar cycle as well, since the detector is more sensitive to the low-energy neutrinos arising from the proton-proton cycle, although it is also sensitive to the intermediate and high-energy electron neutrinos produced by the hot bubbles. Now, having obtained indications of possible correlations between the solar neutrino fluxes and activity parameters, we can take a short look at the data to see whether or not they show such changes in their finer details. Such a marginal change may be indicated in Figure 3 of Fukuda et al. (1996). In this figure, the maximum value is detected just in 1991, at solar maximum, consistently with the results obtained here. Moreover, its value as read from that figure seems to be $68 \%$ of the value expected from the SSM. In 1995, at solar minimum, the lowest value, $34 \%$, is detected, again consistently with the interpretation we reached. Later on, the SuperKamiokande started to work and measured a value of $2.44 \times 10^6 cm^{-2}s^{-1}$ for the boron neutrino flux. Assuming that the values in 1995 and 1996-1997 did not differ significantly, as this is a period of solar minimum, the two observations can be taken as equal, i.e. the $34 \%$ corresponds to $2.44 \times 10^6 cm^{-2}s^{-1}$. This method gives for the $68 \%$ value a boron flux of $4.88 \times 10^6 cm^{-2}s^{-1}$. Now Bahcall, Basu and Pinsonneault (1998) developed an improved standard solar model with a significantly lower $^7Be(p, \gamma)^8B$ cross section, predicting a boron flux of $5.15 \times 10^6 \ cm^{-2}s^{-1}$ instead of the previous $6.6 \times 10^6 cm^{-2}s^{-1}$ of Bahcall and Pinsonneault (1995). With this improved value the $4.88 \times 10^6 cm^{-2}s^{-1}$ leads to a $\Phi_K(min) \simeq 95 \% \ \Phi_K(SSM)$! This means that actually even the SuperKamiokande data may contain some as yet unnoticed correlation tendency with the solar cycle. 
These indications make the future neutrino detector data more interesting for a possible solar-cycle correlation analysis. The dynamic solar model has a definite suggestion that below 0.10 solar radius the standard solar model is to be replaced by a significantly cooler and possibly varying core. These predictions can be checked with future helioseismic observations, although helioseismology is at present not able to tell us the temperature in this deepest central region. On the other hand, the presence of the thermonuclear micro-instabilities causes a significant departure from thermal equilibrium and changes the Maxwell-Boltzmann distribution of the plasma particles. It has been shown that such a modification leads to an increase of the temperature of the solar core, which can compensate the non-standard cooling (Kaniadakis, Lavagno, Quarati, 1996), and so the simple dynamic solar model can easily be consistent with the helioseismic results as well. The indicated presence of a runaway energy source in the solar core - if it is confirmed - will have a huge significance for our understanding of the Sun, the stars, and the neutrinos. This subtle and compact phenomenon turns the Sun from a simple gaseous mass in hydrostatic balance into a complex and dynamic system far from thermodynamic equilibrium. This complex, dynamic Sun ceases to be a closed system, because its energy production is partly regulated by tiny outer influences like planetary tides. This subtle dynamics is possibly related to stellar activity and variability. Modifying the participation of the MSW effect in the solar neutrino problem, the dynamic energy source has a role in the physics of neutrino mass and oscillation. An achievement of the suggested dynamic solar model is that it may help to solve the physical and astrophysical neutrino problems without the introduction of sterile neutrinos, and, possibly, it may improve the bad fit of the MSW effect (Bahcall, Krastev, Smirnov, 1998). 
Acknowledgements ================ The work is supported by the Hungarian Scientific Research Foundation OTKA under No. T 014224. Audouze, J., Truran, J. and Zimmermann, B. A. 1973, Astrophys. J. 184, 493 Bahcall, J. N., Basu, S. and Pinsonneault, M. H. 1998, Phys. Lett. B433, 1 Bahcall, J. N., Krastev, P. I. and Smirnov, A. Yu. 1998, hep-ph/9807216 Bahcall, J. N. and Ulrich, R. 1988, Rev. Mod. Phys. 60, 97 Berezinsky, V., Fiorentini, G. and Lissia, M. 1996, Phys. Lett. B365, 185 Bludman, S. A., Hata, N., Kennedy, D. C., and Langacker, P. O. 1993, Phys. Rev. D49, 2220, hep-ph/9207213 Calabresu, E., Fiorentini, G., Lissia, M. and Ricci, B. 1995, hep-ph/9511286 Castellani, V., Degl’Innocenti, S., Fiorentini, G., Lissia, M. and Ricci, B. 1994, Phys. Lett. B324, 425 Cleveland, B. T., Daily, T., Davis, R., Distel, J. R., Lande, K., Lee, C. K., and Wildenhain, P. S. 1998, ApJ 496, 505 Corbard, T., Di Mauro, M. P., Sekii, T., and the GOLF team, 1998, preprint ESA SP-418, Obs. Astrophys. Catania preprint 16/1998 Dar, A. and Shaviv, G. 1998, astro-ph/9808098 Dearborn, D., Tinsley, B. M. and Schramm, D. N. 1978, Astrophys. J. 223, 557 Engel, J., Seckel, D. and Hayes, A. C. 1990, Phys. Rev. Lett. 65, 960 Fukuda, Y. et al., 1996, Phys. Rev. Lett. 77, 1683 Fukuda, Y. et al., The SuperKamiokande Collaboration, 1998, hep-ph/9807003 Grandpierre, A. 1977, University Doctoral Thesis, Polytechnic University, Budapest Grandpierre, A. 1984, in “Theoretical Problems in Stellar Stability and Oscillation”, eds. Noels, A. and Gabriel, M., 48 Grandpierre, A. 1990, Sol. Phys. 128, 3 Grandpierre, A. 1996, Astron. Astrophys. 308, 199 Grandpierre, A. 1998, subm. to Phys. Rev. D., astro-ph/9808349 Hata, N., Bludman, S., and Langacker, P. 1994, Phys. Rev. D49, 3622 Hata, N. and Langacker, P. 1997, Phys. Rev. D56, 6107, hep-ph/9705339 Haubold, H. J. 1997, Nucl. Phys. A621, 341c Haubold, H. J. 1998, astro-ph/9803136 Heeger, K. M. and Robertson, R. G. H. 1996, Phys. Rev. Lett. 77, 3720 Kayser, B. 
1998, The European Phys. J. C3, 1, http://pdg.lbl.gov/ Kirsten, T. A. 1998, Progress of Nuclear and Particle Physics, Vol. 40, to appear Raffelt, G. 1997, astro-ph/9707268 Shibahashi, H. and Takata, M. 1996, Publ. Astron. Soc. Japan 48, 377 Turck-Chieze, S. and Lopes, L. 1993, ApJ 408, 347 Zeldovich, Ya. B., Blinnikov, S. I., and Sakura, N. I. 1981, The physical basis of stellar structure and evolution, Izd. Moskovskovo Univ., Moskva, in Russian
--- abstract: 'A chaotic attractor is usually characterised by its multifractal spectrum which gives a geometric measure of its complexity. Here we present a characterisation using a minimal set of independent parameters which are uniquely determined by the underlying process that generates the attractor. The method maps the $f(\alpha)$ spectrum of a chaotic attractor onto that of a general two scale Cantor measure. We show that the mapping can be done for a large number of chaotic systems. In order to implement this procedure, we also propose a generalisation of the standard equations for the two scale Cantor set in one dimension to higher dimensions. Another interesting result we have obtained both theoretically and numerically is that the $f(\alpha)$ characterisation gives information only up to two scales, even when the underlying process generating the multifractal involves more than two scales.' author: - 'K. P. Harikrishnan' - 'R. Misra' - 'G. Ambika' - 'R. E. Amritkar' title: Parametric characterisation of a chaotic attractor using two scale Cantor measure --- \[sec:level1\]INTRODUCTION ========================== The existence of a multifractal measure for any system most often indicates an underlying process generating it, be it multiplicative or dynamic. In the context of chaotic attractors arising from dynamical systems, their multifractal measures result from a time ordered process, which may be an iterative scheme or a continuous flow [@eck]. The description of the invariant measures in terms of $D_q$ [@hen] or $f(\alpha)$ [@hal3], however, provides only a characterisation of their geometric complexity. Feigenbaum et al. [@fei; @feig] and Amritkar and Gupte [@gup] have shown that it is also possible to get the dynamical information in some specific cases by inverting the information contained in a multifractal measure using a thermodynamic formalism. 
In this paper, we seek a characterisation of a chaotic attractor in terms of the underlying process that generates it. It appears that the process of generation of a multifractal chaotic attractor is similar to that of a typical Cantor set (where the measure reduces after each step), with the *dissipation* in the system playing a major role. We show this specifically below using the example of the Cat map, which is area preserving. But a key difference is that, for chaotic attractors, the nature of this reduction is governed by the dynamics of the system. This implies that if the $D_q$ and $f(\alpha)$ curves of a chaotic attractor are mapped onto those of a model multiplicative process, one can derive information about the underlying process that generates the strange attractor, provided the mapping is correct. Here we try to implement this idea using an algorithmic scheme and show that this gives a set of parameters that can be used to characterise a given attractor. A similar idea to extract the underlying multiplicative process from a multifractal has been applied earlier by Chhabra et al. [@chh1]. In order to make this inversion process successful, one needs to take into account two aspects, namely, the type of process [@chh1] (whether L process, P process or LP process) and the number of scales involved (whether two scale or multi scale). Chhabra et al. [@chh1] have shown that different multiplicative processes with only three independent parameters produce good fits to many of the observed $D_q$ curves. Thus the extraction of the underlying multiplicative process, based solely on the information of the $D_q$ curve, is nonunique and additional thermodynamic information is needed for the inversion process. But the problem that we address here is slightly different. In our case, the model multiplicative process is fixed as a general two scale Cantor set, which is the simplest nontrivial process giving rise to a multifractal measure. 
We then scan the whole set of parameters possible for this process (which includes the L process, P process and LP process) and choose the statistically best fit $D_q$ curve to the $D_q$ spectrum computed for the attractor from the time series, which is then used to compute the final $f(\alpha)$ spectrum. In this way, the $f(\alpha)$ spectrum of a chaotic attractor gets mapped onto that of a general two scale Cantor set. We show that the mapping can be done for a large number of standard chaotic attractors. The resulting parameters can be considered to be unique to the underlying process that generates the attractor, up to an ambiguity regarding the number of scales involved. The success of this procedure also implies that the $D_q$ and $f(\alpha)$ spectrum of a multiplicative process involving more than two scales can also be mapped onto that of a two scale Cantor set. We prove this theoretically as well as numerically in Sec.IV, by taking Cantor sets with more than two scales. This, in turn, suggests that though the $f(\alpha)$ spectrum has contributions from all the scales involved in the generation of a multifractal, the information contained in an $f(\alpha)$ spectrum is limited only to two scales. In other words, given an $f(\alpha)$ spectrum, one can retrieve only the equivalent two scales, which are different from the actual scales. Thus, while Chhabra et al. [@chh1] argue that additional information is needed to extract the underlying multiplicative process, our result indicates that the $f(\alpha)$ formalism itself is unable to extract more than two scales. The motivation for using a Cantor set to characterise the multifractal structure of a chaotic attractor comes from the fact that some well known chaotic attractors are believed to have an underlying Cantor set structure. For example, it has been shown [@spa] that in the $x-y$ plane corresponding to $z = (r-1)$ of the Lorenz attractor, a transverse cut gives a multifractal with Cantor set structure. 
Even the chaotic attractor resulting from the experimental Rayleigh-Bénard convection has a support whose transverse structure is a Cantor set [@jen]. These Cantor sets are known to be characteristic of the underlying dynamics that generate the attractor. ![image](fig1.eps){width="0.9\columnwidth"} A more general argument to support the above statement uses the concept of Kolmogorov entropy. The Kolmogorov entropy can be obtained by successively finer partitions of the attractor in a hierarchical fashion. Going from one partition to the next gives one set of scales, as shown in [@gup]. These can be treated as scales of higher dimensional Cantor sets. In general, there can be several scales. But the $f(\alpha)$ curve appears to be determined by only two scales. In order to implement our idea, the first step is to compute the $D_q$ spectrum of the chaotic attractor from its time series. This is done by the standard delay embedding technique [@gra2], but by extending the nonsubjective scheme recently proposed by us [@kph] for computing $D_2$. The $D_q$ spectrum is then fitted by a smooth $D_q$ curve obtained from the inverse Legendre transformation equations [@atm; @gra1] of the $f(\alpha)$ curve for a general two scale Cantor set. The statistically best fit curve is chosen by varying the parameters of the fit, from which the $f(\alpha)$ curve for the time series is evaluated along with a set of independent parameters characteristic of the Cantor set. This procedure also gives a couple of other interesting results. For example, we are able to propose a generalisation of the standard equations of the two scale Cantor set to higher dimensions. Moreover, we explicitly derive the equations for the $D_q$ and $f(\alpha)$ spectrum of a three scale Cantor set and show that they can be exactly mapped onto those of a two scale Cantor set. 
Our paper is organised as follows: The details of our computational scheme are presented in Sec.II, and it is tested using time series from the logistic map and different Cantor sets with known parameters in Sec.III. In Sec.IV, the $f(\alpha)$ spectrum of Cantor sets with more than two scales is considered both theoretically and numerically. Sec.V is concerned with the application of the scheme to standard chaotic attractors in higher dimensions. The conclusions are drawn in Sec.VI. \[sec:level1\]NUMERICAL SCHEME ============================== As the first step, the spectrum of generalised dimensions $D_q$ is computed from the time series using the delay embedding technique [@gra2]. For a given embedding dimension $M$, the $D_q$ spectrum is given by the standard equation $$\label{e.1} D_{q} \equiv \frac {1}{q-1} \; \lim_{R \rightarrow 0} \frac{\log \; C_q (R)}{\log \; R}$$ where $C_{q} (R)$ represents the generalised correlation sum. In practice, $D_q$ is computed by taking the slope of $ \log C_{q} (R)$ versus $\log R$ over a scaling region. In our scheme, the scaling region is computed algorithmically [@kph] for each $D_q$ using conditions for $R_{min}$ and $R_{max}$, and the spectrum of $D_q$ for $q$ in the range $[-20,20]$ is evaluated with an error bar. Assuming that the corresponding $f(\alpha)$ curve is a smooth convex function, we seek to represent it using the standard equations [@hal3; @amr] of $\alpha$ and $f(\alpha)$ for the general two scale Cantor set $$\label{e.2} \alpha = {{r \log p_1 + (1-r) \log p_2} \over {r \log l_1 + (1-r) \log l_2}}$$ $$\label{e.3} f = {{r \log r + (1-r) \log (1-r)} \over {r \log l_1 + (1-r) \log l_2}}$$ where $l_1$ and $l_2$ are the rescaling parameters and $p_1$ and $p_2$ are the probability measures with $p_2 = (1-p_1)$. Thus there are three independent parameters which are characteristic of the multiplicative process generating a given $f(\alpha)$ curve. 
Here $r$ is a variable in the range $[0,1]$, with $r \rightarrow 0$ corresponding to one extreme of scaling and $r \rightarrow 1$ corresponding to the other extreme. Taking ${{\log p_2} /{\log l_2}} > {{\log p_1} /{\log l_1}}$, as $r \rightarrow 0$ we get $$\label{e.4} \alpha \rightarrow \alpha_{max} \equiv {{\log p_2} \over {\log l_2}}$$ and as $r \rightarrow 1$ $$\label{e.5} \alpha \rightarrow \alpha_{min} \equiv {{\log p_1} \over {\log l_1}}$$ By inverting Eqs. (\[e.2\]) and  (\[e.3\]) and using the standard Legendre transformation equations [@atm; @gra1] connecting $\alpha$ and $f(\alpha)$ with $q$ and $D_q$, we get $$\label{e.6} q = {d \over {d\alpha}}f(\alpha)$$ $$\label{e.7} D_q = {{{\alpha q} - {f(\alpha)}} \over {(q-1)}}$$ Changing the variable to $\eta = 1/r$, Eqs. (\[e.2\]) and  (\[e.3\]) reduce to $$\label{e.8} \alpha = {{\log p_1 + (\eta -1) \log p_2} \over {\log l_1 + (\eta -1) \log l_2}}$$ $$\label{e.9} f = {{(\eta - 1) \log (\eta - 1) - {\eta \log \eta}} \over {\log l_1 + (\eta -1) \log l_2}}$$ Differentiating  (\[e.8\]) and  (\[e.9\]) with respect to $\eta$ and combining, we get $$\label{e.10} {{df} \over {d\alpha}} = {{(\log l_1 (\log (\eta -1) - \log \eta) + \log l_2 \log \eta)} \over {(\log l_1 \log p_2 - \log l_2 \log p_1)}}$$ Using Eq. (\[e.6\]) and changing back to the variable $r$, $$\label{e.11} q = {{df} \over {d\alpha}} = {{\log l_1 \log (1-r) - \log l_2 \log r} \over {\log l_1 \log (1-p_1) - \log l_2 \log p_1}}$$ Eqs. (\[e.11\]) and  (\[e.7\]) give both $q$ and $D_q$ as functions of the three independent parameters $l_1, l_2$ and $p_1$. For a given set of parameters, the $D_q$ curve is determined by varying $r$ in the range $[0,1]$ and fitted to the computed $D_q$ values from the time series. The procedure is repeated by changing the values of $p_1$ in the range $[0,1]$ and, for each $p_1$, scanning the values of $l_1$ and $l_2$ with the condition that both $l_1, l_2 < 1$. 
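This parametric construction of the model $D_q$ curve can be sketched as follows (a hedged illustration; the function name and grid size are our own choices):

```python
import numpy as np

def two_scale_Dq(p1, l1, l2, num=2001):
    """Model D_q curve of a two-scale Cantor set from Eqs. (2), (3),
    (7) and (11): q(r) and D_q(r) for r in (0, 1).  q increases
    monotonically with r, and near q = 1 the ratio in Eq. (7) tends
    smoothly to the information dimension D_1."""
    p2 = 1.0 - p1
    r = np.linspace(1e-9, 1.0 - 1e-9, num)
    denom = r * np.log(l1) + (1.0 - r) * np.log(l2)
    alpha = (r * np.log(p1) + (1.0 - r) * np.log(p2)) / denom
    f = (r * np.log(r) + (1.0 - r) * np.log(1.0 - r)) / denom
    q = (np.log(l1) * np.log(1.0 - r) - np.log(l2) * np.log(r)) / \
        (np.log(l1) * np.log(1.0 - p1) - np.log(l2) * np.log(p1))
    return q, (alpha * q - f) / (q - 1.0)

q, Dq = two_scale_Dq(0.6, 0.25, 0.4)
```

A quick consistency check: at $q = 0$ the value $D_0$ must satisfy $l_1^{D_0} + l_2^{D_0} = 1$, the standard implicit equation for the fractal dimension.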
A statistical $\chi^2$ fitting is undertaken and the best fit curve, given by the $\chi^2$ minimum, is chosen. The complete $f(\alpha)$ curve is derived from it along with the complete set of parameters $p_1, l_1, l_2, \alpha_{min}$ and $\alpha_{max}$ for a particular time series. ![image](fig2.eps){width="0.9\columnwidth"} ![image](fig3.eps){width="0.9\columnwidth"} \[sec:level1\]TESTING THE SCHEME ================================ In order to illustrate our scheme, we first apply it to standard multifractals where the $f(\alpha)$ curve and the associated parameters are known exactly. In all the examples discussed in this paper, $30000$ data points are used for the analysis. The first one is the time series from the logistic map at the period doubling accumulation point. The $D_q$ spectrum is first computed using Eq. (\[e.1\]) (with $M=1$) for $q$ values in the range $[-20,+20]$. The computation is done taking a step width of $\Delta q = 0.1$. Choosing $p_1 = 0.5$, $\alpha_{min} = D_{20}$ and $\alpha_{max} = D_{-20}$ as input parameters, the $D_q$ curve is computed from the above set of equations and fitted to the computed $D_q$ values. The procedure is repeated by scanning $p_1$ in the range $[0,1]$ in steps of $0.01$. For each $p_1$, $\alpha_{min}$ and $\alpha_{max}$ (which in turn determine $l_1$ and $l_2$) are also varied independently over a small range. The best fit $D_q$ curve is chosen as indicated by the $\chi^2$ minimum. Since the error in $D_q$ generally increases as $q \rightarrow -20$, the error bars are also taken into account in the fitting process. The $D_q$ values computed from the time series and the best fit curve are shown in Fig. \[f.1\]. The complete $f(\alpha)$ spectrum for the time series is computed from the best fit $D_q$ curve. To make a comparison, the spectrum is also determined from Eqs. 
(\[e.2\]) and  (\[e.3\]) using the known values of $p_1, l_1$ and $l_2$ for the logistic map, namely, $p_1 = 0.5$, $l_1 = 0.158(1/{\alpha_{F}^2})$ and $l_2 = 0.404(1/{\alpha_{F}})$, where $\alpha_{F}$ is Feigenbaum's universal number. Both curves are also shown in Fig. \[f.1\]. The three parameters derived using our scheme are $p_1 = 0.5$, $l_1 = 0.146$ and $l_2 = 0.416$, which are reasonably accurate considering the finiteness of the data set.

| Cantor set No. | Parameters used | Parameters computed |
|---|---|---|
| Cantor set 1 | $p_1 = 0.60$, $l_1 = 0.22$, $l_2 = 0.48$ | $p_1 = 0.58$, $l_1 = 0.21$, $l_2 = 0.49$ |
| Cantor set 2 | $p_1 = 0.42$, $l_1 = 0.22$, $l_2 = 0.67$ | $p_1 = 0.45$, $l_1 = 0.24$, $l_2 = 0.67$ |
| Cantor set 3 | $p_1 = 0.66$, $l_1 = 0.18$, $l_2 = 0.62$ | $p_1 = 0.69$, $l_1 = 0.19$, $l_2 = 0.64$ |
| Cantor set 4 | $p_1 = 0.72$, $l_1 = 0.44$, $l_2 = 0.48$ | $p_1 = 0.66$, $l_1 = 0.39$, $l_2 = 0.52$ |
| 3-scale Cantor set | $p_1 = 0.25$, $p_2 = 0.35$, $p_3 = 0.4$; $l_1 = 0.12$, $l_2 = 0.35$, $l_3 = 0.18$ | $p_1 = 0.50$, $l_1 = 0.26$, $l_2 = 0.52$ |
| 4-scale Cantor set | $p_1 = 0.34$, $p_2 = 0.38$, $p_3 = 0.16$, $p_4 = 0.12$; $l_1 = 0.12$, $l_2 = 0.25$, $l_3 = 0.18$, $l_4 = 0.08$ | $p_1 = 0.58$, $l_1 = 0.30$, $l_2 = 0.57$ |

As the second example, we generate time series from four Cantor sets using four different sets of parameters, as given in Table \[t.1\]. Fig. \[f.2\] shows the computed $D_q$ values along with the best fit curves in all four cases. Note that the fit is extremely accurate over the whole range of $q$ in all cases. The corresponding $f(\alpha)$ curves, both theoretical and computed from the scheme, are shown in Fig. \[f.3\]. The parameter values derived from our scheme in the four cases are also given in Table \[t.1\] for comparison. It is clear that the scheme recovers the complete $f(\alpha)$ spectrum and the parameters reasonably well. 
In order to convince ourselves that the scheme does not produce any spurious effects, we have also applied it to a time series of pure white noise. The $D_q$ versus $q$ curve for white noise should be a straight line parallel to the $q$ axis with $D_0 = M$. The corresponding $f(\alpha)$ spectrum would be a $\delta$ function, which we have verified numerically. ![image](fig4.eps){width="0.9\columnwidth"} From the numerical computations for two scale Cantor sets, we also find the following results: While the end points of the spectrum, $\alpha_{min}$ and $\alpha_{max}$, are determined by the ratios $\log {p_1}/\log {l_1}$ and $\log {p_2}/\log {l_2}$, the peak value $D_0$ is determined only by the rescaling parameters $l_1$ and $l_2$. As $(l_1 + l_2)$ increases (that is, as the gap length decreases), $D_0$ also increases, and $D_0 \rightarrow 1$ as $(l_1 + l_2) \rightarrow 1$. In this sense, the gap length also influences the $f(\alpha)$ spectrum indirectly. We will show below that this is not true in the case of the three scale Cantor set, where we miss some information regarding the scales. We also find that as the difference between $\alpha_{min}$ and $\alpha_{max}$ increases (that is, as the spectrum widens), more data points are generally required to get good agreement between the theoretical and numerical $f(\alpha)$ curves. ![image](fig5.eps){width="0.9\columnwidth"} ![image](fig6.eps){width="0.9\columnwidth"} \[sec:level1\]MULTI SCALE CANTOR SETS ===================================== In this section, we consider the $f(\alpha)$ spectrum of a Cantor set with more than two scales. First we show the numerical results using our scheme. For this, we first generate the time series for a general 3 scale Cantor set and compute its $D_q$ spectrum. The geometrical construction of a general 3 scale Cantor set is shown in Fig. \[f.4\]. 
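Time series of this kind can be generated by sampling the multiplicative measure directly, descending the construction at random. A sketch for an arbitrary number of scales (the equal-gap placement of the pieces and all names are our own illustrative choices):

```python
import numpy as np

def cantor_series(p, l, n_points=30000, depth=40, seed=7):
    """Sample points of a multi-scale Cantor measure on [0, 1].  At each
    level an interval splits into len(l) pieces with length ratios l[i]
    and weights p[i], separated by equal gaps; a point is obtained by
    choosing one branch per level with probability p[i]."""
    p, l = np.asarray(p, float), np.asarray(l, float)
    gap = (1.0 - l.sum()) / (len(l) - 1)
    offs = np.concatenate(([0.0], np.cumsum(l[:-1] + gap)))  # piece origins
    rng = np.random.default_rng(seed)
    x = np.zeros(n_points)
    scale = np.ones(n_points)
    for _ in range(depth):
        branch = rng.choice(len(l), size=n_points, p=p)
        x += scale * offs[branch]
        scale *= l[branch]
    return x

# 3-scale example with the parameters of Table [t.1]
series = cantor_series([0.25, 0.35, 0.4], [0.12, 0.35, 0.18])
```

With these parameters the first-level pieces occupy $[0, 0.12]$, $[0.295, 0.645]$ and $[0.82, 1]$, and the fraction of samples landing in each piece approaches $p_i$.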
At every stage, an interval gets subdivided into three, so that the set involves 3 rescaling parameters $l_1$, $l_2$, $l_3$ and 3 probability measures $p_1$, $p_2$, $p_3$ as shown. The numerically computed $D_q$ spectrum for a typical 3 scale Cantor set (with parameters given in Table \[t.1\]) is shown in Fig. \[f.5\] (upper left panel). The $D_q$ curve can be very well fitted by a 2 scale Cantor set, and the complete $f(\alpha)$ spectrum for the 3 scale Cantor set is evaluated (lower left panel). We have repeated our computations for a 4 scale Cantor set as well, and the results are also shown in Fig. \[f.5\] (right panel). In both cases, the parameters used for the construction of the Cantor sets and those computed by our scheme are given in Table \[t.1\]. Thus it is clear that the $f(\alpha)$ spectrum cannot pick up the full information about the various scales and probability measures. No matter how many scales are involved in the generation of the multifractal, the $f(\alpha)$ spectrum can be reproduced by an equivalent 2 scale Cantor set. We now derive explicit expressions for $\alpha$ and $f(\alpha)$ for a 3 scale Cantor set. We follow the arguments given in Halsey et al. [@hal3], Sec. II-C-4. For the 3 scale Cantor set, one can write $$\label{e.12} \Gamma (q,\tau,n) = \left({{p_{1}^q}\over {l_{1}^{\tau}}} + {{p_{2}^q}\over {l_{2}^{\tau}}} + {{p_{3}^q}\over {l_{3}^{\tau}}}\right)^n = 1$$ Expanding, $$\label{e.13} \Gamma (q,\tau,n) = \sum_{m_1,m_2} {{n!}\over {m_1!m_2!(n-m_1-m_2)!}}p_1^{m_1q}p_2^{m_2q}p_3^{(n-m_1-m_2)q}l_1^{-m_1\tau}l_2^{-m_2\tau}l_3^{-(n-m_1-m_2)\tau} = 1$$ In the limit $n \rightarrow \infty$, the largest term contributes. 
Hence we have $$\label{e.14} {{\partial\Gamma}\over {\partial m_1}} = 0$$ $$\label{e.15} {{\partial\Gamma}\over {\partial m_2}} = 0$$ Using the Stirling approximation and simplifying the above two conditions, we get $$\label{e.16} -\log r + \log (1-r-s) + q\log (p_1/p_3) - \tau \log (l_1/l_3) = 0$$ $$\label{e.17} -\log s + \log (1-r-s) + q\log (p_2/p_3) - \tau \log (l_2/l_3) = 0$$ where $r = m_1/n$ and $s = m_2/n$ are free parameters. Also from Eq. (\[e.13\]), using a similar procedure, one can show that $$\label{e.18} -r\log r - s\log s - (1-r-s)\log (1-r-s) + q(r \log p_1 + s\log p_2 + (1-r-s)\log p_3) - \tau (r\log l_1 + s\log l_2 + (1-r-s)\log l_3) = 0$$ Combining Eqs. (\[e.16\]), (\[e.17\]) and (\[e.18\]) and eliminating $\tau$, we get the following relations for $q$: $$\label{e.19} q = {{\log (l_2/l_3)\log ((1-r-s)/r) - \log (l_1/l_3)\log ((1-r-s)/s)}\over {\log (l_1/l_3)\log (p_2/p_3) - \log (l_2/l_3)\log (p_1/p_3)}}$$ $$\label{e.20} q = {{\log (l_1/l_3)(-r\log r - s\log s - (1-r-s)\log (1-r-s)) - (r\log l_1 + s\log l_2 + (1-r-s)\log l_3)\log ((1-r-s)/r)}\over {(r\log l_1 + s\log l_2 + (1-r-s)\log l_3)\log (p_1/p_3) - \log (l_1/l_3)(r\log p_1 + s\log p_2 + (1-r-s)\log p_3)}}$$ These two equations for $q$ can be used to obtain a relation between $r$ and $s$. To compute the $D_q$ spectrum, we vary $r$ from 0 to 1. For every value of $r$, the value of $s$ that satisfies Eqs. (\[e.19\]) and (\[e.20\]) simultaneously is computed numerically, with the condition that $0 < s < (1-r)$. For every value of $r$ and $s$, $q$ and $\tau$ can be determined using Eqs. (\[e.19\]) and (\[e.16\]), which in turn gives $D_q = \tau /(q-1)$. 
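As an independent numerical route, $\tau(q)$, and hence $D_q = \tau/(q-1)$, can be obtained for any number of scales directly from the condition $\Gamma = 1$ of Eq. (\[e.12\]) by bisection, avoiding the simultaneous solution of Eqs. (\[e.19\]) and (\[e.20\]). A sketch (names and the $q$ grid are our own choices):

```python
import numpy as np

def tau_of_q(q, p, l, lo=-50.0, hi=50.0):
    """Solve sum_i p_i^q l_i^(-tau) = 1 for tau (cf. Eq. (12)).
    Since all l_i < 1, the sum increases with tau, so bisection applies."""
    p, l = np.asarray(p, float), np.asarray(l, float)
    a, b = lo, hi
    for _ in range(100):
        m = 0.5 * (a + b)
        if np.sum(p**q * l**(-m)) > 1.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def multi_scale_Dq(p, l, qs):
    """D_q = tau(q)/(q-1); the grid qs must avoid q = 1."""
    return np.array([tau_of_q(q, p, l) / (q - 1.0) for q in qs])

# 3-scale parameters of Table [t.1]
qs = np.array([-10.0, -5.0, -2.0, 0.0, 0.5, 2.0, 5.0, 10.0])
Dq = multi_scale_Dq([0.25, 0.35, 0.4], [0.12, 0.35, 0.18], qs)
```

At $q = 0$ this reduces to the implicit equation $\sum_i l_i^{D_0} = 1$ for the fractal dimension.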
The singularity exponent $\alpha$ is determined by the condition $$\label{e.21} p_1^{m_1}p_2^{m_2}p_3^{(n-m_1-m_2)} = \left(l_1^{m_1}l_2^{m_2}l_3^{(n-m_1-m_2)}\right)^{\alpha}$$ This gives the expression for $\alpha$ $$\label{e.22} \alpha = {{r\log p_1 + s\log p_2 + (1-r-s)\log p_3}\over {r\log l_1 + s\log l_2 + (1-r-s)\log l_3}}$$ Similarly, the density exponent $f(\alpha)$ is determined by $$\label{e.23} n!\left(l_1^{m_1}l_2^{m_2}l_3^{(n-m_1-m_2)}\right)^{f(\alpha)} = m_1!m_2!(n-m_1-m_2)!$$ which gives the following expression for $f(\alpha)$ $$\label{e.24} f(\alpha) = {{r\log r + s\log s + (1-r-s)\log (1-r-s)}\over {r\log l_1 + s\log l_2 + (1-r-s)\log l_3}}$$ By varying $r$ from 0 to 1, the $f(\alpha)$ spectrum for a given 3 scale Cantor set can be determined theoretically. In Fig. \[f.6\], the theoretically computed $D_q$ and $f(\alpha)$ spectra for a typical 3 scale Cantor set are shown. Along with the theoretical $f(\alpha)$ curve, we also show the numerical one (points) for the same Cantor set, computed using our scheme. Thus it is evident that the $f(\alpha)$ spectrum of a 3 scale Cantor set can be mapped onto that of a 2 scale Cantor set. Also, our numerical results on the 4 scale Cantor set (Fig. \[f.5\]) suggest that this mapping onto a 2 scale Cantor set can possibly be extended to the $f(\alpha)$ spectrum of Cantor sets with four or more scales. ![image](fig7.eps){width="0.9\columnwidth"} ![image](fig8.eps){width="0.9\columnwidth"} ![image](fig9.eps){width="0.9\columnwidth"} \[sec:level1\]CHARACTERISATION OF STRANGE ATTRACTORS ==================================================== Evaluating the $f(\alpha)$ spectrum of one dimensional sets is straightforward. But computing the spectra of even synthetic higher dimensional attractors is a challenging task. 
Generally, the $f(\alpha)$ spectrum for higher dimensional chaotic attractors is calculated taking only one dimension [@elg; @chh3; @wik], which characterises the transverse self-similar structure on the attractor, equivalent to a Cantor set. In the resulting $f(\alpha)$ spectrum, the peak value (that is, $D_0$) will be equal to $1$, as the higher dimensional attractor is projected onto one dimension. This is shown in Fig. \[f.7\] for the Henon and Lorenz attractors, and the results are consistent with earlier results. In order to extend our scheme to higher dimensional strange attractors, their $f(\alpha)$ spectra are to be considered analogous to a two scale Cantor measure in higher dimensions. While the $f(\alpha)$ curve can be recovered using the correct embedding dimension $M$, the meaning of the parameters has to be interpreted properly. For a one dimensional Cantor set, $p_1$ is a probability measure while $l_1$ and $l_2$ are fractional lengths at each stage. Extending this analogy to two and three dimensions, $p_1$ can still be interpreted as a probability measure for the two higher dimensional scales, say $\tau_1$ and $\tau_2$. These can be considered as fractional measures corresponding to area or volume, depending on the embedding dimension $M$. In other words, $p_1$ is a measure representing the underlying dynamics, while $\tau_1$ and $\tau_2$ correspond to geometric scaling. This gives an alternate description of the formation of a strange attractor if it is correlated to a higher dimensional analogue of the Cantor set. As discussed in Sec. II, for the one dimensional Cantor set, $\alpha_{min}$ and $\alpha_{max}$ are given by Eqs. (\[e.4\]) and  (\[e.5\]) in terms of the parameters. For $p_1 = p_2$ and $l_1 = l_2$, $\alpha_{min} = \alpha_{max} \leq 1$ and the set becomes a simple fractal with $\alpha \equiv f(\alpha) = D_0$, the fractal dimension. Extending this analogy to higher dimensions, we propose that Eqs. 
(\[e.4\]) and  (\[e.5\]) are to be modified as $$\label{e.25} \alpha_{max} = M {{\log p_2} \over {\log \tau_2}}$$ and $$\label{e.26} \alpha_{min} = M {{\log p_1} \over {\log \tau_1}}$$ As in the case of one dimensional Cantor sets, for $p_1 = p_2$ and $\tau_1 = \tau_2$, $\alpha_{max} = \alpha_{min} \leq M$ and the set is again a simple fractal with fractal dimension $D_0 = \alpha \equiv f(\alpha)$. Rewriting the above equations, $$\label{e.27} \alpha_{max} = {{\log p_2} \over {\log ({\tau_{2}}^{1/M})}} = {{\log p_2} \over {\log l_2}}$$ and $$\label{e.28} \alpha_{min} = {{\log p_1} \over {\log ({\tau_{1}}^{1/M})}} = {{\log p_1} \over {\log l_1}}$$ Replacing $l_1$ and $l_2$ by $\tau_{1}^{1/M}$ and $\tau_{2}^{1/M}$ in Eqs. (\[e.2\]) and  (\[e.3\]), the defining equations for the two scale Cantor set in $M$ dimensions can be generalised as $$\label{e.29} \alpha = {M[{r \log p_1 + (1-r) \log p_2}] \over {r \log {\tau_1} + (1-r) \log {\tau_2}}}$$ $$\label{e.30} f = {M[{r \log r + (1-r) \log (1-r)}] \over {r \log {\tau_1} + (1-r) \log {\tau_2}}}$$ Just as $l_1 + l_2 < 1$ for the one dimensional Cantor set, we expect $\tau_1 + \tau_2 < 1$ in $M$ dimensions. This is because the measure keeps reducing after each time step due to dissipation, and $\tau_1$ and $\tau_2$ represent the fractional reduction in the measure for the two scales. It should be noted that since, in general, different scales apply in different directions, $\tau_1$ and $\tau_2$ should be treated as some effective scales in higher dimensions. 
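The reduction noted above can be checked numerically: substituting $\tau_i = l_i^M$ into Eqs. (\[e.29\]) and (\[e.30\]) must reproduce the one dimensional Eqs. (\[e.2\]) and (\[e.3\]). A short sketch (the function name is ours):

```python
import numpy as np

def f_alpha_M(p1, t1, t2, M, num=999):
    """alpha(r) and f(r) for a two-scale Cantor measure in M dimensions,
    Eqs. (29)-(30), with rescaling measures t1, t2 and p2 = 1 - p1."""
    r = np.linspace(1e-6, 1.0 - 1e-6, num)
    denom = r * np.log(t1) + (1.0 - r) * np.log(t2)
    alpha = M * (r * np.log(p1) + (1.0 - r) * np.log(1.0 - p1)) / denom
    f = M * (r * np.log(r) + (1.0 - r) * np.log(1.0 - r)) / denom
    return alpha, f

# Setting tau_i = l_i^M (M = 3 here) recovers the M = 1 form of Eqs. (2)-(3)
a3, f3 = f_alpha_M(0.6, 0.25**3, 0.40**3, 3)
a1, f1 = f_alpha_M(0.6, 0.25, 0.40, 1)
```

The factor $M$ cancels against $\log \tau_i = M \log l_i$ in the denominator, so the two curves coincide identically.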
| Attractor | $\alpha_{min}$ | $\alpha_{max}$ | $D_0$ | $p_1$ | $\tau_1$ | $\tau_2$ |
|---|---|---|---|---|---|---|
| Rossler attractor ($a=b=0.2$, $c=7.8$) | $1.46 \pm 0.02$ | $3.39 \pm 0.14$ | $2.31 \pm 0.02$ | $0.65$ | $0.42$ | $0.41$ |
| Lorenz attractor ($\sigma=10$, $r=28$, $b=8/3$) | $1.38 \pm 0.03$ | $3.71 \pm 0.12$ | $2.16 \pm 0.04$ | $0.50$ | $0.22$ | $0.57$ |
| Ueda attractor ($k=0.05$, $A=7.5$) | $1.73 \pm 0.05$ | $3.78 \pm 0.13$ | $2.62 \pm 0.06$ | $0.64$ | $0.46$ | $0.44$ |
| Duffing attractor ($b=0.25$, $A=0.4$, $\Omega=1$) | $1.84 \pm 0.04$ | $3.59 \pm 0.08$ | $2.78 \pm 0.04$ | $0.81$ | $0.71$ | $0.25$ |
| Henon attractor ($a=1.4$, $b=0.3$) | $0.96 \pm 0.02$ | $2.27 \pm 0.08$ | $1.43 \pm 0.03$ | $0.50$ | $0.24$ | $0.54$ |
| Tinkerbell attractor ($a=0.9$, $b=-0.6$, $c=2$, $d=0.5$) | $0.83 \pm 0.02$ | $3.43 \pm 0.12$ | $1.65 \pm 0.03$ | $0.60$ | $0.29$ | $0.58$ |

![image](fig10.eps){width="0.9\columnwidth"} ![image](fig11.eps){width="0.9\columnwidth"} We now check these results using the time series from a standard chaotic attractor, namely the Rossler attractor for parameter values $a = b = 0.2$ and $c = 7.8$, with $30000$ data points. Fig. \[f.8\] shows the $D_q$ spectrum computed from the time series taking $M = 3$, along with the best fit curve obtained by applying our scheme. The fit is found to be very good for the whole range of $q$ values. The complete $f(\alpha)$ spectrum computed from the best fit $D_q$ curve is shown in Fig. \[f.9\]. The scheme also gives the three parameters as $p_1 = 0.65$, $\tau_1 = 0.42$ and $\tau_2 = 0.41$, so that $\tau_1 + \tau_2 = 0.83 < 1$. Thus one can say that if the $f(\alpha)$ spectrum of the Rossler attractor is made equivalent to a two scale Cantor set in three dimensions, the resulting probability measures are $0.65$ and $0.35$, and the rescaling parameters are $0.42$ and $0.41$. Interestingly, it appears that the Rossler attractor is generated by a P process rather than an LP process. 
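The fitted parameters can be checked for consistency by inserting them back into Eqs. (\[e.25\]) and (\[e.26\]) and comparing with the spectrum ends obtained directly from the $D_q$ values in Table \[t.2\]; a sketch with the Rossler numbers quoted above (agreement is expected only within the quoted uncertainties):

```python
import numpy as np

# Fitted Rossler parameters (M = 3), from the text and Table II
p1, tau1, tau2, M = 0.65, 0.42, 0.41, 3

alpha_min = M * np.log(p1) / np.log(tau1)        # Eq. (26): ~1.49
alpha_max = M * np.log(1.0 - p1) / np.log(tau2)  # Eq. (25): ~3.53
# To be compared with alpha_min = 1.46 +/- 0.02 and
# alpha_max = 3.39 +/- 0.14 computed directly from the D_q spectrum
```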
The scheme has also been applied to several standard chaotic attractors in two and three dimensions. The $f(\alpha)$ spectra are shown in Fig. \[f.10\] for four of them, while the complete set of parameters for six standard chaotic attractors is given in Table \[t.2\]. The error bars given for $\alpha_{max}, \alpha_{min}$ and $D_0$ are those reflected from the computed $D_q$ values. In a way, the two sets of parameters given above, that is $p_1, \tau_1, \tau_2$ and $\alpha_{max}, \alpha_{min}, D_0$, can be considered as complementary to each other. While the former contains the fingerprints of the underlying process that generates the strange attractor (the extent of stretching and folding and the redistribution of measures at each time step), the latter characterises the geometric complexity of the attractor once it is formed. Both can be independently used to differentiate between chaotic attractors formed from different systems, or from the same system for different parameter values. The former may be more relevant in the case of chaotic attractors obtained from experimental systems. Finally, we wish to point out that dissipation is a key factor leading to the multifractal nature of a chaotic attractor. To show this, we consider a counter example, namely, the Cat map, which is area preserving. The fixed points of the Cat map are *hyperbolic*, which are neither attractors nor repellers, and the trajectories uniformly fill the phase space as time $t \rightarrow \infty$. Its $D_q$ spectrum computed from the time series is found to be a straight line as shown in Fig. \[f.11\], just like that of white noise. The corresponding $f(\alpha)$ curve is a $\delta$ function, also shown in Fig. \[f.11\]. Since $\alpha_{min} = \alpha_{max}$, a two scale fit gives the parameters as $p_1 = 0.5$, $\tau_1 = 0.49$ and $\tau_2 = 0.51$. Thus the invariant set of the Cat map turns out to be a simple fractal rather than a multifractal. 
\[sec:level1\]DISCUSSION AND CONCLUSION ======================================= In this paper, we show that a chaotic attractor can be characterised using a set of three independent parameters which are specific to the underlying process generating it. The method relies on a scheme that maps the $f(\alpha)$ spectrum of a chaotic attractor onto that of a general two scale Cantor set. The scheme is first tested using one dimensional chaotic attractors and Cantor sets whose $f(\alpha)$ curves and parameters are known, and subsequently applied to higher dimensional cases. In the scheme, the $D_q$ spectrum of a chaotic attractor is compared with the $D_q$ curve computed from a model multiplicative process. A similar idea has also been used to deduce certain statistical characteristics of a system and infer features of the dynamical processes leading to the observed macroscopic parameters. One such example has been provided earlier by Meneveau and Sreenivasan [@sre2] in the study of the energy dissipation rate in fully developed turbulent flows. By comparing the experimental $D_q$ data with that of a two scale Cantor measure, they have shown that the dynamics leading to the observed multifractal distributions of the energy dissipation rate can be well approximated by a single multi-step process involving unequal energy distribution in the ratio $7/3$. Usually, a multifractal is characterised only by the range of scaling involved, $[\alpha_{min},\alpha_{max}]$, which roughly represents the inhomogeneity of the attractor. So the set of parameters computed here seems to give an alternative way of characterising them. But we wish to emphasize that the information contained in these parameters is more subtle. For example, once these parameters are known, $\alpha_{min}$ and $\alpha_{max}$ can be determined using Eqs. (\[e.27\]) and  (\[e.28\]). 
Thus, by evaluating $p_1$, $l_1$ and $l_2$, we get additional information regarding the dynamic process leading to the generation of the strange attractor. Moreover, these parameters can also give an indication as to *why* the degree of inhomogeneity varies between different chaotic attractors. As is well known from the study of Cantor sets, the primary reason for increased inhomogeneity is a wide difference between the rescaling parameters $l_1$ and $l_2$. Looking at the parameter values, the rescaling measures $\tau_1$ and $\tau_2$ are very close for the Rossler and Ueda attractors, which appear less inhomogeneous, while those for the Lorenz and Duffing attractors are widely different, making them more inhomogeneous with two clear scrolls. Another novel aspect of the scheme worth commenting on is the use of two scale Cantor measures in higher dimensions as analogues of chaotic attractors. Even though such objects are not much discussed in the literature, one can envisage them, for example, as generalisations of the well known Sierpinski carpets in two dimensions or the Menger sponge in three dimensions. But a key difference between these and a chaotic attractor is that the rescaled measures are not regular in the generation of the latter. Recently, Perfect et al. [@per] presented a general theoretical framework for generating geometrical multifractal Sierpinski carpets using a generator with variable mass fractions determined by the truncated binomial probability distribution, and for computing their generalised dimensions. It turns out that chaotic attractors are more similar to multifractals generated on a higher dimensional support, such as fractal growth patterns, and since the rescaled measures are irregular, a one dimensional measure such as $l_1 = \tau_{1}^{1/M}$ need not have any physical significance. Finally, for a complex chaotic attractor in general, the redistribution of the measures as it evolves in time can take place with more than two scales. 
Thus it appears that a characterisation based on only two scales is rather approximate, as we tend to lose some information regarding the other scales involved. But we have found that the $D_q$ and $f(\alpha)$ curves of a multi scale Cantor set can be mapped onto those of an equivalent two scale Cantor set. These two scales may be functions of the actual scales involved and may contain the missing information in an implicit way. Thus, an important outcome of the present analysis is the realisation that the dynamical information that can be retrieved from the $f(\alpha)$ spectrum is limited to only two scales. In this sense, a two scale Cantor measure can be considered as a good approximation to describe the multifractal properties of natural systems. KPH acknowledges the hospitality and computing facilities in IUCAA, Pune. [1]{} J. P. Eckmann and D. Ruelle, Rev. Mod. Phys. [**57**]{}, 617(1985). H. G. E. Hentschel and I. Procaccia, Physica D [**8**]{}, 435(1983). T. C. Halsey, M. H. Jensen, L. P. Kadanoff, I. Procaccia and B. I. Shraiman, Phys. Rev. A [**33**]{}, 1141(1986). M. J. Feigenbaum, M. H. Jensen and I. Procaccia, Phys. Rev. Lett. [**57**]{}, 1503(1986). M. J. Feigenbaum, J. Stat. Phys. [**46**]{}, 919(1987); 925(1987). R. E. Amritkar and N. Gupte, Phys. Rev. Lett. [**60**]{}, 245(1988). A. B. Chhabra, R. V. Jensen and K. R. Sreenivasan, Phys. Rev. A [**40**]{}, 4593(1989). C. Sparrow, *The Lorenz Equations: Bifurcations, Chaos and Strange Attractors*, (Springer, New York, 1982). M. H. Jensen, L. P. Kadanoff, A. Libchaber, I. Procaccia and J. Stavans, Phys. Rev. Lett. [**55**]{}, 2798(1985). P. Grassberger and I. Procaccia, Physica D [**9**]{}, 189(1983). K. P. Harikrishnan, R. Misra, G. Ambika and A. K. Kembhavi, Physica D [**215**]{}, 137(2006). H. Atmanspacher, H. Scheingraber and G. Wiedenmann, Phys. Rev. A [**40**]{}, 3954(1989). P. Grassberger, R. Badii and A. Politi, J. Stat. Phys. [**51**]{}, 135(1988). R. E. Amritkar, A. D. Gangal and N. Gupte, Phys. 
Rev. A [**36**]{}, 2850(1987). S. Gratrix and J. N. Elgin, Phys. Rev. Lett. [**92**]{}, 014101(2004). A. B. Chhabra, C. Meneveau, R. V. Jensen and K. R. Sreenivasan, Phys. Rev. A [**40**]{}, 5284(1989). K. O. Wiklund and J. N. Elgin, Phys. Rev. E [**54**]{}, 1111(1996). J. C. Sprott, *Chaos and Time Series Analysis*, (Oxford University Press, New York, 2003). C. Meneveau and K. R. Sreenivasan, Phys. Rev. Lett. [**59**]{}, 1424(1987). E. Perfect, R. W. Gentry, M. C. Sukop and J. E. Lawson, Geoderma [**134**]{}, 240(2006).
--- abstract: 'We propose a method to create superpositions of two macroscopic quantum states of a single-mode microwave cavity field interacting with a superconducting charge qubit. The decoherence of such superpositions can be determined by measuring either the Wigner function of the cavity field or the charge qubit states. Then the quality factor $Q$ of the cavity can be inferred from the decoherence of the superposed states. The proposed method is experimentally realizable within current technology even when the $Q$ value is relatively low, and the interaction between the qubit and the cavity field is weak.' author: - 'Yu-xi Liu' - 'L.F. Wei' - Franco Nori title: Measuring the quality factor of a microwave cavity using superconducting qubit devices --- Introduction ============ Superconducting (SC) Josephson junctions are considered promising qubits for quantum information processing. This “artificial atom", with well-defined discrete energy levels, provides a platform to test fundamental quantum effects, e.g., cavity quantum electrodynamics (QED). The study of the cavity QED of a SC qubit, e.g., in Ref. [@you], can also open new directions for studying the interaction between light and solid state quantum devices. These can result in novel controllable electro-optical quantum devices in the microwave regime, such as microwave single-photon generators and detectors. Cavity QED can allow the transfer of information among SC qubits via photons, used as an information bus. Recently, different information buses using bosonic systems, which play a role analogous to a single-mode light field, have been proposed to mediate the interaction between the SC qubits. These bosonic “information bus" systems can be modelled by: nanomechanical resonators (e.g., in Refs. [@nano]); large junctions (e.g., Ref. [@wang]); current-biased large junctions (e.g., Refs. [@large]), and LC oscillators (e.g., Refs. [@lc]). 
However, the enormous versatility provided by photons should stimulate physicists to pay more attention to SC qubits interacting via photons, while embedded inside a QED cavity. Several theoretical proposals have analyzed the interaction between SC qubits and quantized [@saidi; @you; @you1; @liu; @gao; @vourdas; @zagoskin] or classical fields [@zhou; @paspalakis; @liu1]. The strong coupling of a single photon to a SC charge qubit has been experimentally demonstrated [@wallraff] by using a one-dimensional transmission line resonator [@blais]. But the QED effect of a SC qubit inside higher-dimensional cavities has not been experimentally observed. The main roadblocks seem to be: i) whether the cavity quality factor $Q$ can still be maintained high enough when the SC qubit is placed inside the cavity. Different from atoms, the effect of the SC qubit on the $Q$ value of the cavity is not negligible due to its complex structure and larger size. ii) The higher-dimensional QED cavity has a relatively large mode volume, so the interaction between the cavity field and the qubit may not be strong enough for the required quantum operations within the decoherence time. iii) The transfer of information among different SC qubits requires the qubit-photon interaction to be switched on/off by the external classical flux on time scales of the inverse Josephson energy. A higher cavity $Q$ value, a stronger qubit-photon interaction, and a faster switching interaction for the SC qubit QED experiments seem difficult to achieve anytime soon. In view of the above problems, it would be desirable to explore the possibility of demonstrating a variety of relatively simple cavity QED phenomena with a SC qubit. The determination of the cavity $Q$ value is a very important first step for experiments on cavity QED with SC qubits. However, theoretical calculations of the $Q$ value are not always easy to perform because of the complexity of the circuit. 
Recent experiments [@pkd] on broadband SC detectors showed that the $Q$ value of the SC device can reach $2\times 10^{6}$, which indicates that relatively simple experiments using cavity QED with a SC qubit are possible. In this paper, we propose an experimentally feasible method which can be used to demonstrate a simple cavity QED effect with a SC qubit. For instance, superpositions of two macroscopic quantum states of a single-mode microwave cavity field can be created by the interaction between a SC charge qubit and the cavity field. At this stage, the injected light field is initially a coherent state, which can be easily prepared. The decoherence of the created superposition states can be further determined by measuring either the Wigner function of the cavity field or the charge qubit states. Then the cavity $Q$ value can be inferred from this decoherence measurement. Our proposal needs only a few operations, with a relatively low $Q$ value. Also, we do not need to assume a very fast sweep rate of the external magnetic field for switching on/off the qubit-field interaction. Furthermore, the qubit-field interaction is not necessarily resonant. We begin in Sec. II with a brief overview of the qubit-field interaction. In Sec. III, we discuss how to prepare superpositions of two different cavity field states under the condition of large detuning. In Sec. IV, the cavity $Q$ value is determined by the tomographic reconstruction of the cavity field Wigner function. In Sec. V, we show an alternative method to determine the $Q$ value according to the states of the qubit. Finally, we list our conclusions. Theoretical model ================= We briefly review a model of a SC charge qubit inside a cavity. 
The Hamiltonian can be written as [@you; @you1; @liu; @gao] $$\begin{aligned} \label{eq:1} &&H=\hbar\omega a^{\dagger}a+E_{z}\sigma_{z}\\ &&-\,E_{J}\sigma_{x} \cos\left[\frac{\pi}{\Phi_{0}}\left(\Phi_{c} I+\eta \,a+\eta^{*}\,a^{\dagger}\right)\right],\nonumber\end{aligned}$$ where the first two terms represent, respectively, the free Hamiltonian of the cavity field with frequency $\omega$ and photon creation (annihilation) operator $a^{\dagger} \,(a)$, and the qubit charging energy $$\label{charge} E_{z}=-2E_{\rm ch}(1-2n_{g})\, ,$$ which depends on the gate charge $n_{g}$. The single-electron charging energy is $E_{\rm ch}=e^2/2(C_{g}+2C_{J})$, where $C_{g}$ and $C_{J}$ are the capacitances of the gate and the Josephson junction, respectively. The dimensionless gate charge, $n_{g}=C_{g}V_{g}/2e$, is controlled by the gate voltage $V_{g}$. Here, $\sigma_{z}$, $\sigma_{x}$ are the Pauli operators, and the charge excited state $|e\rangle$ and ground state $|g\rangle$ correspond to the eigenstates $ |\!\downarrow\rangle=\left(\begin{array}{l}0\\1\end{array}\right) $ and $|\!\uparrow\rangle=\left(\begin{array}{l}1\\0\end{array}\right) $ of the spin operator $\sigma_{z}$, respectively. $I$ is an identity operator. The third term is the nonlinear qubit-photon interaction, where $E_{J}$ is the Josephson energy of a single junction. The parameter $\eta$ is defined as $\eta=\int_{S} \mathbf{u}(\mathbf{r})\cdot d\mathbf{s}$, where $\mathbf{u}(\mathbf{r})$ is the mode function of the cavity field and $S$ is the surface defined by the contour of the SQUID. We can decompose the cosine in Eq. (\[eq:1\]) into classical and quantized parts. The quantized parts $\sin[\pi(\eta\, a+H.c.)/\Phi_{0}]$ and $\cos[\pi(\eta\, a+H.c.)/\Phi_{0}]$ can be further expanded as a power series in $a \,(a^{\dagger})$. 
To estimate the qubit-photon coupling constant, the qubit is assumed to be inside a full-wave cavity with the standing-wave form for a single-mode magnetic field [@scully] $$\label{eq:m1} B_{x}=-i\sqrt{\frac{\hbar\omega}{\varepsilon_{0}V c^{2}}}(a-a^{\dagger})\cos(k z).$$ The polarization of the magnetic field is along the normal direction of the surface area of the SQUID, located at an antinode of the standing-wave mode. The mode function $\sqrt{\hbar\omega/\varepsilon_{0}V c^{2}}\cos(k z)$ can be assumed to be independent of the integration area because the maximum linear dimension of the SQUID, e.g., even for $50 \,\mu$m, is much less than $0.1$ cm, the shortest microwave wavelength of the cavity field. Then, in the microwave regime, the estimated range of values for $\pi\eta/\Phi_{0}$ is $8.55 \times 10^{-6}\leq \pi\eta/\Phi_{0}\leq 1.9\times 10^{-3}$, for a fixed area of the SQUID, e.g., $50 \,\mu$m $\times 50\, \mu$m. If the light field is not too strong (e.g., the average number of photons inside the cavity $N=\langle a^{\dagger}a\rangle\leq 100$), then we can keep only the first order of $\pi\eta/\Phi_{0}$ and safely neglect all higher orders. Thus, the Hamiltonian (\[eq:1\]) becomes $$\begin{aligned} \label{eq:2} &&H=\hbar\omega a^{\dagger}a+E_{z}\sigma_{z}-\,E_{J}\sigma_{x}\cos \left(\frac{\pi\Phi_{c}}{\Phi_{0}}\right)\nonumber \\ &&+\frac{\pi E_{J}}{\Phi_{0}}\sin \left(\frac{\pi\Phi_{c}}{\Phi_{0}}\right) \left(\eta \,a\,\sigma_{+}+\eta^{*}\,a^{\dagger}\,\sigma_{-}\right).\end{aligned}$$ It is clear that the qubit-photon interaction can be controlled by the classical flux $\Phi_{c}$, after neglecting higher orders in $\pi\eta/\Phi_{0}$.
Preparation of macroscopic superposition states =============================================== The qubit-photon system can be initialized by adjusting the gate voltage $V_{g}$ and the external flux $\Phi_{c}$ such that $n_{g}=1/2$ and $\Phi_{c}=0$; then the dynamics of the qubit-field system is governed by the Hamiltonian $$\label{eq:3} H_{1}=\hbar\omega a^{\dagger}a-E_{J}\sigma_{x}.$$ Now there is no interaction between the cavity field and the qubit; thus, the cavity field and the qubit evolve according to Eq. (\[eq:3\]). We assume that the qubit-photon system works at low temperatures $T$ (e.g., $T=30$ mK in Ref. [@nakamura]); then the mean number of thermal photons $\langle n_{th}\rangle$ in the cavity is negligible in the microwave regime [@liu], and the cavity can be approximately considered to be in a zero-temperature environment. The initial state of the cavity field is prepared by injecting a single-mode coherent light $$|\alpha\rangle=\exp\left\{-\frac{|\alpha|^2}{2}\right\}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}|n\rangle\,,$$ into the cavity. Here, without loss of generality, $\alpha$ is assumed to be a real number, and $a|\alpha\rangle=\alpha|\alpha\rangle$. The qubit is assumed to be initially in the ground state $|g\rangle$. After a time interval $\tau_{1}=\hbar\pi/4E_{J}$, the qubit ground state $|g\rangle$ is transformed as $|g\rangle\rightarrow \left(|g\rangle+i|e\rangle\right)/\sqrt{2}$; then, the qubit-photon state evolves into $$\label{eq:4} |\psi(\tau_{1})\rangle= \frac{1}{\sqrt{2}}(|g\rangle+i|e\rangle)|\alpha\rangle\, .$$ Here we have neglected the free evolution phase factor $e^{-i\omega \tau_{1}}$ in $\alpha$. Now, we assume that the gate voltage and the magnetic flux are switched to $n_{g}\neq 1/2$ (this value of $n_{g}$ will be specified later) and $\Phi_{c}=\Phi_{0}/2$, respectively.
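Since the protocol starts from this injected coherent state, its basic properties are easy to verify numerically. The following minimal sketch (ours, not part of the original analysis; the truncation dimension is an arbitrary choice) builds the Fock amplitudes and checks that $\langle a^{\dagger}a\rangle=|\alpha|^{2}$:

```python
import math
import numpy as np

def coherent_state(alpha, dim=60):
    # Fock amplitudes e^{-|alpha|^2/2} alpha^n / sqrt(n!) of the injected state
    norm = math.exp(-abs(alpha) ** 2 / 2.0)
    return np.array([norm * alpha ** n / math.sqrt(math.factorial(n))
                     for n in range(dim)])

psi = coherent_state(4.0)                  # |alpha|^2 = 16, as used later in the text
prob = np.abs(psi) ** 2
nbar = np.sum(np.arange(len(psi)) * prob)  # mean photon number <a^dag a>
```

For $|\alpha|^{2}=16$ the Poisson tail beyond $n=60$ is negligible, so the truncated state is normalized to machine precision.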
Then the qubit-photon interaction appears, and the effective Hamiltonian governing the dynamical evolution of the qubit-photon system can be written as (see Appendix A) $$\label{eq:5} H_{2}=\hbar\omega_{-} a^{\dagger }a+\frac{1}{2}\hbar \Omega \sigma_{z}+\hbar\frac{|g|^{2}}{\Delta }\left(1+2 a^{\dag }a\right) |e\rangle \langle e|\, ,$$ with $\omega_{-}=\omega -|g|^{2}/\Delta $ and $ g=(\pi\eta E_{J})/(\hbar{\Phi_{0}})$. The detuning $\Delta=\Omega-\omega > 0$ between the qubit transition frequency $\Omega=-4E_{ch}(1-2n_{g})/\hbar$ and the cavity field frequency $\omega$ is assumed to satisfy the large detuning condition $$\label{large} \frac{\pi E_{J}|\eta|}{\hbar\Phi_{0}\Delta}\ll 1.$$ The unitary evolution operator corresponding to Eq. (\[eq:5\]) can be written as $$\begin{aligned} \label{eq:6} U(t)&=&\exp\left[-i\left(\omega_{-} a^{\dagger}a+\frac{\Omega}{2} \sigma_{z}\right)t\right] \nonumber\\ &\times&\exp\left[-it F (a^{\dagger}a)|e\rangle \langle e|\right],\end{aligned}$$ where the operator $F(a^{\dagger}a)$ is given by $$F(a^{\dagger}a)=\frac{|g|^2}{\Delta}(1+2 a^{\dagger}a).$$ After an evolution time $\tau_{2}$, the state (\[eq:4\]) evolves into $$\label{eq:8} |\psi(\tau_{2})\rangle=\frac{1}{\sqrt{2}}\left[|g\rangle|\beta\rangle+ i\exp(i\theta)|e\rangle|\beta^{\prime}\rangle\right],$$ where a global phase $\exp(-i\Omega \tau_{2}/2)$ has been neglected, $\theta=(\Omega-|g|^2/\Delta)\tau_{2}$, $\beta=\alpha \exp[-i\omega_{-} \tau_{2}]$, and $\beta^{\prime}=\beta \exp(-i\phi)$, with $\phi=2|g|^2 \tau_{2}/\Delta$. Equation (\[eq:8\]) shows that a phase shift $\phi$ is generated for the coherent state $|\beta\rangle$ of the cavity field when the qubit is in the excited state $|e\rangle$, but the qubit ground state $|g\rangle$ does not induce an extra phase for the coherent state $|\beta\rangle$. The gate voltage and the magnetic field are now adjusted such that the conditions $n_{g}=1/2$ and $\Phi_{c}=0$ are satisfied; then the qubit-photon interaction is switched off.
Now let the system evolve for a time $\tau^{\prime}=\tau_{1}=\hbar\pi/4E_{J}$; then Eq. (\[eq:8\]) becomes $$\begin{aligned} \label{eq:9} |\psi(\tau_{2})\rangle&=&\frac{1}{2}|g\rangle\otimes[|\beta\rangle-\exp(i\theta)|\beta^{\prime}\rangle]\nonumber\\ &+&i\frac{1}{2}|e\rangle\otimes[|\beta\rangle+\exp(i\theta)|\beta^{\prime}\rangle],\end{aligned}$$ where a free phase factor $e^{-i\omega \tau_{1}}$ in the cavity field states $|\beta\rangle$ and $|\beta e^{-i\phi}\rangle$ has been neglected. Superpositions of two distinct coherent states can be conditionally generated by measuring the charge states of the qubit as $$\label{eq:10} |\beta_{\pm}\rangle=N^{-1}_{\pm}[|\beta\rangle\pm \exp(i\theta)|\beta^{\prime}\rangle],$$ where the $+$ ($-$) corresponds to the measurement result $|e\rangle$ ($|g\rangle$), and the normalization constants $N_{\pm}$ are determined by $$N^{2}_{\pm}=2\pm 2\cos\theta^{\prime}\exp\!\left[-2|\alpha|^2\sin^2\left(\frac{\phi}{2}\right)\right],$$ where $\theta^{\prime}=|\alpha|^2\sin\phi-\theta$, and the relation $|\beta|^2=|\alpha|^2$ is used. Since $\Phi_{c}=0$, after the superpositions in Eq. (\[eq:10\]) are created, the dynamical evolution of the cavity field is affected only by its dissipation, characterized by the decay rate $\gamma$, which is related to the cavity quality factor $Q$ by $Q=\omega/\gamma$.
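The normalization constants are simple to evaluate directly. The sketch below (our illustration; variable names are not from the text) confirms that for $\phi=\pi$ and $|\alpha|^{2}=16$ the overlap correction is of order $e^{-32}$, so $N^{2}_{\pm}\approx 2$, while for $\alpha=0$ and $\theta=0$ the "minus" superposition vanishes, as it must:

```python
import math

def n_squared(alpha, phi, theta, sign):
    # N_{+/-}^2 = 2 +/- 2 cos(theta') exp[-2 |alpha|^2 sin^2(phi/2)],
    # with theta' = |alpha|^2 sin(phi) - theta
    theta_p = abs(alpha) ** 2 * math.sin(phi) - theta
    damp = math.exp(-2.0 * abs(alpha) ** 2 * math.sin(phi / 2.0) ** 2)
    return 2.0 + sign * 2.0 * math.cos(theta_p) * damp

N2_plus = n_squared(4.0, math.pi, 0.996, +1)
N2_minus = n_squared(4.0, math.pi, 0.996, -1)
```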
Now let the cavity field described by the state $|\beta_{+}\rangle$ or $|\beta_{-}\rangle$ evolve for a time $\tau_{3}$; then the reduced density matrices of the superpositions can be described by $$\begin{aligned} \label{eq:11} &&\rho_{\pm}(\tau_{3}) =\frac{1}{N^{2}_{\pm}}\left\{|\beta u\rangle\langle \beta u|+|\beta^{\prime} u \rangle\langle \beta^{\prime} u |\right.\nonumber \\ &&\left.\pm C |\beta^{\prime} u \rangle\langle \beta u |\pm C^{*}|\beta u \rangle\langle \beta^{\prime} u |\right\},\end{aligned}$$ where $$C=\exp(i\theta) \exp\left\{|\alpha|^2(1-e^{-i\phi})(u^2-1)\right\}$$ and $$\label{eq:188} u\equiv u(\tau_{3})=\exp\left(-\frac{\gamma}{2} \tau_{3}\right) =\exp\left(-\,\frac{\omega\tau_{3}}{2Q} \right).$$ Equation (\[eq:11\]) clearly shows that the mixed state is strongly affected by the $Q$ value. Equation (\[eq:11\]) is derived for zero temperature, since thermal photons are negligible at low temperatures. Equations (\[eq:11\]-\[eq:188\]) show that information about the cavity quality factor $Q$ can be encoded in the reduced density matrix of the cavity field. The $Q$ value can be determined using two different methods, after encoding its information in Eqs. (\[eq:11\]-\[eq:188\]). Below, we will discuss these two approaches. Measuring $Q$ by photon state tomography ======================================== ![(Color online) Wigner functions $W_{+}(x,p)$ of Eq. (\[eq:10\]) and $W_{+D}(x,p)$ of Eq. (\[eq:11\]) for the cavity field without and with the energy dissipation are shown in (a) and (b), for the input state $|\alpha\rangle$ with $\overline{n}=16$. Here the Wigner functions $W_{+}(x,p)$ and $W_{+D}(x,p)$ are normalized to $\pi N_{+}$.[]{data-label="fig1"}](fig1a.eps "fig:"){width="42mm"} ![(Color online) Wigner functions $W_{+}(x,p)$ of Eq. (\[eq:10\]) and $W_{+D}(x,p)$ of Eq. (\[eq:11\]) for the cavity field without and with the energy dissipation are shown in (a) and (b), for the input state $|\alpha\rangle$ with $\overline{n}=16$.
Here the Wigner functions $W_{+}(x,p)$ and $W_{+D}(x,p)$ are normalized to $\pi N_{+}$.[]{data-label="fig1"}](fig1b.eps "fig:"){width="42mm"} The state $\rho$ of the optical field can be conveniently visualized through its Wigner function [@hans] in the position $x$ and momentum $p$ space, which is written as $$W(x,p)=\frac{1}{\pi}\int_{-\infty}^{\infty}\langle x-x^{\prime}|\rho|x+x^{\prime}\rangle e^{i2p x^{\prime}}{\rm d} x^{\prime}.$$ The Wigner function $W(x,p)$ can be experimentally measured by state tomographic techniques [@hans]. For any two coherent states, $|\alpha\rangle$ and $|\beta\rangle$, the Wigner function $W(x,p)$ can be represented as $$\begin{aligned} \label{eq:17} W_{\alpha,\beta}(x,p)&=&\frac{1}{\pi}\int_{-\infty}^{\infty}\langle x-x^{\prime}|\alpha\rangle\langle\beta|x+x^{\prime}\rangle e^{i2p x^{\prime}}{\rm d} x^{\prime} \nonumber\\ &=&\frac{1}{\pi}\exp\left\{-\frac{1}{2}(|\alpha|^2+|\beta|^2-2\alpha\beta^*)\right\}\nonumber\\ &\times&\exp\left\{-(x-q_{1})^2-(p+iq_{2})^2\right\}\end{aligned}$$ with $q_{1}=(\alpha+\beta^*)/\sqrt{2}$ and $q_{2}=(\alpha-\beta^*)/\sqrt{2}$. The Wigner functions $W_{\pm}(x,p)$ and $W_{\pm D}(x,p)$ for the states (\[eq:10\]) and (\[eq:11\]) are calculated (see Appendix B) by using Eq. (\[eq:17\]). Comparing the tomographically measured results for the states (\[eq:10\]) and (\[eq:11\]), the $Q$ factor of the cavity can finally be determined, as explained below by using an example. We further numerically calculate the Wigner functions $W_{\pm}(x,p)$ and $W_{\pm D}(x,p)$ of the states (\[eq:10\]) and (\[eq:11\]) from the SC qubit parameters and given operation durations. Using currently available experimental data, the basic physical parameters can be specified as follows.
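As an aside, Eq. (\[eq:17\]) is straightforward to implement numerically. This sketch (ours) encodes it and checks the diagonal case $\alpha=\beta$, which must reduce to a Gaussian of height $1/\pi$ centered at $(x,p)=(\sqrt{2}\,{\rm Re}\,\alpha,\sqrt{2}\,{\rm Im}\,\alpha)$; the cat-state Wigner functions of Appendix B are linear combinations of such terms:

```python
import numpy as np

def wigner_overlap(x, p, alpha, beta):
    # W_{alpha,beta}(x,p) of Eq. (eq:17)
    q1 = (alpha + np.conj(beta)) / np.sqrt(2.0)
    q2 = (alpha - np.conj(beta)) / np.sqrt(2.0)
    pref = np.exp(-0.5 * (abs(alpha) ** 2 + abs(beta) ** 2
                          - 2.0 * alpha * np.conj(beta)))
    return pref * np.exp(-(x - q1) ** 2 - (p + 1j * q2) ** 2) / np.pi

# Diagonal case alpha = beta = 2: Gaussian peaked at (2*sqrt(2), 0) with height 1/pi
peak = wigner_overlap(2.0 * np.sqrt(2.0), 0.0, 2.0, 2.0)
```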
We assume that the SC Cooper-pair box is made from aluminum, with a BCS energy gap of $\sim 2.4$ K (about 50 GHz) [@lehnert]; the charging energy $E_{\rm ch}$ and the Josephson energy $E_{J}$ are $4E_{\rm ch}/h=149$ GHz and $2E_{J}/h=13.0$ GHz, respectively [@lehnert]. The frequency of the cavity field is taken as $40$ GHz, corresponding to a wavelength $\sim 0.75$ cm. The above numbers show that the SC energy gap is the largest energy, so the quasi-particle excitation on the island can be well suppressed at low temperatures, e.g., $20$ mK. The SQUID area is assumed to be about $50 \,\mu$m $\times 50\, \mu$m; then the absolute value $|g|$ of the qubit-photon coupling constant is about $|g|=4\times 10^{6}$ rad s$^{-1}$. Let us now prepare entangled qubit-photon states as in Eq. (\[eq:9\]). Any gate charge value $n_{g}$ in Eq. (\[charge\]), for which the large detuning condition in Eq. (\[large\]) is satisfied, can be chosen to realize our proposal. For concreteness, we give an example. The gate voltage is adjusted such that the gate charge is $n_{g}\approx 0.634233$, which can be experimentally achieved [@lehnert]; then the detuning is $\Delta=\Omega-\omega\approx 9.0\times 10^{6}$ rad s$^{-1}$. Thus, $\Omega$ is about $40$ GHz plus $1.4$ MHz, and $|g|^2/\Delta \,\simeq\, 0.27$ MHz. We also find that $\Delta/|g|\approx 2.3$, so a large-detuning condition can be used [@xm; @pt]. For a given Josephson energy, $2E_{J}/h=13.0$ GHz, the operation time $\tau_{1}=4.8\times 10^{-12}$ s, required to prepare a superposition of $|e\rangle$ and $|g\rangle$ with equal probabilities, is much less than the qubit relaxation time $T_{1}=1.3\,\mu$s and the dephasing time $T_{2}=5$ ns.
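These estimates can be reproduced with a few lines. The sketch below is ours; it uses only the values quoted in this section, plus the example parameters $Q=5\times10^{5}$, $\tau_{3}=0.1\,\mu$s, $\phi=\pi$, $|\alpha|^{2}=16$ and Eqs. (\[18\]) and (\[eq:188\]) discussed below. The slight differences from the quoted $0.27$ MHz and $0.93\,\mu$s presumably reflect rounding of $|g|$ and $\Delta$:

```python
import math

# Values assumed in the text (angular frequencies in rad/s)
g_abs = 4.0e6                  # |g|
Delta = 9.0e6                  # Delta = Omega - omega
omega = 2.0 * math.pi * 40e9   # 40 GHz cavity field

ratio = Delta / g_abs          # dispersive parameter, quoted as ~2.3
stark_MHz = g_abs ** 2 / Delta / (2.0 * math.pi * 1e6)   # |g|^2/Delta, in MHz

def tau2_for_phi(phi):
    # invert phi = 2 |g|^2 tau_2 / Delta for a prescribed conditional phase
    return phi * Delta / (2.0 * g_abs ** 2)

def tau2_lower_bound(alpha):
    # Eq. (18): smallest tau_2 for which |beta - beta'| > 1
    return (Delta / g_abs ** 2) * math.asin(1.0 / (2.0 * abs(alpha)))

tau2_pi = tau2_for_phi(math.pi)   # ~0.9 microseconds for the phi = pi example

def u_factor(tau, Q):
    return math.exp(-omega * tau / (2.0 * Q))   # Eq. (eq:188)

# Coherence left after tau_3 = 0.1 us in a Q = 5e5 cavity, for phi = pi, |alpha|^2 = 16:
# |C| = exp(2 |alpha|^2 (u^2 - 1)), i.e. the interference fringes keep only ~20% weight
u = u_factor(0.1e-6, 5e5)
C_mag = math.exp(2.0 * 16.0 * (u ** 2 - 1.0))
```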
We can choose the duration $\tau_{2}$ for a given input coherent state $|\alpha\rangle$ with the condition that the distance $|\beta-\beta^{\prime}|$ between the two coherent states $|\beta\rangle$ and $|\beta^{\prime}\rangle$ satisfies $$|\beta-\beta^{\prime}|=2|\alpha| \sin\left(\frac{\phi}{2}\right)>1.$$ So the lower bound of the duration $\tau_{2}$ can be given as $$\label{18} \tau_{2}=\frac{\Delta}{|g|^2}\arcsin\left(\frac{1}{2|\alpha|}\right)\,,$$ when $0 \leq \phi\leq \pi$. Equation (\[18\]) shows that a shorter $\tau_{2}$ can be obtained for a higher intensity $|\alpha|$ with fixed detuning $\Delta$ and coupling constant $g$. As an example, we plot the Wigner function of the superposition $|\beta_{+}\rangle$ in Fig. \[fig1\](a) for an input coherent light $|\alpha\rangle$ with the mean photon number $\overline{n}=|\alpha|^2=16$. We choose the simple case $\phi=\pi$, corresponding to the operation time $\tau_{2}\approx 0.93 \,\mu$s, which is less than the qubit lifetime $T_{1}$ and the cavity field lifetime $T_{\rm ph}\approx 2\,\mu$s for a bad cavity with $Q=5\times 10^{5}$. In such a case, $\beta^{\prime}=-\beta$ and the phase $\theta$ is about $0.996\,[{\rm mod}\, 2\pi]$ rad. Other parameters used in Fig. \[fig1\] are given above. If we set the evolution time $\tau_{3}=0.1\, \mu$s, then the Wigner function of Eq. (\[eq:11\]) for the above cavity quality factor is shown in Fig. \[fig1\](b). The central structure in Fig. \[fig1\](a) represents the coherence of the quantum state. In Fig. \[fig1\](b), we find that the height of the Wigner function $W_{+D}(x,p)$, especially for the central structure, is reduced by the environment. Comparing Fig. \[fig1\](a) and Fig. \[fig1\](b), it is found that the coherence of the superposed states is suppressed by the environment, and the decoherence of the superpositions is tied to the energy dissipation of the cavity field. Then, the $Q$ value can in principle be estimated by measuring the Wigner functions of Eqs.
(\[eq:11\]) and (\[eq:10\]), and comparing these two kinds of results. Determining $Q$ by readout of charge states =========================================== The determination of the $Q$ value by measuring the Wigner function requires optical instruments. In solid-state experiments, the charge states are typically measured. Instead of using optical instruments, it would be desirable to obtain the $Q$ value by measuring charge states. This is our goal here. The process to achieve this can be described as follows. i\) According to the measurements on the charge qubit states in Eq. (\[eq:9\]), the qubit-photon states are projected to $|g\rangle\otimes|\beta_{-}\rangle$ or $|e\rangle\otimes|\beta_{+}\rangle$, respectively. After the evolution time $\tau^{\prime}_{3}$, a $\pi/2$ quantum operation is performed on the qubit with duration $T=\hbar\pi/4E_{J}$. Then, the qubit ground state $|g\rangle$, or excited state $|e\rangle$, is transformed into the superposition $(|g\rangle+i|e\rangle)/\sqrt{2}$, or $(i|g\rangle+|e\rangle)/\sqrt{2}$, the photon states $|\beta_{\pm}\rangle$ evolve into mixed states after the evolution time $\tau=\tau^{\prime}_{3}+T$, and the photon-qubit states can be expressed as $$\label{eq:18} \rho_{Q+F}=\frac{1}{2}(|g\rangle\pm i|e\rangle)(\langle g|\mp i\langle e|)\otimes \rho_{\pm}(\tau)\, ,$$ with subscripts $Q$ and $F$ denoting the qubit and cavity field, respectively. The reduced density matrices $\rho_{\pm}(\tau)$ take the same form as in Eq. (\[eq:11\]) with $\tau$ replacing $\tau_{3}$. ii\) After the above procedure, the qubit-photon interaction is switched on by applying the external magnetic flux $\Phi_{c}=\Phi_{0}/2$. By using Eq. (\[eq:6\]), Eq.
(\[eq:18\]) evolves into $$\begin{aligned} \label{eq:19} 2\rho_{A+F}^{(1)} &=&|g\rangle\langle g|\otimes U_{1}(\tau_{4})\rho_{\pm}(\tau)U^{\dagger}_{1}(\tau_{4})\nonumber\\ &+&|e\rangle\langle e|\otimes U_{2}(\tau_{4})\rho_{\pm}(\tau)U^{\dagger}_{2}(\tau_{4}) \\ &\mp& i\exp(-i\Omega_{-}\tau_{4})|g\rangle\langle e|\otimes U_{1}(\tau_{4})\rho_{\pm}(\tau)U^{\dagger}_{2}(\tau_{4})\nonumber\\ &\pm&i\exp(+i\Omega_{-}\tau_{4})|e\rangle\langle g|\otimes U_{2}(\tau_{4})\rho_{\pm}(\tau)U^{\dagger}_{1}(\tau_{4})\nonumber\end{aligned}$$ with $\Omega_{-}=\Omega-|g|^2/\Delta$ and a short evolution time $\tau_{4}$; in particular, $\tau_{4}$ must be less than the qubit lifetime $T_{1}$. The time evolution operators $U_{1}(\tau_{4})$ and $U_{2}(\tau_{4})$ in Eq. (\[eq:19\]) are $$\begin{aligned} U_{1}(\tau_{4})&=&\exp\left[-i \omega_{-}\,a^{\dagger}a\,\tau_{4}\right],\\ U_{2}(\tau_{4})&=&\exp\left[-i\omega_{+}\,a^{\dagger}a\,\tau_{4}\right],\end{aligned}$$ with $\omega_{\pm}=\omega\pm|g|^2/\Delta$. After this qubit-photon interaction, the information on the $Q$ value is encoded. iii\) The qubit-photon coupling is switched off and a $\pi/2$ rotation is made on the qubit. If the state of the cavity field was prepared in $|\beta_{-}\rangle$ of Eq. (\[eq:10\]) in the first step, then the qubit is in the ground state $|g\rangle$. After measuring the qubit states, the photon states are projected to $$\label{eq:21} \rho_{e/g}=\frac{1}{4}(A \pm B),$$ where the sign $``+"$ corresponds to the excited state $|e\rangle$ measurement, and $``-"$ corresponds to the ground state $|g\rangle$ measurement.
The operators $A$ and $B$ are $$\begin{aligned} A&=&\sum_{i=1}^{2}U_{i}(\tau_{4})\rho_{-}(\tau)U^{\dagger}_{i}(\tau_{4})\label{eq:21b},\\ B&=&2 \,{\rm Re}[\exp(-i\Omega_{-}\tau_{4})U_{1}(\tau_{4})\rho_{-}(\tau)U^{\dagger}_{2}(\tau_{4})]\label{eq:21c}.\end{aligned}$$ After tracing out the cavity field state, the probabilities corresponding to measuring charge states $|e\rangle$ and $|g\rangle$ are $$\begin{aligned} \label{eq:22} P_{e/g}(\tau)&=&{\rm Tr}_{F}\,\left(\rho_{e/g}\right)\nonumber\\ &=&\frac{1}{2}\left\{1\pm {\rm Re}({\rm Tr}_{F}[\exp(-i\varphi)\rho_{-}(\tau)])\right\}\end{aligned}$$ with $\varphi=(\Omega_{-}-2|g|^2 a^{\dagger}a/\Delta)\tau_{4}$. Then the measurement probabilities are related to the $Q$ values. Substituting $\rho_{-}(\tau)$ into Eq.(\[eq:22\]), we can obtain \[eq:24\] $$\begin{aligned} &&{\rm Re}\left\{{\rm Tr}[\exp(-i\varphi)\rho_{-}(\tau)]\right\} \\ &&=\frac{2}{N^2_{-}}\exp\left[-2\alpha(\tau)\sin^2\phi^{\prime}\right] \cos\left[\Omega_{-}\tau_{4}-\alpha(\tau)\sin(2\phi^{\prime})\right]\nonumber\\ &&- \frac{1}{N^2_{-}}\cos\left[\theta_{-} -|\alpha|^2\sin\phi+\theta-\Omega_{-}\tau_{4}\right]\exp\left(+G_{-}-\Gamma\right)\nonumber\\ &&- \frac{1}{N^2_{-}} \cos\left[\theta_{+} +|\alpha|^2\sin\phi-\theta-\Omega_{-}\tau_{4}\right]\exp\left(-G_{+}-\Gamma\right) \nonumber\end{aligned}$$ with the parameters $$\begin{aligned} \phi^{\prime}&=&\frac{|g|^2}{\Delta}\tau_{4},\\ \Gamma&=&2|\alpha|^2\sin^2(\frac{\phi}{2}),\\ \alpha(\tau)&=&|\alpha u(\tau)|^2,\\ G_{\pm}&=&2\alpha(\tau)\sin\phi^{\prime}\sin(\phi\pm\phi^{\prime}),\\ \theta_{\pm}&=&2\alpha(\tau)\cos(\phi\pm \phi^{\prime})\sin\phi^{\prime}.\end{aligned}$$ From Eq. (\[eq:24\]), we find that $\phi^{\prime}$ should satisfy the condition $\phi^{\prime}\neq n\pi$ for $\phi=\pi$, in order to describe the dissipation effect; here $n$ is an integer. 
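The closed-form expression above is easy to mistranscribe, so a direct implementation with a built-in consistency check is useful. The sketch below is ours (variable names are not from the text); one exact limit is $\tau_{4}=0$, i.e. $\phi^{\prime}=0$ and $\Omega_{-}\tau_{4}=0$, for which the expression must reduce to ${\rm Tr}[\rho_{-}(\tau)]=1$ independently of $u$:

```python
import math

def re_tr_rho_minus(alpha, theta, phi, phi_p, Omega_m_tau4, u):
    # Re{Tr[exp(-i varphi) rho_-(tau)]} of Eq. (eq:24);
    # phi_p = |g|^2 tau_4 / Delta, Omega_m_tau4 = Omega_- tau_4, u = u(tau)
    a2 = abs(alpha) ** 2
    at = a2 * u ** 2                          # alpha(tau) = |alpha u(tau)|^2
    Gamma = 2.0 * a2 * math.sin(phi / 2.0) ** 2
    Gp = 2.0 * at * math.sin(phi_p) * math.sin(phi + phi_p)
    Gm = 2.0 * at * math.sin(phi_p) * math.sin(phi - phi_p)
    th_p = 2.0 * at * math.cos(phi + phi_p) * math.sin(phi_p)
    th_m = 2.0 * at * math.cos(phi - phi_p) * math.sin(phi_p)
    N2 = 2.0 - 2.0 * math.cos(a2 * math.sin(phi) - theta) * math.exp(-Gamma)
    t1 = (2.0 / N2) * math.exp(-2.0 * at * math.sin(phi_p) ** 2) \
        * math.cos(Omega_m_tau4 - at * math.sin(2.0 * phi_p))
    t2 = (1.0 / N2) * math.cos(th_m - a2 * math.sin(phi) + theta - Omega_m_tau4) \
        * math.exp(Gm - Gamma)
    t3 = (1.0 / N2) * math.cos(th_p + a2 * math.sin(phi) - theta - Omega_m_tau4) \
        * math.exp(-Gp - Gamma)
    return t1 - t2 - t3
```

With this function, $P_{e/g}(\tau)=\{1\pm{\rm Re}\,{\rm Tr}[\cdot]\}/2$ reproduces curves of the kind shown in Fig. \[fig2\].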
Generally speaking, if one of the functions $G_{+}$, $G_{-}$, $\theta_{+}$, $\theta_{-}$, $\sin\phi^{\prime}$, or $\sin(2\phi^{\prime})$ is nonzero, then this is enough to encode the $Q$ value, which can be obtained from Eq. (\[eq:24\]), together with Eq. (\[eq:188\]), using $\tau$ instead of $\tau_{3}$. However, if the superposition of the cavity fields is prepared in the state $|\beta_{+}\rangle$ in the first step, then the ground and excited state measurements make the cavity field collapse to the state $$\rho^{\prime}_{g/e}=\frac{1}{4}(A^{\prime}\pm B^{\prime}),$$ where $A^{\prime}$ and $B^{\prime}$ have the same forms as Eqs. (\[eq:21b\]) and (\[eq:21c\]), just with $\rho_{-}(\tau)$ replaced by $\rho_{+}(\tau)$. The probabilities $P^{\prime}_{g}(\tau)$ and $P^{\prime}_{e}(\tau)$ to measure the qubit states $|g\rangle$ and $|e\rangle$, corresponding to the prepared state $|\beta_{+}\rangle$ of Eq. (\[eq:10\]) after a dissipation interval $\tau$, can also be obtained as $$\label{eq:25} P_{g/e}^{\prime}(\tau)=\frac{1}{2}\left\{1\pm {\rm Re}({\rm Tr}_{F} [\exp(-i\varphi)\rho_{+}(\tau)])\right\},$$ where ${\rm Re}\{{\rm Tr}_{F} [\exp(-i\varphi)\rho_{+}(\tau)]\}$ can be obtained by replacing $N_{-}$ with $N_{+}$, and replacing the sign $``-"$ before the second and third terms with the sign $``+"$ in Eq. (\[eq:24\]a). ![(Color online) The probability $P_{g}(\tau)$ to measure the qubit ground state $|g\rangle$ as a function of the evolution time $\tau$. This $P_{g}(\tau)$ is shown for several values of the quality factor $Q$ and for different intensities of the input coherent state $|\alpha\rangle$.[]{data-label="fig2"}](fig2a.eps "fig:"){width="42mm"} ![(Color online) The probability $P_{g}(\tau)$ to measure the qubit ground state $|g\rangle$ as a function of the evolution time $\tau$.
This $P_{g}(\tau)$ is shown for several values of the quality factor $Q$ and for different intensities of the input coherent state $|\alpha\rangle$.[]{data-label="fig2"}](fig2b.eps "fig:"){width="42mm"} To determine the $Q$ value by probing the charge states, the measurement should be made twice: the first measurement is for the preparation of the superpositions of the cavity field. After the first measurement, we make a suitable qubit rotation, and then make the qubit interact with the cavity field for a duration $\tau_{4}$. Finally, the second measurement is made and the $Q$ information is encoded in the measured probabilities. Different evolution times $\tau$ correspond to different measurement probabilities for given $\tau_{4}$ and other parameters $|\alpha|$, $\Delta$, and so on. For example, the probabilities $P_{e/g}(\tau)$ for several special cases, with the prepared state $|\beta_{-}\rangle$, are discussed as follows. If we assume that the qubit rotations and the qubit-photon dispersive interaction are made without energy dissipation of the cavity field, e.g., $\tau=0$, then the measured probabilities $P_{e/g}(\tau=0)$ encode only information on the cavity field but do not include the quality factor $Q$. If the coherence of the states $|\beta_{\pm}\rangle$ nearly disappears after a time $\tau$, then the state $|\beta_{-}\rangle$ becomes a classical statistical mixture $$\rho_{-}(\tau)=\frac{1}{N^2_{-}}\left[|\beta u(\tau)\rangle\langle \beta u(\tau)|+|\beta^{\prime} u(\tau)\rangle\langle \beta^{\prime} u(\tau)|\right].$$ The probabilities $P_{e/g}(\tau)$ are then reduced to $$\begin{aligned} P_{e/g}(\tau)&=&\frac{1}{N^2_{-}}\pm \frac{\exp\left[-2\alpha(\tau)\sin^2\phi^{\prime}\right]}{N^2_{-}} \\ &\times&\cos\left[\Omega_{-}\tau_{4}-\alpha(\tau)\sin(2\phi^{\prime})\right],\nonumber\end{aligned}$$ which tends to $1/2$ for $|\alpha|^2\gg 1$.
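The fully decohered limit is easy to evaluate. This sketch (ours) implements the reduced probabilities of the last displayed equation and confirms numerically that they approach $1/2$, and sum to one, for $|\alpha|^{2}\gg 1$:

```python
import math

def p_mixture(alpha, theta, phi, phi_p, Omega_m_tau4, u, sign):
    # P_{e/g} for the fully decohered mixture; sign = +1 for |e>, -1 for |g>
    a2 = abs(alpha) ** 2
    at = a2 * u ** 2                          # alpha(tau) = |alpha u(tau)|^2
    Gamma = 2.0 * a2 * math.sin(phi / 2.0) ** 2
    N2 = 2.0 - 2.0 * math.cos(a2 * math.sin(phi) - theta) * math.exp(-Gamma)
    osc = math.exp(-2.0 * at * math.sin(phi_p) ** 2) \
        * math.cos(Omega_m_tau4 - at * math.sin(2.0 * phi_p))
    return (1.0 + sign * osc) / N2
```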
If $\tau$ exceeds the single-photon lifetime $t_{\rm ph}=1/\gamma$, then the photons of the states $|\beta_{\pm}\rangle$ are completely dissipated into the environment. In this case, the cavity quality factor $Q$ cannot be encoded in the probabilities $P_{e/g}(\tau)$, even with additional qubit and qubit-photon operations. As an example, let us consider how $P_{g}(\tau)$ varies with the evolution time $\tau$ in the presence of cavity field dissipation. We assume that the evolution time is $\tau_{4}=(\pi/2) (\Delta/|g|^2)$, that is, $\phi^{\prime}=\pi/2$. Then, the $\tau$-dependent probabilities $P_{g}(\tau)$ for the initially prepared state $|\beta_{-}\rangle$ are given in Fig. \[fig2\](a), with the same parameters as in Fig. \[fig1\], except with different cavity quality factors $Q$. In order to see how the probability $P_{g}(\tau)$ changes with the intensity $|\alpha|^2$ of the input coherent state $|\alpha\rangle$, we plot $P_{g}(\tau)$ in Fig. \[fig2\](b) with the same parameters as in Fig. \[fig2\](a), except that the intensity is changed from $|\alpha|^2=16$ to $|\alpha|^2=4$. Figure \[fig2\] shows that both a higher quality factor $Q$ and a weaker intensity $|\alpha|^2$ of the input cavity field correspond to a larger ground-state probability $P_{g}$ for a fixed evolution time $\tau$. For fixed $Q$ and $\tau$, a weaker intensity $|\alpha|^2$ corresponds to a higher measurement probability. We plot $P_{g}(\tau)$ in Fig. \[fig2\] considering the simple case $\phi=\pi$. However, if we consider another $\phi$, then $|\alpha|^2$ should be chosen such that it satisfies the condition $2|\alpha|\sin(\phi/2)>1$. In conclusion, the quality factor $Q$ can be determined from the probabilities $P_{e/g}(\tau)$ of measuring the qubit states with a finite cavity field evolution time $\tau$.
Discussions and conclusions =========================== We discussed how to measure the cavity quality factor $Q$ by using the interaction between a single-mode microwave cavity field and a controllable superconducting charge qubit. Two methods are proposed. One measures the Wigner function of the state (\[eq:11\]) by using a standard optical method [@hans]. The other approach measures the qubit states. Using this latter method, information on the $Q$ value can be encoded into the reduced density matrix of the cavity field, and at the same time the qubit makes a $\pi/2$ rotation. Thus, with a suitable qubit-photon interaction time, information on the $Q$ value is transferred to the qubit-photon states. Finally, after another $\pi/2$ rotation, the charge qubit states are measured, and the $Q$ value can be obtained, as shown in Fig. \[fig2\] and Eqs. (\[eq:24\]) and (\[eq:188\]). It should be noticed that it is easier to measure charge states than to measure photon states in superconducting circuits. Our proposal shows that a cavity QED experiment with a SC qubit can be performed even for a relatively low $Q$ value, e.g., $Q\sim 10^{6}$. Initially, a coherent state is injected into the cavity, which is relatively easy to do experimentally. Although all rotations of the qubit are chosen as $\pi/2$ to demonstrate our proposal, other rotations can also be used to achieve our goal. To simplify these studies, and without loss of generality, we have assumed two components $|\beta\rangle$ and $|\beta \exp(i\phi)\rangle$ for the superpositions, with a $\phi=\pi$ phase difference in our numerical demonstrations. Of course, other superpositions can also be used to realize our purpose. The only condition to satisfy is that the distance between the two states $|\beta\rangle$ and $|\beta \exp(i\phi)\rangle$ should be larger than one. In order to obtain a numerical estimate for the detuning, we specified a value of the gate charge number $n_{g}$.
However, any gate charge that satisfies the large-detuning condition can be chosen to realize our proposal. Although we did not give a detailed description of another, resonance-based approach, it should be pointed out that the $Q$ value can also be determined by virtue of the resonant qubit-photon interaction. For example, if the superpositions [@liu] of the vacuum and the single-photon state are experimentally prepared, then we can follow the same steps as in Secs. III and IV to obtain the $Q$ value. This method [@rinner] has been applied to micromasers, where the qubits are two-level atoms. However, the approach using coherent states and a non-resonant qubit-photon interaction should be easier to implement experimentally than the one using single-photon states and a resonant qubit-photon interaction. Our proposal can also be generalized to the models used in Refs. [@wallraff; @blais], which are experimentally accessible. We hope that our proposal can open new doors to experimentally test the $Q$ value and motivate further experiments on cavity quantum electrodynamics with SC qubits. acknowledgments =============== This work was supported in part by the National Security Agency (NSA) and Advanced Research and Development Activity (ARDA) under Air Force Office of Scientific Research (AFOSR) contract number F49620-02-1-0334, and by the National Science Foundation grant No. EIA-0130383. Effective Hamiltonian with large detuning ========================================= The Hamiltonian $H=H_{0}+H_{1}$ of a two-level atom interacting with a single-mode cavity field can be written as $$\begin{aligned} H_{0}&=&\frac{1}{2}\hbar\Omega\sigma_{z}+\hbar\omega a^{\dagger }a,\\ H_{1}&=&\hbar (g a^{\dagger }\sigma_{-}+g^{*} a\sigma_{+}),\end{aligned}$$ with a complex number $g$. Let us assume $\Delta=\Omega-\omega > 0$ and $|g|/\Delta \ll 1$.
The eigenstates and corresponding eigenvalues of the free Hamiltonian $H_{0}$ are $$\begin{aligned} |e\rangle\otimes|n\rangle&\Longrightarrow& n\hbar\omega+\frac{1}{2}\hbar\Omega,\\ |g\rangle\otimes|m\rangle&\Longrightarrow& m\hbar\omega-\frac{1}{2}\hbar\Omega.\end{aligned}$$ In the interaction picture, any state can be written as $$|\psi(t)\rangle=U(t,t_{0})|\psi(t_{0})\rangle$$ with $$\begin{aligned} \label{a4} U(t,t_{0})&=&1+\frac{1}{i\hbar}\int_{t_{0}}^{t}H_{\rm int}(t_{1}){\rm d}t_{1}\\ &+&\left(\frac{1}{i\hbar}\right)^2\int_{t_{0}}^{t}\int_{t_{0}}^{t_{1}}H_{\rm int}(t_{1})H_{\rm int}(t_{2}){\rm d}t_{1}{\rm d}t_{2}+\cdots, \nonumber\end{aligned}$$ where $H_{\rm int}=U^{\dagger}_{0}(t)H_{1}U_{0}(t)$ with $U_{0}(t)=\exp\{-iH_{0}t/\hbar\}$. In the basis $\{|E_{l}\rangle\}=\{|e\rangle\otimes|n\rangle,\,\,|g\rangle\otimes|m\rangle\}$, Eq. (\[a4\]) can be expressed as $$\begin{aligned} U(t,t_{0})&=&1\nonumber\\ &+&\frac{1}{i\hbar}\int_{t_{0}}^{t}\sum_{l,m}|E_{l}\rangle\langle E_{l}|H_{\rm int}(t_{1})|E_{m}\rangle\langle E_{m}|{\rm d}t_{1}+\cdots. \nonumber\end{aligned}$$ After neglecting the fast-oscillating terms and keeping the leading term in $|g|^2/\Delta$, the evolution operator becomes $$\begin{aligned} &&U(t,t_{0})=U(t,0)=U(t)\\ &&\approx 1-i\frac{|g|^2}{\Delta}\int_{0}^{t}{\rm d}t_{1}[(n+1)|e,n\rangle\langle e,n|-n|g,n\rangle\langle g,n|]\nonumber,\end{aligned}$$ where we assume $t_{0}=0$. Finally, we obtain the effective Hamiltonian in the interaction picture as $$\label{a6} H_{\rm eff}=\hbar\frac{|g|^2}{\Delta}(|e\rangle \langle e|aa^{\dagger}-|g\rangle \langle g|a^{\dagger}a).$$ Transforming Eq. (\[a6\]) back to the Schrödinger picture, Eq. (\[eq:5\]) is obtained. This method can be generalized to obtain the effective Hamiltonian of a model with many two-level systems interacting with a common single-mode field. Equation (\[eq:5\]) can also be obtained by using the Fröhlich-Nakajima transformation [@Fro; @Nakajima; @wu; @sun].
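The dispersive result of Eq. (\[a6\]) can be checked against exact diagonalization within a single excitation manifold. In the sketch below (ours, with $\hbar=1$), the exact shift of $|e,n\rangle$ in the $\{|e,n\rangle,|g,n+1\rangle\}$ block agrees with $|g|^{2}(n+1)/\Delta$ up to higher-order corrections in $g/\Delta$:

```python
import numpy as np

def shifts(g, Delta, omega, n):
    # Exact vs dispersive energy shift of |e,n> in the manifold {|e,n>, |g,n+1>}
    Omega = omega + Delta
    H = np.array([[n * omega + Omega / 2.0, g * np.sqrt(n + 1.0)],
                  [g * np.sqrt(n + 1.0), (n + 1.0) * omega - Omega / 2.0]])
    # for Delta > 0, the state adiabatically connected to |e,n> is the upper eigenstate
    exact = np.linalg.eigvalsh(H).max() - (n * omega + Omega / 2.0)
    dispersive = g ** 2 * (n + 1.0) / Delta
    return exact, dispersive

exact, dispersive = shifts(g=0.01, Delta=1.0, omega=5.0, n=3)
```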
Wigner functions of superposition and mixed states ================================================== For completeness, we explicitly write the Wigner functions $W_{\pm}(x,p)$ of the superposition states in Eq. (\[eq:10\]) as follows: $$\begin{aligned} &&W_{\pm}(x,p)\nonumber\\ && =\frac{1}{\pi N^{2}_{\pm}}\left\{\exp \left[-\left(x-\sqrt{2}\,{\rm Re}\beta \right)^2-\left(p-\sqrt{2}\,{\rm Im}\beta \right)^2\right]\right.\nonumber\\ &&+\exp \left[-\left(x-\sqrt{2}\,{\rm Re}\beta^{\prime} \right)^2-\left(p-\sqrt{2}\,{\rm Im}\beta^{\prime} \right)^2\right]\nonumber\\ &&\left.\pm 2 {\rm Re}\left[P \exp \left(-\left(x-\wp_{1} \right)^2-\left(p+i\wp_{2} \right)^2\right)\right]\right\}\, ,\end{aligned}$$ with $$\begin{aligned} P&=&\exp(-i\theta)\exp[-|\alpha|^2(1-e^{i\phi})],\\ \wp_{1}&=&\frac{1}{\sqrt{2}}(\beta+\beta^{\prime*}),\\ \wp_{2}&=&\frac{1}{\sqrt{2}}(\beta-\beta^{\prime*}).\end{aligned}$$ The Wigner functions $W_{\pm D}(x,p)$ of the mixed states in Eq. (\[eq:11\]) with dissipation can be written as $$\begin{aligned} &&W_{\pm D}(x,p)\nonumber\\ && =\frac{1}{\pi N^{2}_{\pm}}\left\{\exp \left[-\left(x-u\sqrt{2}\,{\rm Re}\beta \right)^2-\left(p-u\sqrt{2}\,{\rm Im}\beta \right)^2\right]\right.\nonumber\\ &&+\exp \left[-\left(x-u\sqrt{2}\,{\rm Re}\beta^{\prime} \right)^2-\left(p-u\sqrt{2}\,{\rm Im}\beta^{\prime} \right)^2\right]\nonumber\\ &&\left.\pm 2 {\rm Re}\left[P \exp \left(-\left(x-u\,\wp_{1} \right)^2-\left(p+i u \,\wp_{2} \right)^2\right)\right]\right\}.\end{aligned}$$ [99]{} J.Q. You and F. Nori, Phys. Rev. B [**68**]{}, 064509 (2003); Physica E [**18**]{}, 33 (2003). A.D. Armour, M.P. Blencowe, and K.C. Schwab, Phys. Rev. Lett. [**88**]{}, 148301 (2002); E.K. Irish and K. Schwab, Phys. Rev. B [**68**]{}, 155311 (2003); A.N. Cleland and M.R. Geller, Phys. Rev. Lett. [**93**]{}, 070501 (2004); I. Martin, A. Shnirman, L. Tian, and P. Zoller, Phys. Rev. B [**69**]{}, 125339 (2004); L. Tian, quant-ph/0412185. Y.D. Wang, P. Zhang, D.L. Zhou, and C.P. Sun, Phys. 
Rev. B [**70**]{}, 224515 (2004). L.F. Wei, Yu-xi Liu, and F. Nori, Europhys. Lett. [**67**]{}, 1004 (2004); Phys. Rev. B [**71**]{}, 134506 (2005); A. Blais, A.M. van den Brink, and A.M. Zagoskin, Phys. Rev. Lett. [**90**]{}, 127901 (2003); I. Chiorescu, P. Bertet, K. Semba, Y. Nakamura, C.J.P.M. Harmans, and J.E. Mooij, Nature [**431**]{}, 159 (2004). A. Shnirman, G. Schön, and Z. Hermon, Phys. Rev. Lett. [**79**]{}, 2371 (1997); F. Plastina and G. Falci, Phys. Rev. B [**67**]{}, 224514 (2003). W.A. Al-Saidi and D. Stroud, Phys. Rev. B [**65**]{}, 224512 (2002). J.Q. You, J.S. Tsai, and F. Nori, Phys. Rev. B [**68**]{}, 024510 (2003); Physica E [**18**]{}, 35 (2003). Yu-xi Liu, L.F. Wei, and F. Nori, Europhys. Lett. [**67**]{}, 941 (2004). Y.B. Gao, Y.D. Wang, and C.P. Sun, Phys. Rev. A [**71**]{}, 032302 (2005); P. Zhang, Z.D. Wang, J.D. Sun, and C.P. Sun, quant-ph/0407069, Phys. Rev. A (in press). C.P. Yang, S.I. Chu, and S. Han, Phys. Rev. A [**67**]{}, 042311 (2003). A.M. Zagoskin, M. Grajcar, and A.N. Omelyanchouk, Phys. Rev. A [**70**]{}, 060301(R) (2004). Z.Y. Zhou, S.I. Chu, and S. Han, Phys. Rev. B [**66**]{}, 054527 (2002). E. Paspalakis and N.J. Kylstra, J. Mod. Opt. [**51**]{}, 1679 (2004). Yu-xi Liu, J.Q. You, L.F. Wei, C.P. Sun, and F. Nori, quant-ph/0501047. A. Wallraff, D.I. Schuster, A. Blais, L. Frunzio, R.S. Huang, J. Majer, S. Kumar, S.M. Girvin, and R.J. Schoelkopf, Nature [**431**]{}, 162 (2004). A. Blais, R.S. Huang, A. Wallraff, S.M. Girvin, and R.J. Schoelkopf, Phys. Rev. A [**69**]{}, 062320 (2004). P.K. Day, H.G. LeDuc, B.A. Mazin, A. Vayonakis, and J. Zmuidzinas, Nature [**425**]{}, 817 (2003). M.O. Scully and M.S. Zubairy, [*Quantum Optics*]{} (Cambridge University Press, Cambridge, 1997). Y. Nakamura, Y.A. Pashkin, and J.S. Tsai, Nature [**398**]{}, 786 (1999); Phys. Rev. Lett. [**87**]{}, 246601 (2001); Y. Nakamura, Y.A. Pashkin, T. Yamamoto, and J.S. Tsai, [*ibid.*]{} [**88**]{}, 047901 (2002); Y.A. Pashkin, T. Yamamoto, O. Astafiev, Y. 
Nakamura, D.V. Averin, and J.S. Tsai, Nature [**421**]{}, 823 (2003); T. Yamamoto, Y.A. Pashkin, O. Astafiev, Y. Nakamura, and J.S. Tsai, [*ibid.*]{} [**425**]{}, 941 (2003). K.W. Lehnert, K. Bladh, L.F. Spietz, D. Gunnarsson, D.I. Schuster, P. Delsing, and R.J. Schoelkopf, Phys. Rev. Lett. [**90**]{}, 027002 (2003). M. Brune, E. Hagley, J. Dreyer, X. Maître, A. Maali, C. Wunderlich, J.M. Raimond, and S. Haroche, Phys. Rev. Lett. [**77**]{}, 4887 (1996); X. Maître, E. Hagley, G. Nogues, C. Wunderlich, P. Goy, M. Brune, J.M. Raimond, and S. Haroche, Phys. Rev. Lett. [**79**]{}, 769 (1997). D. Vitali, P. Tombesi, and G.J. Milburn, Phys. Rev. Lett. [**79**]{}, 2442 (1997). H. Uwazumi, T. Shimatsu, and Y. Kuboki, J. of Appl. Phys. [**91**]{}, 7095 (2002). S. Rinner, H. Walther, and E. Werner, Phys. Rev. Lett. [**93**]{}, 160407 (2004). H. Fröhlich, Phys. Rev. [**79**]{}, 845 (1950). S. Nakajima, Adv. Phys. [**4**]{}, 463 (1953). Y. Wu, Phys. Rev. A [**54**]{}, 1586 (1996). C.P. Sun, Yu-xi Liu, L. F. Wei, and F. Nori, quant-ph/0506011. Hans-A. Bachor, [*A Guide to Experiments in Quantum Optics*]{} (Wiley-VCH, New York, 1998)
--- abstract: 'The paper focuses on a class of light-tailed multivariate probability distributions. These are obtained via a transformation of the marginals from a heavy-tailed original distribution. This class was introduced in Balkema et al. (2009). As shown there, for the light-tailed meta distribution the sample clouds, properly scaled, converge onto a deterministic set. The shape of the limit set gives a good description of the relation between extreme observations in different directions. This paper investigates how sensitive the limit shape is to changes in the underlying heavy-tailed distribution. Copulas fit in well with multivariate extremes. By Galambos’s Theorem, existence of directional derivatives in the upper endpoint of the copula is necessary and sufficient for convergence of the multivariate extremes provided the marginal maxima converge. The copula of the max-stable limit distribution does not depend on the marginals. So marginals seem to play a subsidiary role in multivariate extremes. The theory and examples presented in this paper cast a different light on the significance of marginals. For light-tailed meta distributions the asymptotic behaviour is very sensitive to perturbations of the underlying heavy-tailed original distribution; it may change drastically even when the asymptotic behaviour of the heavy-tailed density is not affected.' author: - 'Guus Balkema$\qquad$ Paul Embrechts$\qquad$ Natalia Nolde$^{,\ast}$' bibliography: - 'bibliography.bib' title: Sensitivity of the asymptotic behaviour of meta distributions --- extremes, limit set, limit shape, meta distribution, regular partition, sensitivity. Introduction ============ In recent years meta distributions have been used in several applications of multivariate probability theory, especially in finance. The construction of meta distributions can be illustrated by a simple example. Start with a multivariate spherical $t$ distribution and transform its marginals to be Gaussian. 
The new distribution has normal marginals. It is called a *meta distribution* with normal marginals based on the *original* $t$ distribution. Since the copula of a multivariate distribution is invariant under strictly increasing coordinatewise transformations, the original distribution and the meta distribution share their copula and hence have the same dependence structure. The light-tailed meta distribution inherits not only the dependence properties of the original $t$ distribution, but also the asymptotic dependency. These asymptotic properties are of importance to risk theory. They include rank-based measures of tail dependence, and the tail dependence coefficients (see e.g. Chapter 5 in [@McNeil2005]). Vectors with Gaussian densities have asymptotically independent components, whatever the correlation of the Gaussian density. The meta density with standard Gaussian marginals based on a spherically symmetric Student $t$ density has the copula of the $t$ distribution, and the max-stable limit distributions for the coordinatewise maxima also have the same copula. The max-stable limit vectors have dependent components. However, this recipe for constructing distributions with Gaussian, or more generally, light-tailed marginals, based on a heavy-tailed density with a pronounced dependency structure in the limit, has to be treated with caution. The limit shape of the sample clouds from the meta distribution is affected by perturbations of the original heavy-tailed density, perturbations which are so small that they do not affect the multivariate extreme value behaviour. In going from densities with heavy-tailed marginals to the meta densities with light-tailed marginals the dependence structure of the max-stable limit distribution is preserved by a well-known invariance result in multivariate extreme value theory. 
In this paper we shall give exact conditions on the severity of changes in the original heavy-tailed distribution which are allowed if one wants to retain the asymptotic behaviour of the coordinatewise extremes. It will be shown that perturbations which are negligible compared to these changes may affect the limit shape of the sample clouds of the light-tailed meta distribution. Multivariate distribution functions (dfs) have the property that there is a very simple relation between the df of the original vector and the df of the coordinatewise maximum of any number of independent observations from this distribution. One just raises the df to the given power. This makes dfs and in particular copulas ideal tools to handle coordinatewise maxima, and to study their limit behaviour. This rather analytic approach sometimes obscures the probabilistic content of the results. The approach via densities and probability measures on $\rbb^d$ which is taken in this paper may at first seem clumsy, but it has the advantage that there is a close relation to what one observes in the sample clouds. Our interest is in extremes. The asymptotic behaviour of sample clouds gives a very intuitive view of multivariate extremes. The limit shape of sample clouds, if it exists, describes the relation between extreme observations in different directions; it indicates in which directions more severe extremes are likely to occur, and how much more extreme these will be. It has been shown in [@Balkema2009] that sample clouds from meta distributions in the *standard set-up*, see below, can be scaled to converge onto a *limit set*. The boundary of this limit set has a simple explicit analytic description. The limit shape of sample clouds from the meta distribution contains no information about the shape of the sample clouds from the original distribution. The results of the present paper support this point. 
The aim of the paper is to investigate stability of the shape of the limit set under changes in the original distribution. We look at changes which do not affect the marginals, or at least their asymptotic behaviour. Keeping marginals (asymptotically) unchanged allows us to isolate the role played by the copula. We shall examine how much the original and meta distributions in the standard set-up may be altered without affecting the asymptotic behaviour of the scaled sample clouds. This shows how robust the limit shape is. Then we move on to explore sensitivity. The limit shape of the scaled sample clouds from the light-tailed meta distribution is very sensitive to certain slight perturbations of the original distribution, perturbations which affect the density in particular regions. For heavy-tailed distributions the region around the coordinate planes seems to be most sensitive. The present paper is a follow-up to [@Balkema2009]. The latter paper contains a detailed analysis of meta densities and gives the motivation and implications of the assumptions in the standard set-up. It presents the derivation and analysis of the limit shape of the sample clouds from the light-tailed meta distribution, and may be consulted for more details on these subjects. In the present paper Section \[sprelim\] introduces the notation and recalls the relevant definitions and results from [@Balkema2009]. Section \[sres\] is the heart of the paper; here we present details of the constructions which demonstrate robustness and sensitivity of the limit shape and the asymptotics of sample clouds from meta distributions. Concluding remarks are given in Section \[sconc\]. The appendix contains a few supplementary results, and a summary of notation which the reader may find useful when reading the paper. 
Preliminaries {#sprelim} ============= Definitions and standard set-up ------------------------------- The following definition describes the formal procedure for constructing meta distributions. Let $G_1,\ldots,G_d$ be continuous dfs on $\rbb$ which are strictly increasing on the intervals $I_i=\{0<G_i<1\}$. Consider a random vector $\ZB$ in $\rbb^d$ with df $F$ and continuous marginals $F_i$, $i=1,\ldots,d$. Define the transformation $$K(x_1,\ldots,x_d)=(K_1(x_1),\ldots,K_d(x_d)),\qquad K_i(s)=F_i^{-1}(G_i(s)),\quad i=1,\ldots,d.$$ The df $G=F\circ K$ is the *meta distribution* (with *marginals* $G_i$) based on the *original* df $F$. The random vector $\XB$ is said to be a *meta vector* for $\ZB$ (with *marginals* $G_i$) if $\ZB\stackrel{d}{=}K(\XB)$. The coordinatewise map $K=K_1\otimes\cdots\otimes K_d$ which maps $\xb=(x_1,\ldots,x_d)\in I=I_1\times\cdots\times I_d$ into the vector $\zb=(K_1(x_1),\ldots,K_d(x_d))$ is called the *meta transformation*. The class of distributions above is too general for our purpose. Hence, we choose to restrict our attention to a subclass by imposing more structure on the original distribution and on the marginals of the meta distribution. The standard set-up of this paper is the same as in [@Balkema2009]. Recall the basic example we started with in the introduction. The multivariate $t$ density has a simple structure. It is fully characterized by the shape of its level sets, scaled copies of the defining ellipsoid, and by the decay $c/r^{\l+d}$ of its tails along rays. The constant $\l>0$ denotes the degrees of freedom, $d$ the dimension of the underlying space, and $c$ is a positive constant depending on the direction of the ray. 
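A one-dimensional sketch may help fix ideas about the component maps $K_i=F_i^{-1}\circ G_i$. The marginals below are our own illustrative choices, not the paper's: a standard Cauchy original marginal (heavy-tailed) and a standard normal meta marginal (light-tailed), the latter supplied by the stdlib `statistics.NormalDist`:

```python
import math
from statistics import NormalDist

# Illustrative marginals (assumed for this sketch): original marginal
# F0 = standard Cauchy (heavy tail), meta marginal G0 = standard normal (light tail).
G0 = NormalDist().cdf
def F0(t):
    return 0.5 + math.atan(t) / math.pi
def F0_inv(u):
    return math.tan(math.pi * (u - 0.5))

def K0(s):
    """One component K_i = F_i^{-1} o G_i of the meta transformation."""
    return F0_inv(G0(s))

# The meta-df relation G = F o K holds pointwise by construction.
for s in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(F0(K0(s)) - G0(s)) < 1e-9
# Symmetric marginals make the component odd: K0(-t) = -K0(t).
assert abs(K0(-1.5) + K0(1.5)) < 1e-9
# The light normal tail is stretched far out onto the heavy Cauchy scale.
assert K0(3.0) > 100.0
```

The last assertion illustrates the strong distortion caused by the meta transformation: a moderate quantile of the light-tailed marginal is mapped to a very large quantile of the heavy-tailed one.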
In the more general setting of the paper, the tails of the density are allowed to decrease as $cL(r)/r^{\l+d}$ for some slowly varying function $L$ and the condition of elliptical level sets is replaced by the requirement that the level sets are equal to scaled copies of a fixed bounded convex or star-shaped set (a set $D$ is star-shaped if $\zb\in D$ implies $t\zb\in D$ for $0\le t<1$). Due to the power decay of the tails, the density $f$ is said to be *heavy-tailed*. Densities with the above properties constitute the class $\FC_\l$. The set $\FC_\l$ for $\l>0$ consists of all positive continuous densities $f$ on $\rbb^d$ which are asymptotic to a function of the form $f_*(n_D(\zb))$ where $f_*(r)=L(r)/r^{\l+d}$ is a continuous decreasing function on $[0,\nf)$, $L$ varies slowly, and $n_D$ is the gauge function of the set $D$. The set $D$ is bounded, open and star-shaped. It contains the origin and has a continuous boundary. The reader may keep in mind the case where $D$ is a convex symmetric set. In that case the gauge function is a norm, and $D$ the unit ball. The normal marginals of the meta density are generalized to include densities whose tails are asymptotic to a *von Mises function*: $g_0(s)\sim e^{-\j(s)}$ for $s\to\nf$ with *scale function* $a=1/\j'$, where $\j$ is a $C^2$ function with a positive derivative such that $$a(s)=1/\j'(s),\qquad a'(s)\to0\qquad s\to\nf.$$ This condition on the meta marginals ensures that they lie in the maximum domain of attraction of the Gumbel limit law $\exp(-e^{-x})$, $x\in\rbb$; see e.g. Proposition 1.4 in [@Resnick1987]. In this case we say that the meta distribution is *light-tailed*. \[dssu\] In the *standard set-up*, the density $f$ lies in $\FC_\l$ for some $\l>0$, and $g_0$ is continuous, positive, symmetric, and asymptotic to a von Mises function $e^{-\j}$. We assume that $\j$ varies regularly at infinity with exponent $\q>0$. The density $g$ is the meta density with marginals $g_0$, based on $f$. 
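The gauge function $n_D(\zb)=\inf\{t>0:\zb/t\in D\}$ can be computed from a membership oracle alone: for star-shaped $D$ the map $t\mapsto\chi_D(\zb/t)$ is monotone, so bisection applies. The sketch below is our own illustration (the helper names `gauge` and `in_D` are ours), checked against the convex symmetric case where the gauge is the norm with unit ball $D$:

```python
def gauge(z, in_D, iters=80):
    """Gauge function n_D(z) = inf{t > 0 : z/t in D} of a bounded, open,
    star-shaped set D containing the origin, via bisection on an oracle in_D."""
    if all(c == 0.0 for c in z):
        return 0.0
    hi = 1.0
    while not in_D([c / hi for c in z]):   # scale up until z/hi lies inside D
        hi *= 2.0
    lo = hi / 2.0
    while in_D([c / lo for c in z]):       # scale down until z/lo falls outside D
        lo /= 2.0
    for _ in range(iters):                  # bisect: z/t is in D exactly for t > n_D(z)
        mid = 0.5 * (lo + hi)
        if in_D([c / mid for c in z]):
            hi = mid
        else:
            lo = mid
    return hi

# Convex symmetric D: the gauge is the norm whose unit ball is D.
ball = lambda z: z[0] ** 2 + z[1] ** 2 < 1.0             # open Euclidean unit ball
assert abs(gauge([3.0, 4.0], ball) - 5.0) < 1e-9
ellipse = lambda z: (z[0] / 2.0) ** 2 + z[1] ** 2 < 1.0  # D = open ellipse
assert abs(gauge([2.0, 2.0], ellipse) - 2.0 * gauge([1.0, 1.0], ellipse)) < 1e-9  # homogeneity
```

The homogeneity check reflects the scaling property $n_D(r\zb)=r\,n_D(\zb)$, which makes the level sets of $f_*(n_D(\zb))$ scaled copies of $D$.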
Convergence of sample clouds ---------------------------- An *$n$-point sample cloud* is the point process consisting of the first $n$ points of a sequence of independent observations from a given distribution, after proper scaling. We write $$\label{q1Nn} N_n=\{\ZB_1/a_n,\ldots,\ZB_n/a_n\},$$ where $\ZB_1,\ZB_2,\ldots$ are independent observations from the given probability distribution on $\rbb^d$, and $a_n$ are positive scaling constants. It is customary to write $N_n(A)$ for the number of the points of the sample cloud that fall into the set $A$. In this section, we discuss the asymptotic behaviour of sample clouds from the original density $f$ and from the associated meta density $g$ in the standard set-up. The difference in the asymptotic behaviour is striking: sample clouds from a heavy-tailed density $f$ converge in distribution to a Poisson point process on $\rbb^d\sm\{\zerob\}$ whereas sample clouds from a light-tailed meta density $g$ tend to have a clearly defined boundary. They converge onto a deterministic set. ### Convergence for densities in $\FC_\l$ and measures in $\DC_\l$ For densities in $\FC_\l$, $\l>0$, there is a simple limit relation: $$f(r_n\wb_n)/f_*(r_n)\to h(\wb),\qquad \wb_n\to\wb\ne\zerob,\ r_n\to\nf,$$ where $$h(\wb)=1/n_D(\wb)^{\l+d}=h(\ub)/r^{\l+d},\qquad r=\|\wb\|_2>0,\quad \ub=\wb/r.$$ Convergence is uniform and $\LB^1$ on the complement of centered balls. If $\ZB_1,\ZB_2,\ldots$ are independent observations from the density $f$ then the sample clouds $N_n$ in (\[q1Nn\]) converge in distribution to the Poisson point process with intensity $h$ weakly on the complement of centered balls for a suitable choice of scaling constants $a_n$. A probability measure $\p$ on $\rbb^d$ varies regularly with scaling function $a(t)\to\nf$ if there is an infinite Radon measure $\r$ on $\rbb^d\sm\{\zerob\}$ such that the finite measures $t\p$ scaled by $a(t)$ converge to $\r$ vaguely on $\rbb^d\sm\{\zerob\}$. The measure $\r$ has the scaling property $$\label{q1r} \r(rA)=\r(A)/r^{\l}\qquad r>0,$$ for all Borel sets $A$ in $\rbb^d\sm\{\zerob\}$. 
The constant $\l\ge0$ is the exponent of regular variation. If it is positive then $\r$ gives finite mass to the complement of the open unit ball $B$ and weak convergence holds on the complement of centered balls. We shall denote the set of all probability measures which vary regularly with exponent $\l>0$ by $\DC_\l$. In particular $\p\in\DC_\l$ if $\p$ has a continuous density in $\FC_{\l}$. As above, for independent observations $\ZB_1,\ZB_2,\ldots$ from a distribution $\p\in\DC_\l$ the scaled sample clouds $N_n$ in (\[q1Nn\]) (with scaling constants $a(n)$) converge in distribution to a Poisson point process with mean measure $\r$ weakly on the complement of centered balls (since the mean measures converge, see e.g. Proposition 3.21 in [@Resnick1987] or Theorem 11.2.[V]{} in [@Daley2008]). ### Convergence for meta densities in the standard set-up Sample clouds from light-tailed meta densities in the standard set-up, under suitable scaling, converge onto a deterministic set, referred to as the *limit set*, in the sense of the following definition. \[dconv\] For a compact set $E$ in $\rbb^d$, the finite point processes $N_n$ *converge onto* $E$ if for open sets $U$ containing $E$, the probability of a point outside $U$ vanishes, $\pbb\{N_n(U^c)>0\}\to0$, and if $$\pbb\{N_n(\pb+\e B)>m\}\to1\qquad m\ge1,\ \e>0,\ \pb\in E.$$ We now recall Theorem 2.6 of [@Balkema2009], which characterizes the shape of the limit set for meta distributions in the standard set-up. \[tE\] Let $f$, $g$ and $g_0\sim e^{-\j}$ satisfy the assumptions of the standard set-up. Define $$E:=E_{\l,\q}=\big\{\ub\in\rbb^d : |u_1|^{\q}+\cdots+|u_d|^{\q}+\l\ge(\l+d)\|\ub\|_\nf^{\q}\big\}.$$ If $\j(r_n)\sim\log n$, then for the sequence of independent observations $\XB_1,\XB_2,\ldots$ from the meta density $g$, the sample clouds $M_n=\{\XB_1/r_n,\ldots,\XB_n/r_n\}$ converge onto $E$. Further notation and conventions -------------------------------- In order to ease the exposition, we introduce some additional assumptions and notation. 
All univariate dfs are assumed to be continuous and strictly increasing. The dfs $F_0$ and $\tilde F_0$ on $\rbb$ are *tail asymptotic* if $$\tilde F_0(-t)/F_0(-t)\to1\qquad(1-\tilde F_0(t))/(1-F_0(t))\to1\qquad t\to\nf.$$ The sample clouds from a heavy-tailed df $\tilde F$ *converge to the point process* $\tilde N$ if $\tilde N$ is a Poisson point process on $\rbb^d\sm\{\zerob\}$, and if the sample clouds converge to $\tilde N$ in distribution weakly on the complement of centered balls, where the scaling constants $c_n$ satisfy $1-\tilde F_0(c_n)\sim1/n$. Two heavy-tailed dfs $F^*$ and $F^{**}$ *have the same asymptotics* if the marginals are tail asymptotic and if the sample clouds converge to the same point process. The light-tailed dfs $G^*$ and $G^{**}$ *have the same asymptotics* if the marginals are tail asymptotic and the sample clouds converge onto the same compact set $E^*$, with scale factors $b_n$ which satisfy $-\log(1-G_0(b_n))\sim\log n$. One may replace a scaling sequence $a_n$ by a sequence asymptotic to $a_n$ without affecting the limit. The scaling of sample clouds is determined up to asymptotic equality by the marginals. Tail asymptotic marginals yield asymptotic scalings. Results {#sres} ======= We now turn our attention to the main issues of the paper. The aim is to investigate how much the dfs $F$ and $G=F\circ K$ in the standard set-up may be altered without affecting the asymptotic behaviour of the scaled sample clouds. For simplicity we assume here that the marginal densities of $F$ are all equal to a positive continuous symmetric density $f_0$. It follows that the components of the meta transformation $K$ are equal: $$K(\xb):=(K_0(x_1),\ldots,K_0(x_d)),\qquad K_0=F_0^{-1}\circ G_0,\qquad K_0(-t)=-K_0(t).$$ Let $F^*$ be a df with marginal densities $f_0$. Then $G^*=F^*\circ K$ is the meta distribution based on the df $F^*$ with marginals $g_0$. 
One can pose the following questions: If the scaled sample clouds from $G^*$ and from $G$ converge onto the same set $E$, do the scaled sample clouds from $F^*$ converge to the same point process $N$ as those from $F$? If the scaled sample clouds from $F^*$ and from $F$ converge to the same Poisson point process $N$, do the scaled sample clouds from $G^*$ converge onto the same set $E$ as those from $G$? The answer to the corresponding questions for coordinatewise maxima and their exponent measures (if we also allow translations) is “Yes”. Here, for sample clouds and their limit shape, the answer to both questions is “No”. This section contains some counterexamples which will be worked out further in the next two sections. If we replace $f$ by a weakly asymptotic density $f^*\asymp f$, the asymptotic behaviour of sample clouds from $g^*$ is not affected, since $g^*\asymp g$ (see Lemma \[mrs0\] below), but the scaled sample clouds from $f^*$ obviously need not converge. What if they do? Let $\ZB$ have a spherical Student $t$ density $f(\zb)=f_*(\|\zb\|)$ with marginals $f_0$ and limit function $h(\wb)=1/\|\wb\|^{\l+d}$ with marginals $c/t^{\l+1}$. The vector with components $\ab_1^T\ZB,\ldots,\ab_d^T\ZB$ where $\ab_1,\ldots,\ab_d$ are independent unit vectors will have the same marginals $f_0$ but density $f_*(n_E(\zb))$ with elliptic level sets, which are spherical only if the vectors $\ab_j$ are orthogonal. There are many star-shaped sets $D$ for which $\tilde h(\wb)=1/n_D(\wb)^{\l+d}$ has marginals $c/t^{\l+1}$ as above. Probability densities $\tilde f\sim f_*(n_D)$ will lie in $\FC_\l$ and have marginals asymptotic to the Student $t$ marginals $f_0$ above. All these densities are weakly asymptotic to each other, $\tilde f\asymp f$. The only difference between the densities is in their copulas. 
The information of the dependency contained in the set $D$ is preserved in the limit of the sample clouds from the density $\tilde f$, but lost in the limit shape $E$ of the sample clouds from the meta density $\tilde g$. Surprisingly the information on the shape is lost in the step to the meta density $\tilde g$, but the tail exponent $\l$ of the marginals is still visible in the limit shape $E$. What we want to do is fix the marginals $f_0$ and $g_0$ (which determine the meta transformation $K$), and then vary the copula and check the limit behaviour of the sample clouds (where we impose the condition that both converge). We are looking for dfs $F^*$ and $G^*$ with the properties: $F^*$ has marginal densities $f_0$; $G^*$ is the meta distribution based on $F^*$ with marginal densities $g_0$; the scaled sample clouds from $F^*$ converge to a Poisson point process $N^*$; the scaled sample clouds from $G^*$ converge onto a compact set $E^*$. Moreover one would like $E^*$ to be the set $E_{\l,\q}$ of Theorem \[tE\], or $N^*$ to have mean measure $\r^*=\r$ with the intensity $h$ given above. So we either choose $F^*$ to have the same asymptotics as $F$, or $G^*$ to have the same asymptotics as $G$. Note that the four conditions above have certain implications. The mean measure $\r^*$ of the Poisson point process $N^*$ is an excess measure with exponent $\l$, see (\[q1r\]); its marginals are equal and symmetric with intensity $\l/|t|^{\l+1}$ since the marginal densities $f_0$ are equal and symmetric and the scaling constants $c_n$ ensure that $\r\{w_d\ge1\}=1$. The limit set $E^*$ is a subset of the cube $C=[-1,1]^d$ and projects onto the interval $[-1,1]$ in each coordinate, again by our choice of the scaling constants. The two sections below describe procedures for altering distributions without changing the marginals too much. A block partition is a special kind of partition into coordinate blocks. 
If the blocks are relatively small then the asymptotics of a distribution do not change if one replaces it by one which gives the same mass to each block. Block partitions are mapped into block partitions by $K$. The mass is preserved, but the size and shape of the blocks may change drastically. The block partitions provide insight into the relation between the asymptotic behaviour of the measures $dF^*$ and $dG^*$. In the second procedure we replace $dF$ by a probability measure $d\tilde F$, which agrees outside a bounded set with $d(F+F^o)$ where $F^o$ has lighter marginals than $F$: $$F_j^o(-t)\ll F_0(-t)\qquad 1-F_j^o(t)\ll 1-F_0(t)\qquad t\to\nf,\ j=1,\ldots,d.$$ This condition ensures that $\tilde F$ and $F$ have the same asymptotics. The two corresponding light-tailed meta dfs $\tilde G$ and $G$ on $\xb$-space may have different asymptotics since the scaling constants $b^o_n$ and $b_n$ may be asymptotic even though $G^o$ has lighter tails than $G$. If this is the case, and the scaled sample clouds from $G^o$ converge onto a compact set $E^o$, then those from $\tilde G$ converge onto the union $E\cup E^o$, which may be larger than $E$. These two procedures enable us to construct dfs $F^*$ with marginal densities $f_0$ and meta dfs $G^*=F^*\circ K$ with marginal densities $g_0$ which exhibit unexpected behaviour: $G^*$ and $G$ have the same asymptotics, but the scaled sample clouds from $F^*$ converge to a Poisson point process which lives on the diagonal. (See Theorems \[thc1\] and \[thc2\], and Example \[em1\].) The scaled sample clouds from $G^*$ converge onto $E^*=A\cup E_{00}$, where $E_{00}$ is the *diagonal cross* $$E_{00}=\{r\d \mid 0\le r\le1,\ \d\in\{-1,1\}^d\},$$ and $A\ss[-1,1]^d$ is any compact star-shaped set with continuous boundary. The dfs $F^*$ and $F$ have the same asymptotics. The density $f^*$ is asymptotic to $f$ on every ray which does not lie in a coordinate plane. (See Example \[em1\].) What does the copula say about the asymptotics? 
Everything, since it determines the df if the marginals are given; nothing, since the examples above show that there is no relation between the asymptotics of $F^*$ and the asymptotics of $G^*$ even with the prescribed marginals $f_0$ and $g_0$. One might hope that at least the parameters $\l$ and $\q$, determined by the marginals, might be preserved in the asymptotics. The point process $N^*$ reflects the parameter $\l$ in the marginal intensities $\l/|t|^{\l+1}$. However, $E^*=E_{\l^*,\q^*}$ may hold for any $\l^*$ and $\q^*$ in $(0,\nf)$ by taking $A=E_{\l^*,\q^*}$ in the second example above. We now start with the technical details. The construction procedures discussed above will change an original df $\hat F$ with marginals $F_0$ into a df $\tilde F$ whose marginals $\tilde F_j$ are tail equivalent to $F_0$. This is no serious obstacle. \[prs0\] Let the scaled sample clouds from $\tilde F$ converge to a point process $\tilde N$, and let the scaled sample clouds from $\tilde G=\tilde F\circ K$ converge onto the compact set $\tilde E$. If the marginals $\tilde F_j$ are continuous and strictly increasing and tail asymptotic to $F_0$ then there exists a df $F^*$ with marginals $F_0$ such that $F^*$ has the same asymptotics as $\tilde F$ and $G^*=F^*\circ K$ has the same asymptotics as $\tilde G$. If moreover $\tilde F$ has a density $\tilde f$ with marginals asymptotic to $f_0$ in $\pm\nf$, then $F^*$ has a density $f^*$, and for any vector $\wb$ with non-zero coordinates and any sequence $\wb_n\to\wb$ and $r_n\to\nf$ there is a sequence $\wb'_n\to\wb$ such that $f^*(r_n\wb_n)\sim\tilde f(r_n\wb'_n)$. Let $F^*=\tilde F\circ K_F$ be the meta df based on $\tilde F$ with marginals $F_0$. The components $K_{Fj}=\tilde F_j^{-1}\circ F_0$ are homeomorphisms and satisfy $K_{Fj}(t)\sim t$ for $|t|\to\nf$. (Here we use that the marginal tails vary regularly with exponent $-\l\ne0$.) It follows that $$\label{qrskf} \|K_F(\zb)-\zb\|/\|\zb\|\to0,\qquad \|\zb\|\to\nf.$$ This ensures that $\tilde F$ and $F^*$ have the same asymptotics. 
(For any $\e>0$ the distance between the scaled sample point $\ZB/c_n$ and $K_F(\ZB)/c_n$ is bounded by $\e\|\ZB\|/c_n$ for $\|\ZB\|\ge\e c_n$ and $n\ge n_\e$.) A similar argument shows that $\tilde G=\tilde F\circ K$ and $G^*=F^*\circ K$, the meta df based on $\tilde G$ with marginals $G_0$, have the same asymptotics. Here we use that $\tilde G_j=\tilde F_j\circ K_0$ is tail asymptotic to $G_0=F_0\circ K_0$ since $\tilde F_j$ is tail asymptotic to $F_0$. Under the assumptions on the density the Jacobian of $K_F$ is asymptotic to one at the points $r_n\wb'_n$, and (\[qrskf\]) gives the limit relation with $r_n\wb'_n=K_F\inv(r_n\wb_n)$. In general, the densities $f^*$ and $\tilde f$ (in the notation of the above proposition) are only weakly asymptotic, as in Proposition 1.8 in [@Balkema2009]. The density $f^*$ on $\zb$-space is related to $f$ in the same way as the density $g^*$ is related to $g$. If $f^*\asymp f$ or $f^*\le Cf$ or $f^*\sim f$, then these relations also hold for $g^*$ and $g$, and vice versa. Similarly for the marginals: $g^*_j\sim g_0$ in $\nf$ implies $f^*_j\sim f_0$ in $\nf$. These results are formalized in the lemma below. \[mrs0\] If $F^*$ has density $f^*$ with marginals $f_j^*$ and $G^*=F^*\circ K$ has density $g^*$ with marginals $g^*_j$, then $$g^*(\xb)/g(\xb)=f^*(\zb)/f(\zb)\qquad g^*_j(s)/g_0(s)=f^*_j(t)/f_0(t)\qquad \zb=K(\xb),\ t=K_0(s).$$ The Jacobian drops out in the quotients. Block partitions {#sdom2} ---------------- We introduce partitions of $\rbb^d$ into bounded Borel sets $B_n$. In our case the sets $B_n$ are coordinate blocks. Since our dfs have continuous marginals the boundaries of the blocks are null sets, so we shall not bother about boundary points and treat the blocks as closed sets. 
To construct such a *block partition* start with an increasing sequence of cubes $$s_nC=[-s_n,s_n]^d\qquad 0<s_1<s_2<\cdots,\ s_n\to\nf\qquad C=[-1,1]^d.$$ Subdivide the ring $R_n=s_{n+1}C\sm s_nC$ between two successive cubes into blocks by a symmetric partition of the interval $[-s_{n+1},s_{n+1}]$ with division points $\pm s_{nj}$, $j=1,\ldots,m_n$, with $$-s_{n+1}<-s_n<\cdots<-s_{n1}<s_{n1}<\cdots<s_{nm_n}=s_n<s_{nm_n+1}=s_{n+1}.$$ This gives a partition of the cube $s_{n+1}C$ into $(2m_n+1)^d$ blocks of which $(2m_n-1)^d$ form the cube $s_nC$. The remaining blocks form the ring $R_n$. The meta transformation $K$ transforms block partitions in $\xb$-space into block partitions in $\zb$-space. A comparison of the original block partition with its transform gives a good indication of the way in which the meta transformation distorts space. A partition of $\rbb^d$ into Borel sets $A_n$ is *regular* if the following conditions hold: 1\) The sets $A_n$ are bounded and have positive volume $|A_n|>0$; 2\) Every compact set is covered by a finite number of sets $A_n$; 3\) The sets $A_n$ are relatively small: There exist points $\pb_n\in A_n$ with norm $\|\pb_n\|=r_n>0$ such that for any $\e>0$, $A_n\ss\pb_n+\e r_nB$, $n\ge n_\e$. The block partition introduced above is regular if and only if $s_{n+1}\sim s_n$ and $\Delta_n/s_n\to0$ where $\Delta_n$ is the maximum of $s_{n1}, s_{n2}-s_{n1},\ldots,s_{nm_n}-s_{nm_n-1}$. Regular partitions give a simple answer to the question: If one replaces $f$ or $g$ by a discrete distribution, how far apart are the atoms allowed to be if one wants to retain the asymptotic behaviour of the sample clouds from the given density? \[prs1\] Let $A_1,A_2,\ldots$ be a regular partition. Suppose the sample clouds from the probability distribution $\m$ scaled by $r_n$ converge onto the compact set $E$. Let $\tilde\m$ be a probability measure such that $\tilde\m(A_n)=\m(A_n)$ for $n\ge n_0$. 
Then the sample clouds from $\tilde\m$ scaled by $r_n$ converge onto $E$. Let $\pb\in E$, and $\e>0$. Let $\m_n$ denote the mean measure from the scaled sample cloud from $\m$ and $\tilde\m_n$ the same for $\tilde\m$. Then $\m_n(\pb+(\e/2)B)\to\nf$. Because the sets $A_n$ are relatively small there exists $n_1$ such that any set $A_n$ which intersects the ball $r_n\pb+r_n\e B$ with $n\ge n_1$ has diameter less than $\e r_n/2$. Let $U_n$ be the union of the sets $A_n$ which intersect $r_n\pb+(r_n\e /2)B$. Then $U_n\ss r_n\pb+\e r_nB$ and hence $$\m_n(\pb+(\e/2)B)\le\m_n(U_n/r_n)=\tilde\m_n(U_n/r_n)\le\tilde\m_n(\pb+\e B).$$ Similarly $\tilde\m_n(U^c)\to0$ for any open set $U$ which contains $E$. The result also holds if $\tilde\m(A_n)\asymp\m(A_n)$ provided $\m(A_n)$ is positive eventually. There is an analogous result for regular partitions in $\zb$-space. \[prs2\] Suppose $\p\in\DC_\l(\r)$ with scaling constants $c_n$. Let $A_1,A_2,\ldots$ be a regular partition and let $\tilde\p$ be a probability measure on $\rbb^d$ such that $\tilde\p(A_n)=\p(A_n)$ for $n\ge n_0$. Then $\tilde\p\in\DC_\l(\r)$ with scaling constants $c_n$. Any closed block $A\ss\rbb^d\sm\{\zerob\}$ whose boundary carries no $\r$-mass is contained in an open block $U$ with $\r(U)<\r(A)+\e$. As in the proof of the previous proposition for $n\ge n_1$ there is a union $U_n$ of atoms $A_n$ such that $A\ss U_n/c_n\ss U$. For excess measures $\r$ with a continuous positive density $h$ there is a converse. For $\p\in\DC_\l(\r)$ with limit $\r$ there is a regular partition $A_1,A_2,\ldots$ such that $\p(A_n)\sim\int_{A_n}f(\zb)d\zb$ with $f\in\FC_\l$, and with the same scaling constants. See [@Balkema2007], Theorem 16.27. This result vindicates our use of densities in $\FC_\l$. Not every distribution in the domain of $h$ has a density, or even a density in $\FC_\l$, but every such distribution is close to a density in $\FC_\l$ in terms of a regular partition. 
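The regularity conditions for block partitions can be checked on a concrete sequence. In the sketch below the choices $s_n=\sqrt n$ and a subdivision of mesh roughly $s_n/\lceil s_n^2\rceil$ are our own; the assertions verify numerically that successive cubes are of asymptotically equal size, $s_{n+1}\sim s_n$, and that the blocks are relatively small, $\Delta_n/s_n\to0$:

```python
import math

def rings(n_max):
    """Skeleton of a block partition: cubes s_n C with s_n = sqrt(n), and for each
    ring a set of positive division points with spacing about s_n / ceil(s_n**2).
    Returns (s_n, s_{n+1}, Delta_n) for n = 1, ..., n_max - 1."""
    data = []
    for n in range(1, n_max):
        s_n, s_next = math.sqrt(n), math.sqrt(n + 1)
        m = math.ceil(s_n ** 2)                        # number of cells in [0, s_n]
        pts = [s_n * j / m for j in range(1, m + 1)] + [s_next]
        # Delta_n: maximum of the first division point and all consecutive gaps.
        delta = max(pts[0], max(b - a for a, b in zip(pts, pts[1:])))
        data.append((s_n, s_next, delta))
    return data

for s_n, s_next, delta in rings(2000)[-5:]:
    assert s_next / s_n < 1.001    # successive cubes of asymptotically equal size
    assert delta / s_n < 0.05      # relatively small blocks: Delta_n / s_n -> 0
```

Here $\Delta_n/s_n$ is of order $1/n$, so the induced partition into coordinate blocks is regular in the sense of the definition above.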
We thus have the following simple situation: $A_1,A_2,\ldots$ is a block partition in $\xb$-space and $B_1=K(A_1),B_2=K(A_2),\ldots$ the corresponding block partition in $\zb$-space. Let $\tilde\p$ be a probability measure in $\zb$-space and $\tilde\m$ a probability measure in $\xb$-space, linked by $K$, i.e. $\tilde\p=K(\tilde\m)$. Then $\tilde\m(A_n)=\tilde\p(B_n)$ for all $n$. So $$\tilde\m(A_n)\sim\int_{A_n}g(\xb)d\xb \quad\Longleftrightarrow\quad \tilde\p(B_n)\sim\int_{B_n}f(\zb)d\zb. \label{qrs1}$$ \[thrs3\] If both partitions are regular and one of the equivalent asymptotic equalities in (\[qrs1\]) holds, then the sample clouds from $\tilde\p$ scaled by $c_n$ converge to the Poisson point process with intensity $h$ in , and the sample clouds from $\tilde\m$ scaled by $r_n$ converge onto the set $E=E_{\l,\q}$ in . Combine Propositions \[prs1\] and \[prs2\]. Unfortunately the meta transformation $K$ is very non-linear. Regularity of one block partition does not imply regularity of the other block partition. We first consider the case when the block partition $(A_n)$ in $\xb$-space is regular, but $(B_n)$ is not. The block partition $(A_n)$ in $\xb$-space is based on a sequence of cubes $s_nC=[-s_n,s_n]^d$. Successive cubes are of the same size asymptotically, $s_{n+1}\sim s_n$. The cubes $t_nC$ in $\zb$-space with $t_n=K_0(s_n)$ may grow very fast. It is possible that $t_n<<t_{n+1}$, as in Proposition \[prs4\] below. The corresponding partition with blocks $B_n=K(A_n)$ in $\zb$-space then certainly is not regular. \[prs4\] Let $\e\in(0,1)$. There is a sequence $0<s_1<s_2<\cdots$ such that $s_n\to\nf$ and $s_{n+1}\sim s_n$, and such that $$t_n=K_0(s_n)=n^{n^{n^{1-\e}}}.$$ The relation $g_0\sim e^{-\j}$ implies $1-G_0(s)\sim a(s)g_0(s)\sim e^{-\Psi(s)}$, where $\Psi$, like $\j$, varies regularly with exponent $\q$. Write $s_n=e^{\s_n}$ and $\t\Psi(s_n)\sim e^{\q r(\s_n)}$ where $r$ is a $C^2$ function with $r'(t)\to1$ and $r''(t)\to0$ as $t\to\nf$, and $\t:=1/\l$.
It has been shown in [@Balkema2009] (Equation (1.13)) that $$K_0(s)=t\sim ce^{\f(s)}\quad s\to\nf,\quad \f(s)=\t q(\Psi(s))\sim\t\Psi(s),$$ for some positive constant $c$. This gives $\log t_n=\log K_0(s_n)\sim e^{\q r(\s_n)}$. Since $\log\log t_n=n^{1-\e}\log n+\log\log n$ has increments which go to zero, so does $\q r(\s_n)$, and hence so does $\s_n$, since $r'$ tends to one. It follows that $s_{n+1}\sim s_n$. Choose $s_{nm_n-1}=s_{n-1}$. Then the cube $[s_{n-1}\eb,s_{n+1}\eb]$ is a union of $2^d$ blocks in the partition, and so is the cube $[t_{n-1}\eb,t_{n+1}\eb]$ in $\zb$-space; $\eb=(1,\ldots,1)$ denotes a vector of ones in $\rbb^d$. The union $U$ of these latter cubes has the property that the scaled sets $U/t_n$ converge to $(0,\nf)^d$ for $t_n\to\nf$ if $t_n<<t_{n+1}$. \[thc1\] Assume the standard set-up, and suppose that the excess measure $\r$ of the original distribution does not charge the coordinate planes. Let $\tilde\r$ be an excess measure on $\rbb^d\sm\{\zerob\}$ with marginal densities $\l/|t|^{\l+1}$, and assume that for each orthant $Q_\d$, $\d\in\{-1,1\}^d$, the restrictions of $\tilde\r$ and $\r$ to $Q_\d$ have the same univariate marginals. One may choose $\tilde F$ such that its marginals are tail asymptotic to $F_0$, such that the sample clouds converge to the Poisson point process $\tilde N$ with mean measure $\tilde\r$, and such that the sample clouds from the df $\tilde G=\tilde F\circ K$ converge onto the limit set $E_{\l,\q}$ in . We sketch the construction. Choose $\hat F\in\DC_\l(\tilde\r)$ with density $\hat f$ such that the sample clouds from $\hat F$ scaled by $c_n$ converge to $\tilde N$. For $\d\in\{-1,1\}^d$ let $U_\d$ be the image of the union $U$ in $Q_\d$ by reflecting coordinates for which $\d_j=-1$ (see Figure \[fcase1\] for an illustration).
Let $\tilde f$ agree with $\hat f$ on the $2^d$ sets $U_\d$ and with $f$ elsewhere, so that, by the remark above on the convergence of the scaled sets $U$, $\tilde f$ and $\hat f$ differ only on an asymptotically negligible set. Alter $\tilde f$ on a bounded set to make it a probability density. Then the sample clouds from $\tilde F$ scaled by $c_n$ converge to $\tilde N$. In the corresponding partition $(A_n)$ in $\xb$-space we only change the measure on the “tiny” blocks $[s_{n-1},s_{n+1}]^d$ (with $s_{n-1}\sim s_{n+1}$) around the positive diagonal, and their reflections. Hence the scaled sample clouds from $\tilde G=\tilde F\circ K$ converge onto $E_{\l,\q}$. We now discuss the second case: the block partition $(B_n)$ in $\zb$-space is regular, but $(A_n)$ is not. As before, a block partition in $\xb$-space is determined by an increasing sequence of cubes $s_nC=[-s_n,s_n]^d$, and for each $n$ a symmetric partition of $[-s_n,s_n]$ given by a finite sequence of points $s_{n1}<\cdots<s_{nm_n}=s_n$ in $(0,s_{n+1})$. The image blocks are determined by $t_n=K_0(s_n)$ and $t_{nj}=K_0(s_{nj})$. This makes it convenient to define these quantities in terms of upper quantiles. Choose $m_n=n$, probabilities $p_n=e^{-\sqrt n}$, and write $$1-F_0(t_n)=1-G_0(s_n)=p_n,\qquad 1-F_0(t_{nj})=1-G_0(s_{nj})=np_n/j.$$ One may see $t_{nj}$ as a function $T$ of $j/(np_n)$, and so too $s_{nj}$ as a function $S$ of $\log(j/(np_n))$: $$t_{nj}=T(je^{\sqrt n}/n)\qquad s_{nj}=S(\sqrt n+\log j-\log n)\qquad j=1,\ldots,n,\ n=1,2,\ldots$$ The increasing functions $T$ and $S$ vary regularly with exponents $1/\l$ and $1/\q$ since the inverse functions to $1-F_0$ and $-\log(1-G_0)$ vary regularly in zero with exponents $-1/\l$ and $1/\q$. It follows that $T((j_n/n)e^{\sqrt n})/T(e^{\sqrt n})\to u^{1/\l}$ if $j_n/n\to u\in[0,1]$, and hence $t_{n1}/t_n\to0$ and the maximal increment $t_{nj}-t_{nj-1}$, $j=2,\ldots,n$, is $o(t_n)$. So the block partition in $\zb$-space is regular.
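The regularity of the quantile-based partition in $\zb$-space can be illustrated numerically in a concrete model. In the following sketch we take $F_0$ Pareto, $1-F_0(t)=t^{-\l}$ (our choice, for concreteness only), so that $t_{nj}=(je^{\sqrt n}/n)^{1/\l}$:

```python
import math

# Sketch, assuming a Pareto marginal: 1 - F_0(t) = t**(-lam), so that
# t_nj = (1-F_0)^{-1}(n * p_n / j) with p_n = exp(-sqrt(n)).

lam = 2.0  # illustrative tail index

def t_nj(n, j):
    p_n = math.exp(-math.sqrt(n))
    return (j / (n * p_n)) ** (1 / lam)

n = 400
ts = [t_nj(n, j) for j in range(1, n + 1)]
t_n = ts[-1]

first_ratio = ts[0] / t_n                               # t_{n1}/t_n = n**(-1/lam)
max_rel_incr = max(b - a for a, b in zip(ts, ts[1:])) / t_n
# first_ratio is small and the maximal increment is small relative to t_n,
# the two diagnostics for regularity of the z-space partition
```

Here $t_{n1}/t_n=n^{-1/\l}=0.05$ for $n=400$, and the maximal relative increment is of order $1/\sqrt n$.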
However, $s_{n1}\sim s_n$, since $\log n=o(\sqrt n)$ implies $S(\sqrt n-\log n)/S(\sqrt n)\to1$. Figure \[fcase2\] depicts sequences of cubes $s_nC$ and $t_nC$ in $\rbb^2$ on which partitions $(A_n)$ and $(B_n)$ are based in the special case when $s_n=\sqrt{n}$ and $t_n=K_0(s_n)=e^{s_n}=e^{\sqrt{n}}$, along with subintervals $[-e^{\sqrt{n}}/n,e^{\sqrt{n}}/n]\times \{e^n\}$ in $\zb$-space mapping onto $[-\sqrt{n}+\log n,\sqrt{n}-\log n]\times\{\sqrt{n}\}$ in $\xb$-space, which correspond to the partition blocks intersecting the coordinate axes. \[thc2\] Assume the standard set-up. There exists a df $\tilde F$ such that the original df $F$ and $\tilde F$ have the same asymptotics, and the scaled sample clouds from the corresponding meta df $\tilde G$ converge onto the diagonal cross $E_{00}$ in . Let $(B_n)$ be the regular block partition in $\zb$-space as above. Construct a density $\tilde f$ by deleting the mass of the original df $F$ in the blocks $B_n$ which intersect one of the $d$ coordinate planes, except for the block containing the origin, where we increase the density by a factor $c>1$ to compensate for the loss of mass elsewhere. The new density $\tilde f$ agrees with $f$ on every block which does not intersect a coordinate plane. The relation $t_{n1}/t_n\to0$ implies that $\tilde f$ and $f$ agree outside a vanishing neighborhood around the coordinate planes. On the other hand the relation $s_{n1}\sim s_n$ implies that $\tilde g$ and $g$ agree only on a vanishing neighborhood around the diagonals. Thus $\tilde f(r_n\wb_n)=f(r_n\wb_n)$ eventually for $r_n\to\nf$ if $\wb_n$ converges to a vector $\wb$ with non-zero components. Conversely, if $r_n\to\nf$ and $\wb_n$ converges to a vector $\wb$ which does not lie on one of the $2^d$ diagonal rays, then $\tilde g(r_n\wb_n)=0$ eventually.
The function $\tilde f$ satisfies the same limit relation $\tilde f(r_n\wb_n)/f(r_n\eb)\to h(\wb)$ for $r_n\to\nf$ as $f$, provided $\wb=\lim_{n\to\nf}\wb_n$ has non-zero coordinates. Dominated convergence, by the inequality $\tilde f\le f$ outside $t_1C$, gives $\LB^1$-convergence outside centered balls. It follows that $\tilde F$ and $F$ have the same asymptotics. The scaled sample clouds from $\tilde g$ converge onto the diagonal cross $E_{00}$. The incompatibility of the partitions $(A_n)$ and $(B_n)=(K(A_n))$ introduced in this section gives one technical explanation for the peculiar sensitivity of the limit shape for the meta distribution. If we regard the atoms of the partition $(B_n)$ as nerve cells, then regularity of $(A_n)$ will make the region around the coordinate planes in $\zb$-space far more sensitive than the remainder of the space, and it is not surprising that cutting away these regions has drastic effects on the limit. Mixtures {#sdom3} -------- For a wide class of star-shaped sets $A\ss [-1,1]^d$ it is possible to alter the original density so that the limit set $E$ of the new meta density is the union $A\cup E_{00}$, where $E_{00}$ is the diagonal cross in (\[qrsdx\]). \[thmix\] Assume the standard set-up. Let $A$ be a star-shaped closed subset of the unit cube $[-1,1]^d$ with a continuous boundary and containing the origin as interior point, and let $E_{00}$ be the diagonal cross in . There exists a df $\tilde F$ with the same asymptotics as $F$, such that the scaled sample clouds from the meta distribution $\tilde G$ converge onto the set $E=A\cup E_{00}$. Let $\hat G=\hat F\circ K$ where $\hat F$ has marginal densities $f_0$, and let $G^o=F^o\circ K$ where $F^o$ has continuous marginals $F^o_j$ with lighter tails than $F_0$, see (\[qrsneg\]). Let $d\tilde F$ agree with $d(\hat F+F^o)$ outside a bounded set. 
The sample clouds from $F^o$ scaled by $c_n$ converge onto $\{\zerob\}$ since $n(1-F_j^o(\e c_n))+nF_j^o(-\e c_n)\to0$ as $n\to\nf$, $j=1,\ldots,d$, $\e>0$. So $\tilde F$ and $\hat F$ have the same asymptotics. Let $n_{A}$ denote the gauge function of $A$ (or its interior), and let $g^o(\xb)=g_*(n_{A}(\xb))$ for a continuous decreasing positive function $g_*$ on $[0,\nf)$. We assume that $g_*$ varies rapidly. The function $g^o$ is continuous and $0<g^o(\xb)\le \bar g(\xb)$. Let $d\tilde G$ agree with $d\hat G(\xb)+g^o(\xb)d\xb$ outside a bounded set. We may assume that $d\hat G$ does not charge coordinate planes $\{x_j=c\}$ and charges all coordinate slices $\{c_1<x_j<c_2\}$. Then the marginals are continuous and strictly increasing. Choosing $\hat F$ to have the same asymptotics as the original df $F$, and so that the scaled sample clouds from $\hat G$ converge onto the diagonal cross $E_{00}$ (see Theorem \[thc2\] and Proposition \[prs0\]), we obtain $\tilde F$ and $F$ with the same asymptotics, and convergence of the scaled sample clouds from $\tilde G$ onto $E_{00}\cup A$. \[em1\] Let $\bar g$ be a density with cubic level sets: $\bar g(\xb)=g_*(\|\xb\|_\nf)$ with $g_*$ as in the above theorem. The marginal densities $\bar g_0$ are symmetric and equal, and asymptotic to $(2s)^{d-1}g_*(s)$. Let $\bar g_0(\bar b_n)\sim1/n$. We may choose $g_*$ such that $\bar b_n\sim b_n$ and $n\bar g_0(b_n)\to0$, (see Lemma \[lAog\] and Proposition \[pAog\] for details). It follows that the sample clouds from $\bar G$ scaled by $\bar b_n$ converge onto the cube $[-1,1]^d$, hence also the sample clouds scaled by $b_n$. In the above theorem, take $\hat F=F$ and $G^o=\bar G$. Then $\tilde F$ has the same asymptotics as $F$ and the scaled sample clouds from $\tilde G=\tilde F\circ K$ converge onto the cube $[-1,1]^d$. If we choose the measure $d\hat F$ to be the image of the marginal $dF_0$ under the map $t\mapsto t\eb$, then $d\hat F$ lives on the diagonal. 
The scaled sample clouds from $\tilde F$ converge to the Poisson point process on the diagonal with intensity $\l/|t|^{\l+1}$ in the parametrization above, and the scaled sample clouds from $\tilde G$ converge onto $A\cup E_{00}$. If we choose $A=E_{\l,\q}$ then the dfs $G$ and $\tilde G$ have the same asymptotics, but we may also choose $A=E_{\l^*,\q^*}$ for other values $\l^*$ and $\q^*$ in $(0,\nf)$. We have shown that slight changes in $F$, changes which do not affect the asymptotics or the marginals, may yield a meta distribution $\tilde G$ with the marginals of $G$ but with different asymptotics. This makes it possible to start out with a Poisson point process $N^*$ and a star-shaped set $E^*$, and construct dfs $F^*$ and $G^*$ with marginal densities $f_0$ and $g_0$ such that the sample clouds from $F^*$ converge to $N^*$ and those from $G^*$ converge onto $E^*$. The only condition is that the mean measure $\r^*$ of $N^*$ is an excess measure with marginal densities $\l/|t|^{\l+1}$ and that $E^*$ is a closed star-shaped subset of $[-1,1]^d$ containing the $2^d$ vertices and having a continuous boundary. Although the shape of the limit set is rather unstable under even slight perturbations of the original distribution, one may note the persistence of the diagonal cross as a subset of the limit set. Due to equality and symmetry of the marginals, clearly the points on the $2^{d-1}$ diagonals in $\zb$-space correspond to the points on the diagonals in $\xb$-space. However, Figure \[fcase1\] shows that much larger subsets of the open orthants in $\zb$-space are mapped onto a neighborhood of the diagonals in $\xb$-space. \[pdiag\] Consider the standard set-up with the original df $F$ having equal marginals. Let $\tilde F$ be a df on $\rbb^d$ whose marginals are equal and symmetric and tail asymptotic to those of $F$. Assume that the sample clouds from $\tilde F$ converge to a point process with mean measure $\tilde\r$, where $\tilde\r$ charges $(0,\nf)^d$.
If sample clouds from the meta distribution $\tilde G=\tilde F\circ K$ can be scaled to converge onto a limit set $\tilde E$ then $\tilde E$ contains the vertex $\eb=(1,\ldots,1)$. Limit sets are always star-shaped (see Proposition 4.1 in [@Kinoshita1991]). Hence, if the limit set $\tilde E$ contains $\eb$, it also contains the line segment $E_{00}^+=E_{00}\cap(0,\nf)^d$ joining $\eb$ and the origin. To see this we again make use of the block partitions of Section \[sdom2\]; in particular consider the situation sketched in Figure \[fcase1\]. Due to symmetry, it suffices to restrict attention to the positive orthant. Consider cubes $C_n^A:=[s_n-Ma(s_n),s_n+Ma(s_n)]^d$ centered at diagonal points $s_n\eb$ for some $M>0$, where $a(s)$ is the scale function of the marginal df $G_0$. Recall that $a'(s)\to0$ and hence $a(s)/s\to0$ as $s\to\nf$. These cubes are asymptotically negligible as $C_n^A/s_n\to\{\eb\}$ for $s_n\to\nf$. The corresponding cubes in $\zb$-space are centered at the diagonal points $t_n\eb$ with $t_n=K_0(s_n)$ and given by $C_n^B:=K(C_n^A)=[K_0(s_n-Ma(s_n)),K_0(s_n+Ma(s_n))]^d=:[t_{n-1},t_{n+1}]^d$. The von Mises condition on $1-G_0$ with scale function $a(s)$ and regular variation of $(1-F_0)\inv$ in zero with exponent $-1/\l$ give $$\begin{aligned} \lim_{n\to\nf}\dfrac{t_{n-1}}{t_{n}} &=\lim_{s_n\to\nf}\dfrac{K_0(s_n-Ma(s_n))}{K_0(s_n)} =\lim_{s_n\to\nf}\dfrac{(1-F_0)\inv(1-G_0(s_n-Ma(s_n)))}{(1-F_0)\inv(1-G_0(s_n))} \\ &=\lim_{s_n\to\nf}\Big(\dfrac{1-G_0(s_n)}{1-G_0(s_n-Ma(s_n))}\Big)^{1/\l}\dfrac{L(1-G_0(s_n-Ma(s_n)))}{L(1-G_0(s_n))}\\ &=\lim_{s_n\to\nf}\Big(\dfrac{e^{-M}(1-G_0(s_n))}{1-G_0(s_n)}\Big)^{1/\l}\dfrac{L(1-G_0(s_n))}{L(e^{-M}(1-G_0(s_n)))} =e^{-M/\l},\end{aligned}$$ for a slowly varying function $L$. Similarly, $t_{n+1}/t_n\to e^{M/\l}$, and thus $C_n^B/t_n\to[e^{-M/\l},e^{M/\l}]^d$. 
Note that this limit constitutes a large subset of $(0,\nf)^d$ in $\zb$-space, in the sense that, for $M$ large enough, it contains any given compact subset of $(0,\nf)^d$. The points in $C_n^B$ are mapped by the meta transformation $K$ onto the points in $C_n^A$, thus preserving the mass $\pbb\{\ZB\in C_n^B\}=\pbb\{\XB\in C_n^A\}$. This shows that most of the mass on $(0,\nf)^d$ in $\zb$-space is concentrated on just a neighborhood of $E_{00}^+$ in $\xb$-space, and the assumption on $\tilde\r$ ensures that the limit of the scaled sample clouds contains the vertex $\eb$. Extremes and high risk scenarios {#sbp} -------------------------------- There are different ways in which one can look at multivariate extremes. The focus of the paper so far has been on a global view of sample clouds. We have seen that the asymptotic behaviour of sample clouds is described by a Poisson point process for heavy-tailed distributions, and by the limit set for light-tailed distributions. We would now like to complement these global pictures by looking more closely at the edge of sample clouds. In particular, we discuss asymptotic behaviour of coordinatewise maxima and of exceedances over hyperplanes, termed *high risk scenarios* in [@Balkema2007]. ### Biregular partition and additional notation We first introduce a *biregular* block partition, a partition which is regular in both $\zb$-space and in $\xb$-space. For simplicity, we shall restrict attention to a bivariate situation. Recall the block partition defined in terms of quantiles in , which was regular in $\zb$-space, but not in $\xb$-space. We now refine it to make it biregular. A typical block intersecting the positive horizontal axis in $\xb$-space has the form $$[s_{n-1},s_n]\times[-s_{n1},s_{n1}],\qquad s_{n-1}\sim s_n\sim s_{n1}.$$ It is an elongated thin vertical rectangle almost stretching from one diagonal arm to the other.
We subdivide it in the vertical direction into $2n$ congruent rectangles by adding the division points $$-(n-1)s_{n1}/n,\ldots,-s_{n1}/n,0,s_{n1}/n,\ldots,(n-1)s_{n1}/n.$$ The corresponding partition $(A_n)$ in $\xb$-space is regular, and so is the partition $(B_n)$ with $B_n=K(A_n)$ in $\zb$-space, since it is a refinement of a regular partition. We shall be interested in three sets $C$, $D$, and $O$ which are disjoint and fill $\rbb^2$. They are defined in terms of the biregular partition we have introduced above. Roughly speaking, $C$ consists of the blocks around the coordinate axes, $D$ consists of the blocks around the diagonals, and $O$ is the remainder. Due to the symmetry of the partition, we distinguish four subsets of $C$ associated with the four halfaxes: $C_N$ with $\{0\}\times(0,\nf)$, $C_E$ with $(0,\nf)\times\{0\}$, and their negative counterparts $C_S$ and $C_W$. Similarly, $D_{NE}$ is the restriction of $D$ to $(0,\nf)^2$ and analogously for the other quadrants $D_{SE}$, $D_{SW}$ and $D_{NW}$ clockwise. Here is the assignment of the blocks $A_n$ (or $B_n$) to the sets $C$, $D$ and $O$: In the set $C_E$ we put the rectangles in , for $n\ge n_0$, or rather the $2n$ blocks into which these rectangles have been subdivided. Similarly, for $C_N$, $C_W$ and $C_S$. In the diagonal sets $D$ we put all blocks in the ring $R_n$ between successive squares, which are determined by subdivision points $\pm s_{nj}$ with $j>j_n=[\sqrt n]$. We claim that $D_{NE}$ asymptotically fills up the open positive quadrant in $\zb$-space. For this we have to show that $t_{nj_n}/t_n\to0$. By definition $t_{nj}$ is the upper $q$-quantile for the df $F_0$ for $q=ne^{-\sqrt n}/j$. Let $t_n$ be the upper quantile for $p_n=e^{-\sqrt n}$ and $r_n$ the upper quantile for $q_n=\sqrt ne^{-\sqrt n}$. We claim that $r_n/t_n\to0$. This follows by regular variation of the tail of $F_0$ because $p_n/q_n\to0$. 
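These two claims are easy to check numerically in a concrete model; the following sketch uses our own illustrative choices (Pareto $F_0$ with $1-F_0(t)=t^{-\l}$ and exponential $G_0$ with $1-G_0(s)=e^{-s}$, not the paper's general hypotheses):

```python
import math

# Sketch of the biregular-partition tension: the diagonal blocks fill
# the quadrant in z-space (t_{n j_n}/t_n -> 0, with j_n = isqrt(n)),
# while the axis blocks stretch almost to the diagonals in x-space
# (s_{n1}/s_n -> 1).  Pareto F_0 and exponential G_0 are assumptions.

lam = 2.0  # illustrative tail index

def ratios(n):
    p_n = math.exp(-math.sqrt(n))
    inv_F = lambda prob: prob ** (-1 / lam)   # upper quantile of F_0 (Pareto)
    inv_G = lambda prob: -math.log(prob)      # upper quantile of G_0 (exponential)
    j_n = math.isqrt(n)
    d_ratio = inv_F(n * p_n / j_n) / inv_F(p_n)   # t_{n j_n} / t_n
    c_ratio = inv_G(n * p_n) / inv_G(p_n)         # s_{n1} / s_n
    return d_ratio, c_ratio

d_ratio, c_ratio = ratios(10_000)
# d_ratio = (j_n/n)**(1/lam) is small, while c_ratio is close to 1
```

For $n=10{,}000$ one finds $t_{nj_n}/t_n=0.1$ while $s_{n1}/s_n\approx0.91$, in line with the two claims.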
So in $\zb$-space the diagonal set $D$ asymptotically fills up the whole plane apart from the coordinate axes; in $\xb$-space the coordinate set $C$ fills up the whole plane apart from the two diagonal lines; and there is still a lot of space left for the set $O$ (note that $t_{n1}/t_{nj_n}\to0$). See Figure \[fcase3\] for illustration. The asymptotic behaviour of the probability distribution $F$ is known once we know the probability mass $p_n$ of the atomic blocks $A_n$ in $\xb$-space (or of $B_n=K(A_n)$ in $\zb$-space) (see Propositions \[prs1\] and \[prs2\]). The specification of the probability masses $p_n$ of the atoms is not a very efficient way of describing the original distribution and the meta distribution. Instead we shall describe the asymptotic behaviour of $dF$ on the diagonal sector $D_{NE}$, and the asymptotic behaviour of the meta distribution $dG$ on the coordinate sector $C_E$. We first consider the two pure cases, where all mass lives on one of the sets $C$ or $D$. ### The region $D$ The situation on the region $D$ is similar to that discussed in Theorem \[thc2\]. In $\zb$-space, let $F^*$ be a df in the domain of an excess measure $\r$ on $\rbb^2\sm\{\zerob\}$, where $\r$ has marginal intensities $\l/t^{\l+1}$, $\l>0$, and no mass on the axes. The measure $\r$ might be concentrated on one of the two diagonals. Since we are concentrating on the sector $D_{NE}$ we shall assume that $\r$ charges the positive quadrant. Delete the mass outside $D$, and compensate by adding some mass in a compact set. The marginals of the new df $F$ are still tail asymptotic to $F_0$. The measures $ndF$, scaled properly, converge to $\r$ weakly on the complement of centered balls. The sample clouds and the coordinatewise maxima converge under the same scaling. The exponent measure of the max-stable limit distribution is the image $\r^+$ of the measure $\r$ under the map $\zb=(x,y)\mapsto(x\lor0,y\lor0)=\zb^+$. 
The following proposition describes the relation between the exponent measures for an original df and the associated meta df. It is in line with e.g. Proposition 5.10 in [@Resnick1987]. One can determine the max-stable limit for a meta df from the max-stable limit for the original df and the marginals of the meta distribution. We shall refer to this fact as the *invariance principle* (for coordinatewise extremes) since the max-stable limit distributions have the same copula. For a df $F$ in the domain of attraction of an extreme value limit law with exponent measure $\r$, we use the notation $F\in DA(\r)$. \[prK\] Let $F\in DA(\r^+)$ be a continuous df on $\rbb^2$ whose exponent measure $\r^+$ has marginal intensities all equal to $\l/t^{\l+1}$, $\l>0$. Let $G_0$ be a continuous symmetric df on $\rbb$ whose tails are asymptotic to a von Mises function. If $G$ is the meta df based on $F$ with marginals equal to $G_0$, then $G\in DA(\s^+)$, where $\s^+$ and $\r^+$ are related by a coordinatewise exponential transformation, $\r^+=K(\s^+)$ with equal components $$K:\ub\mapsto \wb,\qquad w_i=K_0(u_i)=e^{u_i/\l}\qquad i=1,2.$$ The map $K$ is the limit of coordinatewise transformations mapping normalized coordinatewise maxima from $G$ into normalized coordinatewise maxima from $F$, hence it is also a coordinatewise transformation. The von Mises condition on $G_0$ implies that $G_0$ is in the domain of attraction of the Gumbel limit law $\exp\{-e^{-x}\}$, and thus $\s^+$ has standard exponential marginals. The relations $$\r^+_i[t,\nf)=\s^+_i[s,\nf) \Longleftrightarrow t^{-\l}=e^{-s}\Longleftrightarrow t=K_0(s)=e^{s/\l},\quad i=1,2,$$ for the marginals determine $K$. In $\xb$-space $D_{NE}$ is a thin strip along the diagonal. However, it follows from Proposition \[prK\] that the coordinatewise maxima in $\xb$-space, centered and scaled, converge to a max-stable distribution whose exponent measure $\s^+$ is the coordinatewise logarithmic transform of $\r^+$.
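The marginal relation behind Proposition \[prK\] can be verified directly; a minimal sketch (the value of $\l$ is illustrative):

```python
import math

# Check that t = K_0(s) = exp(s/lam) matches the Frechet-type marginal
# tail t**(-lam) of rho^+ with the standard exponential tail exp(-s)
# of sigma^+.  The equality is exact; errs records rounding error only.

lam = 3.0
errs = [abs(math.exp(s / lam) ** (-lam) - math.exp(-s))
        for s in (0.0, 0.5, 2.0, 7.5)]
max_err = max(errs)   # zero up to floating-point rounding
```

This is exactly the statement that the two max-stable limits share their copula and differ only by the coordinatewise transformation $K$.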
The measure $\s^+$ lives on $[-\nf,\nf)^2\sm\{(-\nf,-\nf)\}$. The restriction $\s$ of $\s^+$ to $\rbb^2$ is the image of the restriction of $\r^+$ (or of $\r$) to $(0,\nf)^2$. Let us now look at the asymptotic behaviour of the high risk scenarios and sample clouds for $F$ and $G$. Let $H_n=\{\x_n\ge c_n\}$ be halfplanes with direction $\x_n$ of norm one, and $c_n\to\nf$. For the heavy-tailed df $F\in\DC(\r)$, the domain of attraction of the excess measure $\r$, the situation is simple. If $\x_n\to\x$ and $c_n\to\nf$ then $\ZB^{H_n}/c_n\imp\WB$, where $\WB$ has distribution $d\r_\x=1_Hd\r/\r H$ for $H=\{\x\ge1\}$. The map $(u,v)\mapsto(x,y)=(c_nu,c_nv)$ maps $J_n=\{\x_n\ge1\}$ onto $H_n$. By assumption the probability measures $dF$ scaled by $c_n$ and multiplied by a suitable factor converge to $\r$ weakly on the complement of centered balls, hence on $H$. This gives the result when $\x_n=\x$ for all $n$. For the general case, $\x_n\to\x$, use the continuity theorem [@Balkema2007], Proposition 5.13. The same result holds for the light-tailed meta distribution $G$ provided the limit direction $\x$ does not lie on one of the axes. If $\x_n\to\x\in(0,\nf)^2$ and $c_n\to\nf$ then $\a_n\inv(\XB^{H_n})=(\XB^{H_n}-(b_n,b_n))/a_n\imp\UB$ where $(b_n,b_n)\in\prl H_n$ and $a_n=a(b_n)$ for the scale function $a$ associated with the marginal density $g_0$. The limit $\UB$ has distribution $d\s_\x=1_Hd\s/\s H$ for $H=\{\x\ge0\}$. This follows from the weak convergence of $d\p=dG$ normalized by $\a_n\inv$ and multiplied by $1/p_n$: $$\a_n\inv(\p)/p_n\to \s^+/a_0\quad{\rm weakly\ on\ }\rbb^2\sm[-\nf,c]^2\qquad c\in\rbb$$ for $p_n=\p(\rbb^2\sm(-\nf,b_n]^2)$ and $a_0=\s^+([-\nf,\nf)^2\sm[-\nf,0]^2)$. Convergence of high risk scenarios implies convergence of sample clouds with the same normalizations for halfplanes that satisfy $\pbb\{\ZB\in H_n\}\sim1/n$ as in [@Balkema2007], Section 14.
For the heavy-tailed df $F$ the convergence of the sample clouds $N_n\imp N$ weakly on $\{\x\ge c\}$ for all $c>0$ is no surprise since $N_n\imp N$ holds weakly on the complement of centered disks. For the light-tailed df $G$ weak convergence on $\{\x\ge c\}$ for $\x\in(0,\nf)^2$ follows from weak convergence on $[-\nf,\nf)^2\sm[-\nf,c]^2$. Convergence for horizontal halfspaces $H_n=\{y\ge c_n\}$ presents a different picture, since we do not allow mass to drift off to the vertical line in $-\nf$. \[pD1\] Set $H_t=\{y\ge t\}$ and $\a_t(u,v)=(tu,t+a(t)v)$. Then $\a_t\inv(\XB^{H_t})\imp(U,V)$ where $U$ and $V$ are independent, $V$ is standard exponential, and $U$ assumes only two values, $\pbb\{U=-1\}=p_-=c_-/c$ and $\pbb\{U=1\}=p_+=c_+/c$. Here $c_-=\s^+(\{-\nf\}\times(0,\nf))=\r^+(\{0\}\times(1,\nf))=\r((-\nf,0)\times(1,\nf))$, $c_+=\s(\rbb\times(0,\nf))=\r((0,\nf)\times(1,\nf))$, and $c=\r(\rbb\times(1,\nf))$ since by assumption $\r$ does not charge the axes. If $\pbb\{\XB\in H_{r_n}\}\sim1/n$ then the sample clouds converge: $\tilde M_n=\{\a_{r_n}\inv(\XB_1),\ldots,\a_{r_n}\inv(\XB_n)\}\imp\tilde M$ weakly on $\{v\ge c\}$, $c\in\rbb$. The Poisson point process $\tilde M$ lives on two vertical lines, on $\{u=-1\}$ with intensity $p_- e^{-v}$ and on $\{u=1\}$ with intensity $p_+ e^{-v}$. It suffices to prove the second relation. The limit point process $M$ with mean measure $\s$ above the line $\{v=-C\}$ for $C>1$ corresponds to the sample points $\XB_1,\ldots,\XB_n$ above the line $\{y=r_n-Ca(r_n)\}$. The corresponding points scaled by $r_n$ converge to the vertex $(1,1)$ of the limit set $E$. Under the normalization $\a_{r_n}\inv$ the horizontal coordinate converges to 1, and hence the whole sample cloud converges to the projection of $M$ onto the line $\{u=1\}$. A similar argument holds for the points of the point process with mean measure $\tilde\s$ on $\rbb^2$ associated with the restriction of $\r$ to the quadrant $(-\nf,0)\times(0,\nf)$.
These yield the points on the vertical line through $(-1,0)$. Since $\r$ does not charge the vertical axis, this accounts for all points. On the region $D$ there is a simple relation between the Poisson point processes associated with the exponent measures and the Poisson point processes associated with the high risk scenarios. ### The region $C$ We shall consider the counterpart of the class $\FC_\l$ for light-tailed densities $g$ on the plane. The density $g$ is assumed to be unimodal, continuous and to have level sets $\{g>c\}$ which are scaled copies of a bounded open star-shaped set $S$ in the plane with continuous boundary. Such densities are called homothetic. They have the form $g=g_*(n_S)$ where $g_*$ is a continuous decreasing function on $[0,\nf)$, the density generator, and $n_S$ is the gauge function of the set $S$ (see Table \[tab3\]). We shall assume that the density generator is asymptotic to a von Mises function, and that $S$ is a subset of the square $(-1,1)^2$. If $S$ is the open unit disk, or more generally a rotund set (i.e., convex with a $C^2$ boundary having positive curvature at each point), then the high risk limit scenarios exist for all directions, and are Gauss-exponential. The associated sample clouds in these directions may be normalized to converge to a Poisson point process with Gauss-exponential intensity $e^{-u^2/2}e^{-v}/\sqrt{2\p}$ on $\rbb^2$ for appropriate coordinates $u,v$. See Sections 9-11 in [@Balkema2007]. If the set $S$ is a convex polygon with one boundary point $\qb=(q_1,1)$ on the line $\{y=1\}$ with $|q_1|<1$, then the horizontal high risk scenarios converge. The associated excess measure has density $h$, where $h$ has conic level sets $\{h>e^{-t}\}=C+t\qb$ for an open cone $C$ in the lower halfplane. The cone $C$ describes the asymptotic behaviour of the set $S-\qb$ at the origin. If $S$ is the square $(-1,1)^2$ then the horizontal high risk scenarios, normalized by $\a_t(u,v)=(tu,t+a(t)v)$, converge.
The associated excess measure has density $1_{[-1,1]}(u)e^{-v}$, and vanishes outside the vertical strip $[-1,1]\times\rbb$. Our density $g$ is defined in terms of a bounded star-shaped set $S$ and the density generator $g_*$ which describes the behaviour of $g$ along rays up to a scale factor depending on the direction. On the other hand we want light-tailed densities with prescribed marginals equal to $g_0$. In general it is not clear whether for a given set $S$ there exists a density generator $g_*$ which produces the marginals $g_0$. Let $g_0$ be the standard normal density. Let $g=g_*(n_S)$. Given a bounded open convex set $S$ can one find a density generator $g_*$ such that $g$ has marginals $g_0$? If $S$ is the unit disk, or an ellipse symmetric around the diagonal, then $g_*(r)=ce^{-r^2/2}$ will do. If $S$ is the square $(-1,1)^2$, then one may choose $g_*(r)\sim ce^{-r^2/2}/r$ so that the tails of the marginals agree with $g_0$, see Section A.2 in [@Balkema2009]. By altering $g$ on a square $[-M,M]^2$ one may achieve standard normal marginals. If $S$ is the diamond spanned by the unit vectors on the four halfaxes, then one can choose $g_*(r)\sim cre^{-r^2/2}$ so that the tails of the marginals agree with $g_0$, see [@McNeil2009]. These three density generators are different. It is not clear how to combine the asymptotic behaviour of these examples. Let us take a mixture of these three densities with weights $1/3$ each. Set $\a_t(u,v)=(tu,t+a(t)v)$ where $a(t)=1/t$ is the scale function associated with the normal marginal $g_0$. Since under the normalization $\a_t$ all three distributions have high risk limit scenarios $(U,V)$ with $U$ and $V$ independent and $V$ standard exponential, and since the normalizations for the vertical coordinate $V$ are determined by the vertical marginal $g_0$ which is the same in the three cases, we concentrate on the horizontal component. 
For the density with square level sets, $U$ is uniformly distributed on $[-1,1]$, whereas in the other two cases $U$ has a point mass at the origin. Hence for the mixture one obtains a limit which puts weight $1/3$ on the uniform distribution on $[-1,1]$ and a point mass of weight $2/3$ at the origin. If we use the partial compactification of the plane in [@Heffernan2007] then it is also possible to obtain limit distributions on $[-\nf,\nf]$ for the horizontal component. There are several: A centered Gaussian density with an atom of weight $1/3$ at the origin and two atoms of weight $1/6$ at $\pm\nf$; a Laplace density with two atoms of weight $1/3$ at $\pm\nf$; an atom of weight $1/3$ or $2/3$ at the origin with the remaining mass fairly divided over the two points at infinity; all mass in the two points at infinity. Now consider an open ellipse $E$, symmetric around the diagonal, which has a unique boundary point on the line $\{x_2=1\}$ at $(p,1)$ for some $p\in(0,1)$. Its reflection $E'$ in the vertical axis has the boundary point $(-p,1)$. The union $S=E\cup E'$ is a star-shaped set. What does the excess measure for horizontal halfspaces look like? If we zoom in on $(p,1)$ we obtain a Gauss-exponential measure, but the measure around the point $(-p,1)$ moves off to $-\nf$. If we want weak convergence on horizontal halfspaces we have to use the normalization $\a_t$ above. The limit measure now lives on the two vertical lines $u=\pm p$ and has the same exponential density $e^{-v}$ on each by symmetry. If $S\ss(-1,1)^2$, and the boundary of the star-shaped set $S$ contains points on the interior of the four sides of the square, but does not contain any of the vertices, then the components $X_1$ and $X_2$ of the vector $\XB$ with density $g$ are asymptotically independent, and so are the pairs $(-X_1,X_2)$, $(X_1,-X_2)$, $(-X_1,-X_2)$, see [@Balkema2009a]. This also holds if $S$ is the whole square.
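The density generators quoted earlier for the square and the diamond can be checked numerically. The following sketch is ours (constants are left unnormalized; the midpoint rule, the cutoffs and the test point $s=5$ are illustrative choices): in both cases the marginal at $s$ should behave like a constant times the normal tail $e^{-s^2/2}$.

```python
import math

# Numerical sanity check of the quoted generators: with square level sets
# g_*(r) = exp(-r**2/2)/r, with diamond level sets g_*(r) = r*exp(-r**2/2).
# The marginal of g = g_*(n_S) at s is computed by a midpoint rule.

def marginal(gauge, g_star, s, hi=20.0, h=1e-3):
    total, y = 0.0, -hi
    while y < hi:
        total += g_star(gauge(s, y + h / 2)) * h
        y += h
    return total

sq = lambda s, y: max(abs(s), abs(y))      # gauge of the square at (s, y)
dm = lambda s, y: abs(s) + abs(y)          # gauge of the diamond at (s, y)
g_sq = lambda r: math.exp(-r * r / 2) / r
g_dm = lambda r: r * math.exp(-r * r / 2)

ratio_sq = marginal(sq, g_sq, 5.0) / math.exp(-12.5)
ratio_dm = marginal(dm, g_dm, 5.0) / math.exp(-12.5)
# both ratios are close to the constant 2: normal-shaped marginal tails
```

For the diamond the ratio is exactly $2$ up to integration error; for the square it is $2$ plus a small correction of order $1/s^2$ from the corner regions.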
So if $g$ is the meta density based on a heavy-tailed density in $\DC(\r)$, then the excess measure $\r$ lives on the four halfaxes, and the distribution may be restricted to the region $C$. If the high risk scenarios for horizontal halfplanes converge with the normalizations $\a_t(u,v)=(tu,t+a(t)v)$ of Proposition \[pD1\] then the limit vector $(U,V)$ has independent components and $U$ lives on the linear set $E_1$ determined by the intersection of the boundary of the limit set $E$ with the horizontal line $\{v=1\}$. Note that in this setting the limit set $E$ is the closure of $S$, see [@Balkema2009a]. Let $G$ be a bivariate df with marginal tails all asymptotic to the von Mises function $e^{-\j}$ with scale function $a$. Let $\XB^t$ denote the high risk scenario $\XB^H$ for the horizontal halfplane $H=\{y\ge t\}$. Let $\a_t(u,v)=(x,y)=(tu,t+a(t)v)$. Suppose $\a_t\inv(\XB^t)\imp\UB=(U,V)$ for $t\to\nf$. Then $U$ and $V$ are independent, $V$ is standard exponential and $|U|\le1$. Let $\j(b_n)=\log n$. If the sample clouds from $G$ scaled by $b_n$ converge onto the compact set $E$ then $\pbb\{(U,1)\in E\}=1$. The distribution of $V$ is determined by the marginal df $G_2$ and is exponential since $1-G_2$ is asymptotic to a von Mises function, see [@Balkema2007], Proposition 14.1. Light tails of the marginal $G_1$ imply $(1-G_1(e^\e t))/(1-G_1(t))\to0$ for any $\e>0$, and hence $\pbb\{U>e^\e\}=0$. Similarly for the left tail. Let $\s$ denote the excess measure extending the distribution of $(U,V)$ to $\rbb^2$; see [@Balkema2007], Section 14.6. Since $\a_t\inv\a_{t+sa(t)}\to\b_s$ where $\b_s(u,v)=(u,v+s)$, it follows that $\s(A+(0,t))=e^{-t}\s(A)$ for all Borel sets $A$ in $\rbb^2$. See Proposition 14.4 in [@Balkema2007]. This implies that $\s$ is a product measure, and hence that $U$ and $V$ are independent. Let $\b_n(u,v)=(b_nu,b_n+b_nv)$. The sample clouds $\{\b_n\inv(\XB_1),\ldots,\b_n\inv(\XB_n)\}$ converge onto the translated set $E-(0,1)$. 
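The exponential law of $V$ rests only on the von Mises property of the marginal tail. A quick numerical sketch (ours; standard normal marginal, scale function $a(t)=1/t$) makes this concrete:

```python
import math

# Illustration (assumption: standard normal vertical marginal, with scale
# function a(t) = 1/t): the von Mises property gives
#   P{X > t + a(t) v | X > t} -> e^{-v},
# i.e. the normalised exceedance V is asymptotically standard exponential.

def tail(t):
    """1 - Phi(t) for the standard normal df."""
    return 0.5 * math.erfc(t / math.sqrt(2))

def cond_exceed(t, v):
    a = 1.0 / t                       # scale function for the normal tail
    return tail(t + a * v) / tail(t)

for t in (5.0, 20.0):
    print([round(cond_exceed(t, v), 4) for v in (0.5, 1.0, 2.0)])
# As t grows, the rows approach [e^{-1/2}, e^{-1}, e^{-2}].
```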
Now apply the additional normalization $\g_n(u,v)=(u,a_nv/b_n)$ where $a_n=a(b_n)$; note that $a_n/b_n\to0$. Then $\g_n\inv\b_n\inv=\a_n\inv$ where $\a_n(u,v)=(b_nu,b_n+a_nv)$. Hence the renormalized sample clouds converge to the Poisson point process with mean measure $\s$, and the restriction of $\s$ to $\{v\ge0\}$ lives on $E_0\times[0,\nf)$ where $E_0$ is the set $\{u\mid (u,1)\in E\}$.

### The combined situation

In a number of the examples of distributions on $C$ above, the high risk scenarios for horizontal halfplanes, normalized by $\a_t:(u,v)\mapsto(x,y)=(tu,t+a(t)v)$, converge to a random vector $(U,V)$ with independent components, where $V$ is standard exponential and $U$ lives on the interval $[-1,1]$. Recall from Proposition \[pD1\] that for meta distributions on $D$ these high risk scenarios with the same normalization also converge to a random vector with independent components. The vertical component is again standard exponential; the horizontal component lives on the two point set $\{-1,1\}$. The associated excess measures are product measures on the vertical strip $[-1,1]\times\rbb$. The density along the vertical line is $c_0e^{-v}/c$. Now consider the sum of these two light-tailed distributions. We can alter this sum on a compact set to make it into a probability measure. The excess measure $\tilde\s$ associated with the high risk limit distribution for this new light-tailed distribution has the same structure: it is a product measure on the vertical strip $[-1,1]\times\rbb$; the vertical component has density $ce^{-v}$; the horizontal component is a probability measure $\s^*$ on $[-1,1]$ with atoms $p_-$ and $p_+$ in the points $\pm1$. If we look at the heavy-tailed distributions we see that the excess measures $\r$ for distributions on $D$ live on the complement of the axes, and for distributions on $C$ live on the axes. So here the sum has an excess measure which has mass both on the axes, and on the complement.
The restriction of this measure to the set above the horizontal axis is characterized by the projection on the vertical coordinate with density $c\l/r^{\l+1}$ and a probability measure $\r^*$ on the horizontal line, the spectral measure (see [@Balkema2007], Section 14.8), with the property that $\r^*[c,\nf)=\r(E)/c$, where $E=\{(u,v)\mid u\ge cv, v\ge1\}$ is the set above the horizontal line $\{v=1\}$ and to the right of the ray through the point $(c,1)$. The probability measure $\r^*$ has an atom of weight $p_0$ in the origin. \[pCD1\] The weights $p_-,p_+,p_0$ above have sum $p_-+p_0+p_+\ge1$. If the horizontal component $U$ on $[-1,1]$ from the high risk limit scenario due to the light-tailed density on $C$ lives on the open interval $(-1,1)$ then $p_-+p_0+p_+=1$, and $p_i=c_i/c$ as in Proposition \[pD1\] where now $c_-=\r((-\nf,0)\times[1,\nf))$, $c_0=\r(\{0\}\times[1,\nf))$, $c_+=\r((0,\nf)\times[1,\nf))$, and $c=c_-+c_0+c_+=\r(\rbb\times[1,\nf))$. There is a simple relation between high risk scenarios for horizontal halfspaces for the heavy-tailed density and the light-tailed meta density on $\rbb^2$ in terms of the meta transformation provided the halfspaces have the same probability mass. The inverse $K\inv$ maps such a $\zb$-halfspace $H$ into an $\xb$-halfspace $H'$. In the limit for the excess measures this yields a coordinatewise mapping from $\zb$-space to $\xb$-space: $$(z_1,z_2)\mapsto(x_1,x_2)=(\operatorname{sign}(z_1),z_2).$$ The transformation for the horizontal coordinate is degenerate. Define the curve $\GG$ as the graph of $x\mapsto\operatorname{sign}(x)$ augmented with the vertical segment at the discontinuity in zero. There is a unique probability measure $\m$ on $\GG$ whose horizontal projection is $\r^*$ and whose vertical projection is $\s^*$. This follows from the inequality in Proposition \[pCD1\]. The excess measures $\r$ on $(0,\nf)^2$ and $\s$ on $\rbb^2$ are linked by the coordinatewise exponential transformation in Proposition \[prK\].
The excess measures $\r$ on $\rbb\times(0,\nf)$ and $\tilde\s$ on $[-1,1]\times\rbb$ are linked by the coordinatewise map (\[qCD1\]). The high risk limit scenarios are limits of vectors $\XB^{H'}$ and $\ZB^H$ where $H'$ and $H$ are corresponding horizontal halfspaces (with the same mass). These vectors are linked by the coordinatewise monotone transformation $K$ restricted to $H'$. In such a situation only a very limited class of transformations is possible in the limit: apart from affine transformations, only power transformations, the exponential and its inverse the logarithm, and four degenerate transformations, among them $\operatorname{sign}$ and its inverse; see [@Balkema1973], Chapter 1, for details.

Discussion {#sconc}
==========

In situations where chance plays a role the asymptotic description often consists of two parts: a deterministic term capturing the main effect, and a stochastic term describing the random fluctuations around the deterministic part. Thus the average of the first $n$ observations converges to the expectation; under additional assumptions the difference between the average and the expectation, blown up by a factor $\sqrt n$, is asymptotically normal. Empirical dfs converge to the true df; the fluctuations are modeled by a time-changed Brownian bridge. For a positive random variable, the $n$-point sample clouds $N_n$ scaled by the $1-1/n$ quantile converge onto the interval $[0,1]$ if the tail of the df is rapidly varying; if the tail is asymptotic to a von Mises function then there is a limiting Poisson point process with intensity $e^{-s}$. Convergence to the first order deterministic term in these situations is a much more robust affair than convergence of the random fluctuations around this term.
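The statements about the positive random variable are easy to observe in simulation. A small sketch (ours; standard exponential observations, for which the $1-1/n$ quantile is $\log n$):

```python
import math, random

# Sketch (standard exponential df 1 - e^{-s}, a von Mises tail): the
# 1 - 1/n quantile is q_n = log n.  Scaled by q_n, the n-point sample
# cloud fills [0, 1]; the number of points above q_n is asymptotically
# Poisson(1), the trace of the limiting Poisson point process with
# intensity e^{-s} near the top of the cloud.

random.seed(7)
n, trials = 10**4, 300
q_n = math.log(n)

excess, top = [], []
for _ in range(trials):
    sample = [random.expovariate(1.0) for _ in range(n)]
    excess.append(sum(s > q_n for s in sample))   # exceedance count
    top.append(max(sample) / q_n)                 # scaled maximum

print(round(sum(excess) / trials, 2))   # mean exceedance count, about 1
print(round(sum(top) / trials, 2))      # mean scaled maximum, about 1
```

The mean number of exceedances of the quantile is close to $1$, matching the Poisson limit, while the scaled maximum hovers just above $1$, so the scaled clouds fill $[0,1]$.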
So it is surprising that for meta distributions perturbations of the original distribution which do not affect the second order fluctuations of the sample cloud at the vertices may drastically alter the shape of the limit set, the first order term. This paper tries to cast some light on the sensitivity of the meta distribution and the limit set $E$ to small perturbations of the original distribution. Bivariate asymptotics are well expressed in terms of polar coordinates. Two points far off are close together if the angular parts are close and if the quotient of the radial parts is close to one. This geometry is respected by certain partitions. A partition is regular if points in the same atom are uniformly close as one moves out to infinity. Call probability distributions *equivalent* if they give the same or asymptotically the same weight to the atoms of a regular partition. Equivalent distributions have the same asymptotic behaviour with respect to scaling. This paper compares the asymptotic behaviour of a heavy-tailed bivariate density with the asymptotic behaviour of the meta density with light-tailed marginals. Small changes in the heavy-tailed density, changes which have no influence on its asymptotic behaviour, may lead to significant changes in the asymptotic behaviour of the meta distribution. We show that regular partitions for the heavy-tailed distribution and for the light-tailed meta distribution are incommensurate. The atoms at the diagonals in the light-tailed distribution fill up the quadrants for the heavy-tailed distribution; atoms at the axes in the heavy-tailed distribution fill up the four segments between the diagonals for the light-tailed distributions. Section \[sdom2\] shows how equivalent distributions in the one world give rise to different asymptotic behaviour in the other. In our approach the asymptotic behaviour in both worlds is investigated by rescaling. 
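The rescaling just mentioned is easy to carry out in a simulation. The following sketch (ours; standard bivariate normal, for which $\j(s)=s^2/2$ gives $b_n=\sqrt{2\log n}$ and the limit set is the closed unit disk) shows scaled sample clouds settling onto their limit set:

```python
import math, random

# Sketch (standard bivariate normal; assumptions: psi(s) = s^2/2, so the
# scaling constants psi(b_n) = log n give b_n = sqrt(2 log n), and the
# limit set E is the closed unit disk): the largest scaled radius of the
# n-point sample cloud tends to 1, so the clouds converge onto E.

random.seed(1)

def scaled_max_radius(n):
    b_n = math.sqrt(2 * math.log(n))
    r_max = max(math.hypot(random.gauss(0, 1), random.gauss(0, 1))
                for _ in range(n))
    return r_max / b_n

vals = [round(scaled_max_radius(n), 3) for n in (10**3, 10**5)]
print(vals)   # both entries lie close to 1; fluctuations die out slowly
```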
In the heavy-tailed world one obtains a limiting Poisson point process whose mean measure is an excess measure $\r$ which is finite outside centered disks in the plane; in the light-tailed world the sample clouds converge onto a star-shaped limit set $E$. The only relation between $\r$ and $E$ is the parameter $\l$. This parameter describes the rate of decrease of the heavy-tailed marginal distributions; it also is one of the two parameters which determine the shape of the limit set $E$. The measure $\r$ describes the asymptotics for extreme order statistics; the set $E$ for the intermediate ones. In Section \[sdom3\] it is shown that it is possible to manipulate the shape of the limit set $E$ without affecting the distribution of the extremes. There are two worlds, the heavy-tailed and the light-tailed; the bridge linking these worlds is the meta transformation that (in our approach) maps heavy-tailed distributions into light-tailed distributions. Coordinatewise multivariate extreme value theory with its concepts of max-stable laws and exponent measures is able to cross the bridge, and to describe the asymptotic theory of the two worlds in a unified way. The exponent measures of the heavy-tailed world are linked to the light-tailed world by a coordinatewise exponential transformation. A closer look reveals a different universe. The two worlds exist side by side like the two sides of a sheet of paper, each having its own picture. On the one side we see a landscape; on the other the portrait of a youth. Closer inspection shows an avenue leading up to a mansion in the far distance in the landscape, and a youth looking out from one of the windows; in the black pupils of his eyes on the reverse side of the paper we see a reflection of the landscape. In Section \[sbp\] it is shown that it is possible to disentangle the heavy- and the light-tailed parts of a distribution by using the marginals to define biregular partitions. 
The heavy-tailed distribution gives information about the coordinatewise extremes. The light-tailed distribution gives this information and more: a limit shape, and limit distributions for horizontal and vertical high risk scenarios. The limit shape is a compact star-shaped subset of the unit square; the horizontal and vertical high risk limit scenarios have independent components $(U,V)$. In appropriate coordinates, $V$ is standard exponential and $U$ is distributed over the interval $[-1,1]$. Biregular partitions make it possible to combine a representative of the heavy-tailed distribution (with no excess mass on the axes) with a representative of the light-tailed distribution (with asymptotically independent components) into one probability distribution. The excess measure of the combined distribution is the sum of the two individual excess measures; the limit set is the union of the two individual limit sets. The biregular partition is finer than the regular partition for the light-tailed distribution. It not only sees the limit set, it is fine enough to discern the high risk asymptotics in all directions. While the asymptotics of heavy-tailed multivariate distributions is well understood and nicely reflected in the asymptotic behaviour of the copula at the vertices of the unit cube, for light-tailed distributions there are still many open areas. The possible high risk limit scenarios for light-tailed distributions have been described; see [@Balkema2007], Section 14. However, their domains of attraction are still unexplored territory. This paper shows that the asymptotics of heavy-tailed multivariate distributions do not suffice to describe the asymptotic behaviour of the light-tailed meta distributions. One possible explanation for the loss of information in crossing the bridge between heavy and light tails is the highly non-linear nature of the meta transformation. 
Coordinate hyperplanes are preserved, and so are centered coordinate cubes because of the equal and symmetric marginals. Geometric concepts such as sphere, convex set, halfspace and direction are lost. The meta transformation maps rays in the $\xb$-world which do not lie in one of the diagonal hyperplanes $\{x_i=x_j\}$ into curves which are asymptotic to the halfaxis in the center of the sector bounded by the diagonal planes. Similarly, the inverse transformation maps rays which do not lie in one of the coordinate planes into curves converging to the diagonal ray in the center of the orthant bounded by the coordinate planes. In the bivariate setting there is a clear duality between diagonals and axes. Heavy-tailed asymptotic dependence reduces to co- or counter-monotonicity in the light-tailed world; horizontal and vertical high risk limit scenarios in the light-tailed world reduce to asymptotic independence in the heavy-tailed world. Biregular partitions allow us to see the combined effect, by describing the two worlds side by side. The asymptotics of light-tailed densities with given marginals is not well understood. We hope to return to this topic later. Similarly it is not clear what role is played by the remainder set $O$ in our decomposition $C\cup D\cup O$.

Appendix
========

Supplementary results
---------------------

\[lAog\] Let $g$ be a positive continuous symmetric density which is asymptotic to a von Mises function $e^{-\j}$. There exists a continuous unimodal symmetric density $g_1$ such that for all $c\in(0,1)$ $$g_1(s)/g(s)\to0,\qquad g_1(cs)/g(s)\to\nf,\qquad s\to\nf.$$ Let $M_n(s)=\j(s)-\j(s-s/n)$, and let $M_n^*(s)=\min_{t>s}M_n(t)$ for $n\ge2$. Each function $M_n^*$ is increasing, continuous and unbounded (since $t/a(t)\to\nf$), and for each $s>0$ the sequence $M_n^*(s)$ is decreasing. There exists a continuous increasing unbounded function $b$ such that $$\lim_{s\to\nf}\big(M_n(s)-b(s)\big)=\nf,\qquad n=1,2,\ldots.$$
Indeed, define $M^*(s)=M_n^*(s)$ on $[a_n,b_n]=\{M_n^*\in[n,n+1]\}$, and $M^*(s)=n$ on $[b_{n-1},a_n]$. Then $M^*(s)$ is increasing and $M^*(s)\le M_n^*(s)$ eventually for each $n\ge2$. Set $b(s)=M^*(s)/2$ to obtain the limit relation above. The function $g_1=e^{-\j_1}$ with $\j_1(s)=\j(s)+b(s)$ is decreasing on $[0,\nf)$ and continuous, and $b(s)\to\nf$ implies $g_1(s)/g(s)\to0$ for $s\to\nf$. For $c=1-1/m$ the relations $$\j(s)-\j_1(cs)=\j(s)-\j(cs)-b(cs)\ge M_m(s)-b(s)\to\nf$$ hold, and yield the desired result. \[pAog\] Let $g_d$ be a continuous positive symmetric density which is asymptotic to a von Mises function $e^{-\j}$. Choose $r_n$ such that $\int_{r_n}^\nf g_d(s)ds\sim1/n$. Let $g_1$ be the probability density obtained from $g_d$ in Lemma \[lAog\]. There exists a unimodal density $g(\xb)=g_*(\|\xb\|_\nf)$ on $\rbb^d$ with cubic level sets and marginals $g_1$. The sample clouds from the density $g$, scaled by $r_n$, converge onto the standard cube $[-1,1]^d$. The functions $h_n(\ub)=nr_n^dg(r_n\ub)$ are unimodal with cubic level sets. They satisfy $$h_n(\ub)\to\begin{cases}\nf&\ub\in(-1,1)^d,\\0&\ub\not\in[-1,1]^d.\end{cases}$$ Let $E$ be a closed subset of $C=[-1,1]^d$, containing the origin as interior point, star-shaped with continuous boundary. Set $c_E=|E|/2^d$. Then $g_E(\xb)=g_*(n_E(\xb))/c_E$ is a probability density, and the sample clouds from $g_E$ scaled by $r_n$ converge onto the set $E$. Existence of $g$ follows from Proposition A.3 in [@Balkema2009]. Let $G_1$ be the df with density $g_1$. Then $n(1-G_1(cr_n))\to0$ for $c>1$, and $n(1-G_1(cr_n))\to\nf$ for $c\in(0,1)$. Let $\p$ be the probability distribution with the unimodal density $g$. The limit relations on the marginal dfs $G_1$ imply that $n\p(B_n)\to\nf$ for the block $B_n=[-2r_n,2r_n]^{d-1}\times[cr_n,2r_n]$ for any $c\in(0,1)$. Since $h_n$ is unimodal with cubic level sets it follows that $h_n\to\nf$ uniformly on $[-c,c]^d$ for any $c\in(0,1)$. (Since $h_n(c\oneb)\le k$ implies $n\p(B_n)\le(2c)^dk$.)
The area of a horizontal slice of the density $g_E$ at level $y/c_E>0$ is less than the area of the horizontal slice of $g$ at level $y$, but the height of the slice is proportionally more by the factor $c_E$. So the slices have the same volume. The level sets of the scaled densities are related: $$\{h_E\ge t/c_E\}=rE\quad\iff\quad\{h\ge t\}=rC.$$ So the function $h_E$ mimics the behaviour of $h=h_C$.

Summary of notation
-------------------

This section is intended to provide a guide to the notation. Throughout the paper, it is convenient to keep in mind two spaces: $\zb$-space on which the heavy-tailed dfs $F,F^*, \ldots$ are defined, and $\xb$-space on which the light-tailed dfs $G,G^*,\ldots$ are defined. Table \[tab1\] compares notation used for mathematical objects on these two spaces. Table \[tab2\] can be consulted while reading Section \[sres\] in order to keep track of various symbols used to distinguish original and meta distributions with certain properties and purpose.

| $\zb$-space | $\xb$-space | Comments |
|:---|:---|:---|
| $F$ and $f$ | $G=F\circ K$ and $g$ | joint df and density |
| $F_0$ and $f_0$ | $G_0=F_0\circ K_0$ and $g_0$ | marginal df and density |
| $\p$ | $\m$ | probability measures |
| $\p_n$ | $\m_n$ | mean measures of scaled sample clouds |
| $\ZB$, $\ZB_1,\ZB_2,\ldots$ | $\XB$, $\XB_1,\XB_2,\ldots$ | random vectors |
| $N_n$ | $M_n$ | scaled $n$-point sample clouds |
| $N$: Poisson point process | $E$: limit set | limit of scaled sample clouds |
| $c_n:\;1-F_0(c_n)\sim 1/n$ | $b_n:\;-\log(1-G_0(b_n))\sim\log n$ | scaling constants |
| $(B_n)$, $t_n=K_0(s_n)$ | $(A_n)$, $s_n$ | block partitions, division points |

: Symbols used to distinguish various objects of interest in $\zb$-space and in $\xb$-space.
Notation for marginals assumes that all marginal densities are equal and symmetric as in the standard set-up.[]{data-label="tab1"}

| Symbols | Description |
|:---|:---|
| $F$, $G$ | dfs satisfying the assumptions of the standard set-up |
| $F^*$, $G^*$ | dfs constructed to have properties (P.1)-(P.4) and yield the same asymptotics for $F$ and $F^*$, or $G$ and $G^*$ |
| $\hat F$ | any original df with marginals $F_0$ |
| $\tilde F$ | df obtained by changing $\hat F$ to have marginals $\tilde F_j$ tail equivalent to $F_0$ (e.g. mixtures in Section \[sdom3\]) |
| $F^o$ | original df with lighter marginals than $F$ |

: Symbols used in Section \[sres\] to discuss robustness and sensitivity properties of meta distributions. Where not explicitly mentioned, notation for the associated meta df is analogous, e.g. $\tilde G=\tilde F\circ K$, and similarly for other objects $\tilde f$, $\tilde E$, etc.[]{data-label="tab2"}

| Symbol | Description |
|:---|:---|
| $f\asymp\tilde f$ | ratios $f(\xb)/\tilde f(\xb)$ and $\tilde f(\xb)/f(\xb)$ are bounded eventually for $\Vert\xb\Vert\to\nf$ |
| $f\sim\tilde f$ | $\tilde f(\xb)/f(\xb)\to 1$ for $\Vert\xb\Vert\to\nf$ |
| $n_D$ | *gauge function* of set $D$: $n_D(\xb)>0$ for $\xb\neq\zerob$, $D=\{n_D<1\}$ and $n_D(c\xb)=cn_D(\xb)$ for $\xb\in\rbb^d$, $c\ge0$ |
| $\eb$ | a vector of ones in $\rbb^d$ |
| $B$ | the open Euclidean unit ball in $\rbb^d$ |

: Miscellaneous symbols.[]{data-label="tab3"}